v0.7.0-rc4 · 6e31d514

v0.7.0-rc3 · 3b75a612

v0.7.0-rc2 · 0e61a601

v0.7.0-rc1 · e25b8308

v0.6.0 · 5c2d12ea
Changed function calling to use a more complex decision-making process. This should both reduce token consumption and improve function-selection accuracy. Context trimming only affects function selection; the full context is still always used when forming responses.
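The two-stage approach above can be sketched roughly as follows. This is an illustrative outline, not the project's actual code — `trim_context`, `select_function`, and `form_response` are hypothetical names, and the trimming policy (system message plus the most recent turns) is an assumption:

```python
def trim_context(messages, max_messages=6):
    """Keep the system message plus the most recent turns for function selection."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

def handle_message(messages, select_function, form_response):
    # Stage 1: pick a function using only the trimmed context (cheaper).
    chosen = select_function(trim_context(messages))
    # Stage 2: form the reply with the full, untrimmed context.
    return form_response(messages, chosen)
```

The point of the split is that function selection rarely needs deep history, so trimming there saves tokens without degrading the final reply.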

v0.5.0 · 505f7cfa
Refactored the code to support multiple platforms with generalized patterns. Added experimental support for Discord.

v0.4.2 · 31ef37ce
Optimized token usage and context handling in various cases, such as image analysis.

v0.4.1 · 7b3e3b81
Fixed thread/context handling for various use cases in the new, refactored code. The Google Search function has been removed for now because, in its unfinished state, it was getting in the way too often.

v0.3.2 · 19576197
Fixed problems arising from `the JSON object must be str, bytes or bytearray, not OpenAIObject` errors.

v0.3.1 · 6470a7c5
Refactored the code so that second-stage replies from function calls can be streamed. Channel summaries now use this streaming logic. First replies without function calling still lack streaming support.
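The second-stage streaming flow can be sketched as a simple consumer loop. This is a hedged illustration with hypothetical names (`chat_stream`, `on_chunk`), not the project's actual implementation — the real chunk shape depends on the OpenAI client version in use:

```python
def stream_second_stage(chat_stream, on_chunk):
    """Forward streamed completion chunks to a callback and return the full text."""
    parts = []
    for chunk in chat_stream:      # each chunk carries a text delta
        delta = chunk.get("delta", "")
        if delta:
            parts.append(delta)
            on_chunk(delta)        # e.g. progressively edit the chat message
    return "".join(parts)
```

The callback lets the caller update a chat message in place as tokens arrive, instead of waiting for the whole reply.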

v0.3.0 · 75b2eac3
Reworked the image generation logic massively; it now uses https://gitlab.psychedelic.fi/neuralvisor/middleware/gpu-resources-parallelization for load balancing requests to the Stable Diffusion web UI.
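The linked middleware handles the actual load balancing; purely as a conceptual illustration (not that project's code), the simplest form of the idea is a round-robin dispatcher over several backend URLs. The URLs here are placeholders:

```python
import itertools

class RoundRobin:
    """Cycle through Stable Diffusion web UI backends, one request at a time."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        # Each call returns the next backend URL in rotation.
        return next(self._cycle)
```

Real GPU-aware balancing would also track per-backend load and health, which is precisely what dedicated middleware is for.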

v0.2.1 · 6a0c3555
Continued refactoring to streamline the project. The image generation function call is now fixed to use only supported SDXL resolutions.

v0.2.0 · dd085f69
Major refactoring: the code now leverages OpenAI function calling for various tasks, and it is now much easier to integrate new features.

v0.1.3 · e1968163
- Added time-based buffering of streaming responses
- Fixed bugs that prevented the captioner and image generation from working after switching to a non-privileged user in deployment
- Switched the CI/CD pipeline to use a shell runner; sped up pylint with `--jobs=8`. Deployments are now lightning fast!
- Refactored the system message & code analysis logic
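Time-based buffering of streamed responses can be sketched as below. This is an assumed shape, not the project's actual code: incoming text deltas are accumulated and flushed at most once per interval, so chat-message edits happen every `interval` seconds instead of on every token:

```python
import time

class TimedBuffer:
    """Accumulate streamed text deltas and flush them on a time interval."""

    def __init__(self, flush, interval=1.0, clock=time.monotonic):
        self.flush = flush            # callback that receives the buffered text
        self.interval = interval
        self.clock = clock            # injectable for testing
        self.buffer = []
        self.last_flush = clock()

    def add(self, delta):
        self.buffer.append(delta)
        if self.clock() - self.last_flush >= self.interval:
            self.drain()

    def drain(self):
        # Flush whatever has accumulated and reset the timer.
        if self.buffer:
            self.flush("".join(self.buffer))
            self.buffer.clear()
        self.last_flush = self.clock()
```

This matters on chat platforms because editing a message on every token quickly hits rate limits; batching edits keeps the stream visible without spamming the API.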

v0.1.2 · 24c13d90
Added a logging module and support for automatically switching to gpt-3.5-turbo-16k when the context limit of gpt-4 is reached.
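The auto-switching logic can be sketched as a token-budget check before each request. This is a hedged illustration — the token counter is injected rather than named, and the 8192-token window (base gpt-4 at the time) and the 1024-token reply reserve are assumptions:

```python
GPT4_LIMIT = 8192   # assumed context window for base gpt-4

def pick_model(messages, count_tokens, reserve=1024):
    """Fall back to gpt-3.5-turbo-16k when the prompt won't fit gpt-4."""
    used = sum(count_tokens(m["content"]) for m in messages)
    if used + reserve > GPT4_LIMIT:
        return "gpt-3.5-turbo-16k"
    return "gpt-4"
```

Reserving headroom for the reply is the key detail: the check must account for output tokens, not just the prompt, or the request still fails at the boundary.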

v0.1.1 · 84ae708e
The main improvement is that context_manager now delegates post handlers to their own tasks, allowing user messages to be processed in parallel.
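The delegation pattern described above can be sketched with asyncio. The class and method names here are illustrative, not the project's actual API: instead of awaiting each post handler inline, the manager spawns one task per incoming message so handlers run concurrently:

```python
import asyncio

class ContextManager:
    """Dispatch each incoming post to its own task instead of awaiting inline."""

    def __init__(self, handler):
        self.handler = handler
        self.tasks = set()

    def dispatch(self, post):
        task = asyncio.ensure_future(self.handler(post))
        self.tasks.add(task)                        # keep a strong reference
        task.add_done_callback(self.tasks.discard)  # drop it when done
        return task

async def main():
    results = []

    async def handle(post):
        await asyncio.sleep(0.01)   # simulate slow per-message work
        results.append(post)

    cm = ContextManager(handle)
    tasks = [cm.dispatch(i) for i in range(3)]
    await asyncio.gather(*tasks)    # all three handlers ran concurrently
    return sorted(results)
```

Keeping a reference to each task (and discarding it on completion) matters: the event loop only holds weak references, so unreferenced tasks can be garbage-collected mid-flight.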