-
v0.6.1
Add AI-generated documentation
-
v0.6.0
Changed function calling to use a more complex decision-making process, which should both reduce token consumption and improve function selection accuracy. Context trimming only affects function selection; the full context is still always used when forming responses.
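A minimal sketch of the two-pass idea described above, assuming a hypothetical `trim_context` helper (not the project's actual code): function selection sees only a trimmed context to save tokens, while response generation still receives every message.

```python
def trim_context(messages, keep_last=4):
    """Return system messages plus only the last `keep_last` other turns.

    Hypothetical helper: the trimmed list is passed to the function-selection
    call, while the full `messages` list is used for the final response.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]
```

The exact trimming heuristic (how many turns to keep, whether to summarize older ones) is an implementation detail not specified in this entry.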
-
v0.5.0
Refactored the code to support multiple platforms via generalized patterns. Added experimental support for Discord.
-
v0.4.2
Optimized token usage and context handling in various cases, such as image analysis.
-
v0.4.1
Fixed thread/context handling for various use cases in the newly refactored code. The Google Search function was removed for now because, in its unfinished state, it got in the way too often.
-
v0.4.0
Migrated to OpenAI Python SDK v1.2 & added analyze_images function
-
v0.3.2
Fixed problems arising from `the JSON object must be str, bytes or bytearray, not OpenAIObject` errors.
-
v0.3.1
Refactored the code so that second-stage replies to function calls can be streamed, and converted channel summaries to use this streaming logic. First replies without function calling still lack streaming support.
-
v0.3.0
Massively reworked the image generation logic; it now uses https://gitlab.psychedelic.fi/neuralvisor/middleware/gpu-resources-parallelization to load-balance requests to the Stable Diffusion web UI.
-
v0.2.1
Refactoring continues to streamline the project. The image generation function call is now fixed to use only supported SDXL resolutions.
-
v0.2.0
Major refactoring: the code now leverages OpenAI function calling for various tasks, making it much easier to integrate new features.
-
v0.1.3
- Added time-based buffering of streaming responses
- Fixed bugs that prevented the captioner and image generation from working after switching to a non-privileged user in deployment
- Switched the CI/CD pipeline to a shell runner and sped up pylint with --jobs=8; deployments are now lightning fast
- Refactored the system message & code analysis logic
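The time-based buffering mentioned above could look roughly like this sketch (the `StreamBuffer` class and its interface are hypothetical, not the project's actual code): streamed chunks are accumulated and flushed at most once per interval, which rate-limits message edits on the chat platform.

```python
import time


class StreamBuffer:
    """Accumulate streamed text chunks; flush at most once per `interval` seconds."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock  # injectable for testing
        self.chunks = []
        self.last_flush = clock()

    def add(self, chunk):
        """Buffer a chunk; return the accumulated text if it is time to flush."""
        self.chunks.append(chunk)
        if self.clock() - self.last_flush >= self.interval:
            return self.flush()
        return None

    def flush(self):
        """Return everything buffered so far and reset the flush timer."""
        self.last_flush = self.clock()
        return "".join(self.chunks)
```

Injecting the clock keeps the timing logic deterministic under test; in production the default `time.monotonic` is used.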
-
v0.1.2
Added logging module & support for auto-switching to gpt-3.5-turbo-16k if the context limit of gpt-4 is reached.
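The auto-switching described above amounts to a simple fallback rule; a minimal sketch (function name and reply budget are assumptions, not the project's actual code), using gpt-4's 8,192-token context window:

```python
GPT4_CONTEXT = 8192  # gpt-4's context window in tokens


def choose_model(prompt_tokens, reply_budget=1024):
    """Fall back to the 16k-context model when gpt-4's window would overflow.

    `reply_budget` reserves room for the model's reply; the threshold and
    budget values here are illustrative assumptions.
    """
    if prompt_tokens + reply_budget > GPT4_CONTEXT:
        return "gpt-3.5-turbo-16k"
    return "gpt-4"
```

In practice the prompt's token count would be estimated with a tokenizer before each request.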
-
v0.1.1
The main improvement is that context_manager now delegates post handlers to their own tasks, allowing user messages to be processed in parallel.