Releases: provos/planai
v0.6.1
Release Notes - v0.6.1
Bug Fixes
- JoinedTaskWorker: Fixed infinite loop issue when using get_worker_state() in JoinedTaskWorker implementations (src/planai/joined_task.py:153)
Features
- CLI: Added version subcommand to display PlanAI version (planai version)
- API: Exposed tool and Tool from llm_interface in main planai namespace for easier access to function calling decorators
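The idea behind a function-calling decorator can be sketched as follows. This is a generic, hypothetical illustration of the pattern, not PlanAI's actual `tool` implementation; the `tool_schema` attribute and its fields are invented for this example.

```python
import inspect

def tool(func):
    """Hypothetical sketch: record a minimal function-calling schema on the
    decorated function so an LLM worker could advertise it to the model."""
    signature = inspect.signature(func)
    func.tool_schema = {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": [name for name in signature.parameters],
    }
    return func

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"
```

The decorated function remains directly callable; the attached schema is what a worker would hand to the model alongside the prompt.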
Improvements
- Provenance System: Improved notification system with callback support to avoid state management conflicts
- watch() and unwatch() methods now accept optional callback parameters
- Internal state cleanup no longer triggers notify() calls, preventing exceptions or infinite loops
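The callback-based notification pattern described above can be sketched in miniature. This is an illustrative mock, not PlanAI's actual provenance code; the class and method bodies are assumptions that only demonstrate why a cleanup path that skips `notify()` avoids re-entering watcher code.

```python
from collections import defaultdict

class ProvenanceNotifier:
    """Illustrative sketch of callback-based watch()/unwatch()."""

    def __init__(self):
        self._watchers = defaultdict(list)

    def watch(self, prefix, callback=None):
        if callback is not None:
            self._watchers[prefix].append(callback)

    def unwatch(self, prefix, callback=None):
        if callback is None:
            self._watchers.pop(prefix, None)
        elif callback in self._watchers[prefix]:
            self._watchers[prefix].remove(callback)

    def notify(self, prefix):
        # Explicit notification path: invoke the registered callbacks.
        for callback in list(self._watchers.get(prefix, [])):
            callback(prefix)

    def cleanup(self, prefix):
        # Internal state cleanup drops watchers WITHOUT calling notify(),
        # so teardown cannot trigger callbacks (or loops) mid-cleanup.
        self._watchers.pop(prefix, None)

events = []
notifier = ProvenanceNotifier()
notifier.watch("task-1", events.append)
notifier.notify("task-1")    # callback fires
notifier.cleanup("task-1")
notifier.notify("task-1")    # no watchers left, nothing fires
```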
v0.6.0
This release introduces tool support for LLMTaskWorker and ChatTaskWorker.
Features
- Added support for LLMTaskWorker and ChatTaskWorker to call tools. Tools are provided on a per-worker basis.
- Graphs can now have a dedicated exit worker. This makes it easier to programmatically create subgraphs.
Testing
- Removed Python 3.9 from the testing matrix, since PlanAI supports only Python 3.10 and greater
- Increased an artificial sleep in a flaky multi-threaded Dispatcher test
v0.5.1
This just fixes a single bug in formatting prompts for models that do not support structured outputs.
Bug Fixes
- A conditional expression was swallowing the format instructions for JSON output on models that don't support structured outputs. This left those models with no guidance on how to structure their output.
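The corrected logic amounts to the following. This is a simplified, hypothetical sketch of the pattern, not PlanAI's actual code; the function name and parameters are invented for illustration.

```python
def build_prompt(base_prompt: str, format_instructions: str,
                 supports_structured_output: bool) -> str:
    """Append JSON format instructions only when the provider cannot
    enforce a schema natively (illustrative sketch)."""
    if supports_structured_output:
        # The provider enforces the schema itself; inline instructions
        # are unnecessary noise.
        return base_prompt
    # Without native structured output, the model must be told explicitly
    # how to format its answer.
    return f"{base_prompt}\n\n{format_instructions}"

with_hint = build_prompt("Summarize the article.",
                         'Respond as JSON: {"summary": "..."}',
                         supports_structured_output=False)
without_hint = build_prompt("Summarize the article.",
                            'Respond as JSON: {"summary": "..."}',
                            supports_structured_output=True)
```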
v0.5.0
This release introduces media input support for tasks and improvements to user input handling.
Features
- Added a MediaTask class and extended LLMTaskWorker to support and process tasks with image inputs, enabling the use of both text and image data in language model workflows.
- Refactored user input handling by introducing the UserInputRequest system, which supports external callbacks and offers increased flexibility for integrating user input throughout the task workflow.
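The callback-based user input flow can be sketched like this. The class and function here are hypothetical stand-ins for illustration only; PlanAI's actual UserInputRequest API may differ.

```python
class UserInputRequest:
    """Hypothetical sketch: a request object routed through a pluggable
    callback instead of a hard-coded console read."""

    def __init__(self, prompt):
        self.prompt = prompt
        self.response = None

def request_user_input(request, input_callback):
    # The callback decides how to collect the answer: a CLI prompt,
    # a web UI dialog, or a stub in tests.
    request.response = input_callback(request.prompt)
    return request.response

req = UserInputRequest("Paste the page content that could not be fetched:")
answer = request_user_input(req, lambda prompt: "stubbed page text")
```

Because the transport is a plain callable, tests can inject canned answers and a web frontend can satisfy the same request asynchronously.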
Improvements
- Configured ChatTaskWorker to automatically disable JSON mode, which simplifies initialization logic.
Dependency Updates
- Updated the llm-interface dependency to version 0.1.10 to ensure compatibility with new features and fixes.
v0.4.1
New Features
- Exposed a new helper function, add_input_provenance, to simplify the management of input provenance in test tasks.
- Implemented a new MergedTaskWorker that aggregates results from multiple upstream tasks; it's closely related to JoinedTaskWorker.
Improvements
- Prevent XSS by escaping input provenance in the status dashboard.
- Improved task worker testing helpers with stricter type checks, pre- and post-consume hooks.
- Changed the cache key generation in CachedLLMTaskWorker to use the full prompt rather than the prompt template
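The motivation for keying on the full prompt can be shown with a small sketch. This is a generic illustration of the caching idea, not PlanAI's actual key derivation: two tasks that fill the same template with different values must not share a cache entry.

```python
import hashlib
import json

def cache_key(full_prompt: str, model: str) -> str:
    """Derive a cache key from the fully rendered prompt (illustrative)."""
    payload = json.dumps({"prompt": full_prompt, "model": model},
                         sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

template = "Summarize the following text: {text}"
# Keying on the rendered prompt keeps these two tasks distinct;
# keying on the template alone would make them collide.
key_a = cache_key(template.format(text="document A"), "some-model")
key_b = cache_key(template.format(text="document B"), "some-model")
```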
v0.4.0
Release Notes
Summary
This release enhances task and graph management and improves thread safety and error handling.
Features
- Enable multiple consumers per task type by switching consumer storage from a single object to a list, requiring explicit consumer specification.
- Introduce a SimplePlanAdaptor and create_simple_planning_graph function to support a simplified planning workflow when no iterations are required.
- Extend the task notification interface by adding optional parameters and support for an additional BaseModel context in status updates.
- Add helper methods in the Task class, including an 'is_type' method and a new find_input_tasks method, to improve type verification and provenance retrieval.
- Improve handling of multiple graphs and allow selection of specific graphs within the dashboard.
- Improve TaskWorker by incorporating provenance-based user state management. This state will get automatically cleaned when the provenance leaves the graph.
- Support multiple sink workers in a single graph.
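The switch from a single consumer to a list of consumers per task type can be sketched as follows. This is an illustrative mock of the routing idea, not PlanAI's dispatcher; the class and names are invented for this example.

```python
from collections import defaultdict

class Router:
    """Illustrative sketch: store a list of consumers per task type, so a
    producer with several downstream consumers must name its target."""

    def __init__(self):
        self._consumers = defaultdict(list)

    def register(self, task_type, consumer):
        self._consumers[task_type].append(consumer)

    def dispatch(self, task, consumer=None):
        candidates = self._consumers[type(task)]
        if consumer is not None:
            # With multiple consumers registered, the producer must be
            # explicit about which one receives the task.
            return consumer(task)
        if len(candidates) != 1:
            raise ValueError("ambiguous task type: specify the consumer")
        return candidates[0](task)

class Document:
    pass

router = Router()
summarize = lambda task: "summary"
translate = lambda task: "translation"
router.register(Document, summarize)
router.register(Document, translate)

result = router.dispatch(Document(), consumer=translate)
try:
    router.dispatch(Document())  # two consumers, no explicit choice
    ambiguous = False
except ValueError:
    ambiguous = True
```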
Bug Fixes
- Eliminated a race condition in the dispatcher when there were multiple outstanding notifications but no other work left in the graph
- Resolved a race condition in work publishing and provenance tracking by separating check and removal phases.
- Fix XML serialization issues by modifying list item conversion and implementing recursive sanitization to correctly handle various data types.
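The check-then-remove race mentioned above is a classic pattern; a minimal sketch of the fix (holding one lock across both the count check and the removal) looks like this. The class is a generic illustration, not PlanAI's provenance tracker.

```python
import threading

class ProvenanceCounts:
    """Illustrative sketch: avoid a check-then-act race by keeping the
    zero-count check and the removal under the same lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    def add(self, key):
        with self._lock:
            self._counts[key] = self._counts.get(key, 0) + 1

    def release(self, key):
        """Decrement; return True only for the call that removes the key."""
        with self._lock:
            remaining = self._counts[key] - 1
            if remaining == 0:
                # Removal happens under the same lock as the check, so two
                # threads can never both observe zero and race on cleanup.
                del self._counts[key]
                return True
            self._counts[key] = remaining
            return False

counts = ProvenanceCounts()
counts.add("task-1")
counts.add("task-1")
first = counts.release("task-1")
second = counts.release("task-1")
```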
Documentation
- Add an "Example: Deep Research" section in the README with Docker Compose instructions and updated licensing details.
- Include a detailed command-line example for automatic prompt optimization
Tests
- Enhance InvokeTaskWorker tests by integrating a MockProvenanceTracker and MockGraph, disabling debug_mode when detected, and ensuring caching functionality is initialized.
v0.3.0
Release Notes
Summary
In this release (v0.3), we present significant enhancements across multiple components, introducing the new DeepSearch example application, improved session and task management, extensive documentation updates, and critical bug fixes for subgraph handling.
Features
New DeepSearch Example Application
- Implemented a new example application that enables deep research on the internet by collecting and summarizing related web pages
- Added comprehensive chat functionality to discuss and analyze collected research materials
- Integrated session management for persistent chat interactions
- Added robust session handling across chat interfaces
New Release Notes Generator Example Application
- An example to use PlanAI to automatically generate comprehensive release notes from Git history
Core Improvements
- Task Management:
- Enhanced provenance tracking within tasks
- Fixed critical bugs in subgraph handling and task execution
- Task abortion by initial provenance which allows ongoing work to be terminated when the initial task has been abandoned
- New Frameworks:
- A search-fetch pattern that returns collated web pages based on a web search
- ChatTaskWorker which allows the creation of a traditional chat completion flow with an LLM
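Abortion by initial provenance amounts to a prefix check on a task's provenance chain. The following is a hypothetical sketch of that idea; the tuple representation and function name are assumptions for illustration, not PlanAI's internal format.

```python
def is_aborted(provenance, aborted_prefixes):
    """A task counts as abandoned when its provenance chain starts with
    the provenance of an aborted initial task (illustrative sketch)."""
    return any(provenance[:len(prefix)] == prefix
               for prefix in aborted_prefixes)

# Suppose the initial task ("InitialTask", 1) was abandoned by the user.
aborted = [(("InitialTask", 1),)]

# Any in-flight work descended from it can now be skipped...
descendant = (("InitialTask", 1), ("FetchWorker", 7))
# ...while work rooted in a different initial task keeps running.
unrelated = (("InitialTask", 2), ("FetchWorker", 7))
```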
Interface Improvements
- Log Console: Added system log display within the interface
Bug Fixes
- Subgraph Handling: Fixed critical issues with subgraph execution and provenance tracking
- Provenance Handling: Corrected logic for task provenance tracking and cleanup
Documentation
- Enhanced Guides: Updated README and detailed guides for new features
- API Documentation: Expanded documentation on core components and testing
- Improved Clarity: Enhanced docstrings and documentation
Task Processing
- Debugging: New features for capturing and replaying task execution flows
Integration and Testing
- Tool Integration: Refined integrations with LLM providers
- Testing Coverage: Increased test coverage with new regression tests
Miscellaneous
- Memory Management: Enhanced memory tracking and visibility
- Refactored LLM Interface: LLM Interface has been refactored into a separate package: llm-interface
For a complete list of all changes, please refer to the project's commit history.
v0.2.0
PlanAI v0.2 Release Notes
The latest release of PlanAI (v0.2) brings a number of new features and important fixes aimed at improving functionality and usability. Here's what's included in this update:
New Features
- Enhanced Logging and Monitoring: The system now logs OpenAI prompt usage tokens, providing better insight into resource utilization and planning (#524c513).
- Model Support Expansion: PlanAI has support for o1-mini and o1-preview models, even with limitations like lack of JSON mode and structured outputs (#c242067).
- Interactive User Input: PlanAI can now prompt users for input as required by tasks, e.g. when it's not possible to automatically fetch the required content (#1dd0ee4).
- Social Media Example App: Uses a profile and queries of interests to suggest topics for new social media posts
- Serper Search Integration: To facilitate frequent usage scenarios, PlanAI now has a simple Serper Search integration.
Enhancements
- Task Management Improvements: A significant overhaul in how tasks are managed includes the introduction of a TaskWorker capable of running whole graphs as sub-graphs. This can reduce complexity when running larger graphs (#dbfce667).
Bug Fixes
- Robustness Improvements: Addressed several critical bugs, including removing race conditions and making the web UI more efficient (#4cd1ee3, #8a78954).
- Enhanced Error Handling: Improved the way OpenAIWrapper manages content filter errors and structured output retries to ensure error scenarios are gracefully handled (#6ea9c4e, #3b67142).
Refactoring
- Codebase Streamlining: Many parts of the codebase, like the provenance tracking module have been refactored for maintainability (#33aa727, #3852f038).
We encourage users to explore these updates and provide feedback through our GitHub repository. Detailed guides and documentation are available on our documentation page.
Thank you for choosing PlanAI and supporting our journey to excellence!
v0.1.5
PlanAI v0.1.5 Release Notes
PlanAI v0.1.5 introduces significant updates to enhance prompt optimization capabilities, along with key new features, enhancements, and bug fixes to improve the overall functionality and reliability of the system.
New Features
- Automated Prompt Optimization: The release features an automatic prompt optimization tool designed to enhance the effectiveness of prompts for LLMTaskWorkers. This tool leverages real production data to iteratively refine prompts, supported by a refined scoring mechanism and automated iteration process (#f4338f7, #4116210).
- Worker Statistics Tracking: A new WorkerStat class has been added to provide improved tracking and analysis of worker statistics, facilitating better monitoring of task execution metrics (#c8ef37f).
- LLM Provider Options: Added options for specifying LLM providers and models during prompt optimization, enabling more flexible use of different language models (#7b6b8b0).
Enhancements
- Testing and Code Refactoring: Expanded unit test coverage for the Graph class and other system components to ensure robustness and reliability. Code refactoring efforts focused on simplifying task management, provenance tracking, and prompt storage, contributing to increased maintainability of the codebase (#d6d4648, #16760b5).
- Terminal Dashboard: Introduced a terminal dashboard to visually display task progress and worker statistics, enhancing user engagement and interaction during task execution (#3bfaef7).
Bug Fixes
- Notification Handling: Addressed various bugs associated with graph notifiers to ensure notifications are accurately delivered to the intended workers and improve provenance management in JoinedTaskWorker (#571c561, #03cbf72).
- JSON Parsing Enhancement: Improved JSON parsing by incorporating markdown handling in MinimalPydanticOutputParser, enabling cleaner and more accurate parsing (#4528a38).
For detailed information on using the new features, including prompt optimization, please refer to our documentation page. Users are encouraged to provide feedback through our GitHub repository.
v0.1.4
PlanAI v0.1.4 Release Notes
We are excited to announce the release of PlanAI v0.1.4, which introduces new features, performance improvements, and essential bug fixes to enhance the overall functionality and efficiency of PlanAI.
New Features
- Parallel Execution Control: Added functionality to limit the number of parallel executions of specific TaskWorkers to better manage resource allocation and improve system performance (#e9f5c75).
- Support for Anthropic API: Integrated support for Anthropic's API, enabling a broader range of language models to be used within the platform (#c738edb, #936fe08).
- Enhanced Debugging Capabilities: Introduced the ability for llm_from_config to support a host parameter for Ollama and added options to save debug output in JSON format for improved diagnostic capabilities (#87b1961).
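Limiting parallel executions of a worker is typically done with a semaphore. The sketch below is a generic illustration of that technique (the class and attribute names are invented), not PlanAI's actual TaskWorker implementation.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class LimitedWorker:
    """Illustrative sketch: cap concurrent executions of one worker
    with a semaphore, regardless of how many threads submit work."""

    def __init__(self, max_parallel):
        self._semaphore = threading.Semaphore(max_parallel)
        self._lock = threading.Lock()
        self._active = 0
        self.peak = 0  # highest observed concurrency, for verification

    def consume(self, task):
        with self._semaphore:  # at most max_parallel tasks pass this point
            with self._lock:
                self._active += 1
                self.peak = max(self.peak, self._active)
            time.sleep(0.01)   # stand-in for the real work
            with self._lock:
                self._active -= 1

worker = LimitedWorker(max_parallel=2)
with ThreadPoolExecutor(max_workers=8) as pool:
    for task in range(20):
        pool.submit(worker.consume, task)
```

Even with eight submitting threads, the semaphore keeps at most two tasks inside the work section at any moment.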
Enhancements
- Dashboard Improvements: Enhanced the dashboard to include execution statistics, providing users with deeper insights into application performance and behavior (#679d9ad).
- System Prompt Flexibility: Updated the LLMTaskWorker to use the system_prompt field, allowing for easier customization and flexibility in task configurations (#d27a846).
- Streamlined Task Handling: Improved task list display logic, ensuring task lists retain their state even when elements are updated, resulting in a more stable and user-friendly interface (#4daa2fe).
Maintenance
- Documentation Enhancements: Improved documentation for the example app by adding docstrings and more comprehensive comments, increasing clarity for developers (#7cbaec9).
This release significantly enhances the flexibility, performance, and security of PlanAI. For more detailed information, documentation, and examples, please visit our documentation page. We welcome feedback and suggestions from our users. If you experience any issues, please report them on our GitHub repository.
Thank you for your continued support and for using PlanAI!