Releases: kayba-ai/agentic-context-engine
v0.8.3
v0.8.2
- `RecursiveReflector` None-response guard: gracefully handles empty/None LLM responses (e.g. from Gemini) with a retry prompt instead of crashing
- `LiteLLMClient.complete_messages()`: native multi-turn completion that preserves structured message lists
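The None-response guard can be pictured as a small retry loop. This is a minimal stand-in sketch, not the library's actual code: `call_llm` and `RETRY_PROMPT` are illustrative names, and the real guard lives inside `RecursiveReflector`.

```python
# Sketch of a None-response guard like the one described above.
# call_llm and RETRY_PROMPT are hypothetical stand-ins, not ACE's API.

RETRY_PROMPT = "Your previous response was empty. Please answer again."

def complete_with_guard(call_llm, prompt, max_retries=2):
    """Call an LLM, re-prompting instead of crashing on empty/None output."""
    response = call_llm(prompt)
    for _ in range(max_retries):
        if response:  # got a non-empty reply, stop retrying
            return response
        response = call_llm(f"{prompt}\n\n{RETRY_PROMPT}")
    if not response:
        raise RuntimeError("LLM returned no content after retries")
    return response
```

The point of the guard is that an empty reply becomes one extra round-trip rather than an exception bubbling out of the learning loop.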
Full Changelog: v0.8.1...v0.8.2
v0.8.1
Insight Source Tracing
Track where every skill in your skillbook came from.
Added
- Insight source tracing: `InsightSource` dataclass tracks skill provenance (epoch, sample, trace refs, error identification, learning text)
- `Sample.id` promoted to a first-class field with UUID auto-generation
- Skillbook query API: `source_map()`, `source_summary()`, `source_filter()` for skill lineage
- Insight sources wired through `OfflineACE`, `OnlineACE`, and async learning pipelines
- `UpdateOperation.learning_index` for linking operations to reflector learnings
- Bedrock e2e example (`examples/litellm/bedrock_insight_source_test.py`) and `docs/INSIGHT_SOURCES.md` guide
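To make the provenance idea concrete, here is a minimal stand-in sketch of the fields the release describes. These are illustrative definitions only: the real `InsightSource`, `Sample`, and query methods live in the `ace` package and may differ in shape.

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

# Hypothetical mirrors of the provenance fields listed above,
# not the ace package's actual classes.

@dataclass
class InsightSource:
    epoch: int
    sample_id: str
    trace_ref: Optional[str] = None
    error_identification: Optional[str] = None
    learning_text: str = ""

@dataclass
class Sample:
    question: str
    # Sample.id as a first-class field with UUID auto-generation
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

def source_filter(sources, epoch):
    """A source_filter()-style query: keep skills learned in a given epoch."""
    return {sid: src for sid, src in sources.items() if src.epoch == epoch}
```

The practical payoff is that every skill in a skillbook can be traced back to the epoch and sample that produced it.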
Full Changelog: v0.8.0...v0.8.1
v0.8.0
What's New
- TAU-bench integration: Full benchmark framework for evaluating agents on TAU-bench tasks
- Recursive Reflector: New reflector module with sandbox execution, trace context, and sub-agent support
- Skillbook tools: Clean, consolidate, and merge skillbooks via new utility scripts
Full Changelog: v0.7.0...v0.8.0
v0.7.3
v0.7.2
What's New
Agentic System Prompting
New workflow to automatically optimize your agent's system prompts using your own data. Feed in past traces or conversations, and ACE analyzes what worked and what failed to generate actionable prompt suggestions.
Traces / Conversations → ACE → Prompt Suggestions
Each suggestion includes the recommended prompt text, justification for why it helps, and evidence from your actual traces. You review and decide what to implement.
See examples/agentic-system-prompting/ for the full workflow.
Other Changes
- Fix: Align test matrix with Python 3.12 requirement
- Fix: Use setup-uv action for Windows CI compatibility
v0.7.1
v0.7.0: Skillbook Rename
⚠️ Breaking Changes
Complete terminology rename - Playbook → Skillbook, Bullet → Skill
| Old | New |
|---|---|
| `Playbook` | `Skillbook` |
| `Bullet` | `Skill` |
| `Generator` | `Agent` |
| `Curator` | `SkillManager` |
| `OfflineAdapter` | `OfflineACE` |
| `OnlineAdapter` | `OnlineACE` |
| `DeltaOperation` | `UpdateOperation` |
| `DeltaBatch` | `UpdateBatch` |
Migration:

```python
# Old
from ace import Playbook, Bullet, Generator, Curator, OfflineAdapter

# New
from ace import Skillbook, Skill, Agent, SkillManager, OfflineACE
```

JSON files: change the `"bullets"` key to `"skills"` in saved skillbooks.
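The JSON key rename can be applied with a short stdlib script. This is a sketch under the assumption that `"bullets"` is a top-level key in the saved file; adjust if your skillbooks nest it differently.

```python
import json
from pathlib import Path

def migrate_skillbook(path):
    """Rename the top-level "bullets" key to "skills" in a saved skillbook JSON file."""
    p = Path(path)
    data = json.loads(p.read_text())
    if "bullets" in data and "skills" not in data:
        data["skills"] = data.pop("bullets")
        p.write_text(json.dumps(data, indent=2))
    return data
```

Run it once per saved skillbook before loading with the v0.7.0+ API; already-migrated files are left untouched.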
Fixed
- Deduplication now properly applies consolidation operations
v0.6.0
Summary
Async learning pipeline with parallel Reflectors, bullet deduplication, and Instructor integration.
🚀 Async Learning
Non-blocking background learning - answers return immediately while learning continues in background threads.
```python
agent.learn(samples, env, async_learning=True, max_reflector_workers=3)
```

🔍 Bullet Deduplication
Vector embedding-based duplicate detection prevents playbook bloat.
```python
agent = ACELiteLLM(model="gpt-4o-mini", dedup_config=DeduplicationConfig(similarity_threshold=0.80))
```

📋 Instructor Integration
Robust JSON parsing with Pydantic schema validation and automatic retries.
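The parse-validate-retry pattern that Instructor automates with Pydantic schemas can be sketched with the stdlib alone. Everything here is an illustrative stand-in: `ask_llm` and `validate` are hypothetical callables, not the library's interface.

```python
import json

# Sketch of the loop Instructor handles for you: parse the reply as JSON,
# validate it, and re-prompt with the error message on failure.

def parse_with_retries(ask_llm, prompt, validate, max_retries=2):
    """Ask for JSON, validate it, and re-prompt with the error on failure."""
    message = prompt
    for _ in range(max_retries + 1):
        raw = ask_llm(message)
        try:
            data = json.loads(raw)
            validate(data)  # raises ValueError on schema mismatch
            return data
        except (json.JSONDecodeError, ValueError) as err:
            message = f"{prompt}\n\nYour last reply was invalid ({err}); return valid JSON."
    raise RuntimeError("no valid JSON after retries")
```

Instructor replaces the hand-rolled `validate` with a Pydantic model and feeds validation errors back to the model automatically.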
Other Changes
- Reorganized examples by integration type (litellm/, langchain/, local-models/)
- Fixed Claude temperature+top_p conflict
- Improved Curator prompt for better deduplication and imperative strategy format
- Increased default max_tokens from 512 to 2048 to prevent truncation
- Added comprehensive test suites (~1600 lines)
Tests
291 passed, 67% coverage
🤖 Generated with Claude Code
v0.5.1
Bug Fixes
- Fixed Opik integration warnings for base installations
- Improved Opik configuration for local usage
Full Changelog: v0.5.0...v0.5.1