Releases: kayba-ai/agentic-context-engine

v0.8.3

21 Feb 16:53

  • Pipeline engine — generic pipeline framework with branching, async boundaries, and parallel execution (#78)
  • Trace passthrough — _build_traces() helper, and raw trace data passed to the RecursiveReflector sandbox

Full Changelog: v0.8.2...v0.8.3

v0.8.2

18 Feb 19:02

  • RecursiveReflector None-response guard — gracefully handles empty/None LLM responses (e.g. from Gemini) with retry prompt instead of crashing
  • LiteLLMClient.complete_messages() — native multi-turn completion that preserves structured message lists
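
The None-response guard can be sketched in plain Python (illustrative only; `call_llm` and the retry wording are assumptions, not the library's actual internals):

```python
def complete_with_retry(call_llm, prompt, max_retries=2):
    """Retry when the LLM returns an empty or None response (as some
    providers, e.g. Gemini, occasionally do) instead of crashing."""
    response = call_llm(prompt)
    for _ in range(max_retries):
        if response:  # non-empty string: done
            return response
        # Hypothetical retry prompt; the real wording lives in RecursiveReflector.
        response = call_llm(prompt + "\n\nYour previous response was empty. Please answer again.")
    if not response:
        raise RuntimeError("LLM returned no content after retries")
    return response
```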

Full Changelog: v0.8.1...v0.8.2

v0.8.1

18 Feb 13:45

Insight Source Tracing

Track where every skill in your skillbook came from.

Added

  • Insight source tracing — InsightSource dataclass tracks skill provenance (epoch, sample, trace refs, error identification, learning text)
  • Sample.id promoted to first-class field with UUID auto-generation
  • Skillbook query API — source_map(), source_summary(), source_filter() for skill lineage
  • Insight sources wired through OfflineACE, OnlineACE, and async learning pipelines
  • UpdateOperation.learning_index for linking operations to reflector learnings
  • Bedrock e2e example (examples/litellm/bedrock_insight_source_test.py)
  • docs/INSIGHT_SOURCES.md guide
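
A minimal sketch of the provenance shape these APIs expose (field names follow the bullet list above; the library's exact signatures may differ):

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class InsightSource:
    # Where a skill came from: the training epoch and sample that produced it.
    epoch: int
    sample_id: str
    trace_refs: list = field(default_factory=list)
    error_identification: Optional[str] = None
    learning: str = ""

def source_filter(sources, *, epoch=None):
    """Illustrative lineage query: keep only sources from a given epoch."""
    return [s for s in sources if epoch is None or s.epoch == epoch]

# Sample IDs are auto-generated UUIDs, mirroring the Sample.id change above.
sources = [
    InsightSource(epoch=0, sample_id=str(uuid.uuid4()), learning="Check units first."),
    InsightSource(epoch=1, sample_id=str(uuid.uuid4()), learning="Prefer exact match."),
]
```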

Full Changelog: v0.8.0...v0.8.1

v0.8.0

17 Feb 15:56
085ad7f

What's New

  • TAU-bench integration: Full benchmark framework for evaluating agents on TAU-bench tasks
  • Recursive Reflector: New reflector module with sandbox execution, trace context, and sub-agent support
  • Skillbook tools: Clean, consolidate, and merge skillbooks via new utility scripts

Full Changelog: v0.7.0...v0.8.0

v0.7.3

04 Feb 17:49

Release v0.7.3

v0.7.2

26 Jan 16:47

What's New

Agentic System Prompting

New workflow to automatically optimize your agent's system prompts using your own data. Feed in past traces or conversations, and ACE analyzes what worked and what failed to generate actionable prompt suggestions.

Traces / Conversations → ACE → Prompt Suggestions

Each suggestion includes the recommended prompt text, justification for why it helps, and evidence from your actual traces. You review and decide what to implement.
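
An individual suggestion might look like this (an illustrative data shape only; the trace IDs are hypothetical and the real structure is defined by the workflow in the example directory):

```python
suggestion = {
    "prompt_text": "Always restate the user's constraints before answering.",
    "justification": "Traces where the agent restated constraints failed less often.",
    "evidence": ["trace-014", "trace-127"],  # hypothetical trace IDs from your own data
}
```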

See examples/agentic-system-prompting/ for the full workflow.

Other Changes

  • Fix: Align test matrix with Python 3.12 requirement
  • Fix: Use setup-uv action for Windows CI compatibility

v0.7.1

08 Dec 15:21

Fix: Forward credentials (api_key, base_url, etc.) to Instructor client (#44)

This patch fixes an issue where custom API credentials weren't being forwarded to all internal LLM calls, causing authentication errors when using OpenAI-compatible endpoints.

v0.7.0: Skillbook Rename

04 Dec 00:57

⚠️ Breaking Changes

Complete terminology rename - Playbook → Skillbook, Bullet → Skill

  • Playbook → Skillbook
  • Bullet → Skill
  • Generator → Agent
  • Curator → SkillManager
  • OfflineAdapter → OfflineACE
  • OnlineAdapter → OnlineACE
  • DeltaOperation → UpdateOperation
  • DeltaBatch → UpdateBatch

Migration:

# Old
from ace import Playbook, Bullet, Generator, Curator, OfflineAdapter

# New
from ace import Skillbook, Skill, Agent, SkillManager, OfflineACE

JSON files: Change "bullets" key to "skills" in saved skillbooks.
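
A one-off migration of a saved skillbook file might look like this (a sketch; it assumes the file is a flat JSON object with a top-level "bullets" key):

```python
import json

def migrate_skillbook(path):
    """Rename the legacy "bullets" key to "skills" in a saved skillbook file."""
    with open(path) as f:
        data = json.load(f)
    if "bullets" in data:
        data["skills"] = data.pop("bullets")
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```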

Fixed

  • Deduplication now properly applies consolidation operations

v0.6.0

29 Nov 10:38

Summary

Async learning pipeline with parallel Reflectors, bullet deduplication, and Instructor integration.

🚀 Async Learning

Non-blocking background learning - answers return immediately while learning continues in background threads.

agent.learn(samples, env, async_learning=True, max_reflector_workers=3)

🔍 Bullet Deduplication

Vector embedding-based duplicate detection prevents playbook bloat.

agent = ACELiteLLM(model="gpt-4o-mini", dedup_config=DeduplicationConfig(similarity_threshold=0.80))
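
The idea behind embedding-based deduplication, sketched with plain Python vectors (the real implementation uses learned text embeddings; the toy `embed` function below is an assumption for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def dedup(bullets, embed, threshold=0.80):
    """Keep a bullet only if it is not too similar to any bullet already kept."""
    kept = []
    for text in bullets:
        vec = embed(text)
        if all(cosine(vec, embed(k)) < threshold for k in kept):
            kept.append(text)
    return kept
```

With `similarity_threshold=0.80`, any new bullet whose embedding has cosine similarity ≥ 0.80 to an existing one is treated as a duplicate and dropped.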

📋 Instructor Integration

Robust JSON parsing with Pydantic schema validation and automatic retries.

Other Changes

  • Reorganized examples by integration type (litellm/, langchain/, local-models/)
  • Fixed Claude temperature+top_p conflict
  • Improved Curator prompt for better deduplication and imperative strategy format
  • Increased default max_tokens from 512 to 2048 to prevent truncation
  • Added comprehensive test suites (~1600 lines)

Tests

291 passed, 67% coverage

🤖 Generated with Claude Code

v0.5.1

25 Nov 10:51

Bug Fixes

  • Fixed Opik integration warnings for base installations
  • Improved Opik configuration for local usage

Full Changelog: v0.5.0...v0.5.1