Problem/Motivation
(Solution inspired by langmem.)
With the new eat_semantic_memory store in place, we need an automated process to populate it. The system currently learns from experiences in a "just-in-time" fashion when the ContextBuilderTool queries the raw logs. This is not efficient for extracting generalizable knowledge.
We need a dedicated background agent that can reflect on the episodic memory (eat_agent_experiences), identify patterns, successes, and failures, and distill these observations into durable, semantic facts.
Proposed Solution
We will create a new ReActAgent called InsightExtractorAgent. This agent's sole purpose is to perform "memory reflection." It will be invoked by a script that can be run periodically (e.g., via a cron job). The agent will be given a batch of recent experiences and tasked with generating concise, high-value facts to be stored in the new eat_semantic_memory collection.
Implementation Details
- Create the Agent Definition:
  - Create a new file: `evolving_agents/agents/insight_extractor_agent.py`.
  - Define the `InsightExtractorAgent` class, likely inheriting from `ReActAgent`.
  - Agent Prompting: The agent's system prompt is critical. It should instruct the agent to act as an expert AI systems analyst whose goal is to review a provided set of agent experiences (in JSON format) and identify:
    - Successful Patterns: "When tasked with X, the sequence of tools Y -> Z consistently leads to success."
    - Failure Correlations: "Component A frequently fails when the input contains complex tables."
    - Component Effectiveness: "Tool B is highly effective and efficient for 'data validation' tasks."
  - The prompt must explicitly ask the agent to call the `add_fact` function of the `MongoSemanticMemoryStoreTool` for each insight it generates.
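As a sketch, the system prompt described above might look like the following. The constant name and exact wording are assumptions for illustration, not part of the codebase:

```python
# Hypothetical system prompt for the InsightExtractorAgent.
# The wording and constant name are illustrative, not from the codebase.
INSIGHT_EXTRACTOR_SYSTEM_PROMPT = """\
You are an expert AI systems analyst. Review the batch of agent experiences
provided below (JSON format) and identify:

1. Successful Patterns: tool sequences that consistently lead to success.
2. Failure Correlations: components that fail under specific input conditions.
3. Component Effectiveness: tools that are notably effective for a task type.

For EACH insight you derive, call the `add_fact` function of the
MongoSemanticMemoryStoreTool so it is persisted to `eat_semantic_memory`.
Keep each fact concise, generalizable, and self-contained.
"""
```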
- Define Agent Tools:
  - The `InsightExtractorAgent` needs tools to perform its function. It will be initialized with:
    - `MongoSemanticMemoryStoreTool`: To save the new facts it generates.
    - `SemanticExperienceSearchTool` (read-only access): To potentially find related historical experiences for broader context, if needed.
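For illustration, each insight the agent stores could be a small document along these lines. The field names here are hypothetical; the actual schema is whatever `MongoSemanticMemoryStoreTool.add_fact` accepts:

```python
# Hypothetical shape of one semantic fact; the field names are assumptions,
# the real schema is defined by MongoSemanticMemoryStoreTool.add_fact.
example_fact = {
    "content": "Tool B is highly effective for 'data validation' tasks.",
    "fact_type": "component_effectiveness",  # or "success_pattern", "failure_correlation"
    "confidence": 0.8,  # optional score the agent could attach to each insight
}
```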
- Create the Invocation Script:
  - Create a new script: `scripts/run_memory_reflection.py`.
  - This script will be the entry point for running the reflection process. Its logic should be:
    a. Initialize the `DependencyContainer` to get access to all services.
    b. Instantiate the `InsightExtractorAgent`.
    c. Fetch Experiences: Query the `eat_agent_experiences` collection for records that have not yet been processed. This can be done by looking for documents without a `reflection_processed: true` flag. Fetch a manageable batch (e.g., 100 records).
    d. Invoke the Agent: For each batch of experiences, format them into a single string/JSON payload and pass it to the `InsightExtractorAgent.run()` method with a prompt like: "Analyze the following agent experiences and generate semantic facts about successful patterns, failure modes, and component effectiveness. Use the 'add_fact' tool to store each insight."
    e. Mark as Processed: After the agent has finished, update the processed records in `eat_agent_experiences` with a flag (e.g., `db.collection.updateMany(..., {"$set": {"reflection_processed": true}})`).
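The fetch/invoke/mark cycle above can be sketched with a few helpers. `build_reflection_prompt` and the filter/update constants are hypothetical names for this sketch; the filter mirrors the `reflection_processed` flag described in the steps:

```python
import json

# Hypothetical helpers for scripts/run_memory_reflection.py; names are
# illustrative. The filter/update documents mirror the flag described above.
BATCH_SIZE = 100
UNPROCESSED_FILTER = {"reflection_processed": {"$ne": True}}
PROCESSED_UPDATE = {"$set": {"reflection_processed": True}}


def build_reflection_prompt(experiences: list) -> str:
    """Format a batch of experience documents into one agent prompt payload."""
    payload = json.dumps(experiences, indent=2, default=str)
    return (
        "Analyze the following agent experiences and generate semantic facts "
        "about successful patterns, failure modes, and component effectiveness. "
        "Use the 'add_fact' tool to store each insight.\n\n" + payload
    )

# Usage sketch (requires a live MongoDB connection, e.g. via pymongo):
#   batch = list(db.eat_agent_experiences.find(UNPROCESSED_FILTER).limit(BATCH_SIZE))
#   result = insight_extractor_agent.run(build_reflection_prompt(batch))
#   ids = [doc["_id"] for doc in batch]
#   db.eat_agent_experiences.update_many({"_id": {"$in": ids}}, PROCESSED_UPDATE)
```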
Acceptance Criteria
- The `InsightExtractorAgent` class is created.
- The `run_memory_reflection.py` script is implemented and can be executed.
- When the script is run, it successfully fetches unprocessed experiences from MongoDB.
- The `InsightExtractorAgent` correctly invokes the `MongoSemanticMemoryStoreTool` to add new facts to the `eat_semantic_memory` collection.
- The experiences processed by the agent are correctly flagged in MongoDB to prevent re-processing.