-
Notifications
You must be signed in to change notification settings - Fork 2
Description
Description
This issue covers the implementation of the core integration layer for the multi-agent framework. The primary goal is to build a robust bridge between the ConversationManager and the specific models and data storage solutions. This task has two main parts:
-
Model Integration: This involves setting up the API connections and local inference environments for the candidate models. It's crucial that this layer handles Input/Output Standardization and implements the Deceptive Goal Injection Logic to correctly format prompts and parse responses from each model.
-
Data Connector Implementation: This is the practical implementation of the data schema and connector interface designed in Milestone 1. The work involves writing the code to persist all conversational and Chain of Thought (CoT) data from the simulation into a database or file, as specified by the connector's configuration.
Acceptance Criteria
- Model Integration: The framework successfully calls both GPT-oss-20B (via its API) and Gemma-3-12B (via vLLM) using a single, unified interface.
- Prompt Formatting: The
ConversationManagercan send a single standardized prompt, and the integration layer correctly formats it for each model's specific requirements. - Deceptive Goal Injection: The deceptive goal is correctly and reliably injected into the system prompt of each model.
- Data Connector: The implemented data connector successfully saves all conversational turns and associated CoT data to the designated storage, as per the defined schema.
- Documentation: All integration components are clearly documented, including instructions for setting up the model environments.