refactor(models): Refine MessageAgentThought SQLAlchemy typing #149
base: qodo_action_req_1_base_refactormodels_refine_messageagentthought_sqlalchemy_typing_pr11
Code Review by Qodo
```python
class Config:
    extra = "allow"  # Pydantic v1 syntax - should use ConfigDict(extra='forbid')
```
1. Pydantic v1 config used 📘 Rule violation ⛨ Security
• `AgentThoughtValidation` defines `class Config` with `extra = "allow"`, which is Pydantic v1-style configuration and permits undeclared fields to pass validation.
• This violates the requirement to use Pydantic v2 configuration patterns (`ConfigDict(extra='forbid')`) and weakens validation guarantees for data persisted to the database.
Agent prompt
## Issue description
`AgentThoughtValidation` uses Pydantic v1-style `class Config` and allows extra fields (`extra = "allow"`), violating the project requirement to use Pydantic v2 patterns and forbid undeclared fields by default.
## Issue Context
The compliance rule requires Pydantic v2 configuration via `ConfigDict(extra='forbid')` (and v2 validator patterns).
## Fix Focus Areas
- api/core/agent/base_agent_runner.py[52-66]
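A minimal sketch of the v2-style configuration the rule asks for, assuming `AgentThoughtValidation` subclasses `pydantic.BaseModel` (the example field is hypothetical; the real model's fields are not shown in this diff):

```python
from pydantic import BaseModel, ConfigDict


class AgentThoughtValidation(BaseModel):
    # Pydantic v2: model_config replaces the v1 `class Config`, and
    # extra="forbid" makes undeclared fields raise a ValidationError
    # instead of silently passing through to the database.
    model_config = ConfigDict(extra="forbid")

    thought: str = ""  # hypothetical field for illustration
```

With `extra="forbid"`, a payload carrying unexpected keys fails validation up front rather than being persisted.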
```python
if tool_input_payload:  # guard inferred from the truncated excerpt (parallels the observation block below)
    try:
        tool_inputs = json.loads(tool_input_payload)
    except Exception:
        tool_inputs = {tool: {} for tool in tool_names}
else:
    tool_inputs = {tool: {} for tool in tool_names}

observation_payload = agent_thought.observation
if observation_payload:
    try:
        tool_responses = json.loads(observation_payload)
    except Exception:
        tool_responses = dict.fromkeys(tool_names, observation_payload)
```
2. Swallowed JSON parse exceptions 📘 Rule violation ✧ Quality
• `json.loads()` failures for `tool_input_payload` and `observation_payload` are caught with a broad `except Exception` and silently replaced with defaults.
• This hides malformed-data problems (no actionable context or logging), making production debugging and edge-case diagnosis significantly harder.
Agent prompt
## Issue description
The code broadly catches `Exception` during JSON parsing and silently falls back to defaults without logging, which swallows errors and removes critical debugging context.
## Issue Context
This runs while building agent history; malformed JSON should be handled gracefully but still produce actionable logs with identifiers.
## Fix Focus Areas
- api/core/agent/base_agent_runner.py[469-485]
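A sketch of the direction the prompt points at: keep the graceful fallback but catch the specific parse error and log enough context to trace the bad payload (the logger setup and the `agent_thought.id` identifier are assumptions, not the project's confirmed conventions):

```python
import json
import logging

logger = logging.getLogger(__name__)

if tool_input_payload:
    try:
        tool_inputs = json.loads(tool_input_payload)
    except json.JSONDecodeError:
        # Narrow except: only the parse failure we expect. The log line ties
        # the malformed payload to a specific agent thought for debugging.
        logger.warning(
            "Malformed tool_input JSON for agent_thought %s; using empty inputs",
            agent_thought.id,  # assumed identifier on the MessageAgentThought row
            exc_info=True,
        )
        tool_inputs = {tool: {} for tool in tool_names}
else:
    tool_inputs = {tool: {} for tool in tool_names}
```

The same pattern would apply to `observation_payload`, with `dict.fromkeys(tool_names, observation_payload)` as the fallback.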
```diff
 tool_call_response.append(
     ToolPromptMessage(
-        content=tool_responses.get(tool, agent_thought.observation),
+        content=str(tool_inputs.get(tool, agent_thought.observation)),
         name=tool,
         tool_call_id=tool_call_id,
     )
```
3. Tool output uses inputs 🐞 Bug ✓ Correctness
• `ToolPromptMessage.content` is populated from parsed tool *inputs* rather than parsed tool *responses/observation*, so the model will “see” arguments as if they were tool results.
• This breaks tool-call chains (the assistant cannot incorporate real tool outputs) and can cause incorrect reasoning and actions across all agent conversations that rely on history replay.
Agent prompt
## Issue description
`organize_agent_history` emits TOOL-role messages with the tool **inputs** instead of the tool **outputs**. This corrupts the reconstructed conversation history and breaks model tool-call reasoning.
## Issue Context
- `tool_inputs` comes from `agent_thought.tool_input`.
- `tool_responses` comes from `agent_thought.observation`.
- `ToolPromptMessage.content` should represent the tool result and should be a string (or multimodal content).
## Fix Focus Areas
- api/core/agent/base_agent_runner.py[460-506]
- api/core/model_runtime/entities/message_entities.py[143-151]
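A sketch of the corrected construction, assuming `tool_responses` holds the parsed observation keyed by tool name as in the excerpt above (the import path is inferred from the Fix Focus Areas, and the surrounding loop variables come from the diff excerpt):

```python
# Inferred import path; see api/core/model_runtime/entities/message_entities.py
from core.model_runtime.entities.message_entities import ToolPromptMessage

# content now comes from the parsed tool *responses*, falling back to the raw
# observation string, and is coerced to str as ToolPromptMessage expects.
tool_call_response.append(
    ToolPromptMessage(
        content=str(tool_responses.get(tool, agent_thought.observation)),
        name=tool,
        tool_call_id=tool_call_id,
    )
)
```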
Benchmark PR from agentic-review-benchmarks#11