refactor(models): Refine MessageAgentThought SQLAlchemy typing#162

Open
tomerqodo wants to merge 2 commits into qodo_claude_vs_qodo_base_refactormodels_refine_messageagentthought_sqlalchemy_typing_pr11 from qodo_claude_vs_qodo_head_refactormodels_refine_messageagentthought_sqlalchemy_typing_pr11
Conversation

@tomerqodo

Benchmark PR from agentic-review-benchmarks#11

@qodo-code-review

Review Summary by Qodo

Refine MessageAgentThought SQLAlchemy typing and improve agent data handling

✨ Enhancement 🐞 Bug fix


Walkthroughs

Description
• Refine MessageAgentThought SQLAlchemy model with proper type annotations
  - Convert to TypeBase inheritance for consistency
  - Add Mapped type hints for all columns
  - Use Decimal type for numeric price fields
• Improve agent thought data handling and validation
  - Add AgentThoughtValidation Pydantic model for data validation
  - Fix string concatenation logic in save_agent_thought method
  - Use CreatorUserRole enum instead of string literal
• Enhance tool data processing in organize_agent_history method
  - Improve null-safety checks for tool input and observation payloads
  - Add explicit null handling before JSON parsing
  - Fix tool response content assignment bug
• Update test fixtures to remove manual id assignment
  - Leverage auto-generated UUID defaults in model
Diagram
flowchart LR
  A["MessageAgentThought Model"] -->|"Add Mapped type hints"| B["Proper SQLAlchemy Typing"]
  A -->|"Use Decimal type"| C["Numeric Price Fields"]
  A -->|"Inherit TypeBase"| D["Consistent Base Class"]
  E["Agent Thought Processing"] -->|"Add validation model"| F["AgentThoughtValidation"]
  E -->|"Improve null-safety"| G["Better Error Handling"]
  E -->|"Fix string logic"| H["Correct Concatenation"]
  I["Test Fixtures"] -->|"Remove manual IDs"| J["Use Auto-Generated UUIDs"]


File Changes

1. api/models/model.py ✨ Enhancement +35/-27

Add SQLAlchemy Mapped type hints and Decimal types

• Changed MessageAgentThought base class from Base to TypeBase for consistency
• Added Mapped type annotations to all column definitions with proper type hints
• Converted numeric price fields (message_unit_price, message_price_unit, answer_unit_price,
 answer_price_unit, total_price) to use Decimal type
• Added explicit default and default_factory parameters for auto-generated UUID and nullable
 fields
• Reordered columns to place created_by_role and created_by earlier in the definition
• Added init=False to created_at column to exclude from constructor



2. api/core/agent/base_agent_runner.py ✨ Enhancement +51/-21

Add validation model and improve agent thought handling

• Added AgentThoughtValidation Pydantic model for validating agent thought data before persistence
• Imported Decimal type and CreatorUserRole enum for proper type usage
• Updated create_agent_thought method to use Decimal for price fields and
 CreatorUserRole.ACCOUNT enum
• Fixed save_agent_thought method to safely concatenate thought strings using f-string instead of
 += operator
• Enhanced organize_agent_history method with improved null-safety checks for tool input and
 observation payloads
• Fixed bug where tool response content was incorrectly assigned from tool_inputs instead of
 tool_responses
• Added explicit null checks before JSON parsing to prevent exceptions



3. api/tests/test_containers_integration_tests/services/test_agent_service.py 🧪 Tests +0/-7

Remove manual UUID assignments from test fixtures

• Removed manual id=fake.uuid4() assignments from MessageAgentThought instantiations in multiple
 test methods
• Updated test fixtures to rely on auto-generated UUID defaults from the model definition
• Affected test methods: _create_test_agent_thoughts, test_get_agent_logs_with_tool_error,
 test_get_agent_logs_with_complex_tool_data, test_get_agent_logs_with_files,
 test_get_agent_logs_with_empty_tool_data, test_get_agent_logs_with_malformed_json
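The fixture change relies on the model generating its own primary key. A minimal sketch of the pattern, using a plain dataclass as a stand-in (the real `MessageAgentThought` gets the same behavior from its `mapped_column` default):

```python
# Stand-in illustrating why tests can drop id=fake.uuid4():
# each instance receives a fresh UUID from default_factory.
import uuid
from dataclasses import dataclass, field


@dataclass
class ThoughtFixture:
    """Hypothetical fixture model, not the project's class."""

    message_id: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
```

Two fixtures built without explicit ids get distinct, valid UUIDs, so the manual assignments were redundant.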





qodo-code-review bot commented Mar 10, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (1) 📎 Requirement gaps (0)



Action required

1. Pydantic v1 Config used 📘 Rule violation ✓ Correctness
Description
AgentThoughtValidation uses Pydantic v1-style class Config and allows extra fields, which
weakens validation and violates the required Pydantic v2 configuration pattern. This can permit
unexpected/unvalidated data to pass through persistence logic.
Code

api/core/agent/base_agent_runner.py[R52-66]

+class AgentThoughtValidation(BaseModel):
+    """
+    Validation model for agent thought data before database persistence.
+    """
+
+    message_id: str
+    position: int
+    thought: str | None = None
+    tool: str | None = None
+    tool_input: str | None = None
+    observation: str | None = None
+
+    class Config:
+        extra = "allow"  # Pydantic v1 syntax - should use ConfigDict(extra='forbid')
+
Evidence
PR Compliance ID 13 requires Pydantic v2 models to use ConfigDict(extra='forbid') by default; the
added model uses deprecated v1 class Config and sets extra = 'allow'.

AGENTS.md
api/core/agent/base_agent_runner.py[52-66]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`AgentThoughtValidation` is implemented with Pydantic v1-style `class Config` and sets `extra = "allow"`, which violates the requirement to use Pydantic v2 configuration (`ConfigDict`) and to forbid undeclared fields by default.

## Issue Context
The compliance requirement mandates Pydantic v2 patterns for DTO/data validation and `ConfigDict(extra='forbid')` by default to prevent unexpected fields.

## Fix Focus Areas
- api/core/agent/base_agent_runner.py[52-66]



2. Tool output uses inputs 🐞 Bug ✓ Correctness
Description
BaseAgentRunner.organize_agent_history populates ToolPromptMessage.content from tool_inputs instead
of the tool observation payload, so reconstructed history feeds the LLM the tool arguments rather
than the tool results. This can break follow-up reasoning because prior tool call outputs are
missing/incorrect in the prompt history.
Code

api/core/agent/base_agent_runner.py[R499-505]

                            )
                            tool_call_response.append(
                                ToolPromptMessage(
-                                    content=tool_responses.get(tool, agent_thought.observation),
+                                    content=str(tool_inputs.get(tool, agent_thought.observation)),
                                    name=tool,
                                    tool_call_id=tool_call_id,
                                )
Evidence
In BaseAgentRunner.organize_agent_history, tool responses are parsed into tool_responses but the
ToolPromptMessage content is mistakenly set from tool_inputs; other agent runner code and the logs
service treat tool-role message content / observation as tool outputs.

api/core/agent/base_agent_runner.py[464-506]
api/core/agent/fc_agent_runner.py[272-280]
api/models/model.py[1936-1956]
api/services/agent_service.py[97-109]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`BaseAgentRunner.organize_agent_history()` builds `ToolPromptMessage` instances with `content` sourced from `tool_inputs` (the tool arguments) instead of `tool_responses` (the tool observations/outputs). This causes reconstructed conversation history to provide the LLM incorrect tool-role messages.

### Issue Context
Elsewhere, tool-role messages are constructed from tool invocation results (see `fc_agent_runner.py`), and `MessageAgentThought.observation` is treated as tool outputs (see `tool_outputs_dict` and `AgentService.get_agent_logs`).

### Fix Focus Areas
- api/core/agent/base_agent_runner.py[487-506]

### Suggested change
Replace:
- `content=str(tool_inputs.get(tool, agent_thought.observation))`

With something equivalent to:
- `content=str(tool_responses.get(tool, agent_thought.observation))`

Optionally keep the same fallback behavior if `tool_responses` is not a dict (defensive cast) and ensure `tool_responses` is always defined for the loop.




