fix: handle string responses in agent model_dump calls #64

Merged
jeremyeder merged 1 commit into main from fix/agent-model-dump-error on Sep 11, 2025

Conversation

@jeremyeder (Collaborator)

Summary

Fixes the critical error where agent analyses fail with "'str' object has no attribute 'model_dump'", which occurred for the STAFF_ENGINEER and ENGINEERING_MANAGER personas.

Root Cause

Settings.llm.astructured_predict() sometimes returns a plain string instead of the expected Pydantic model object. The code blindly called .model_dump() on every response, so a string response raised a runtime error.

Changes Made

  • Type checking in stream_structured_predict(): Added a safety check to detect when the LLM returns an unexpected type and wrap it properly (see the sketch under Technical Details)
  • Safe model_dump in event streaming: Added a hasattr() check before calling .model_dump()
  • Consistent error handling: Made both functions use the same safe approach
  • Better function reuse: Made event streaming use the safer stream_structured_predict() function

Technical Details

Before (Broken)

return final_response.model_dump()  # ❌ Crashes if final_response is a string

After (Fixed)

if hasattr(final_response, 'model_dump'):
    # Normal case: the LLM returned the expected Pydantic model
    result = final_response.model_dump()
else:
    # Fallback: the LLM returned a plain string; wrap it in the expected shape
    result = {"analysis": str(final_response), "persona": persona}

Test Results

  • Before Fix: STAFF_ENGINEER and ENGINEERING_MANAGER analyses failed with model_dump errors
  • After Fix: All agents handle both Pydantic models and string responses gracefully

Impact

  • Agent Analysis: Prevents runtime crashes for all agent personas
  • RFE Generation: Ensures workflow completes even with unexpected response types
  • Error Handling: Graceful degradation when LLM returns unexpected formats
  • Robustness: Makes the system more resilient to LLM API variations

🤖 Generated with Claude Code

- Add type checking before calling model_dump() method
- Wrap string responses in proper Pydantic model structure
- Use safer stream_structured_predict in event streaming
- Prevents 'str' object has no attribute 'model_dump' errors
- Fixes STAFF_ENGINEER and ENGINEERING_MANAGER streaming failures

Resolves the issue where Settings.llm.astructured_predict() sometimes
returns strings instead of Pydantic models, causing runtime errors when
the code attempts to call .model_dump() on the response.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@claude

claude bot commented Sep 11, 2025

Code Review Feedback

✅ Positive Aspects

Defensive Programming: Excellent use of hasattr() checks to handle unexpected response types gracefully. This is a robust approach to dealing with LLM API variability.

DRY Principle: Smart refactoring to make stream_structured_predict_with_events() use the safer stream_structured_predict() function instead of duplicating the LLM call.

Error Handling: Good fallback strategy when the LLM returns unexpected types: wrapping strings in the expected Pydantic model structure.

Clear Documentation: The PR description clearly explains the root cause and fix with before/after examples.

🔍 Code Quality Observations

Type Safety: The fix addresses a runtime type issue, but consider adding type hints for better IDE support.

Logging Consistency: Using print() statements for logging (lines 30, 35, 109). Consider using a proper logging framework instead.
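
A minimal sketch of that swap (the helper name and message text are illustrative, not the repo's code):

import logging

logger = logging.getLogger(__name__)

def log_unexpected_type(response) -> None:
    # Replaces the flagged print() calls; %s formatting defers string
    # construction until the record is actually emitted
    logger.warning("Unexpected LLM response type: %s", type(response))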

Magic Values: The fallback structure is hardcoded. Consider extracting to a helper method.

⚠️ Potential Issues

Constructor Assumptions (line 31): The code assumes output_cls(analysis=str(response), persona=persona) will work, but this might fail if the Pydantic model constructor requires different fields.
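
A hedged sketch of one guard for that case (the names follow the quoted call; the repo's actual model fields may differ):

from pydantic import BaseModel, ValidationError

def wrap_string_response(output_cls: type[BaseModel], response, persona: str):
    # Fails loudly if output_cls needs fields beyond analysis/persona,
    # instead of raising an opaque error deep in the stream
    try:
        return output_cls(analysis=str(response), persona=persona)
    except ValidationError as exc:
        raise TypeError(f"{output_cls.__name__} cannot wrap a string response") from exc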

Import Duplication: import asyncio appears both at the top (line 3) and inside the function (line 79). Remove the inline import.

Performance: The simulated streaming with await asyncio.sleep(0.01) adds 10ms per chunk. For long responses, this could add significant latency.
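
If that latency matters, one option (an assumption, not the PR's code) is to make the delay opt-in:

import asyncio
from typing import AsyncIterator, Iterable

async def stream_chunks(chunks: Iterable[str], delay: float = 0.0) -> AsyncIterator[str]:
    # delay=0.0 (the default) skips the sleep entirely; pass 0.01 only
    # where the simulated-typing effect is wanted
    for chunk in chunks:
        yield chunk
        if delay:
            await asyncio.sleep(delay)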

🧪 Test Coverage Concerns

The changes lack test coverage for:

  • LLM returning string responses instead of Pydantic models (see the sketch below)
  • Error handling paths in both functions
  • Edge cases where output_cls constructor fails
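
A hypothetical pytest sketch for the first gap, assuming pydantic v2 (AnalysisOutput and wrap_response are stand-ins mirroring the fix, not the repo's actual names):

from pydantic import BaseModel

class AnalysisOutput(BaseModel):
    analysis: str
    persona: str

def wrap_response(response, persona):
    # Mirror of the fix: dump models, wrap raw strings
    if hasattr(response, "model_dump"):
        return response.model_dump()
    return {"analysis": str(response), "persona": persona}

def test_string_response_is_wrapped():
    result = wrap_response("plain text from the LLM", "STAFF_ENGINEER")
    assert result == {"analysis": "plain text from the LLM", "persona": "STAFF_ENGINEER"}

def test_model_response_is_dumped():
    model = AnalysisOutput(analysis="ok", persona="ENGINEERING_MANAGER")
    assert wrap_response(model, "ENGINEERING_MANAGER") == {"analysis": "ok", "persona": "ENGINEERING_MANAGER"}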

📊 Overall Assessment

This is a solid defensive fix that addresses a critical runtime error. The approach is pragmatic and maintains backward compatibility while improving system reliability.

Status: ✅ Approve with minor suggestions

@jeremyeder jeremyeder merged commit daa90f0 into main Sep 11, 2025
3 checks passed
sallyom pushed a commit that referenced this pull request Oct 15, 2025
@bobbravo2 bobbravo2 added this to the v0.0.1 milestone Jan 30, 2026