fix: handle string responses in agent model_dump calls #64
Conversation
- Add type checking before calling model_dump() method
- Wrap string responses in proper Pydantic model structure
- Use safer stream_structured_predict in event streaming
- Prevents 'str' object has no attribute 'model_dump' errors
- Fixes STAFF_ENGINEER and ENGINEERING_MANAGER streaming failures

Resolves the issue where Settings.llm.astructured_predict() sometimes returns strings instead of Pydantic models, causing runtime errors when the code attempts to call .model_dump() on the response.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Code Review Feedback

✅ Positive Aspects

- **Defensive Programming:** Excellent use of `hasattr()` checks to handle unexpected response types gracefully. This is a robust approach to dealing with LLM API variability.
- **DRY Principle:** Smart refactoring to make `stream_structured_predict_with_events()` use the safer `stream_structured_predict()` function instead of duplicating the LLM call.
- **Error Handling:** Good fallback strategy when the LLM returns unexpected types - wrapping strings in the expected Pydantic model structure.
- **Clear Documentation:** The PR description clearly explains the root cause and fix with before/after examples.

🔍 Code Quality Observations

- **Type Safety:** The fix addresses a runtime type issue, but consider adding type hints for better IDE support.
- **Logging Consistency:** Using print() statements for logging (lines 30, 35, 109). Consider using a proper logging framework instead.
- **Magic Values:** The fallback structure is hardcoded. Consider extracting it to a helper method, as sketched below.
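A minimal sketch of how the last two suggestions might combine; the `wrap_raw_response` helper, the logger setup, and the `summary` field are illustrative assumptions, not code from this PR:

```python
import logging

from pydantic import BaseModel

logger = logging.getLogger(__name__)


def wrap_raw_response(output_cls: type[BaseModel], raw: str) -> BaseModel:
    # One reusable fallback instead of a hardcoded structure at each call
    # site, logged via the logging framework rather than print().
    logger.warning(
        "LLM returned %s instead of %s; wrapping raw text",
        type(raw).__name__,
        output_cls.__name__,
    )
    return output_cls(summary=raw)  # assumes the model has a 'summary' field
```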
Summary
Fixes the critical error where agent analyses fail with "'str' object has no attribute 'model_dump'", which was occurring for the STAFF_ENGINEER and ENGINEERING_MANAGER personas.
Root Cause
The issue occurs because `Settings.llm.astructured_predict()` sometimes returns a string instead of the expected Pydantic model object. The code was blindly calling `.model_dump()` on all responses, causing runtime errors when strings were returned.

Changes Made
- `stream_structured_predict()`: Added a safety check to detect when the LLM returns unexpected types and wrap them properly
- Added a `hasattr()` check before calling `.model_dump()`
- `stream_structured_predict_with_events()`: Refactored to call the safer `stream_structured_predict()` function instead of duplicating the LLM call

Technical Details
Before (Broken)
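The PR's original snippet isn't reproduced on this page; the following is a minimal sketch of the failure mode, with `AnalysisResult`, `prompt`, and `analyze()` as illustrative stand-ins:

```python
from llama_index.core import PromptTemplate, Settings
from pydantic import BaseModel


class AnalysisResult(BaseModel):  # stand-in for the agent's output model
    summary: str


prompt = PromptTemplate("Analyze this change: {change}")


async def analyze(change: str) -> dict:
    # Blindly assumes astructured_predict() returns a Pydantic model...
    result = await Settings.llm.astructured_predict(
        AnalysisResult, prompt, change=change
    )
    # ...so this raises "'str' object has no attribute 'model_dump'"
    # whenever the LLM hands back a plain string.
    return result.model_dump()
```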
After (Fixed)
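And the guarded version described above, reusing the stand-in definitions from the previous sketch:

```python
async def analyze(change: str) -> dict:
    result = await Settings.llm.astructured_predict(
        AnalysisResult, prompt, change=change
    )
    # Only call model_dump() on objects that actually support it.
    if hasattr(result, "model_dump"):
        return result.model_dump()
    # Fallback: wrap the raw string in the expected Pydantic model structure.
    return AnalysisResult(summary=str(result)).model_dump()
```

Routing `stream_structured_predict_with_events()` through this safer path keeps the fallback logic in one place rather than duplicating it per streaming variant.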
Test Results
✅ Before Fix: STAFF_ENGINEER and ENGINEERING_MANAGER analyses failed with model_dump errors
✅ After Fix: All agents handle both Pydantic models and string responses gracefully
Impact
🤖 Generated with Claude Code