[doximity] add more info for error response type mismatch #197
vivek-athina merged 3 commits into main from
Conversation
Walkthrough
The update enhances the error messages raised when an LLM response fails type or format validation, appending the received response type and a preview of its content.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant PromptExecution
    participant LLMService
    User->>PromptExecution: execute / execute_async / execute_stream
    PromptExecution->>LLMService: Send prompt
    LLMService-->>PromptExecution: Return response
    alt Response type/format invalid
        PromptExecution-->>User: Raise error with type and preview of response
    else Response valid
        PromptExecution-->>User: Return processed result
    end
```
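The validation step in the diagram above can be sketched as follows. This is an illustrative reconstruction of the error-enrichment pattern the PR introduces, not the actual code from `athina/steps/llm.py`; the function and parameter names here are hypothetical.

```python
# Sketch of the "raise error with type and preview" pattern shown in the
# sequence diagram. validate_response and expected_type are illustrative
# names, not the real athina implementation.

def validate_response(llmresponse, expected_type=dict):
    """Return llmresponse if it matches expected_type, else raise with context."""
    if not isinstance(llmresponse, expected_type):
        error = f"Response type mismatch: expected {expected_type.__name__}"
        # Append the received type and a bounded preview, as the PR does.
        error += (
            f" | Received type: {type(llmresponse).__name__} | "
            f"Preview: {repr(llmresponse)[:300]}"
        )
        raise TypeError(error)
    return llmresponse
```

The `[:300]` slice keeps the preview bounded so a very large malformed response does not flood logs or error traces.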
Actionable comments posted: 0
🔭 Outside diff range comments (2)
athina/steps/llm.py (2)
358-359: ⚠️ Potential issue — Address the unused variable warning.

The code comment "TODO: Why is api_formatted_messages not used?" indicates awareness of the issue, but it should be fixed to silence the linter warning. Either use the variable or remove it.

```diff
- # TODO: Why is api_formatted_messages not used?
- api_formatted_messages = [msg.to_api_format() for msg in messages]
```

Alternatively, if it needs to be kept for future reference:

```diff
- # TODO: Why is api_formatted_messages not used?
- api_formatted_messages = [msg.to_api_format() for msg in messages]
+ # TODO: Why is this conversion not used? Keeping for reference
+ # api_formatted_messages = [msg.to_api_format() for msg in messages]
```

🧰 Tools
🪛 Ruff (0.8.2)

359-359: Local variable `api_formatted_messages` is assigned to but never used. Remove assignment to unused variable `api_formatted_messages` (F841)
🪛 GitHub Actions: Flake8, Pyflakes and Compileall Linter
[warning] 4-359: Unused import 'athina.helpers.json.JsonExtractor' and local variable 'api_formatted_messages' is assigned to but never used.
1-5: ⚠️ Potential issue — Remove unused import to fix linter warning.

The static analysis shows that `JsonExtractor` is imported but never used in this file.

```diff
 from pydantic import BaseModel
 from typing import List, Dict, Any, Optional, Union
-from athina.helpers.json import JsonExtractor
 from athina.interfaces.model import Model
 from athina.steps.base import Step
```

🧰 Tools

🪛 Ruff (0.8.2)

4-4: `athina.helpers.json.JsonExtractor` imported but unused. Remove unused import: `athina.helpers.json.JsonExtractor` (F401)
🧹 Nitpick comments (1)
athina/steps/llm.py (1)
512-515: Consider using `final_response` for type information in the streaming context.

While this enhancement is valuable, in the streaming context you are using `llmresponse`, which contains only the current chunk; `type(final_response)` would more accurately report the type of the complete response that failed validation.

```diff
-                error += (
-                    f" | Received type: {type(llmresponse).__name__} | "
-                    f"Preview: {repr(llmresponse)[:300]}"
-                )
+                error += (
+                    f" | Received type: {type(final_response).__name__} | "
+                    f"Preview: {repr(final_response)[:300]}"
+                )
```
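The distinction matters because in streaming, chunks are accumulated into a complete response before validation. A minimal sketch of that flow, using hypothetical names (`consume_stream` is not the actual athina function), assuming the stream yields text chunks that should parse as JSON:

```python
import json

# Sketch: accumulate streamed chunks into final_response, then validate
# the complete response rather than the last chunk (illustrative only).

def consume_stream(chunks, ):
    final_response = "".join(chunks)  # each chunk alone may be unparseable
    try:
        return json.loads(final_response)
    except ValueError:
        # Report the type and preview of the *complete* response, not the
        # current chunk, matching the nitpick above.
        raise ValueError(
            "Failed to parse streamed response"
            f" | Received type: {type(final_response).__name__}"
            f" | Preview: {repr(final_response)[:300]}"
        )
```

Reporting on the accumulated value means the error preview shows what the parser actually saw, which is usually more useful than an arbitrary mid-stream fragment.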
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
athina/steps/llm.py (3 hunks)
🧰 Additional context used
🪛 GitHub Actions: Flake8, Pyflakes and Compileall Linter
athina/steps/llm.py
[warning] 4-359: Unused import 'athina.helpers.json.JsonExtractor' and local variable 'api_formatted_messages' is assigned to but never used.
🪛 GitHub Actions: MyPy static type checker
athina/steps/llm.py
[error] 326-465: mypy errors: incompatible return value types (got `StepResult`, expected `str` or `dict`); missing attribute `chat_completion_async`; argument `status` to `_create_step_result` has incompatible type `Literal["in_progress"]` [return-value, attr-defined, arg-type]
🔇 Additional comments (3)
athina/steps/llm.py (3)
322-325: Good addition of context to error messages.

Adding the received type and a preview of the response will make it much easier to debug type mismatch issues when they occur.
408-411: Good enhancement for error diagnostics.

This change maintains consistency with the error handling in the `execute` method, making debugging easier for the async version as well.
361-362: Details

❓ Verification inconclusive

Fix type checking error for async method.

The pipeline failure shows a `Missing attribute 'chat_completion_async'` error, suggesting that the async method might be calling a non-existent function. Verify that `self.llm_service` implements this method.
🏁 Script executed:

```shell
#!/bin/bash
# Check if AbstractLlmService or its implementations define chat_completion_async
rg -A 2 -B 2 'def chat_completion_async' --type py
```

Length of output: 52

```shell
#!/bin/bash
# Locate all service classes to inspect available methods
rg -A5 -B5 'class \w*LlmService' --type py
# Check for the synchronous chat_completion implementation
rg -A2 -B2 'def chat_completion' --type py
# Re-check for any async chat_completion_async definitions
rg -A2 -B2 'def chat_completion_async' --type py
```

```shell
#!/bin/bash
# Find all LlmService classes (including AbstractLlmService and implementations)
rg -n "class .*LlmService" --type py
# Search for any definition or override of chat_completion (sync or async)
rg -n "chat_completion" --type py
```

```shell
#!/bin/bash
# 1. Locate AbstractLlmService definition and list its methods
rg -n "class AbstractLlmService" --type py
rg -A20 -B1 "class AbstractLlmService" --type py
# 2. Find all concrete LlmService implementations and inspect them
rg -n "class .*LlmService" --type py
rg -A20 -B1 "class .*LlmService" --type py
# 3. Search for any async chat_completion method definitions
rg -n "def chat_completion_async" --type py
```
Ensure `chat_completion_async` Exists on All LLM Service Classes

It appears there is no `chat_completion_async` method defined on `AbstractLlmService` or any concrete `*LlmService` implementation. Please verify and address the discrepancy:

• Confirm whether `AbstractLlmService` declares `async def chat_completion_async(…)`.
• For each concrete LLM service (e.g. `OpenAIService`, `AzureLlmService`), ensure it implements or overrides `chat_completion_async`.
• If the intended method is synchronous (`chat_completion`), update the call site accordingly or add an async alias in your base/service classes.
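The "async alias" option from the last bullet can be sketched like this. The class shapes are assumptions for illustration (the real `AbstractLlmService` and its subclasses may look quite different); the point is only that a default async wrapper on the base class makes `chat_completion_async` available everywhere without touching each concrete service.

```python
import asyncio
from abc import ABC, abstractmethod

class AbstractLlmService(ABC):
    # Assumed base-class shape: each concrete service implements the sync call.
    @abstractmethod
    def chat_completion(self, messages, **kwargs) -> str: ...

    async def chat_completion_async(self, messages, **kwargs) -> str:
        # Default async alias: run the sync implementation in a worker thread
        # so existing services gain the async method for free.
        return await asyncio.to_thread(self.chat_completion, messages, **kwargs)

class EchoService(AbstractLlmService):
    # Toy concrete service for demonstration only.
    def chat_completion(self, messages, **kwargs) -> str:
        return messages[-1]["content"]
```

A thread-based wrapper is a stopgap: services whose SDKs offer native async clients should override `chat_completion_async` with a true non-blocking implementation.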