
Commit 7e8d90b

[Cursor] fix: Correct reasoning_tokens access for o1 model (#132)
Updated tools/llm_api.py to correctly access `response.usage.reasoning_tokens` instead of the non-existent `response.usage.completion_tokens_details.reasoning_tokens`. This resolves the unit test failure for `test_query_o1_model` on the multi-agent branch.
1 parent b74684d commit 7e8d90b

File tree

1 file changed (+1, −1)


tools/llm_api.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -199,7 +199,7 @@ def query_llm(prompt: str, client=None, model=None, provider="openai", image_pat
                 prompt_tokens=response.usage.prompt_tokens,
                 completion_tokens=response.usage.completion_tokens,
                 total_tokens=response.usage.total_tokens,
-                reasoning_tokens=response.usage.completion_tokens_details.reasoning_tokens if model.lower().startswith("o") else None  # Only checks if model starts with "o", e.g., o1, o1-preview, o1-mini, o3, etc. Can update this logic to specific models in the future.
+                reasoning_tokens=response.usage.reasoning_tokens if hasattr(response.usage, 'reasoning_tokens') and model.lower().startswith("o") else None
             )

             # Calculate cost
```
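The guarded-attribute pattern from the fix can be sketched in isolation. This is a minimal illustration, not the actual `query_llm` implementation: `extract_reasoning_tokens` is a hypothetical helper, and `SimpleNamespace` stands in for the SDK's `response.usage` object.

```python
from types import SimpleNamespace

def extract_reasoning_tokens(usage, model):
    """Return reasoning_tokens for o-series models, else None.

    Mirrors the guarded access in the fix: the attribute is read only
    when it actually exists on the usage object AND the model name
    starts with "o" (e.g. o1, o1-mini, o3).
    """
    if hasattr(usage, "reasoning_tokens") and model.lower().startswith("o"):
        return usage.reasoning_tokens
    return None

# An o1-style usage object that reports reasoning tokens
usage_o1 = SimpleNamespace(prompt_tokens=10, completion_tokens=50, reasoning_tokens=40)
print(extract_reasoning_tokens(usage_o1, "o1"))      # 40

# A usage object without the field (non-reasoning model)
usage_gpt = SimpleNamespace(prompt_tokens=10, completion_tokens=50)
print(extract_reasoning_tokens(usage_gpt, "gpt-4o"))  # None
```

The `hasattr` guard is what makes the access safe across SDK versions and models: missing attributes yield `None` instead of raising `AttributeError`, which is the failure mode the original `completion_tokens_details` path hit in `test_query_o1_model`.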
