
Conversation

@fenilfaldu
Contributor

📥 Pull Request

Issue: Token usage wasn't being extracted from ChatCompletion and Responses API responses.

The response object wasn't being properly converted to a dictionary due to an incorrect condition check. This PR adds proper handling for Pydantic models and improves the fallback logic.
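The fallback logic described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual wrapper code; `response_to_dict` is a hypothetical name:

```python
# Hypothetical sketch of the dictionary-conversion fallback described above.
# `response_to_dict` is an illustrative name, not the actual wrapper function.
def response_to_dict(return_value):
    """Convert an OpenAI SDK response object to a plain dict."""
    # Pydantic models (openai>=1.x response objects) are checked first,
    # before the generic dict check, per the ordering noted in review.
    if hasattr(return_value, "model_dump"):
        return return_value.model_dump()
    if isinstance(return_value, dict):
        return return_value
    return {}  # unknown shape: fall back to an empty dict
```

Token usage can then be read uniformly, e.g. `response_to_dict(resp).get("usage", {})`.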

Screenshot 2025-06-17 at 2 24 42 AM

@codecov

codecov bot commented Jun 16, 2025

Codecov Report

Attention: Patch coverage is 0% with 26 lines in your changes missing coverage. Please review.

Files with missing lines:

- agentops/instrumentation/openai/wrappers/chat.py — patch 0.00%, 22 lines missing ⚠️
- ...tops/instrumentation/openai/wrappers/embeddings.py — patch 0.00%, 4 lines missing ⚠️


@fenilfaldu fenilfaldu requested review from Dwij1704 and dot-agi June 16, 2025 21:02
@dot-agi dot-agi requested a review from Copilot June 16, 2025 21:03
Contributor

Copilot AI left a comment

Pull Request Overview

This pull request fixes the OpenAI token extraction issues by adjusting the response handling logic. Key changes include adding support for Pydantic models via model_dump in both embeddings and chat wrappers, and adding explicit checks to ensure tool_calls and function_call values are handled only when present.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

- agentops/instrumentation/openai/wrappers/embeddings.py — Adds fallback conditions to handle Pydantic models and dict responses
- agentops/instrumentation/openai/wrappers/chat.py — Introduces explicit checks for tool_calls and function_call, ensuring proper extraction and logging
Comments suppressed due to low confidence (3)

agentops/instrumentation/openai/wrappers/embeddings.py:65

  • The new branch for handling Pydantic models using model_dump is correctly placed before the dict check, which is good. Consider adding unit tests to confirm that the precedence between model_dump and dict conversions behaves as intended.
elif hasattr(return_value, "model_dump"):

agentops/instrumentation/openai/wrappers/chat.py:86

  • The explicit check for tool_calls helps avoid errors when the value is None; however, please verify that an empty list is handled appropriately as it will evaluate to false.
if tool_calls:  # Check if tool_calls is not None
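The reviewer's point here is that a plain truthiness guard skips an empty list as well as None. A minimal illustration (the `should_record` helper is hypothetical, used only to mirror the guard's behavior):

```python
# Minimal illustration: `if tool_calls:` is falsy for both None and [],
# so both cases are skipped. That is usually the intended behavior, since
# there is nothing to record in either case.
def should_record(tool_calls):
    """Mirror the `if tool_calls:` guard used in the wrapper."""
    return bool(tool_calls)
```

Here `should_record(None)` and `should_record([])` both return False, while a non-empty list returns True.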

agentops/instrumentation/openai/wrappers/chat.py:186

  • The guard for function_call adds a useful safeguard; ensure that an empty dictionary is handled as intended in downstream processing.
if function_call:  # Check if function_call is not None

@dot-agi dot-agi merged commit 66ae700 into main Jun 16, 2025
9 of 10 checks passed
@dot-agi dot-agi deleted the token_extraction branch June 16, 2025 21:14