Labels
- bot triaged — [Bot] This issue is triaged by ADK bot
- models — [Component] Issues related to model support
Description
Describe the bug
When using Azure models, the cached_content_token_count is logged once debug logging is enabled in litellm. In python-adk, however, the caching fields of the usage_metadata section are not populated.
To Reproduce
- Configure a model, e.g. LiteLlm(model="azure/gpt-4.1")
- Run an agent with that model twice, using a prompt of at least 2000 tokens
- On the second run, usage_metadata.cached_content_token_count should be populated, but it is currently None. (I do not want to spam this issue with 2000 tokens...)
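To make the expected behavior concrete, here is a minimal sketch of the mapping that appears to be missing: litellm reports cached tokens in the OpenAI-style usage payload (prompt_tokens_details.cached_tokens, visible in the debug logs), and ADK's usage_metadata should carry that value as cached_content_token_count. The helper function and the example payload below are assumptions for illustration, not ADK code.

```python
# Hypothetical mapping from a litellm/OpenAI-style usage dict to
# genai-style usage_metadata fields. The field names on the litellm side
# follow the payload visible in litellm debug logs; the function itself
# is an illustrative assumption, not part of google-adk.

def to_usage_metadata(litellm_usage: dict) -> dict:
    """Map an OpenAI-style usage dict to genai-style usage_metadata keys."""
    details = litellm_usage.get("prompt_tokens_details") or {}
    return {
        "prompt_token_count": litellm_usage.get("prompt_tokens"),
        "candidates_token_count": litellm_usage.get("completion_tokens"),
        "total_token_count": litellm_usage.get("total_tokens"),
        # The field that currently stays None in python-adk:
        "cached_content_token_count": details.get("cached_tokens"),
    }

# Example payload as it might appear in litellm debug logs on the
# second call with a >=2000-token prompt (numbers are made up):
usage = {
    "prompt_tokens": 2048,
    "completion_tokens": 120,
    "total_tokens": 2168,
    "prompt_tokens_details": {"cached_tokens": 1920},
}
print(to_usage_metadata(usage)["cached_content_token_count"])  # → 1920
```

With such a mapping in place, the cached token count from Azure's prompt caching would surface in usage_metadata instead of being dropped.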
Expected behavior
I would like to see usage_metadata filled with the correct caching data so that valid cost estimates can be made at runtime.
Screenshots
Nothing to screenshot here besides debugger sessions showing None objects.
Desktop (please complete the following information):
- OS: macOS
- Python version (python -V): 3.13.7
- ADK version (pip show google-adk): 1.15.1
Model Information:
- Are you using LiteLLM: Yes
- Which model is being used: mostly Azure GPT-4.1
Additional context
The problem is probably at these lines: