Conversation

@omerfarukeskin01 commented Oct 4, 2025

Fixes #3049

When using Azure via LiteLLM, cached token information that was visible in the debug logs was not propagated into ADK's usage metadata: usage_metadata.cached_content_token_count remained null because lite_llm.py only mapped prompt, completion, and total tokens and did not read provider-specific cache metrics (e.g. prompt_tokens_details.cached_tokens, list variants, cached_prompt_tokens, or cached_tokens).

This change adds a robust extractor for cached tokens from the LiteLLM usage payload and populates types.GenerateContentResponseUsageMetadata.cached_content_token_count in both the non-streaming and streaming paths. As a result, when the same large prompt is run a second time (a cache hit), cached_content_token_count is populated, enabling accurate runtime cost estimates.

The updates are in src/google/adk/models/lite_llm.py, with accompanying unit tests in tests/unittests/models/test_litellm.py. The change is backward compatible (#non-breaking).
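For reference, here is a minimal sketch of the kind of extractor described above. The helper name _extract_cached_prompt_tokens comes from this PR, but the body below is illustrative only and assumes the usage payload may arrive as either an object or a plain dict; the actual implementation in lite_llm.py may differ.

```python
def _extract_cached_prompt_tokens(usage) -> int:
  """Best-effort extraction of cached prompt tokens from a LiteLLM usage payload.

  Providers report cache hits under different keys, e.g.
  prompt_tokens_details.cached_tokens (sometimes a list of detail entries),
  cached_prompt_tokens, or cached_tokens.
  """
  if usage is None:
    return 0

  def _get(obj, key):
    # Usage payloads may be pydantic-style objects or plain dicts.
    if isinstance(obj, dict):
      return obj.get(key)
    return getattr(obj, key, None)

  # Shape 1: usage.prompt_tokens_details.cached_tokens, where
  # prompt_tokens_details may itself be a list of detail entries.
  details = _get(usage, 'prompt_tokens_details')
  entries = details if isinstance(details, list) else [details]
  for entry in entries:
    if entry is None:
      continue
    cached = _get(entry, 'cached_tokens')
    if isinstance(cached, int) and cached > 0:
      return cached

  # Shapes 2 and 3: flat provider-specific fields on the usage object itself.
  for key in ('cached_prompt_tokens', 'cached_tokens'):
    cached = _get(usage, key)
    if isinstance(cached, int) and cached > 0:
      return cached

  return 0
```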


Summary of Changes

Hello @omerfarukeskin01, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue where cached token information from LiteLLM responses, particularly when using Azure, was not being correctly propagated into the ADK's usage metadata. By introducing a dedicated field and a robust extraction mechanism, this change ensures that cached_content_token_count is accurately populated. This enhancement is crucial for obtaining precise runtime cost estimations, especially when prompts are served from a cache, thereby improving the accuracy of usage tracking.

Highlights

  • New cached_prompt_tokens field: The UsageMetadataChunk class now includes a cached_prompt_tokens field to store the count of tokens served from the provider's cache, initialized to zero.
  • Robust cached token extraction: A new utility function, _extract_cached_prompt_tokens, has been added. This function intelligently parses and retrieves cached token counts from various LiteLLM usage payload formats, handling different provider-specific structures.
  • Populate cached_content_token_count: The cached_content_token_count field within types.GenerateContentResponseUsageMetadata is now correctly populated for both non-streaming and streaming API calls, utilizing the newly extracted cached token information (a rough sketch of this mapping follows the list below).
  • Unit tests for cached tokens: New unit tests have been introduced to specifically verify the accurate propagation of cached token counts in both non-streaming and streaming scenarios, ensuring the functionality works as expected across different LiteLLM response shapes.
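To make the above concrete, the extracted count might be carried into the response metadata roughly as follows. This is a sketch that reuses the _extract_cached_prompt_tokens helper shown earlier and assumes the LiteLLM usage object exposes prompt_tokens, completion_tokens, and total_tokens attributes; the real wiring in lite_llm.py (including the streaming path via UsageMetadataChunk) may differ.

```python
from google.genai import types


def _to_usage_metadata(usage) -> types.GenerateContentResponseUsageMetadata:
  # Illustrative mapping only; the keyword names are the genai metadata fields,
  # the right-hand side reads the usual LiteLLM usage attributes.
  cached = _extract_cached_prompt_tokens(usage)
  return types.GenerateContentResponseUsageMetadata(
      prompt_token_count=getattr(usage, 'prompt_tokens', 0),
      candidates_token_count=getattr(usage, 'completion_tokens', 0),
      total_token_count=getattr(usage, 'total_tokens', 0),
      # Only set on a cache hit, so existing behavior stays backward compatible.
      cached_content_token_count=cached or None,
  )
```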
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot added the labels "bot triaged [Bot] This issue is triaged by ADK bot" and "models [Component] Issues related to model support" on Oct 4, 2025

adk-bot commented Oct 4, 2025

Response from ADK Triaging Agent

Hello @omerfarukeskin01, thank you for your contribution!

To help us with the review process, could you please create a GitHub issue that this PR addresses and link it in the description? According to our contribution guidelines, all bug fixes and feature enhancements should have an associated issue.

Thanks!


@gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses the missing propagation of cached token counts from LiteLLM responses into the ADK's usage metadata. The new extractor function is robust, handling multiple data formats for cached tokens from various providers. The changes are well-implemented for both streaming and non-streaming modes and are accompanied by thorough unit tests. I have one minor suggestion to improve the clarity of the new extractor function.

@omerfarukeskin01 (Author) commented:

Linked to existing issue #3049.
All tests passing locally. Ready for review and workflow approval.
