fix(langchain): Extract token usage from message.usage_metadata for streaming responses #127
Open

NikitaVoitov wants to merge 2 commits into signalfx:main
Conversation
fix(langchain): Extract token usage from message.usage_metadata for streaming responses

Token usage attributes (gen_ai.usage.input_tokens, gen_ai.usage.output_tokens) were missing when LLM streaming is enabled, because the code only checked llm_output.token_usage. In streaming mode, LangChain puts token counts in message.usage_metadata instead.

This fix adds priority-based extraction:
1. First check message.usage_metadata (streaming mode)
2. Fall back to llm_output.token_usage (non-streaming mode)

Adds two helper functions:
- _extract_token_usage_from_generations(): extracts from usage_metadata
- _extract_token_usage_from_llm_output(): extracts from llm_output

Tests added:
- test_token_usage_extraction_streaming_mode
- test_token_usage_extraction_non_streaming_mode
- test_token_usage_streaming_priority

Affects: OpenAI, Anthropic, Google, and other providers using streaming with usage_metadata
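To make the two locations concrete, here is a minimal sketch of where LangChain puts the counts in each mode. The field values are illustrative, not taken from this PR:

```python
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, LLMResult

# Streaming: usage is aggregated onto the final message's usage_metadata.
streaming = LLMResult(
    generations=[[ChatGeneration(message=AIMessage(
        content="...",
        usage_metadata={"input_tokens": 40, "output_tokens": 57, "total_tokens": 97},
    ))]],
    llm_output={},  # token_usage absent here, hence the missing attributes
)

# Non-streaming: usage is reported in llm_output with OpenAI-style keys.
non_streaming = LLMResult(
    generations=[[ChatGeneration(message=AIMessage(content="..."))]],
    llm_output={"token_usage": {"prompt_tokens": 40, "completion_tokens": 57}},
)
```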
Before fix (trace_streaming_before_fix.json):
- Trace ID: 77432872a967d4321701ce1f22032d8c
- gen_ai.usage.input_tokens: MISSING
- gen_ai.usage.output_tokens: MISSING

After fix (trace_streaming_after_fix.json):
- Trace ID: 303595c0d1031acdae9bacd46083d87b
- gen_ai.usage.input_tokens: 40
- gen_ai.usage.output_tokens: 57
I have read the CLA Document and I hereby sign the CLA. You can retrigger this bot by commenting `recheck` in this Pull Request. Posted by the CLA Assistant Lite bot.
zhirafovod (Contributor) requested changes on Jan 15, 2026:
@NikitaVoitov, thank you for creating the PR!

Can you add the real app which you used to get these traces? I am specifically trying to understand when

```
usage.get("prompt_tokens") or usage.get("input_tokens")
```

can be the use case?
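For reference, a streaming app of the kind that could produce such traces can be as small as the sketch below. The model name and the `stream_usage` flag are assumptions about the reproduction setup, not taken from this PR:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", streaming=True, stream_usage=True)

final = None
for chunk in llm.stream("Write a haiku about telemetry."):
    # Chunks accumulate; the summed chunk carries usage_metadata
    # once the provider sends its usage frame.
    final = chunk if final is None else final + chunk

print(final.usage_metadata)
# e.g. {'input_tokens': 40, 'output_tokens': 57, 'total_tokens': 97}
```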
Summary
Fixes token usage extraction to support streaming mode by checking `message.usage_metadata` in addition to `llm_output`. This enables accurate token tracking for OpenAI, Anthropic, and other providers when streaming is enabled.

Fixes #126
The Bug (Before)
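A minimal sketch of the pre-fix behavior described above; `self._span` and the exact method body are assumptions, since the original snippet is not reproduced here:

```python
def on_llm_end(self, response, **kwargs):
    # Pre-fix: only llm_output is consulted. In streaming mode
    # llm_output has no token_usage, so gen_ai.usage.input_tokens
    # and gen_ai.usage.output_tokens are never set.
    usage = (response.llm_output or {}).get("token_usage")
    if usage:
        self._span.set_attribute("gen_ai.usage.input_tokens", usage.get("prompt_tokens"))
        self._span.set_attribute("gen_ai.usage.output_tokens", usage.get("completion_tokens"))
```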
The Fix (After)
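A sketch of the fixed flow, using the helper names listed under Changes below; the bodies are reconstructed from the PR description, not copied from the diff, and `self._span` again stands in for the handler's span bookkeeping:

```python
def _extract_token_usage_from_generations(response):
    # Streaming mode: LangChain aggregates counts onto the final
    # AIMessage as usage_metadata.
    for generation_list in response.generations:
        for generation in generation_list:
            metadata = getattr(getattr(generation, "message", None), "usage_metadata", None)
            if metadata:
                return {
                    "input_tokens": metadata.get("input_tokens"),
                    "output_tokens": metadata.get("output_tokens"),
                }
    return None


def _extract_token_usage_from_llm_output(response):
    # Non-streaming mode: providers report OpenAI-style keys in
    # llm_output["token_usage"]; some report input_/output_tokens
    # instead, which is why both key names are checked.
    usage = (response.llm_output or {}).get("token_usage")
    if not usage:
        return None
    return {
        "input_tokens": usage.get("prompt_tokens") or usage.get("input_tokens"),
        "output_tokens": usage.get("completion_tokens") or usage.get("output_tokens"),
    }


def on_llm_end(self, response, **kwargs):
    # Method on the callback handler.
    # Priority 1: streaming usage_metadata; priority 2: llm_output.
    usage = (_extract_token_usage_from_generations(response)
             or _extract_token_usage_from_llm_output(response))
    if usage:
        self._span.set_attribute("gen_ai.usage.input_tokens", usage["input_tokens"])
        self._span.set_attribute("gen_ai.usage.output_tokens", usage["output_tokens"])
```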
Changes
callback_handler.py
Added two helper functions and updated token extraction logic:
1. New `_extract_token_usage_from_generations()` helper
2. New `_extract_token_usage_from_llm_output()` helper
3. Updated `on_llm_end()` with priority-based extraction

Token Source Priority

| Priority | Source | Keys |
| --- | --- | --- |
| 1 | `message.usage_metadata` | `input_tokens`, `output_tokens` |
| 2 | `llm_output.token_usage` | `prompt_tokens`, `completion_tokens` |

Testing
- `test_token_usage_extraction_streaming_mode`: verifies `message.usage_metadata` extraction
- `test_token_usage_extraction_non_streaming_mode`: verifies `llm_output` extraction
- `test_token_usage_streaming_priority`: verifies the streaming source takes precedence
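As an illustration, the streaming-mode test plausibly looks like the sketch below; the `handler` and `span` fixtures and the exact assertions are assumptions, not the actual test body:

```python
import uuid

from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, LLMResult


def test_token_usage_extraction_streaming_mode(handler, span):
    # Streaming case: llm_output carries no token_usage; the counts
    # live on the aggregated message's usage_metadata instead.
    message = AIMessage(
        content="hi",
        usage_metadata={"input_tokens": 40, "output_tokens": 57, "total_tokens": 97},
    )
    result = LLMResult(
        generations=[[ChatGeneration(message=message)]],
        llm_output={},
    )
    handler.on_llm_end(result, run_id=uuid.uuid4())

    assert span.attributes["gen_ai.usage.input_tokens"] == 40
    assert span.attributes["gen_ai.usage.output_tokens"] == 57
```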
Evidence

Live test showing the fix works:

| `response.usage_metadata` | `gen_ai.usage.*` |
| --- | --- |
| input_tokens: 31, output_tokens: 5 | 31, 5 |
| input_tokens: 40, output_tokens: 55 | |
| input_tokens: 40, output_tokens: 57 | 40, 57 |

Trace Evidence:
- Before fix: `77432872a967d4321701ce1f22032d8c`
- After fix: `303595c0d1031acdae9bacd46083d87b`

Files Changed
- `instrumentation-genai/opentelemetry-instrumentation-langchain/src/opentelemetry/instrumentation/langchain/callback_handler.py`: added `_extract_token_usage_from_generations()` and `_extract_token_usage_from_llm_output()`, updated `on_llm_end()`
- `instrumentation-genai/opentelemetry-instrumentation-langchain/tests/test_callback_handler_agent.py`: added the three tests listed above