work on chat span #4696
Conversation
…pirker/langchain1 (1fbdabe) into shellmayr/feat/langchain-integration-update
❌ 20 Tests Failed:
View the top 3 failed test(s) by shortest run time
To view more test analytics, go to the Test Analytics Dashboard.
for key, attribute in DATA_FIELDS.items():
    if key in all_params:
        set_data_normalized(span, attribute, all_params[key], unpack=False)
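The loop above copies recognized invocation parameters onto the span. A minimal, self-contained sketch of that pattern follows; `DATA_FIELDS`, the `Span` class, `set_data_normalized`, and the `gen_ai.request.*` attribute names are simplified stand-ins for illustration, not the SDK's actual implementation:

```python
# Hypothetical mapping of request params to span attribute names
# (assumed names, loosely modeled on GenAI conventions).
DATA_FIELDS = {
    "temperature": "gen_ai.request.temperature",
    "max_tokens": "gen_ai.request.max_tokens",
    "top_p": "gen_ai.request.top_p",
}

class Span:
    """Toy span that just stores attributes in a dict."""
    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value

def set_data_normalized(span, attribute, value, unpack=False):
    # The real helper also normalizes nested values; this stub only sets them.
    span.set_data(attribute, value)

def record_params(span, all_params):
    # Only keys listed in DATA_FIELDS are recorded; unknown params
    # (like "model" below) are ignored.
    for key, attribute in DATA_FIELDS.items():
        if key in all_params:
            set_data_normalized(span, attribute, all_params[key], unpack=False)

span = Span()
record_params(span, {"temperature": 0.2, "max_tokens": 256, "model": "x"})
print(span.data)
# {'gen_ai.request.temperature': 0.2, 'gen_ai.request.max_tokens': 256}
```

The allowlist-style mapping means new parameters are only recorded once they are explicitly added to `DATA_FIELDS`, which keeps arbitrary user-supplied kwargs off the span.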
    input_tokens=input_tokens,
    output_tokens=output_tokens,
    total_tokens=total_tokens,
)
Bug: Token Usage Metrics Inflated Due to Integration Change
The Langchain integration now unconditionally records token usage on both chat and generic LLM spans. The no_collect_tokens guard, which previously prevented double-counting for models that already have a dedicated Sentry integration, was removed, so token usage for those models is reported twice and the metrics are inflated in production.
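The fix this report implies is to restore a guard before recording usage. A hedged sketch of that pattern follows; the model names, span representation, and `record_token_usage` body are hypothetical, with only the `no_collect_tokens` idea taken from the comment above:

```python
# Models with a dedicated Sentry integration (assumed names) already record
# token usage in that integration, so the Langchain instrumentation should
# skip them to avoid double-counting.
NO_COLLECT_TOKEN_MODELS = {"openai-chat", "anthropic-chat"}

def should_collect_tokens(model_type):
    """Return True only for models without their own integration."""
    return model_type not in NO_COLLECT_TOKEN_MODELS

def record_token_usage(span, input_tokens, output_tokens, total_tokens):
    # Toy span: a plain dict standing in for real span attributes.
    span["gen_ai.usage.input_tokens"] = input_tokens
    span["gen_ai.usage.output_tokens"] = output_tokens
    span["gen_ai.usage.total_tokens"] = total_tokens

# A custom model with no dedicated integration: usage is recorded here.
custom_span = {}
if should_collect_tokens("custom-llm"):
    record_token_usage(custom_span, input_tokens=10, output_tokens=5, total_tokens=15)

# An OpenAI chat model: the guard skips recording, leaving it to the
# dedicated integration, so nothing is double-counted.
openai_span = {}
if should_collect_tokens("openai-chat"):
    record_token_usage(openai_span, input_tokens=10, output_tokens=5, total_tokens=15)

print(custom_span)   # usage recorded once
print(openai_span)   # empty: skipped by the guard
```

Reinstating a check like `should_collect_tokens` at the single call site of `record_token_usage` would restore the previous behavior without touching the chat/generic span split.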
Recording the chat response will follow soon.