Add response token count logic to Bedrock instrumentation. #1504
base: feature-llm-token-counts
Conversation
✅ MegaLinter analysis: Success. See detailed reports in MegaLinter artifacts.
Codecov Report
❌ Patch coverage is 
Additional details and impacted files

@@                     Coverage Diff                     @@
##             feature-llm-token-counts    #1504   +/-   ##
===========================================================
  Coverage                            ?   81.65%           
===========================================================
  Files                               ?      207           
  Lines                               ?    24037           
  Branches                            ?     3802           
===========================================================
  Hits                                ?    19628           
  Misses                              ?     3132           
  Partials                            ?     1277           

☔ View full report in Codecov by Sentry.
I said this in a comment too, but I'm going to stop reviewing this for the moment.
Could you, for the sake of being able to easily identify what's changed here, re-order these changes in the file so that they end up being diffed against the original function? Right now it's really hard to compare and see what's changed because the diff isn't showing a 1:1 comparison. I know it's a pain, but in a larger PR like this with lots of files changed it makes it a ton easier to review.
bedrock_attrs["input"] = request_body.get("inputText")

def extract_bedrock_titan_text_model_streaming_response(response_body, bedrock_attrs):
Could you, for the sake of being able to easily identify what's changed here, re-order these changes in the file so that they end up being diffed against the original function? Right now it's really hard to compare and see what's changed because the diff isn't showing a 1:1 comparison. I know it's a pain, but in a larger PR like this with lots of files changed it makes it a ton easier to review.
Just a couple of minor suggestions and one question; otherwise looks good.
model = bedrock_attrs.get("model", None)
input_ = bedrock_attrs.get("input")

response_total_tokens = bedrock_attrs.get("response.usage.total_tokens", None)
None is redundant.
Suggested change:
- response_total_tokens = bedrock_attrs.get("response.usage.total_tokens", None)
+ response_total_tokens = bedrock_attrs.get("response.usage.total_tokens")
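For reference (not part of the diff): Python's `dict.get` already returns `None` when the key is missing, so the explicit `None` default is a no-op. A minimal illustration:

```python
attrs = {}

# dict.get falls back to None by default, so passing None explicitly changes nothing.
assert attrs.get("response.usage.total_tokens") is None
assert attrs.get("response.usage.total_tokens", None) is None
```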
response_id = bedrock_attrs.get("response_id", None)
model = bedrock_attrs.get("model", None)

response_prompt_tokens = bedrock_attrs.get("response.usage.prompt_tokens", None)
Suggested change:
- response_prompt_tokens = bedrock_attrs.get("response.usage.prompt_tokens", None)
+ response_prompt_tokens = bedrock_attrs.get("response.usage.prompt_tokens")
model = bedrock_attrs.get("model", None)

response_prompt_tokens = bedrock_attrs.get("response.usage.prompt_tokens", None)
response_completion_tokens = bedrock_attrs.get("response.usage.completion_tokens", None)
Suggested change:
- response_completion_tokens = bedrock_attrs.get("response.usage.completion_tokens", None)
+ response_completion_tokens = bedrock_attrs.get("response.usage.completion_tokens")
response_prompt_tokens = bedrock_attrs.get("response.usage.prompt_tokens", None)
response_completion_tokens = bedrock_attrs.get("response.usage.completion_tokens", None)
response_total_tokens = bedrock_attrs.get("response.usage.total_tokens", None)
Suggested change:
- response_total_tokens = bedrock_attrs.get("response.usage.total_tokens", None)
+ response_total_tokens = bedrock_attrs.get("response.usage.total_tokens")
    len(input_message_list) + len(output_message_list)
) or None  # If 0, attribute will be set to None and removed

input_message_content = " ".join([msg.get("content", "") for msg in input_message_list if msg.get("content")])
I don't think the default values of "" are reachable here, given the `if msg.get("content")` filter.
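A small standalone sketch of that point (not the PR code): the comprehension's filter already drops messages whose content is missing or empty, so the `""` fallback never fires.

```python
input_message_list = [
    {"content": "hello"},
    {"role": "user"},   # no "content" key
    {"content": ""},    # empty content
]

# The `if msg.get("content")` filter removes the last two entries,
# so msg.get("content", "") never has to fall back to "".
joined = " ".join([msg.get("content", "") for msg in input_message_list if msg.get("content")])
assert joined == "hello"

# Equivalent without the redundant default:
joined = " ".join([msg["content"] for msg in input_message_list if msg.get("content")])
assert joined == "hello"
```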
| "ingest_source": "Python", | ||
| } | ||
| if all_token_counts: | ||
| chat_completion_message_dict["token_count"] = 0 | 
Is this the value that was previously added in the pipeline? Are we just setting it to 0 as a transition?

This PR updates the sync and async Bedrock instrumentation and tests to pull token counts directly from the response object. If a customer has an LLM token counting callback function registered, it will still take priority over the token count values from the response.
It also removes token count calculations from error cases, so the agent consistently stops reporting token counts when an error is encountered during the creation of a text generation.
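A rough sketch of the precedence described above, using hypothetical names (the actual instrumentation differs): a customer-registered token counting callback wins; otherwise the token count reported in the Bedrock response is used.

```python
def resolve_token_count(token_count_callback, model, content, response_usage_tokens):
    # If a customer-registered token counting callback exists, it takes priority;
    # otherwise fall back to the token count reported in the Bedrock response.
    if token_count_callback:
        return token_count_callback(model, content)
    return response_usage_tokens


# With a callback registered, the response-reported value (42) is ignored.
word_counter = lambda model, content: len(content.split())
assert resolve_token_count(word_counter, "some-model", "three word prompt", 42) == 3

# Without a callback, the response-reported value is used.
assert resolve_token_count(None, "some-model", "three word prompt", 42) == 42
```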