
Conversation

dannykok

Currently, when streaming from Bedrock Llama 3, the usage numbers reported in the streaming chunks cause an error in LangChain core's merge_dicts function.

Change:

  • convert prompt_token_count and generation_token_count in the streaming chunk usage data from int to list[int] (see the sketch below)

This should enable streaming from Bedrock Llama 3 models.
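
For context, here is a minimal sketch of the idea behind the change (the merge_dicts import path and the token numbers are illustrative and may differ from this PR and your langchain-core version): merge_dicts raises a TypeError when two chunks carry different int values under the same key, but it concatenates list values, so wrapping the per-chunk counts in lists lets the chunks merge cleanly.

```python
# Sketch: why int token counts break streaming-chunk merging and why
# list[int] works. The chunk dicts below are made up for illustration.
from langchain_core.utils._merge import merge_dicts

# Before: plain int values. merge_dicts tolerates equal ints but raises a
# TypeError as soon as two chunks disagree (e.g. generation_token_count
# grows from chunk to chunk).
int_chunk_a = {"prompt_token_count": 30, "generation_token_count": 4}
int_chunk_b = {"prompt_token_count": 30, "generation_token_count": 9}
try:
    merge_dicts(int_chunk_a, int_chunk_b)
except TypeError as err:
    print(f"int counts fail to merge: {err}")

# After: list[int] values. merge_dicts concatenates lists, so the
# per-chunk counts survive the merge.
list_chunk_a = {"prompt_token_count": [30], "generation_token_count": [4]}
list_chunk_b = {"prompt_token_count": [30], "generation_token_count": [9]}
print(merge_dicts(list_chunk_a, list_chunk_b))
# -> {'prompt_token_count': [30, 30], 'generation_token_count': [4, 9]}
```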

@dannykok
Author

Hi, can someone help review this PR please?
