Checked
- [x] I searched existing ideas and did not find a similar one
- [x] I added a very descriptive title
- [x] I've clearly described the feature request and motivation for it
Feature request
I was building some RAG chatbots and added memory with DynamoDB, so serverless deployment on Lambda works without issue for me.
But then I wanted to track my tokens. When the model is not chained with a prompt, it is easy to track the input, output, and total tokens and add them to DynamoDB along with the saved message history. But when the model is chained with a prompt and a retriever, how is it possible to track tokens? I searched but could not find the right solution.
Please provide feedback.
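For reference, here is a minimal sketch of one approach I am considering (an assumption on my part, not a confirmed solution): LangChain's `get_openai_callback` context manager attaches a callback handler that aggregates token usage across all OpenAI calls made inside the `with` block, which should also cover a model invoked from inside a chain. The `chain` object below is a placeholder for an existing prompt | model (or RAG) chain.

```python
from langchain_community.callbacks import get_openai_callback

# `chain` is assumed to be an already-built prompt | model | parser (or RAG) chain.
with get_openai_callback() as cb:
    result = chain.invoke({"input": "What is task decomposition?"})

# The callback accumulates usage across every OpenAI call made in the block.
print(f"Prompt tokens:     {cb.prompt_tokens}")
print(f"Completion tokens: {cb.completion_tokens}")
print(f"Total tokens:      {cb.total_tokens}")
print(f"Total cost (USD):  {cb.total_cost}")
```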
Motivation
I need a small cost-tracking operation because OpenAI is not free. For a model chained with a prompt and a retriever, I could not find a way to track tokens per chat session. Usually, when I only invoke the LLM, everything is saved in the metadata, including which model was used and the input and output token counts. But when it is chained together with multiple prompts, then a retriever, then chat history, how do I track the cost/tokens?
For example: https://python.langchain.com/v0.2/docs/how_to/qa_chat_history_how_to/
Can someone help me find the token count for each chat?
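Sketching further, the per-session totals could then be accumulated in DynamoDB with an atomic counter update. The table name (`ChatTokenUsage`), the `session_id` key, and the helper function below are hypothetical names for illustration, not part of any existing API.

```python
import boto3
from langchain_community.callbacks import get_openai_callback

dynamodb = boto3.resource("dynamodb")
# Hypothetical table keyed by session_id; adjust to your own schema.
usage_table = dynamodb.Table("ChatTokenUsage")

def invoke_with_usage(chain, session_id: str, question: str):
    """Invoke the chain and accumulate token usage for this chat session."""
    with get_openai_callback() as cb:
        answer = chain.invoke({"input": question})
    # ADD creates the counters on the first write and increments them afterwards.
    usage_table.update_item(
        Key={"session_id": session_id},
        UpdateExpression="ADD input_tokens :i, output_tokens :o, total_tokens :t",
        ExpressionAttributeValues={
            ":i": cb.prompt_tokens,
            ":o": cb.completion_tokens,
            ":t": cb.total_tokens,
        },
    )
    return answer
```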
Proposal (If applicable)
No response