How to keep track of the OpenAI tokens used by a session? #940
Unanswered

saikarthikp9 asked this question in Q&A

As I understand it, I can create a tool that makes API calls, so I can call my custom backend to store token usage. But how can I get this info from the Conversational Retrieval QA Chain output?
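As a rough sketch of the custom-backend idea (not something Flowise exposes out of the box, per the replies below): when the chain is built directly with LangChain JS, a callback handler attached to the underlying ChatOpenAI model can read the `tokenUsage` that OpenAI reports in `handleLLMEnd` and forward it per session. The backend URL, session id, and handler name here are placeholders, and usage is typically not reported when streaming is enabled.

```ts
import { ChatOpenAI } from "langchain/chat_models/openai";
import { BaseCallbackHandler } from "langchain/callbacks";
import type { LLMResult } from "langchain/schema";

// Accumulates token usage for one chat session and forwards it to a custom
// backend. The URL and session id below are placeholders.
class TokenUsageHandler extends BaseCallbackHandler {
  name = "token_usage_handler";
  promptTokens = 0;
  completionTokens = 0;

  constructor(private sessionId: string) {
    super();
  }

  // OpenAI reports usage in llmOutput.tokenUsage after each (non-streaming) call.
  async handleLLMEnd(output: LLMResult): Promise<void> {
    const usage = output.llmOutput?.tokenUsage;
    if (!usage) return;

    this.promptTokens += usage.promptTokens ?? 0;
    this.completionTokens += usage.completionTokens ?? 0;

    // Send the running totals to the custom backend mentioned in the question.
    await fetch("https://my-backend.example.com/token-usage", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        sessionId: this.sessionId,
        promptTokens: this.promptTokens,
        completionTokens: this.completionTokens,
        totalTokens: this.promptTokens + this.completionTokens,
      }),
    });
  }
}

// Attach the handler to the chat model that backs the
// Conversational Retrieval QA Chain.
const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  callbacks: [new TokenUsageHandler("session-123")],
});
```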
Replies: 3 comments
- I've been trying to figure out the same thing.
- @saikarthikp9 @mattbisme Use LangSmith or the LLM Monitor API to keep track of all input and output token usage.
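If the LangSmith route is taken, tracing is enabled through environment variables rather than code changes; a minimal sketch of what that might look like (the API key and project name are placeholders, the endpoint is LangSmith's public API):

```ts
// Minimal sketch: enable LangSmith tracing for a LangChain JS app so that
// prompt/completion token counts appear per run in the LangSmith UI.
// Set these before any chains or models are constructed.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_ENDPOINT = "https://api.smith.langchain.com";
process.env.LANGCHAIN_API_KEY = "<your-langsmith-api-key>"; // placeholder
process.env.LANGCHAIN_PROJECT = "flowise-token-tracking";   // placeholder project name
```

Each traced run then records the token usage returned by OpenAI, grouped under the chosen project.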
- Hi all, we will need to make some enhancements for this to work (extracting the token usage value). I have added this to our backlog.