Replies: 3 comments 3 replies
- https://jxnl.github.io/instructor/concepts/raw_response/?h=completion+tokens
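  If I'm reading that page right, the patched client attaches the raw ChatCompletion to the returned model as `_raw_response` (newer instructor releases expose `create_with_completion` instead), so usage can be read straight off it. The real call needs an API key, so a `SimpleNamespace` stub stands in for the completion below; the attribute access pattern is the part being illustrated, and should be checked against the linked page:

  ```python
  from types import SimpleNamespace

  # Stub shaped like the ChatCompletion that the patched client attaches
  # to the returned Pydantic model; a real run would come from
  # client.chat.completions.create(..., response_model=...).
  user = SimpleNamespace(
      _raw_response=SimpleNamespace(
          usage=SimpleNamespace(prompt_tokens=9, completion_tokens=12, total_tokens=21)
      )
  )

  # Access pattern from the raw_response concept page:
  usage = user._raw_response.usage
  print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)  # → 9 12 21
  ```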
- can you let me know where you looked so i can make this more obvious?
- I'll add a section called token usage in concepts
Hi there,
Firstly, I'd like to thank you for developing and maintaining this library. It's a strong building block for LLM applications, and I appreciate the effort that goes into it.
I'm currently working on a project where accurate token accounting is crucial, specifically the number of input and output tokens consumed by each request to the GPT-3.5 Turbo model. However, I'm having trouble finding this information in the documentation or on the response object.
With the code I'm using, I'd like to know if there's a way to find out, per request, the number of input (prompt) tokens and the number of output (completion) tokens.
Understanding the number of tokens is critical to managing my application's quota and optimizing usage. If this feature is not available, could you consider it for future releases or suggest a possible workaround?
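  Since the counts feed directly into quota management, the arithmetic is worth making concrete. The helper below and its per-1K-token rates are illustrative placeholders, not the provider's actual pricing:

  ```python
  def estimate_cost(prompt_tokens: int, completion_tokens: int,
                    input_rate_per_1k: float, output_rate_per_1k: float) -> float:
      """Return the dollar cost of one request, given per-1K-token rates."""
      return (prompt_tokens / 1000) * input_rate_per_1k \
           + (completion_tokens / 1000) * output_rate_per_1k

  # Example with placeholder rates (check the provider's pricing page):
  cost = estimate_cost(1500, 500, input_rate_per_1k=0.0015, output_rate_per_1k=0.002)
  print(f"${cost:.6f}")  # → $0.003250
  ```

  With per-request `prompt_tokens` and `completion_tokens` in hand, summing these estimates over a billing window gives a running quota figure.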
For context, in the OpenAI library, token counts can be retrieved as follows:
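  Something like the following, with the network call stubbed out since it needs an API key; the `usage` attribute names are the openai 1.x ones:

  ```python
  from types import SimpleNamespace

  # With the plain OpenAI 1.x client the call would be:
  #   client = OpenAI()
  #   response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[...])
  # A stub with the same shape stands in here:
  response = SimpleNamespace(
      usage=SimpleNamespace(prompt_tokens=57, completion_tokens=40, total_tokens=97)
  )

  print(response.usage.prompt_tokens)      # input tokens for the request
  print(response.usage.completion_tokens)  # output tokens generated
  print(response.usage.total_tokens)       # their sum, which billing counts
  ```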
However, it appears that the library does not support similar functionality, or it's not clearly documented. I'm seeking guidance on how I might achieve similar token tracking with instructor.

Thank you for your assistance, and I look forward to hearing from you.