Replies: 6 comments 3 replies
-
I just noticed that there is such a thing. If I understand correctly, you could count the entire prompt (including previous messages) at the end of the streamed result, then count the answer and return the finished result to the user. What do you think about something like this? Example:

    var completionResult = openAiService.ChatCompletion.CreateCompletionAsStream(new ChatCompletionCreateRequest
    {
        Messages = messageSend,
        Temperature = (float)s.Temperature,
        Model = model_GPT,
    });

    await foreach (var completion in completionResult)
    {
        if (completion.Successful)
        {
            ...
        }
    }

    var countPrompt = completionResult.Tokens.Prompt();
    var countOutput = completionResult.Tokens.Completion();
-
Code:

    var count = messageSend.Select(x => TokenizerGpt3.TokenCount(x.Content)).Sum();
    var count2 = TokenizerGpt3.TokenCount(OutputAI.Message);
    Console.WriteLine($"{count} + {count2} = {count + count2}");

According to the AI analysis, there are such possibilities.
-
TokenizerGpt3.TokenCount does not call the OpenAI service, but it uses a similar calculation method, so counts may differ depending on character encoding or environment. For example, Windows uses different end-of-line characters. I am going to convert this issue to a discussion, so we can talk about it and see if we can find an easier way to implement a tokenizer. Again, the difference in your example is probably end-of-line characters; if it is not, please share the content so we can test it.
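To illustrate the end-of-line point above, here is a rough sketch (not from the discussion; `TokenizerGpt3` is assumed to be the library's tokenizer class, and the exact counts will depend on the tokenizer version): normalizing `\r\n` to `\n` before counting makes Windows and Unix inputs tokenize the same way.

```csharp
using Betalgo.OpenAI.Tokenizer.GPT3; // assumed namespace for TokenizerGpt3

// Sketch: the same text with Windows vs Unix line endings can
// produce different token counts, because \r\n may tokenize
// differently than \n. Normalizing first removes that difference.
string windowsText = "first line\r\nsecond line";
string normalized = windowsText.Replace("\r\n", "\n");

int countRaw = TokenizerGpt3.TokenCount(windowsText);
int countNormalized = TokenizerGpt3.TokenCount(normalized);
// countRaw and countNormalized may differ; compare them to see
// whether a discrepancy is explained by end-of-line characters.
```

Whether you should normalize depends on what you actually send to the API: if the request body contains `\r\n`, OpenAI will bill for those tokens, so normalizing only on the counting side would underestimate.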
Beta Was this translation helpful? Give feedback.
-
Results:
Threw this test together because I couldn't figure out why my app had such large discrepancies, as I keep running into max token limits. TokenizerGpt3.TokenCount() probably shouldn't ever be used.
-
Just found https://github.com/dmitry-brazhenko/SharpToken, which looks promising too.
-
Hi, I'm new to using this library, and I'm not sure I'm following the conversation. I'll probably use Microsoft's tokenizer library to get a rough estimate of token use so I can batch my requests, but I'd like to keep a full accounting of the actual tokens used based on OpenAI's counts, since that's what they'll be billing against. The OpenAI API docs say the usage field in the response reports these counts: https://platform.openai.com/docs/guides/gpt/chat-completions-api But I'm using streamed completion, and in all results from CreateCompletionAsStream the Usage field is null. Is OpenAI not sending this information when streaming?
-
There is no display of used tokens in CreateCompletionAsStream; I would like to record how many tokens I used each time. Also, Usage is null.
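Since Usage is null in streamed results, one possible workaround (a sketch only, not an official API of the library; the shape of the streamed chunks, the `messageSend` list, and the tokenizer namespace are all assumptions) is to accumulate the streamed content and count tokens locally, the same way an earlier comment does per message:

```csharp
using System.Text;
using Betalgo.OpenAI.Tokenizer.GPT3; // assumed namespace for TokenizerGpt3

// Sketch: record per-request token usage when streaming by counting
// locally, since the streamed chunks carry no Usage object.
var answer = new StringBuilder();

await foreach (var completion in completionResult)
{
    if (completion.Successful)
    {
        // Assumes each chunk exposes its text via Choices[0].Message.Content.
        answer.Append(completion.Choices.FirstOrDefault()?.Message?.Content);
    }
}

// Local estimate only; this may not exactly match OpenAI's billed counts.
var promptTokens = messageSend.Sum(m => TokenizerGpt3.TokenCount(m.Content));
var completionTokens = TokenizerGpt3.TokenCount(answer.ToString());
Console.WriteLine($"prompt={promptTokens}, completion={completionTokens}, total={promptTokens + completionTokens}");
```

As noted above, local tokenizers can diverge from OpenAI's counts (line endings, tokenizer version), so treat this as an estimate rather than an exact billing record.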