Increase the calculation of tokens #609
Closed
xuzeyu91
started this conversation in
2. Feature requests
Replies: 4 comments
-
I found this solution: try setting `MaxTokenTotal`:

```csharp
.WithAzureOpenAITextGeneration(new AzureOpenAIConfig()
{
    Endpoint = azureOpenAi.SummarizationModel.Endpoint,
    APIKey = azureOpenAi.SummarizationModel.ApiKey,
    Deployment = azureOpenAi.SummarizationModel.DeploymentOrModelId,
    Auth = AzureOpenAIConfig.AuthTypes.APIKey,
    // try this
    MaxTokenTotal = 25000,
    MaxRetries = 1230
}, httpClient: httpClient)
```
-
What I would like to know is how many tokens are consumed per request.
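Until the API reports exact usage, a rough client-side estimate is possible. The sketch below uses the commonly cited heuristic of roughly 4 characters per token for English text; it is an approximation for cost analysis, not Kernel Memory's actual tokenizer (for exact counts you would use the model's own encoding, e.g. via a tokenizer library such as `tiktoken`):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic
    for English text. This is an approximation only; exact counts require
    the model's tokenizer (e.g. cl100k_base for GPT-3.5/GPT-4 models)."""
    return max(1, len(text) // 4)

prompt = "How many tokens will this request consume?"
print(estimate_tokens(prompt))  # a rough estimate, not an exact count
```

Summing the estimate over the prompt and the model's reply gives a ballpark per-request figure until exact usage reporting is available.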
-
Please feel free to use the poll at #532 to vote for this feature.
-
Update: starting from version 0.96, the KM Ask API returns a token usage report. Hope this helps!
-
Context / Scenario
When using Kernel Memory, whether via Import or Ask, I would like the response to report how many tokens were consumed, so that I can analyze the cost of question-and-answer scenarios.
The problem
Import and Ask responses do not currently report how many tokens a request consumed.
Proposed solution
Return token-consumption figures in the API responses.
Importance
would be great to have