| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on the prompt schema in the API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br> _When this is set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the model's response. This could result in prompts that exceed the token limit being sent to the model. In such a case, the overage is detected in the response, and all succeeding requests are blocked by the policy until the token limit frees up again (see the example after this table)._ | Yes | N/A |
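
For context, here is a minimal sketch of how this attribute is typically set on the policy element. This assumes the `azure-openai-token-limit` policy; the `counter-key` expression and `tokens-per-minute` value are illustrative, not prescribed by this table.

```xml
<!-- Minimal sketch: rate-limit tokens per subscription, with prompt-token
     estimation enabled. Companion attribute values are illustrative. -->
<azure-openai-token-limit
    counter-key="@(context.Subscription.Id)"
    tokens-per-minute="5000"
    estimate-prompt-tokens="true" />
```

Setting `estimate-prompt-tokens="true"` trades some request-processing overhead for proactive enforcement: oversized prompts can be rejected before they reach the model, rather than being detected only from the response's actual token usage.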