
Commit 4101a3c

[APIM] Update azure-openai-token-limit-policy.md
Fixes https://github.com/MicrosoftDocs/azure-docs/issues/122842
1 parent b70a526 commit 4101a3c

File tree: 1 file changed (+5 −4 lines)

articles/api-management/azure-openai-token-limit-policy.md

Lines changed: 5 additions & 4 deletions
````diff
@@ -45,8 +45,8 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
     retry-after-variable-name="policy expression variable name"
     remaining-tokens-header-name="header name"
     remaining-tokens-variable-name="policy expression variable name"
-    consumed-tokens-header-name="header name"
-    consumed-tokens-variable-name="policy expression variable name" />
+    tokens-consumed-header-name="header name"
+    tokens-consumed-variable-name="policy expression variable name" />
 ```
 
 ## Attributes
````
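For context, a complete policy using the renamed attributes might look like the following minimal sketch. The counter key, token limit, and header names are illustrative assumptions, not values taken from this commit:

```xml
<policies>
    <inbound>
        <!-- Illustrative values; only the attribute names come from this commit -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="false"
            remaining-tokens-header-name="remaining-tokens"
            tokens-consumed-header-name="consumed-tokens" />
        <base />
    </inbound>
</policies>
```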

```diff
@@ -59,8 +59,8 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
 | retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
-| consumed-tokens-header-name | The name of a response header whose value is the number of tokens consumed by both the prompt and the completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
-| consumed-tokens-variable-name | The name of a variable initialized to the estimated number of prompt tokens in the `backend` section of the pipeline if `estimate-prompt-tokens` is `true`, and to zero otherwise. The variable is updated with the reported count when the response is received in the `outbound` section. | No | N/A |
+| tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both the prompt and the completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
+| tokens-consumed-variable-name | The name of a variable initialized to the estimated number of prompt tokens in the `backend` section of the pipeline if `estimate-prompt-tokens` is `true`, and to zero otherwise. The variable is updated with the reported count when the response is received in the `outbound` section. | No | N/A |
 
 ## Usage
 
```
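The variable named by `tokens-consumed-variable-name` can then be read in the `outbound` section with a policy expression. A hedged sketch, where the variable name `consumedTokens` and the header name are assumptions for illustration:

```xml
<policies>
    <inbound>
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="true"
            tokens-consumed-variable-name="consumedTokens" />
        <base />
    </inbound>
    <outbound>
        <!-- Surface the reported count to the caller; header name is hypothetical -->
        <set-header name="x-consumed-tokens" exists-action="override">
            <value>@(context.Variables.GetValueOrDefault<int>("consumedTokens").ToString())</value>
        </set-header>
        <base />
    </outbound>
</policies>
```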

```diff
@@ -72,6 +72,7 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
 
 * This policy can be used multiple times per policy definition.
 * This policy can optionally be configured when adding an API from the Azure OpenAI Service using the portal.
+* Certain Azure OpenAI endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute.
 * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
 
 ## Example
```
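The added usage note concerns requests that opt in to streaming. A minimal sketch of such a chat completions request body, for which the policy would always estimate prompt tokens; the message content is illustrative:

```json
{
  "messages": [
    { "role": "user", "content": "Summarize this document." }
  ],
  "stream": true
}
```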

0 commit comments
