articles/api-management/api-management-policies.md (4 additions, 4 deletions)
@@ -36,7 +36,7 @@ More information about policies:
 | [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes |
 |[Set usage quota by key](quota-by-key-policy.md)| Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes |
 |[Limit concurrency](limit-concurrency-policy.md)| Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes |
-|[Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md)| Prevents Azure OpenAI API usage spikes by limiting language model tokens per calculated key. | Yes | Yes | No | No |
+|[Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md)| Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | No |
 |[Limit large language model API token usage](llm-token-limit-policy.md)| Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | No |

 ## Authentication and authorization
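For context on the row changed in this hunk: the `azure-openai-token-limit` policy is declared in the `inbound` section of a policy definition. A minimal sketch follows; the attribute values (the per-minute limit and header name) are illustrative, not taken from this change:

```xml
<policies>
    <inbound>
        <!-- Throttle by subscription: counter-key groups token counts per subscription. -->
        <!-- tokens-per-minute="5000" is an illustrative value, not part of this diff. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="true"
            remaining-tokens-header-name="remaining-tokens" />
    </inbound>
</policies>
```

The sibling `llm-token-limit` policy listed below it uses the same shape for non-Azure-OpenAI LLM APIs.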
@@ -81,9 +81,9 @@ More information about policies:
 |[Get value from cache](cache-lookup-value-policy.md)| Retrieves a cached item by key. | Yes | Yes | Yes | Yes |
 |[Store value in cache](cache-store-value-policy.md)| Stores an item in the cache by key. | Yes | Yes | Yes | Yes |
 |[Remove value from cache](cache-remove-value-policy.md)| Removes an item in the cache by key. | Yes | Yes | Yes | Yes |
-|[Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md)| Performs cache lookup using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
+|[Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md)| Performs lookup in Azure OpenAI API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
 |[Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md)| Caches response according to the Azure OpenAI API cache configuration. | Yes | Yes | Yes | Yes |
-|[Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md)| Performs cache lookup using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
+|[Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md)| Performs lookup in large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
 |[Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md)| Caches response according to the large language model API cache configuration. | Yes | Yes | Yes | Yes |
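The two semantic-cache policies edited in this hunk work as a pair: the lookup runs in `inbound`, the store in `outbound`. A minimal sketch, assuming an embeddings backend named `embeddings-backend` already exists (that name, the score threshold, and the cache duration are illustrative):

```xml
<policies>
    <inbound>
        <!-- Look up a semantically similar prior prompt; on a hit, return the cached response. -->
        <!-- "embeddings-backend" is a hypothetical backend id for the embeddings API. -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned" />
    </inbound>
    <outbound>
        <!-- Cache the completion for later semantic lookups; duration is in seconds. -->
        <azure-openai-semantic-cache-store duration="60" />
    </outbound>
</policies>
```

The `llm-semantic-cache-lookup` and `llm-semantic-cache-store` policies in the same table follow the same inbound/outbound pairing for other LLM APIs.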
@@ -133,7 +133,7 @@ More information about policies:
 |[Trace](trace-policy.md)| Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes |
 |[Emit metrics](emit-metric-policy.md)| Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes |
-|[Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md)| Sends metrics to Application Insights for consumption of language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No |
+|[Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md)| Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No |
 |[Emit large language model API token metrics](llm-emit-token-metric-policy.md)| Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | No |