Changed file: articles/api-management/api-management-policies.md (15 additions & 0 deletions)
@@ -151,6 +151,21 @@ More information about policies:
|[HTTP data source for resolver](http-data-source-policy.md)| Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | Yes | No | No |
|[Publish event to GraphQL subscription](publish-event-policy.md)| Publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy in a GraphQL resolver for a related field in the schema for another operation type such as a mutation. | Yes | Yes | Yes | No | No |
|[Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md)| Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | Yes | Yes |
|[Limit large language model API token usage](llm-token-limit-policy.md)| Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | Yes | Yes |
|[Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md)| Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI Service APIs. | Yes | Yes | No | Yes | Yes |
|[Emit large language model API token metrics](llm-emit-token-metric-policy.md)| Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | Yes | Yes |
|[Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md)| Performs a lookup in the Azure OpenAI API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | No |
|[Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md)| Performs a lookup in the large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | No |
|[Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md)| Caches responses according to the Azure OpenAI API cache configuration. | Yes | Yes | Yes | Yes | No |
|[Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md)| Caches responses according to the large language model API cache configuration. | Yes | Yes | Yes | Yes | No |
|[Enforce content safety checks on LLM requests](llm-content-safety-policy.md)| Enforces content safety checks on LLM requests (prompts) by transmitting them to the [Azure AI Content Safety](/azure/ai-services/content-safety/overview) service before sending them to the backend LLM. | Yes | Yes | Yes | Yes | Yes |
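
To illustrate how the Azure OpenAI policies above fit together, here is a minimal sketch of a policy definition combining token limiting with semantic caching, based on the linked policy references. The attribute values (token rate, backend ID, cache duration, score threshold) are illustrative placeholders; consult each policy's reference page for the full attribute schema and requirements:

```xml
<policies>
    <inbound>
        <base />
        <!-- Limit token consumption per subscription (placeholder rate) -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="true"
            remaining-tokens-variable-name="remainingTokens" />
        <!-- Return a semantically similar cached response when available;
             "embeddings-backend" is a hypothetical backend resource name -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned" />
    </inbound>
    <outbound>
        <base />
        <!-- Store the completion in the cache for later lookups (duration in seconds) -->
        <azure-openai-semantic-cache-store duration="60" />
    </outbound>
</policies>
```

The lookup policy goes in the `inbound` section and the store policy in `outbound`, mirroring the usual pairing of cache-lookup and cache-store policies in API Management.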