Commit 0f8eed7

fix typos
1 parent 1ad8b9a commit 0f8eed7

File tree: 2 files changed (+2 −2 lines changed)


articles/api-management/azure-openai-enable-semantic-caching.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ Enable semantic caching of responses to Azure OpenAI API requests to reduce band

## Prerequisites

-* One or more Azure OpenAI in FModel Inference APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI in Model Inference API to Azure API Management](azure-openai-api-from-specification.md).
+* One or more Azure OpenAI in Foundry Model Inference APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI in Foundry Model Inference API to Azure API Management](azure-openai-api-from-specification.md).

* Azure OpenAI must have deployments for the following:
  * Chat Completion API - Deployment used for API consumer calls
  * Embeddings API - Deployment used for semantic caching

includes/api-management-azure-openai-models.md

Lines changed: 1 addition & 1 deletion
@@ -22,5 +22,5 @@ The policy is used with APIs [added to API Management from the Azure OpenAI in F

> [!NOTE]
> Traditional completion APIs are only available with legacy model versions and support is limited.

-For current information about the models and their capabilities, see [Azure OpenAI in Foundry Mo9dels](/azure/ai-services/openai/concepts/models).
+For current information about the models and their capabilities, see [Azure OpenAI in Foundry Models](/azure/ai-services/openai/concepts/models).
