Commit 336ba60

Merge pull request #5683 from msakande/fix-naming-for-azure-ai-model-inference-api

Fix naming for azure ai model inference api

2 parents 908f0b4 + 692d7a8

24 files changed: +91 -91 lines

articles/ai-foundry/concepts/model-catalog-content-safety.md

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ Content filtering occurs synchronously as the service processes prompts to gener
 - When you first deploy a language model
 - Later, by selecting the content filtering toggle on the deployment details page
 
-Suppose you decide to use an API other than the [Foundry Models API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
+Suppose you decide to use an API other than the [Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
 
 [!INCLUDE [content-safety-harm-categories](../includes/content-safety-harm-categories.md)]
 
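The changed paragraph above tells readers to wire up Azure AI Content Safety themselves when they bypass the built-in content filtering. As a rough, non-authoritative sketch of what that separate call looks like, the snippet below assembles (but does not send) a text-analysis request; the resource endpoint, key placeholder, and the `2023-10-01` API version are assumptions for illustration, not values from this commit.

```python
import json

def build_analyze_text_request(endpoint: str, key: str, text: str) -> dict:
    """Assemble an Azure AI Content Safety text-analysis request (not sent here)."""
    return {
        "url": f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # Content Safety resource key
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }

# Hypothetical resource name and key placeholder.
req = build_analyze_text_request(
    "https://my-contentsafety.cognitiveservices.azure.com",
    "<your-content-safety-key>",
    "Text to screen before passing it to the deployed model.",
)
print(req["url"])
```

Screening both the prompt (before the model call) and the completion (after it) approximates what the managed filter does when it is enabled on the deployment.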

articles/ai-foundry/concepts/models-featured.md

Lines changed: 3 additions & 3 deletions

@@ -22,7 +22,7 @@ The Azure AI model catalog offers a large selection of Azure AI Foundry Models f
 
 [!INCLUDE [models-preview](../includes/models-preview.md)]
 
-To perform inferencing with the models, some models such as [Nixtla's TimeGEN-1](#nixtla) and [Cohere rerank](#cohere-rerank) require you to use custom APIs from the model providers. Others support inferencing using the [Foundry Models API](../model-inference/overview.md). You can find more details about individual models by reviewing their model cards in the [model catalog for Azure AI Foundry portal](https://ai.azure.com/explore/models).
+To perform inferencing with the models, some models such as [Nixtla's TimeGEN-1](#nixtla) and [Cohere rerank](#cohere-rerank) require you to use custom APIs from the model providers. Others support inferencing using the [Model Inference API](../model-inference/overview.md). You can find more details about individual models by reviewing their model cards in the [model catalog for Azure AI Foundry portal](https://ai.azure.com/explore/models).
 
 :::image type="content" source="../media/models-featured/models-catalog.gif" alt-text="An animation showing Azure AI Foundry model catalog section and the models available." lightbox="../media/models-featured/models-catalog.gif":::
 

@@ -67,7 +67,7 @@ The Cohere family of models includes various models optimized for different use
 
 ### Cohere command and embed
 
-The following table lists the Cohere models that you can inference via the Foundry Models API.
+The following table lists the Cohere models that you can inference via the Model Inference API.
 
 | Model | Type | Capabilities |
 | ------ | ---- | --- |

@@ -363,7 +363,7 @@ xAI's Grok 3 and Grok 3 Mini models are designed to excel in various enterprise
 
 #### Inference examples: Stability AI
 
-Stability AI models deployed via standard deployment implement the Foundry Models API on the route `/image/generations`.
+Stability AI models deployed via standard deployment implement the Model Inference API on the route `/image/generations`.
 For examples of how to use Stability AI models, see the following examples:
 
 - [Use OpenAI SDK with Stability AI models for text to image requests](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/stabilityai/Text_to_Image_openai_library.ipynb)
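The last hunk above states that Stability AI standard deployments expose the Model Inference API on the `/image/generations` route. As a minimal sketch of how a request to that route might be shaped, assuming a hypothetical deployment host and an illustrative payload schema (the field names below are assumptions, not the authoritative contract — see the linked notebook for real usage):

```python
import json

# Hypothetical serverless deployment host; only the /image/generations
# route itself comes from the article text.
ENDPOINT = "https://my-stability-deployment.eastus2.inference.ai.azure.com"

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Assemble a text-to-image request against the /image/generations route."""
    return {
        "url": f"{ENDPOINT}/image/generations",
        "body": json.dumps({"prompt": prompt, "size": size}),
    }

req = build_image_request("A watercolor lighthouse at dusk")
print(req["url"])
```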

articles/ai-foundry/how-to/deploy-models-gretel-navigator.md

Lines changed: 8 additions & 8 deletions

@@ -80,10 +80,10 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
 
 ## Work with chat completions
 
-In this section, you use the [Azure AI Foundry Models API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
+In this section, you use the [Azure AI Model Inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
 
 > [!TIP]
-> The [Foundry Models API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Gretel Navigator chat model.
+> The [Model Inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Gretel Navigator chat model.
 
 ### Create a client to consume the model
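The hunk above renames the API used for chat completions. As a small, non-authoritative sketch of the payload shape this API family accepts (the model-agnostic `messages` list with roles; the conversation content and parameter values below are illustrative only):

```python
import json

# Illustrative chat-completions request body; the messages and tuning
# parameters are examples, not values from the article.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How many languages are in the world?"},
    ],
    "temperature": 0.7,
    "max_tokens": 256,
}

body = json.dumps(payload)
print(len(payload["messages"]))
```

Because the request shape is the same across models, swapping Gretel Navigator for another deployed model generally means changing only the endpoint, not this payload structure.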

@@ -235,7 +235,7 @@ result = client.complete(
 
 ### Apply Guardrails and controls
 
-The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Model Inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt and the filter is enabled.
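The hunk above describes handling the case where the content filter blocks a prompt. As a rough sketch under stated assumptions — the error payload below is a hypothetical example shaped like a filtered-request error, and the exact fields returned by a given deployment may differ — the client-side check can look like this:

```python
import json

# Hypothetical error body resembling the response returned when a prompt
# is blocked by the content filter; field names are illustrative.
error_response = json.dumps({
    "error": {
        "code": "content_filter",
        "message": "The response was filtered due to the prompt triggering the content management policy.",
    }
})

def is_content_filtered(raw: str) -> bool:
    """Return True when an error body carries a content_filter error code."""
    try:
        return json.loads(raw).get("error", {}).get("code") == "content_filter"
    except json.JSONDecodeError:
        return False

if is_content_filtered(error_response):
    print("Prompt was blocked by the content filter; ask the user to rephrase.")
```

Surfacing a friendly "please rephrase" message, rather than the raw error, is the usual way to handle this event in a chat UI.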

@@ -310,17 +310,17 @@ Deployment to a standard deployment doesn't require quota from your subscription
 
 ### A REST client
 
-Models deployed with the [Foundry Models API](https://aka.ms/azureai/modelinference) can be consumed using any REST client. To use the REST client, you need the following prerequisites:
+Models deployed with the [Model Inference API](https://aka.ms/azureai/modelinference) can be consumed using any REST client. To use the REST client, you need the following prerequisites:
 
 * To construct the requests, you need to pass in the endpoint URL. The endpoint URL has the form `https://your-host-name.your-azure-region.inference.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (for example, eastus2).
 * Depending on your model deployment and authentication preference, you need either a key to authenticate against the service, or Microsoft Entra ID credentials. The key is a 32-character string.
 
 ## Work with chat completions
 
-In this section, you use the [Foundry Models API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
+In this section, you use the [Model Inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
 
 > [!TIP]
-> The [Foundry Models API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Gretel Navigator chat model.
+> The [Model Inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Gretel Navigator chat model.
 
 ### Create a client to consume the model
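The REST-client prerequisites in the hunk above give the endpoint URL form explicitly. A minimal sketch of assembling that URL, assuming a hypothetical deployment name (the bearer-style key header is an illustrative assumption; your deployment's expected auth header may differ):

```python
# Build the endpoint URL in the documented form
# https://your-host-name.your-azure-region.inference.ai.azure.com
def endpoint_url(host_name: str, region: str) -> str:
    return f"https://{host_name}.{region}.inference.ai.azure.com"

# Hypothetical host name; eastus2 is the region example from the article.
url = endpoint_url("my-gretel-deployment", "eastus2")
headers = {
    "Authorization": "Bearer <your-32-character-key>",  # or a Microsoft Entra ID token
    "Content-Type": "application/json",
}
print(url)
```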

@@ -479,7 +479,7 @@ The following example request shows other parameters that you can specify in the
 
 ### Apply Guardrails & controls
 
-The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Model Inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
 The following example shows how to handle events when the model detects harmful content in the input prompt.

@@ -536,7 +536,7 @@ For more information on how to track costs, see [Monitor costs for models offere
 
 ## Related content
 
-* [Foundry Models API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md)
+* [Model Inference API](../../ai-foundry/model-inference/reference/reference-model-inference-api.md)
 * [Deploy models as standard deployments](deploy-models-serverless.md)
 * [Consume standard deployments from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
 * [Region availability for models in standard deployments](deploy-models-serverless-availability.md)
