articles/ai-foundry/concepts/model-catalog-content-safety.md (1 addition, 1 deletion)
@@ -34,7 +34,7 @@ Content filtering occurs synchronously as the service processes prompts to gener
- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page
- Suppose you decide to use an API other than the [Foundry Models API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
+ Suppose you decide to use an API other than the [Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
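If you take that route, a minimal sketch of screening a prompt with the Azure AI Content Safety SDK might look like the following (the endpoint and key environment variables are placeholder assumptions for your own Content Safety resource):

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder assumptions: point these at your own Content Safety resource.
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Screen the text (for example, a user prompt) before sending it to the model.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen before inference."))

# Each entry reports a harm category and a severity score you can threshold on.
for item in result.categories_analysis:
    print(item.category, item.severity)
```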
- To perform inferencing with the models, some models such as [Nixtla's TimeGEN-1](#nixtla) and [Cohere rerank](#cohere-rerank) require you to use custom APIs from the model providers. Others support inferencing using the [Foundry Models API](../model-inference/overview.md). You can find more details about individual models by reviewing their model cards in the [model catalog for Azure AI Foundry portal](https://ai.azure.com/explore/models).
+ To perform inferencing with the models, some models such as [Nixtla's TimeGEN-1](#nixtla) and [Cohere rerank](#cohere-rerank) require you to use custom APIs from the model providers. Others support inferencing using the [Model Inference API](../model-inference/overview.md). You can find more details about individual models by reviewing their model cards in the [model catalog for Azure AI Foundry portal](https://ai.azure.com/explore/models).
:::image type="content" source="../media/models-featured/models-catalog.gif" alt-text="An animation showing Azure AI Foundry model catalog section and the models available." lightbox="../media/models-featured/models-catalog.gif":::
@@ -67,7 +67,7 @@ The Cohere family of models includes various models optimized for different use
### Cohere command and embed
- The following table lists the Cohere models that you can inference via the Foundry Models API.
+ The following table lists the Cohere models that you can inference via the Model Inference API.
| Model | Type | Capabilities |
| ------ | ---- | --- |
@@ -363,7 +363,7 @@ xAI's Grok 3 and Grok 3 Mini models are designed to excel in various enterprise
#### Inference examples: Stability AI
- Stability AI models deployed via standard deployment implement the Foundry Models API on the route `/image/generations`.
+ Stability AI models deployed via standard deployment implement the Model Inference API on the route `/image/generations`.
For examples of how to use Stability AI models, see the following examples:
- [Use OpenAI SDK with Stability AI models for text to image requests](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/stabilityai/Text_to_Image_openai_library.ipynb)
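As a rough illustration of that route, a request could be sketched as follows; the payload fields are assumptions (only the `/image/generations` route comes from the text above), so check the model card for the actual schema:

```python
import os

import requests

# Placeholder assumptions for your deployment's endpoint URL and key.
url = os.environ["ENDPOINT_URL"].rstrip("/") + "/image/generations"
headers = {
    "Authorization": "Bearer " + os.environ["ENDPOINT_KEY"],  # assumption: key-based bearer auth
    "Content-Type": "application/json",
}

# Assumed payload shape; the model card documents the supported parameters.
payload = {"prompt": "A photograph of a red fox in an autumn forest"}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```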
articles/ai-foundry/how-to/deploy-models-gretel-navigator.md (8 additions, 8 deletions)
@@ -80,10 +80,10 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
## Work with chat completions
- In this section, you use the [Azure AI Foundry Models API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
+ In this section, you use the [Azure AI Model Inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
- > The [Foundry Models API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
+ > The [Model Inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
### Create a client to consume the model
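The diff elides the client-creation code itself; a minimal sketch with the `azure-ai-inference` package might look like this (the environment variable names are placeholders for your deployment's endpoint URL and key):

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholders: the endpoint URL and key for your standard deployment.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Generate a table of synthetic customer records."),
    ]
)

print(response.choices[0].message.content)
```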
@@ -235,7 +235,7 @@ result = client.complete(
### Apply Guardrails and controls
- The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+ The Model Inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows how to handle events when the model detects harmful content in the input prompt and the filter is enabled.
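That example isn't included in this diff; the usual pattern, reusing the client from the sketch above, is roughly the following (a sketch, not the article's verbatim sample):

```python
from azure.core.exceptions import HttpResponseError
from azure.ai.inference.models import UserMessage

try:
    response = client.complete(
        messages=[UserMessage(content="<a prompt that may trip the filter>")]
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    if ex.status_code == 400:
        # Content Safety rejections come back as 400s with a structured error body.
        details = ex.response.json()
        if isinstance(details, dict) and "error" in details:
            print(f"{details['error']['code']}: {details['error']['message']}")
        else:
            raise
    else:
        raise
```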
@@ -310,17 +310,17 @@ Deployment to a standard deployment doesn't require quota from your subscription
### A REST client
- Models deployed with the [Foundry Models API](https://aka.ms/azureai/modelinference) can be consumed using any REST client. To use the REST client, you need the following prerequisites:
+ Models deployed with the [Model Inference API](https://aka.ms/azureai/modelinference) can be consumed using any REST client. To use the REST client, you need the following prerequisites:
* To construct the requests, you need to pass in the endpoint URL. The endpoint URL has the form `https://your-host-name.your-azure-region.inference.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (for example, eastus2).
* Depending on your model deployment and authentication preference, you need either a key to authenticate against the service, or Microsoft Entra ID credentials. The key is a 32-character string.
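Putting those prerequisites together, a raw REST call to the API's `/chat/completions` route might be sketched like this (key-based bearer authentication is an assumption; use Microsoft Entra ID credentials if that's your preference):

```python
import os

import requests

endpoint = os.environ["ENDPOINT_URL"]  # https://your-host-name.your-azure-region.inference.ai.azure.com
key = os.environ["ENDPOINT_KEY"]       # the 32-character key, for key-based auth

response = requests.post(
    endpoint.rstrip("/") + "/chat/completions",
    headers={
        "Authorization": "Bearer " + key,  # assumption: bearer-style key auth
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "How many languages are in the world?"},
        ]
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```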
## Work with chat completions
- In this section, you use the [Foundry Models API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
+ In this section, you use the [Model Inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
- > The [Foundry Models API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
+ > The [Model Inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
### Create a client to consume the model
@@ -479,7 +479,7 @@ The following example request shows other parameters that you can specify in the
### Apply Guardrails & controls
- The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+ The Model Inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows how to handle events when the model detects harmful content in the input prompt.
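In the REST case, the same condition surfaces as an HTTP 400 response; here's a hedged sketch of detecting it (the `content_filter` error code is an assumption about the error body), reusing the endpoint and key variables from the earlier REST sketch:

```python
response = requests.post(
    endpoint.rstrip("/") + "/chat/completions",
    headers={"Authorization": "Bearer " + key, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "<a prompt that may trip the filter>"}]},
)

if response.status_code == 400:
    body = response.json()
    # Assumption: blocked requests report an error code such as "content_filter".
    if body.get("error", {}).get("code") == "content_filter":
        print("Blocked by content filtering:", body["error"]["message"])
    else:
        response.raise_for_status()
else:
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])
```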
@@ -536,7 +536,7 @@ For more information on how to track costs, see [Monitor costs for models offere