articles/ai-foundry/concepts/content-filtering.md (1 addition, 1 deletion)
@@ -26,7 +26,7 @@ author: PatrickFarley
The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
-With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Guardrails & controls for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
+With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Guardrails & controls for Azure Direct Models in the model catalog](model-catalog-content-safety.md).
| Which models can be deployed? |[Azure OpenAI models](../../ai-services/openai/concepts/models.md)|[Azure OpenAI models and Standard deployment](../../ai-foundry/model-inference/concepts/models.md)|[Standard deployment](../how-to/model-catalog-overview.md)|[Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute)|
| Deployment resource | Azure OpenAI resource | Azure AI services resource | AI project resource | AI project resource |
@@ -37,7 +37,7 @@ Azure AI Foundry offers four different deployment options:
| Key-less authentication | Yes | Yes | No | No |
| Best suited when | You're planning to use only OpenAI models | You're planning to take advantage of the flagship models in Azure AI catalog, including OpenAI. | You're planning to use a single model from a specific provider (excluding OpenAI). | If you plan to use open models and you have enough compute quota available in your subscription. |
-| Deployment instructions |[Deploy to Azure OpenAI](../how-to/deploy-models-openai.md)|[Deploy to Azure AI model inference](../model-inference/how-to/create-model-deployments.md)|[Deploy to Standard deployment](../how-to/deploy-models-serverless.md)|[Deploy to Managed compute](../how-to/deploy-models-managed.md)|
+| Deployment instructions |[Deploy to Azure OpenAI](../how-to/deploy-models-openai.md)|[Deploy to Foundry Models](../model-inference/how-to/create-model-deployments.md)|[Deploy to Standard deployment](../how-to/deploy-models-serverless.md)|[Deploy to Managed compute](../how-to/deploy-models-managed.md)|
<sup>1</sup> A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in pay-as-you-go. After you delete the endpoint, no further charges accrue.
@@ -50,7 +50,7 @@ Azure AI Foundry offers four different deployment options:
Azure AI Foundry encourages you to explore various deployment options and choose the one that best suits your business and technical needs. In general, consider using the following approach to select a deployment option:
-* Start with [Azure AI model inference](../../ai-foundry/model-inference/overview.md), which is the option with the largest scope. This option allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. If you're using Azure AI Foundry hubs or projects, enable this option by [turning on the Azure AI model inference feature](../model-inference/how-to/quickstart-ai-project.md#configure-the-project-to-use-azure-ai-model-inference).
+* Start with [Foundry Models](../../ai-foundry/model-inference/overview.md), which is the option with the largest scope. This option allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. If you're using Azure AI Foundry hubs or projects, enable this option by [turning on the Foundry Models feature](../model-inference/how-to/quickstart-ai-project.md#configure-the-project-to-use-azure-ai-model-inference).
* When you're looking to use a specific model:
@@ -63,8 +63,8 @@ Azure AI Foundry encourages you to explore various deployment options and choose
## Related content
-* [Configure your AI project to use Azure AI model inference](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md)
-* [Add and configure models to Azure AI model inference](../model-inference/how-to/create-model-deployments.md)
+* [Configure your AI project to use Foundry Models](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md)
+* [Add and configure models to Foundry Models](../model-inference/how-to/create-model-deployments.md)
* [Deploy Azure OpenAI models with Azure AI Foundry](../how-to/deploy-models-openai.md)
* [Deploy open models with Azure AI Foundry](../how-to/deploy-models-managed.md)
* [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
@@ -24,7 +24,7 @@ In this article, learn about Guardrails & controls capabilities for models from
Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via serverless APIs. To learn more about content filtering, see [Understand harm categories](#understand-harm-categories).
-The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using the [Azure AI model inference service](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Guardrails & controls** page of the Azure AI Foundry portal.
+The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using [Azure AI Foundry Models](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Guardrails & controls** page of the Azure AI Foundry portal.
> [!TIP]
> Content filtering isn't available for certain model types that are deployed via serverless APIs. These model types include embedding models and time series models.
@@ -34,7 +34,7 @@ Content filtering occurs synchronously as the service processes prompts to gener
- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page
-Suppose you decide to use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a serverless API. In such a situation, content filtering isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering when working with models that are deployed via serverless APIs.
+Suppose you decide to use an API other than the [Azure AI Foundry Models API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a serverless API. In such a situation, content filtering isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering when working with models that are deployed via serverless APIs.
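When filtering isn't handled by the deployment, a standalone Azure AI Content Safety call can screen text before it reaches the model, as in the quickstart linked above. The following sketch only builds the REST request; the endpoint, key, and API version are placeholders to substitute with your own values, and the exact request shape should be confirmed against the Content Safety reference.

```python
import json

# Placeholders: substitute your Content Safety resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"

headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

# Screen a prompt across the four harm categories before forwarding it
# to a model deployed via a serverless API.
payload = {
    "text": "User input to screen before sending to the model.",
    "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
}
body = json.dumps(payload)
```

Sending `body` to `url` with these headers returns per-category severity scores that your application can threshold before calling the model.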
articles/ai-foundry/concepts/model-lifecycle-retirement.md (1 addition, 1 deletion)
@@ -58,7 +58,7 @@ Models labeled _Retired_ are no longer available for use. You can't create new d
- Models are labeled _Deprecated_ and remain in the deprecated state for at least 90 days before being moved to the retired state. During this notification period, you can migrate any existing deployments to newer or replacement models.
-- For each subscription that has a model deployed as a severless API or deployed to the Azure AI model inference, members of the _owner_, _contributor_, _reader_, monitoring contributor_, and _monitoring reader_ roles receive a notification when a model deprecation is announced. The notification contains the dates when the model enters legacy, deprecated, and retired states. The notification might provide information about possible replacement model options, if applicable.
+- For each subscription that has a model deployed as a standard deployment or deployed to the Azure AI model inference, members of the _owner_, _contributor_, _reader_, _monitoring contributor_, and _monitoring reader_ roles receive a notification when a model deprecation is announced. The notification contains the dates when the model enters legacy, deprecated, and retired states. The notification might provide information about possible replacement model options, if applicable.
The following tables list the timelines for models that are on track for retirement. The specified dates are in UTC time.
articles/ai-foundry/includes/content-safety-serverless-models.md (1 addition, 1 deletion)
@@ -13,7 +13,7 @@ ms.custom: include file
# Also used in Azure Machine Learning documentation
---
-For language models deployed via standard deployment, Azure AI implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering, see [Guardrails & controls for models curated by Azure AI in the model catalog](../concepts/model-catalog-content-safety.md).
+For language models deployed via standard deployment, Azure AI implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering, see [Guardrails & controls for Azure Direct Models in the model catalog](../concepts/model-catalog-content-safety.md).
> [!TIP]
> Content filtering is not available for certain model types that are deployed via serverless APIs. These model types include embedding models and time series models.
-> Some models don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Some models don't support system messages (`role="system"`). When you use the Foundry Models API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
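The translation described in the tip can be approximated client-side. This is an illustrative sketch of the idea, not the service's actual implementation:

```python
def translate_system_messages(messages):
    """Rewrite system messages as user messages, for models that
    don't support role="system" (illustrative approximation of the
    service-side behavior described in the tip above)."""
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How many feet are in a mile?"},
]
translated = translate_system_messages(messages)
```

Because the instruction now arrives as an ordinary user turn, verify in testing that the model actually follows it.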
The response is as follows, where you can see the model's usage statistics:
#### Explore more parameters supported by the inference client
-Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).
+Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Foundry Models API reference](https://aka.ms/azureai/modelinference).
-The Azure AI Model Inference API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
+The Foundry Models API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
-Before you pass extra parameters to the Azure AI model inference API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
+Before you pass extra parameters to the Foundry Models API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
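As a sketch of the behavior described above, a raw REST request carrying an extra parameter might look like the following. The endpoint URL is a placeholder, and `logprobs` support depends on the specific model:

```python
import json

# Placeholder endpoint; substitute your deployment's chat completions URL.
endpoint = "https://<resource>.services.ai.azure.com/models/chat/completions"

headers = {
    "Content-Type": "application/json",
    # Tells the endpoint to pass unrecognized parameters through
    # to the underlying model instead of rejecting them.
    "extra-parameters": "pass-through",
}

payload = {
    "messages": [{"role": "user", "content": "How many feet are in a mile?"}],
    "logprobs": True,  # extra parameter; check the model's docs for support
}
body = json.dumps(payload)
```

If the model doesn't recognize `logprobs`, the header only guarantees delivery, not handling, which is why the model's own documentation is the authority here.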
### Use tools
-Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Azure AI Model Inference API allows you to define tools in the following way.
+Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Foundry Models API allows you to define tools in the following way.
The following code example creates a tool definition that is able to look up flight information between two different cities.
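Since the diff doesn't show the example itself, here's an illustrative sketch of such a tool definition using the function-schema shape these APIs accept; the function name and parameters are hypothetical:

```python
# Hypothetical tool definition for looking up flight information
# between two cities; name and parameters are illustrative only.
flight_info_tool = {
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": "Returns information about the next flight between two cities.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The city the flight departs from.",
                },
                "destination_city": {
                    "type": "string",
                    "description": "The city the flight arrives at.",
                },
            },
            "required": ["origin_city", "destination_city"],
        },
    },
}
```

The definition is passed in the request's tool list; the model then emits a tool call with arguments matching this schema rather than answering directly.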
-The Azure AI model inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows how to handle events when the model detects harmful content in the input prompt.
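The example itself isn't shown in this diff, so here's an illustrative sketch of the kind of check involved: a prompt blocked by content filtering typically surfaces as an HTTP 400 error whose body carries a content-filter error code. The error body below is a hypothetical sample, not captured service output:

```python
def is_content_filter_error(status_code, error_body):
    """Return True if an error response indicates the input prompt was
    blocked by content filtering. Illustrative check; confirm the exact
    error shape against the service's reference documentation."""
    if status_code != 400:
        return False
    error = error_body.get("error", {})
    return error.get("code") == "content_filter"

# Hypothetical error body, for illustration only.
sample = {
    "error": {
        "code": "content_filter",
        "message": "The response was filtered.",
    }
}
blocked = is_content_filter_error(400, sample)
```

An application can catch the request exception, run a check like this, and show the user a "please rephrase" message instead of a raw error.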