articles/ai-foundry/concepts/content-filtering.md (1 addition & 1 deletion)
@@ -26,7 +26,7 @@ author: PatrickFarley
The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.

- With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **serverless APIs** have content filtering enabled by default. To learn more about the default content filter enabled for serverless APIs, see [Content safety for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
+ With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **standard deployments** have content filtering enabled by default. To learn more about the default content filter enabled for standard deployments, see [Content safety for models curated by Azure AI in the model catalog](model-catalog-content-safety.md).
articles/ai-foundry/concepts/fine-tuning-overview.md (3 additions & 3 deletions)
@@ -86,13 +86,13 @@ It's important to call out that fine-tuning is heavily dependent on the quality
Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.

For some models in the model catalog, fine-tuning is available by using a standard deployment, or a managed compute (preview), or both.

- Fine-tuning is available in specific Azure regions for some models that are deployed via serverless APIs. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in serverless API endpoints](../how-to/deploy-models-serverless-availability.md) for detailed information.
+ Fine-tuning is available in specific Azure regions for some models that are deployed via standard deployments. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in standard deployment](../how-to/deploy-models-serverless-availability.md) for detailed information.

For more information on fine-tuning using a managed compute (preview), see [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md).

For details about Azure OpenAI in Azure AI Foundry Models that are available for fine-tuning, see the [Azure OpenAI in Foundry Models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide.

- For the Azure OpenAI Service models that you can fine tune, supported regions for fine-tuning include North Central US, Sweden Central, and more.
+ For the Azure OpenAI Service models that you can fine tune, supported regions for fine-tuning include North Central US, Sweden Central, and more.

### Fine-tuning Azure OpenAI models
@@ -102,5 +102,5 @@ For the Azure OpenAI Service models that you can fine tune, supported regions f
- [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md)
- [Fine-tune an Azure OpenAI model in Azure AI Foundry portal](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
- - [Fine-tune models using serverless API](../how-to/fine-tune-serverless.md)
+ - [Fine-tune models using standard deployment](../how-to/fine-tune-serverless.md)
- | Region | East US/East US2 |[Serverless APIs](../how-to/model-catalog-overview.md#serverless-api-pay-per-token-billing) and [Azure OpenAI](/azure/ai-services/openai/overview)|
- | Tokens per minute (TPM) rate limit | 30k (180 RPM based on Azure OpenAI) for non-reasoning and 100k for reasoning models <br> N/A (serverless APIs) | For Azure OpenAI models, selection is available for users with rate limit ranges based on deployment type (standard, global, global standard, and so on.) <br> For serverless APIs, this setting is abstracted. |
- | Number of requests | Two requests in a trail for every hour (24 trails per day) |Serverless APIs, Azure OpenAI |
- | Number of trails/runs | 14 days with 24 trails per day for 336 runs |Serverless APIs, Azure OpenAI |
- | Number of tokens processed (moderate) | 80:20 ratio for input to output tokens, that is, 800 input tokens to 200 output tokens. |Serverless APIs, Azure OpenAI |
- | Number of concurrent requests | One (requests are sent sequentially one after other) |Serverless APIs, Azure OpenAI |
- | Data | Synthetic (input prompts prepared from static text) |Serverless APIs, Azure OpenAI |
- | Region | East US/East US2 |Serverless APIs and Azure OpenAI |
+ | Region | East US/East US2 |[Standard deployments](../how-to/model-catalog-overview.md#serverless-api-pay-per-token-billing) and [Azure OpenAI](/azure/ai-services/openai/overview)|
+ | Tokens per minute (TPM) rate limit | 30k (180 RPM based on Azure OpenAI) for non-reasoning and 100k for reasoning models <br> N/A (standard deployments) | For Azure OpenAI models, selection is available for users with rate limit ranges based on deployment type (standard, global, global standard, and so on.) <br> For standard deployments, this setting is abstracted. |
+ | Number of requests | Two requests in a trail for every hour (24 trails per day) |Standard deployments, Azure OpenAI |
+ | Number of trails/runs | 14 days with 24 trails per day for 336 runs |Standard deployments, Azure OpenAI |
+ | Number of tokens processed (moderate) | 80:20 ratio for input to output tokens, that is, 800 input tokens to 200 output tokens. |Standard deployments, Azure OpenAI |
+ | Number of concurrent requests | One (requests are sent sequentially one after other) |Standard deployments, Azure OpenAI |
+ | Data | Synthetic (input prompts prepared from static text) |Standard deployments, Azure OpenAI |
+ | Region | East US/East US2 |Standard deployments and Azure OpenAI |
| Deployment type | Standard | Applicable only for Azure OpenAI |
- | Streaming | True | Applies to serverless APIs and Azure OpenAI. For models deployed via [managed compute](../how-to/model-catalog-overview.md#managed-compute), or for endpoints when streaming is not supported TTFT is represented as P50 of latency metric. |
+ | Streaming | True | Applies to standard deployments and Azure OpenAI. For models deployed via [managed compute](../how-to/model-catalog-overview.md#managed-compute), or for endpoints when streaming is not supported TTFT is represented as P50 of latency metric. |
| SKU | Standard_NC24ads_A100_v4 (24 cores, 220GB RAM, 64GB storage) | Applicable only for Managed Compute (to estimate the cost and perf metrics) |
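Taken together, these settings imply a small, predictable workload per benchmarked model. The sketch below is a back-of-the-envelope calculation based only on the figures in the table above; the 30k-TPM-to-180-RPM mapping is assumed to follow Azure OpenAI's usual 6 RPM per 1,000 TPM quota ratio.

```python
# Rough workload estimate implied by the benchmark settings above (illustrative only).
DAYS = 14                 # "14 days with 24 trails per day for 336 runs"
TRAILS_PER_DAY = 24
REQUESTS_PER_TRAIL = 2    # "Two requests in a trail for every hour"
INPUT_TOKENS = 800        # 80:20 input-to-output ratio
OUTPUT_TOKENS = 200

trails = DAYS * TRAILS_PER_DAY                       # 336 runs
requests = trails * REQUESTS_PER_TRAIL               # 672 requests over the 14-day window
tokens = requests * (INPUT_TOKENS + OUTPUT_TOKENS)   # 672,000 tokens processed in total

# 30,000 TPM corresponds to 180 RPM under Azure OpenAI's usual 6-RPM-per-1,000-TPM ratio,
# so two sequential ~1,000-token requests per hour sit far below the rate limit.
print(trails, requests, tokens)  # 336 672 672000
```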
The performance of LLMs and SLMs is assessed across the following metrics:
@@ -111,14 +111,14 @@ For performance metrics like latency or throughput, the time to first token and
### Cost

- Cost calculations are estimates for using an LLM or SLM model endpoint hosted on the Azure AI platform. Azure AI supports displaying the cost of serverless APIs and Azure OpenAI models. Because these costs are subject to change, we refresh our cost calculations on a regular cadence.
+ Cost calculations are estimates for using an LLM or SLM model endpoint hosted on the Azure AI platform. Azure AI supports displaying the cost of standard deployments and Azure OpenAI models. Because these costs are subject to change, we refresh our cost calculations on a regular cadence.

The cost of LLMs and SLMs is assessed across the following metrics:

| Metric | Description |
|--------|-------------|
- | Cost per input tokens | Cost for serverless API deployment for 1 million input tokens |
- | Cost per output tokens | Cost for serverless API deployment for 1 million output tokens |
+ | Cost per input tokens | Cost for standard deployment for 1 million input tokens |
+ | Cost per output tokens | Cost for standard deployment for 1 million output tokens |
| Estimated cost | Cost for the sum of cost per input tokens and cost per output tokens, with a ratio of 3:1. |
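The 3:1 ratio in the estimated-cost row is easiest to read as a weighted blend of the two per-token prices. Here is a minimal sketch of that calculation, assuming the ratio means three input tokens for every output token; the prices used are placeholders, not real list prices.

```python
# Estimated cost per 1 million blended tokens, assuming a 3:1 input-to-output token ratio.
# The per-million-token prices are hypothetical placeholders for illustration.
input_price_per_million = 1.00   # "Cost per input tokens"
output_price_per_million = 3.00  # "Cost per output tokens"

input_share, output_share = 3 / 4, 1 / 4  # 3:1 ratio of input to output tokens
estimated_cost = input_share * input_price_per_million + output_share * output_price_per_million
print(f"Estimated cost per 1M tokens: ${estimated_cost:.2f}")  # $1.50
```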
- In this article, learn about content safety capabilities for models from the model catalog deployed using serverless APIs.
+ In this article, learn about content safety capabilities for models from the model catalog deployed using standard deployments.

## Content filter defaults

- Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via serverless APIs. To learn more about content filtering (preview), see [Understand harm categories](#understand-harm-categories).
+ Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters to detect harmful content across four categories including hate and fairness, self-harm, sexual, and violence for models deployed via standard deployments. To learn more about content filtering (preview), see [Understand harm categories](#understand-harm-categories).

The default content filtering configuration for text models is set to filter at the medium severity threshold, filtering any detected content at this level or higher. For image models, the default content filtering configuration is set at the low configuration threshold, filtering at this level or higher. For models deployed using the [Azure AI model inference service](../../ai-foundry/model-inference/how-to/configure-content-filters.md), you can create configurable filters by selecting the **Content filters** tab within the **Safety + security** page of the Azure AI Foundry portal.

> [!TIP]
- > Content filtering (preview) isn't available for certain model types that are deployed via serverless APIs. These model types include embedding models and time series models.
+ > Content filtering (preview) isn't available for certain model types that are deployed via standard deployments. These model types include embedding models and time series models.

Content filtering (preview) occurs synchronously as the service processes prompts to generate content. You might be billed separately according to [Azure AI Content Safety pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either:

- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page

- Suppose you decide to use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a serverless API. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via serverless APIs.
+ Suppose you decide to use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model that is deployed via a standard deployment. In such a situation, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). You run a higher risk of exposing users to harmful content if you don't use content filtering (preview) when working with models that are deployed via standard deployments.
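For deployments reached through an API where the built-in filtering doesn't apply, one pattern is to screen both the prompt you send and the completion you get back with the Content Safety SDK. The following is a minimal sketch, assuming the `azure-ai-contentsafety` Python package and placeholder endpoint, key, and model-call helper (see the quickstart linked above for the authoritative walkthrough).

```python
# Sketch: screen a prompt and a model completion with Azure AI Content Safety
# when a deployment's built-in content filtering isn't in effect.
# The endpoint, key, and send_to_model() helper are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return False if any harm category (hate, self-harm, sexual, violence) exceeds the threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

prompt = "Tell me about renewable energy."
if is_safe(prompt):
    completion = send_to_model(prompt)  # placeholder call to your deployed model
    if not is_safe(completion):
        completion = "The response was withheld by your content safety policy."
```

The severity threshold here (2, roughly "low") is an assumption for illustration; pick whatever cutoff matches the filtering policy you want to enforce.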