Commit 2d85601

Content safety for serverless API page
Created a new overview page for serverless API
---
title: Content Safety for serverless APIs
titleSuffix: Azure AI Foundry
description: Learn about content safety for models deployed using serverless APIs, using Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-foundry
ms.topic: how-to
ms.date: 01/21/2025
ms.author: mopeakande
author: msakande
ms.reviewer: ositanachi
reviewer: ositanachi
ms.custom:
---

# Content Safety for serverless APIs
In this article, learn about content safety capabilities for models from the model catalog deployed using serverless APIs.
## Content filter defaults
Azure AI uses a default configuration of [Azure AI Content Safety](/azure/ai-services/content-safety/overview) content filters that detect harmful content across four categories: hate, self-harm, sexual, and violence, for models deployed via serverless APIs. To learn more about content filtering (preview), see [Harm categories in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/harm-categories).
The default content filtering configuration for text models filters at the medium severity threshold: any content detected at medium severity or higher is filtered. For image models, the default configuration filters at the low severity threshold, filtering content at that level or higher. For models deployed using the [Azure AI model inference service](), you can create configurable filters by selecting the **Content filters** tab on the **Safety + security** page.
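The threshold behavior described above can be sketched as simple severity gating. This is an illustrative sketch, not the service's implementation; the severity values are an assumption based on the default four-level Azure AI Content Safety scale (0 = safe, 2 = low, 4 = medium, 6 = high).

```python
# Illustrative sketch of severity-threshold gating, NOT the service's code.
# Severity values assume the default four-level Content Safety scale.
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def is_filtered(detected_severity: int, threshold: str) -> bool:
    """Return True if content at detected_severity is filtered
    under the given threshold ('low', 'medium', or 'high')."""
    return detected_severity >= SEVERITY[threshold]

# Text models default to the medium threshold:
print(is_filtered(SEVERITY["medium"], "medium"))  # medium content is filtered
print(is_filtered(SEVERITY["low"], "medium"))     # low-severity content passes
# Image models default to the low threshold:
print(is_filtered(SEVERITY["low"], "low"))        # low content is filtered
```

Under this gating, a lower threshold filters more content, which is why the image default (low) is stricter than the text default (medium).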
> [!TIP]
> Content filtering (preview) isn't available for certain model types that are deployed via serverless APIs, such as embedding models and time series models.
Content filtering (preview) occurs synchronously as the service processes prompts to generate content. You might be billed separately according to [Azure AI Content Safety pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either:
- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page
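When filtering is enabled and a prompt is blocked, your client code needs to recognize the block. The payload shape below follows the Azure OpenAI-style `content_filter` error code; treating serverless API endpoints as returning the same shape is an assumption, so verify against your deployment's actual error responses.

```python
# Hedged sketch: detecting a content-filter block in an error response.
# The "content_filter" error code follows the Azure OpenAI-style payload;
# assume, and verify, that your serverless endpoint uses the same shape.
import json

def is_content_filter_error(response_body: str) -> bool:
    """Return True when an error payload indicates the request was
    blocked by content filtering."""
    try:
        error = json.loads(response_body).get("error", {})
    except json.JSONDecodeError:
        return False
    return error.get("code") == "content_filter"

blocked = json.dumps(
    {"error": {"code": "content_filter", "message": "The prompt was filtered."}}
)
print(is_content_filter_error(blocked))  # True
```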
If you use an API other than the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api) to work with a model deployed via a serverless API, content filtering (preview) isn't enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see [Quickstart: Analyze text content](/azure/ai-services/content-safety/quickstart-text). If you don't use content filtering (preview) when working with models deployed via serverless APIs, you run a higher risk of exposing users to harmful content.
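To screen content yourself, the quickstart's `text:analyze` REST operation is the starting point. The sketch below only constructs the request rather than sending it; the endpoint and key are placeholders, and the `api-version` shown is an assumption that may lag behind the latest release.

```python
# Sketch of the Analyze Text REST call from the Content Safety quickstart.
# The request is built but not sent; endpoint/key are placeholders and the
# api-version is an assumption -- check the current API reference.
def build_analyze_text_request(endpoint: str, key: str, text: str) -> dict:
    return {
        "method": "POST",
        "url": f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # Content Safety resource key
            "Content-Type": "application/json",
        },
        "json": {"text": text},  # the text to screen
    }

request = build_analyze_text_request(
    "https://<your-resource>.cognitiveservices.azure.com", "<key>", "Sample text"
)
print(request["url"])
```

The response reports a severity per harm category, which you can compare against your own thresholds before passing content to the model.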
## How charges are calculated
For pricing details, see [Azure AI Content Safety pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/). Charges are incurred when Azure AI Content Safety validates a prompt or completion. If Azure AI Content Safety blocks a prompt or completion, you're charged for both the evaluation of the content and the inference calls.
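As a back-of-the-envelope sketch of the point above, both the prompt and the completion are evaluated and billed, even when the content is blocked. The per-record price below is hypothetical and the record size is an assumption; take actual rates and record definitions from the pricing page.

```python
# Back-of-the-envelope sketch of Content Safety charges for one filtered call.
# RECORD_CHARS and UNIT_PRICE are ASSUMPTIONS for illustration only;
# real rates and record definitions come from the pricing page.
import math

RECORD_CHARS = 1_000   # assumed characters per billed text record
UNIT_PRICE = 0.0005    # hypothetical price per text record (USD)

def text_records(text: str) -> int:
    """Number of billed text records for a piece of text (minimum 1)."""
    return max(1, math.ceil(len(text) / RECORD_CHARS))

def filtering_charge(prompt: str, completion: str) -> float:
    # Both prompt and completion are evaluated, so both are billed,
    # even when the content ends up blocked.
    return (text_records(prompt) + text_records(completion)) * UNIT_PRICE
```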
