articles/ai-services/content-safety/faq.yml (+16 -14)
@@ -8,7 +8,7 @@ metadata:
 ms.service: azure-ai-content-safety
 ms.topic: faq
-ms.date: 09/04/2024
+ms.date: 02/21/2025
 ms.author: pafarley
 ms.custom:
 title: Azure AI Content Safety Frequently Asked Questions
@@ -21,18 +21,20 @@ sections:
 - name: General information
   questions:
   - question: |
-      How do we get started with Content Safety? First steps?
+      How do I get started with Content Safety? What are the first steps?
    answer: |
-      For references on how to start submitting text and images to Azure AI Content Safety, and view model responses and results, visit the [Azure AI Foundry portal](https://ai.azure.com/explore/contentsafety) page. You can use the sidebar to navigate through the Safety+ Security page.
+      For references on how to start submitting text and images to Azure AI Content Safety, and view model responses and results, visit the [Azure AI Foundry portal](https://ai.azure.com/explore/contentsafety) page. You can use the sidebar to navigate to the Safety+Security page.
+
   - question: |
       What types of media can Azure AI Content Safety classify/moderate?
    answer: |
-      Our content harm classification models currently support the moderation of text, images, and multimodal (images with text + OCR). The protected material, prompt shields, and groundedness detection models work with text content.
+      Our content harm classification models currently support the moderation of text, images, and multimodal content (images with text + OCR). The protected material, prompt shields, and groundedness detection models work with text content only.
 
  - question: |
      How are Azure AI Content Safety's models priced?
   answer: |
      A: We generally charge by volume. For example, the Image API is priced based on the number of images submitted. The Text API is billed for the number of text records submitted to the service. However, each model has its own corresponding rate. See the Azure [pricing page](https://aka.ms/content-safety-pricing) for more information about pricing tiers.
+
  - question: |
      Why should I migrate from Azure Content Moderator to Azure AI Content Safety?
   answer: |
@@ -51,19 +53,19 @@ sections:
  - question: |
      Does Azure AI Content Safety remove content or ban users from the platform?
   answer: |
-      No. The Azure AI Content Safety API returns classification metadata based on model outputs. Our results tell users whether material across various classes (sexual, violence, hate, self-harm) is present in input content, via either a returned severity level (such as in the Text API) or binary results (such as in Prompt Shields API).
+      No. The Azure AI Content Safety API returns classification metadata based on model outputs. Our results tell users whether material across various classes (sexual, violence, hate, self-harm) is present in input content, through either a returned severity level (such as in the Text API) or a binary result (such as in the Prompt Shields API).
 
-      As a user, you use those results to inform appropriate enforcement actions - such as automatically tagging or removing certain content - based on your own policies and practices.
+      As a user, you use those results to inform appropriate enforcement actions, such as automatically tagging or removing certain content, based on your own policies and practices.
  - question: |
      What happens if I exceed the transaction limit on my free tier for Azure AI Content Safety?
   answer: |
      Service usage is throttled if you reach the transaction limit on the Free tier.
  - question: |
-      Should we submit our content to the Content Safety API synchronously or asynchronously?
+      Should I submit my content to the Content Safety API synchronously or asynchronously?
   answer: |
-      The Content Safety API is optimized for real-time moderation needs. Our model results are returned directly in the API response message.
+      The Content Safety API is optimized for real-time (synchronous) moderation needs. Our model results are returned directly in the API response message.
  - question: |
-      What is the current RPS (Requests Per Second) limit for each API? If I want to increase the RPS, what steps should I take?
+      What is the current RPS (Requests Per Second) limit for each API? If I want to increase the RPS, what steps should I take?
   answer: |
      Refer to the [overview](./overview.md) for the current RPS limits for each API. To request an increase in RPS, [email us](mailto:[email protected]) with justification and an estimated traffic forecast.
  - question: |
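The synchronous flow these answers describe, with one request and results returned directly in the response, can be sketched briefly. This is a hedged illustration, not official sample code: the `/contentsafety/text:analyze` route, `api-version` value, and `categoriesAnalysis` response field are drawn from the public REST reference and should be verified against current documentation.

```python
API_VERSION = "2024-09-01"  # current GA version per the deprecation notice in this repo

def build_analyze_request(endpoint: str, text: str) -> tuple[str, dict]:
    # URL and JSON body for a single synchronous text:analyze call.
    # The real call is a POST carrying an Ocp-Apim-Subscription-Key header;
    # moderation results come back directly in the response body.
    url = f"{endpoint}/contentsafety/text:analyze?api-version={API_VERSION}"
    return url, {"text": text}

def max_severity(response_json: dict) -> int:
    # Highest severity across the returned harm categories
    # (sexual, violence, hate, self-harm).
    analyses = response_json.get("categoriesAnalysis", [])
    return max((c.get("severity", 0) for c in analyses), default=0)

# Illustrative response in the documented shape (values are made up):
sample = {"categoriesAnalysis": [{"category": "Hate", "severity": 0},
                                 {"category": "Violence", "severity": 2}]}
url, body = build_analyze_request("https://<resource>.cognitiveservices.azure.com", "user text")
print(max_severity(sample))  # -> 2
```

The build/parse split keeps the request shape testable without a live key; swap in `requests.post(url, json=body, headers=...)` for the actual call.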
@@ -77,7 +79,7 @@ sections:
  - question: |
      How is data retained and what customer controls are available?
   answer: |
-      No input texts or images are stored by the model during detection (except for customer-supplied blocklists, and user inputs are not used to train, retrain, or improve the Azure AI Content Safety models.
+      No input texts or images are stored by the model during detection (except for customer-supplied blocklists), and user inputs are not used to train, retrain, or improve the Azure AI Content Safety models.
 
      To learn more about Microsoft's privacy and security commitments visit [Data, privacy, and security for Azure AI Content Safety](/legal/cognitive-services/content-safety/data-privacy?context=%2Fazure%2Fai-services%2Fcontent-safety%2Fcontext%2Fcontext).
  - question: |
@@ -89,9 +91,9 @@ sections:
   answer: |
      Content Safety APIs don't support batch processing. Currently, Content Safety APIs achieve high concurrency in processing with generous default rate limits of 1000 per minute, depending on the model. While this is sufficient for most users, we're happy to increase rate limits for users that process or are looking to process higher volumes.
  - question: |
-      Can we access multiple models with a single API call?
+      Can I access multiple models with a single API call?
   answer: |
-      Currently, we can combine several classifications into a single API endpoint, enabling users to access both model outputs with a single task, but this API is in private preview, [email us](mailto:[email protected]) for allowlisting your subscription ID.
+      We have developed an API to combine several classifications into a single API endpoint, enabling users to access both model outputs with a single task, but this API is in private preview. [Email us](mailto:[email protected]) to apply for access.
 - name: Dissatisfied cases
   questions:
   - question: |
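Because there is no batch endpoint, higher volumes are typically handled by fanning out single-item calls client-side. A minimal sketch of that pattern; the `analyze` callable is a stand-in for whatever single-item request you make, and the worker count should be sized to stay under your per-minute rate limit:

```python
from concurrent.futures import ThreadPoolExecutor

def moderate_all(items, analyze, max_workers=8):
    # Run one `analyze` call per item on a bounded thread pool;
    # pool.map preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(analyze, items))

# Stand-in for a real per-item API call:
flagged = moderate_all(["ok text", "bad text"], lambda t: "bad" in t)
print(flagged)  # -> [False, True]
```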
@@ -113,7 +115,7 @@ sections:
 - name: Text moderation API
   questions:
   - question: |
-      Can we increase the character limit for text moderation?
+      Can I increase the character limit for text moderation?
   answer: |
      No. Currently, text moderation tasks are limited to 10k-character submissions. You can, however, split longer text content into segments (for example, based on punctuation or spacing) and submit each segment as related tasks to the Content Safety API.
 - name: Custom categories
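The splitting suggestion in that answer can be sketched as a small helper. The 10k figure comes from the answer itself; breaking at whitespace is just one reasonable policy, not a documented requirement:

```python
def split_text(text: str, limit: int = 10_000) -> list[str]:
    # Split into segments of at most `limit` characters, preferring to
    # break at whitespace so words are not cut in half.
    segments = []
    while len(text) > limit:
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:          # no usable space in the window: hard cut
            cut = limit
        segments.append(text[:cut])
        text = text[cut:].lstrip(" ")
    if text:
        segments.append(text)
    return segments

print(split_text("one two three four", limit=7))  # -> ['one', 'two', 'three', 'four']
```

Each segment can then be submitted as its own text-moderation task.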
@@ -141,5 +143,5 @@ sections:
  - question: |
      I'm running into the limitation of three custom categories per service deployment. Are there any plans or options to increase that quota?
   answer: |
-      Yes, users can request an increase in their custom categories quota for specific service deployments. Simply [email us](mailto:[email protected]) with the desired quota amount, and our team will review your request to accommodate your needs where possible.
+      Yes, users can request an increase in their custom categories quota for specific service deployments. [Email us](mailto:[email protected]) with the desired quota amount, and our team will review your request to accommodate your needs where possible.
articles/ai-services/content-safety/how-to/containers/image-container.md (+3 -3)
@@ -6,22 +6,22 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: how-to
-ms.date: 9/11/2024
+ms.date: 02/21/2025
 ms.author: pafarley
 keywords: on-premises, Docker, container
 ---
 
 # Analyze image content with docker containers (preview)
 
-The analyze image container scans images for sexual content, violence, hate, and self-harm with multi-severity levels. This guide shows how to download, install, and run a content safety image container.
+The Analyze image container scans images for sexual content, violence, hate, and self-harm with multi-severity levels. This guide shows how to download, install, and run a content safety image container.
 
 For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run content safety containers with Docker](./install-run-container.md).
 
 ## Specify a container image
 
 The content safety analyze image container image for all supported versions can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/contentsafety/image-analyze/tags) syndicate. It resides in the `azure-cognitive-services/contentsafety` repository and is named `image-analyze`.
 
-:::image type="content" source="../../media/image-container.png" alt-text="Screenshot of image container on registry website.":::
+:::image type="content" source="../../media/image-container.png" lightbox="../../media/image-container.png" alt-text="Screenshot of image container on registry website.":::
 
 The fully qualified container name is, `mcr.microsoft.com/en-us/product/azure-cognitive-services/contentsafety/image-analyze`. Append a specific container version, or append `:latest` to get the most recent version. For example:
articles/ai-services/content-safety/how-to/containers/install-run-container.md (+6 -7)
@@ -6,22 +6,21 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: how-to
-ms.date: 9/11/2024
+ms.date: 02/21/2025
 ms.author: pafarley
 keywords: on-premises, Docker, container
 ---
 
 # Install and run content safety containers with Docker (preview)
 
-By using containers, you can use a subset of the content safety service features in your own environment. In this article, you learn how to download, install, and run a content safety container.
+By using containers, you can use a subset of the Azure AI Content Safety features in your own environment. In this article, you learn how to download, install, and run a content safety container.
 
 > [!NOTE]
 > Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [content safety service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/).
 
 ## Prerequisites
 
 You must meet the following prerequisites before you use content safety containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:
-
 * [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
 * On Windows, Docker must also be configured to support Linux containers.
 * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
@@ -127,15 +126,15 @@ The example request URLs listed here are `http://localhost:5000`, but your speci
 
 ## Stop the container
 
-To shut down the container, in the command-line environment where the container is running, select <kbd>Ctrl+C</kbd>.
+To shut down the container, enter <kbd>Ctrl+C</kbd> in the command-line environment where the container is running.
 
 ## Run multiple containers on the same host
 
 If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
 
-You can have this container and a different Azure AI container running on the HOST together. You also can have multiple containers of the same Azure AI container running.
+You can have this container and a different Azure AI container running on the host together. You can also have multiple instances of the same Azure AI container running.
 
-## Host URLs
+### Host URLs
 
 > [!NOTE]
 > Use a unique port number if you're running multiple containers.
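When mapping one unique host port per container as described above, it can help to confirm the candidate ports are actually free first. This is an illustrative helper, not part of the documented setup, and a port can still be claimed between the check and container start:

```python
import socket

def pick_free_ports(count: int, start: int = 5000) -> list[int]:
    # Walk upward from `start`, keeping ports that can currently be bound;
    # use the results as distinct host ports for the port mappings
    # (e.g. first container on 5000, second on 5001).
    ports, candidate = [], start
    while len(ports) < count:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", candidate))
                ports.append(candidate)
            except OSError:
                pass  # already in use; try the next port
        candidate += 1
    return ports
```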
 For more information about logging, see [usage records](../../../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation.
 
-## Microsoft diagnostics container
+### Microsoft diagnostics container
 
 If you're having trouble running an Azure AI container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Azure AI containers from functioning as expected.
articles/ai-services/content-safety/how-to/containers/text-container.md (+2 -2)
@@ -6,7 +6,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: how-to
-ms.date: 9/11/2024
+ms.date: 02/21/2025
 ms.author: pafarley
 keywords: on-premises, Docker, container
 ---
@@ -21,7 +21,7 @@ For more information about prerequisites, validating that a container is running
 
 The content safety analyze text container image for all supported versions can be found on the [Microsoft Container Registry (MCR)](https://aka.ms/aacscontainermcr) syndicate. It resides within the `azure-cognitive-services/contentsafety` repository and is named `text-analyze`.
 
-:::image type="content" source="../../media/text-container.png" alt-text="Screenshot of text container on registry website.":::
+:::image type="content" source="../../media/text-container.png" lightbox="../../media/text-container.png" alt-text="Screenshot of text container on registry website.":::
 
 The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze`. Append a specific container version, or append `:latest` to get the most recent version. For example:
-### Use Microsoft Entra ID or Managed Identity to manage access
+### Microsoft Entra ID or Managed Identity
 
 For enhanced security, you can use Microsoft Entra ID or Managed Identity (MI) to manage access to your resources.
 * Managed Identity is automatically enabled when you create a Content Safety resource.
 * Microsoft Entra ID is supported in both API and SDK scenarios. Refer to the general AI services guideline of [Authenticating with Microsoft Entra ID](/azure/ai-services/authentication?tabs=powershell#authenticate-with-azure-active-directory). You can also grant access to other users within your organization by assigning them the roles of **Cognitive Services Users** and **Reader**. To learn more about granting user access to Azure resources using the Azure portal, refer to the [Role-based access control guide](/azure/role-based-access-control/quickstart-assign-role-user-portal).
 
 ### Encryption of data at rest
 
-Learn how Azure AI Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring Your Own Key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+### Encryption
+
+Content Safety encryption protects your data at rest. It encrypts your data as it is written in our datacenters and automatically decrypts it when you access it.
+
+By default, data in Content Safety is encrypted using Microsoft Managed Keys (MMK). However, Content Safety supports both Microsoft Managed Keys (MMK) and Customer Managed Keys (CMK), also known as Bring Your Own Key (BYOK), for encryption. CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+When CMK is selected, users have the ability to select either user-assigned managed identities (UMI) or system-assigned managed identities (SMI).
+
+Learn how Azure AI Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md).
 
 ## Pricing
 
-Currently, Azure AI Content Safety has an **F0** and **S0** pricing tier. See the Azure [pricing page](https://aka.ms/content-safety-pricing) for more information.
+Azure AI Content Safety has **F0** and **S0** pricing tiers. See the Azure [pricing page](https://aka.ms/content-safety-pricing) for more information.
 
 ## Service limits
 
-> [!CAUTION]
+> [!IMPORTANT]
 > **Deprecation Notice**
 >
 > As part of Content Safety versioning and lifecycle management, we are announcing the deprecation of certain Public Preview and GA versions of our service APIs. Following our deprecation policy:

@@ -189,11 +197,7 @@ Content Safety features have query rate limits in requests-per-second (RPS) or r
 
 If you need a faster rate, please [contact us](mailto:[email protected]) to request it.
 
-### Encryption
-
-Content Safety encryption protects your data at rest. It encrypts your data as it is written in our datacenters and automatically decrypts it when you access it.
 
-By default, data in Content Safety is encrypted using Microsoft Managed Keys (MMK). However, Content Safety supports both Microsoft Managed Keys (MMK) and Customer Managed Keys (CMK) for encryption. When CMK is selected, users can choose CMK and have the ability to select either user-assigned managed identities (UMI) or system-assigned managed identities (SMI).
articles/ai-services/content-safety/whats-new.md (+2 -2)
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: azure-ai-content-safety
 ms.custom: build-2023
 ms.topic: overview
-ms.date: 09/04/2024
+ms.date: 02/21/2025
 ms.author: pafarley
 ---
 
@@ -21,7 +21,7 @@ Learn what's new in the service. These items might be release notes, videos, blo
 ### Upcoming deprecations
 
 To align with Content Safety versioning and lifecycle management policies, the following versions are scheduled for deprecation:
-* **Effective March 1st, 2025**: All versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.
+* **Effective March 1st, 2025**: All API versions except `2024-09-01`, `2024-09-15-preview`, and `2024-09-30-preview` will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.