articles/ai-services/content-safety/how-to/containers/container-overview.md (3 additions & 3 deletions)
@@ -6,7 +6,7 @@ author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: overview
-ms.date: 09/23/2024
+ms.date: 03/26/2025
ms.author: pafarley
keywords: on-premises, Docker, container
---
@@ -17,7 +17,7 @@ Containers let you use a subset of the Azure AI Content Safety features in your
## Available containers

-The following table lists the content safety containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
+The following table lists the content safety containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
articles/ai-services/content-safety/how-to/embedded-content-safety.md (26 additions & 29 deletions)
@@ -6,97 +6,94 @@ author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: how-to
-ms.date: 9/24/2024
-ms.author: zhanxia
+ms.date: 03/26/2025
+ms.author: pafarley
---

# Embedded content safety (preview)

-Embedded content safety is designed for on-device scenarios where cloud connectivity is intermittent or prefer on-device for privacy reason. For example, you can use embedded content safety in a PC to detect harmful content generated by foundation model, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../../containers/disconnected-containers.md).
+Embedded content safety is designed for on-device scenarios where cloud connectivity is intermittent or the user prefers on-device processing for privacy reasons.
+
+You can use embedded content safety locally on a PC to detect harmful content generated by a large language model, or in a car that might travel out of network range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../../containers/disconnected-containers.md).

> [!IMPORTANT]
-> Microsoft limits access to embedded content safety. You can apply for access through the Azure AI content safety [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). For more information, see [Limited access](../limited-access.md).
+> Microsoft limits access to embedded content safety. You can apply for access through the Azure AI Content Safety [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). Instructions are provided upon successful completion of the limited access review process. For more information, see [Limited access](../limited-access.md).
## Platform requirements
-Embedded content safety is included with the content safety C++ SDK.
-
-**Choose your target environment**
+Embedded content safety is included with the Azure AI Content Safety C++ SDK.

-Embedded content safety only supports Windows right now. Contact your Microsoft account contact if you need to run embedded content safety on a different platform.
+### Choose your target environment

-# [Windows X64](#tab/windows-target)
+Embedded content safety only supports Windows. Contact your Microsoft account administrator if you need to run embedded content safety on a different platform.

Requires Windows 10 or newer on x64 hardware.

The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the content safety SDK.

----
## Limitations
-Embedded content safety is only available with C++ SDK. The other content safety SDKs, and REST APIs don't support embedded content safety.
+Embedded content safety is only available with the C++ SDK. The other Content Safety SDKs and REST APIs don't support embedded content safety.
## Embedded content safety SDK packages
-For C++ embedded applications, install following content safety SDK for C++ packages:
+For C++ embedded applications, install the following C++ packages:

|Package |Description |
| --------- | --------- |
|[Azure.AI.ContentSafety.Extension.Embedded.Text](https://www.nuget.org/packages/Azure.AI.ContentSafety.Extension.Embedded.Text)|Required to run text analysis on device|
|[Azure.AI.ContentSafety.Extension.Embedded.Image](https://www.nuget.org/packages/Azure.AI.ContentSafety.Extension.Embedded.Image)|Required to run image analysis on device|
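For example, you can pull both packages with the NuGet CLI. This is a sketch only; in Visual Studio, you would typically add the packages through the NuGet Package Manager UI instead, and the `packages` output folder is a placeholder path.

```console
nuget install Azure.AI.ContentSafety.Extension.Embedded.Text -OutputDirectory packages
nuget install Azure.AI.ContentSafety.Extension.Embedded.Image -OutputDirectory packages
```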
## Models
-For embedded content safety, you need to download the content safety to your device. Microsoft limits access to embedded content safety. You can apply for access through the [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). Instructions are provided upon successful completion of the limited access review process.
+For embedded content safety, you need to download the content safety models to your device.

-The embedded content safety supports [analyze text](../quickstart-text.md) and [analyze image](../quickstart-image.md) features. These features scan text or image content for sexual content, violence, hate, and self-harm with multiple severity levels. It should be noted that these embedded models have been optimized for on-device execution with less computational resources compared to the Azure API. Therefore, it's possible that the output generated from the embedded content safety model may vary from that of the Azure API.
+Embedded content safety supports the [Analyze text](../quickstart-text.md) and [Analyze image](../quickstart-image.md) features. These features scan text or image content for sexual content, violence, hate, and self-harm with multiple severity levels.
+
+These embedded models are optimized for on-device execution with fewer computational resources than the Azure API. Therefore, the output generated from the embedded content safety model may vary from that of the Azure API.
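The flow for on-device text analysis looks roughly like the following C++ sketch. Every type and member name here (`EmbeddedTextAnalyzer`, `AnalyzeText`, and so on) is a hypothetical stand-in, stubbed out only so the control flow is concrete and compilable; the real class and function names come from the published SDK samples.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-ins for the embedded SDK's real types (see the
// published samples for the actual API surface).
struct CategoryResult { std::string name; int severity; };
struct AnalyzeResult  { std::vector<CategoryResult> categories; };

struct EmbeddedTextAnalyzer {
    // Constructed from the folder where the downloaded embedded models live.
    explicit EmbeddedTextAnalyzer(const std::string& /*modelDir*/) {}
    AnalyzeResult AnalyzeText(const std::string& /*text*/) {
        // Stubbed result: the four harm categories with severity levels,
        // mirroring the cloud Analyze text API.
        return {{{"Hate", 0}, {"SelfHarm", 0}, {"Sexual", 0}, {"Violence", 0}}};
    }
};

int main() {
    // Point the analyzer at the downloaded model folder (placeholder path).
    EmbeddedTextAnalyzer analyzer("C:\\contentsafety\\models");

    AnalyzeResult result = analyzer.AnalyzeText("Text to screen for harmful content.");
    for (const auto& c : result.categories) {
        std::cout << c.name << ": severity " << c.severity << "\n";
    }
    return 0;
}
```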
-## Embedded content safety code samples
-Below is the ready to use embedded content safety samples. Follow the readme file to run the sample.
-Embedded content safety models run fully on your target devices. Understanding the performance characteristics of these models on your devices’ hardware can be critical to delivering low latency experiences within your products and applications. This section provides information to help answer the question, "Is my device suitable to run embedded content safety for text analysis or image analysis?"
+Embedded content safety models run fully on your target devices. Understanding the performance characteristics of these models on your devices' hardware can be critical to delivering low latency experiences within your products and applications. This section provides information to help determine if your device is suitable to run embedded content safety for text analysis or image analysis.
### Factors that affect performance
-Device specifications – The specifications of your device play a key role in whether embedded content safety models can run without performance issues. CPU clock speed, architecture (for example, x64, ARM processor, etcetera), and memory can all affect model inference speed.
-CPU/GPU load – In most cases, your device is running other applications in parallel to the application where embedded content safety models are integrated. The amount of CPU/GPU load your device experiences when idle and at peak can also affect performance.
+**Device specifications** – The specifications of your device play a key role in whether embedded content safety models can run without performance issues. CPU clock speed, architecture (for example, x64 or ARM), and memory can all affect model inference speed.
+
+**CPU/GPU load** – In most cases, your device runs other applications in parallel with the application where embedded content safety models are integrated. The amount of CPU/GPU load your device experiences when idle and at peak can also affect performance.
For example, if the device is under moderate to high CPU load from all other applications running on the device, it's possible to encounter performance issues for running embedded content safety in addition to the other applications, even with a powerful processor.
-Memory load – An embedded content safety text analysis process consumes about 900 MB of memory at runtime. If your device has less memory available for the embedded content safety process to use, frequent fallbacks to virtual memory and paging can introduce more latencies. This can affect both the real-time factor and user-perceived latency.
+**Memory load** – An embedded content safety text analysis process consumes about 900 MB of memory at runtime. If your device has less memory available for the embedded content safety process, frequent fallbacks to virtual memory and paging can introduce more latency. This can affect both the real-time factor and user-perceived latency.
-### SDK parameters that can impact performance
+### SDK parameters that can affect performance
The following SDK parameters can affect the inference time of the embedded content safety model.
- `gpuEnabled`: Set to **true** to enable GPU; otherwise, the CPU is used. Generally, inference time is shorter on a GPU.
- `numThreads`: This parameter only applies to the CPU. It defines the number of threads to use in a multi-threaded environment. A maximum of four threads is supported (see the sketch after this list).
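A short sketch of how these two knobs might be set. The `AnalyzerOptions` struct is a hypothetical stand-in; only the parameter names `gpuEnabled` and `numThreads` come from the documentation above.

```cpp
#include <algorithm>
#include <thread>

// Hypothetical options holder; only the two parameter names are real.
struct AnalyzerOptions {
    bool gpuEnabled = false; // true: run inference on the GPU (usually faster)
    int  numThreads = 4;     // CPU-only setting; the SDK supports at most four
};

int main() {
    AnalyzerOptions options;
    options.gpuEnabled = false; // fall back to CPU when no suitable GPU exists
    // Use up to the SDK's four-thread maximum, but no more than the machine has.
    options.numThreads = std::min(4u, std::thread::hardware_concurrency());
    return 0;
}
```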
-See next section for performance benchmark data on popular PC CPUs and GPUs.
### Performance benchmark data on popular CPUs and GPUs
-As stated above, there are multiple factors that impact the performance of embedded content safety model. We highly suggest you test it on your device and tweak the parameters to fit for your application's requirement.
+As stated above, multiple factors affect the performance of an embedded content safety model. We highly recommend that you test it on your device and tweak the parameters to fit your application's requirements.
We also conduct performance benchmark tests on various popular PC CPUs and GPUs. Keep in mind that even with the same CPU, performance can vary depending on the CPU and memory load. The benchmark data provided should serve as a reference when considering if the embedded content safety can operate on your device. For optimal results, we advise testing on your intended device and in your specific application scenario.
-The [sample code](https://github.com/Azure/azure-ai-content-safety-sdk) includes code snippet to monitor performance metrics like memory, inference time.
+The [sample code](#code-samples) includes code snippets to monitor performance metrics like memory usage and inference time.
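For instance, wall-clock inference time can be captured with `std::chrono` around the analysis call. This is a minimal sketch; the commented-out `AnalyzeText` call is a placeholder for whichever SDK call you measure.

```cpp
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;

    auto start = clock::now();
    // AnalyzeText(...);  // placeholder for the embedded SDK call being measured
    auto end = clock::now();

    // Report elapsed wall-clock time in milliseconds.
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << "Inference time: " << ms.count() << " ms\n";
    return 0;
}
```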
#### [Text analysis performance](#tab/text)
@@ -166,6 +163,6 @@ The [sample code](https://github.com/Azure/azure-ai-content-safety-sdk) includes
---

-## Related Content
+## Related content
- [Limited access to Content Safety](../limited-access.md)
articles/ai-services/content-safety/how-to/improve-performance.md (4 additions & 4 deletions)
@@ -6,16 +6,16 @@ author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: how-to
-ms.date: 09/18/2024
+ms.date: 03/26/2025
ms.author: pafarley
#customer intent: As a user, I want to improve the performance of Azure AI Content Safety so that I can ensure accurate content moderation.
---
# Mitigate false results in Azure AI Content Safety
-This guide provides a step-by-step process for handling false positives and false negatives from Azure AI Content Safety models.
+This guide shows you how to handle false positives and false negatives from Azure AI Content Safety models.

-False positives are when the system incorrectly flags non-harmful content as harmful; false negatives are when harmful content is not flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.
+False positives occur when the system incorrectly flags non-harmful content as harmful; false negatives occur when harmful content is not flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.

## Prerequisites
@@ -24,7 +24,7 @@ False positives are when the system incorrectly flags non-harmful content as har
## Review and verification
-Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
+Conduct an initial assessment to confirm that you really have a false positive or false negative. This can involve:

- Checking the context of the flagged content.
- Comparing the flagged content against the content safety risk categories and severity definitions:
  - If you're using content safety in Azure OpenAI, see the [Azure OpenAI content filtering doc](/azure/ai-services/openai/concepts/content-filter).
articles/ai-services/content-safety/includes/code-indexer.md (1 addition & 1 deletion)
@@ -11,4 +11,4 @@ ms.author: pafarley
> [!CAUTION]
-> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+> The Content Safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
The Multimodal API analyzes materials containing both image content and text content to help make applications and services safer from harmful user-generated or AI-generated content. Analyzing an image and its associated text content together can preserve context and provide a more comprehensive understanding of the content.
-For more information on the way content is filtered, see the [Harm categories concept page](./concepts/harm-categories.md#multimodal-image-with-text-content). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
+For more information on how content is filtered, see the [Harm categories concept page](./concepts/harm-categories.md#multimodal-image-with-text-content). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
> [!IMPORTANT]
> This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability).
@@ -45,7 +45,7 @@ You can input your image by one of two methods: **local filestream** or **blob s
- **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding, or a local command like the sketch after this list. Then save the encoded string to a temporary location.
- **Blob storage URL**: Upload your image to an Azure Blob Storage account. Follow the [blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) to learn how to do this. Then open Azure Storage Explorer and get the URL to your image. Save it to a temporary location.
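If you prefer a local command over a website, the encoding can be done in PowerShell. This is a sketch; `image.jpg` and `encoded.txt` are placeholder paths.

```powershell
# Base64-encode the image bytes and save the string for use in the request body
[Convert]::ToBase64String([IO.File]::ReadAllBytes("image.jpg")) | Set-Content encoded.txt
```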
-### Analyze image with text
+### Analyze content
Paste the command below into a text editor, and make the following changes.
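The command has the following general shape. This is a hedged sketch, not the article's verbatim command: the route, `api-version` value, and body field names are assumptions to check against the current API reference, and `<endpoint>`, `<your_subscription_key>`, and `<base64_image_data>` are placeholders for your own values.

```bash
curl --location --request POST '<endpoint>/contentsafety/imageWithText:analyze?api-version=2024-09-15-preview' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data '{
  "text": "Text that accompanies the image",
  "image": { "content": "<base64_image_data>" },
  "enableOcr": true,
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]
}'
```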
@@ -98,7 +98,8 @@ The parameters in the request body are defined in this table:
Open a command prompt window and run the cURL command.
-### Output
+### Interpret the API response

You should see the image and text moderation results displayed as JSON data in the console. For example:
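A response of roughly the following shape is typical. This is an illustrative sketch only; the severity values are placeholders and the exact field names should be verified against the API reference.

```json
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 2 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}
```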
articles/ai-services/content-safety/quickstart-protected-material-code.md (2 additions & 2 deletions)
@@ -7,13 +7,13 @@ author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.topic: quickstart
-ms.date: 09/22/2024
+ms.date: 03/26/2025
ms.author: pafarley
---
# Quickstart: Protected material detection for code (preview)
-The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows code generation models to be used confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
+The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows you to use code generation models confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
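As an orientation, a detection request looks roughly like this. The sketch below is hedged: the route and `api-version` are assumptions to confirm in the API reference, and `<endpoint>`, the key, and the `code` payload are placeholders for your own values.

```bash
curl --location --request POST '<endpoint>/contentsafety/text:detectProtectedMaterialForCode?api-version=2024-09-15-preview' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data '{
  "code": "<code snippet produced by your generation model>"
}'
```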