
Commit ef542bc

consaf freshness
1 parent 8e17566 commit ef542bc


7 files changed, +42 -44 lines changed


articles/ai-services/content-safety/how-to/containers/container-overview.md

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: overview
-ms.date: 09/23/2024
+ms.date: 03/26/2025
 ms.author: pafarley
 keywords: on-premises, Docker, container
 ---
@@ -17,7 +17,7 @@ Containers let you use a subset of the Azure AI Content Safety features in your
 
 ## Available containers
 
-The following table lists the content safety containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
+The following table lists the content safety containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
 
 | Container | Features |
 |--------------------------------------|----------|
@@ -37,7 +37,7 @@ The request form takes information about you, your company, and the user scenari
 
 After you submit the form, the Azure AI services team reviews it and emails you with a decision within 10 business days.
 
-## Billing
+## Billing information
 
 The content safety containers send billing information to Azure through the content safety resource in your Azure account.
 

articles/ai-services/content-safety/how-to/embedded-content-safety.md

Lines changed: 26 additions & 29 deletions
@@ -6,97 +6,94 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: how-to
-ms.date: 9/24/2024
-ms.author: zhanxia
+ms.date: 03/26/2025
+ms.author: pafarley
 ---
 
 # Embedded content safety (preview)
 
-Embedded content safety is designed for on-device scenarios where cloud connectivity is intermittent or prefer on-device for privacy reason. For example, you can use embedded content safety in a PC to detect harmful content generated by foundation model, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../../containers/disconnected-containers.md).
+Embedded content safety is designed for on-device scenarios where cloud connectivity is intermittent or the user prefers on-device access for privacy reasons.
+
+You can use embedded content safety locally on a PC to detect harmful content generated by a large language model, or in a car that might travel out of a specified range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../../containers/disconnected-containers.md).
 
 > [!IMPORTANT]
-> Microsoft limits access to embedded content safety. You can apply for access through the Azure AI content safety [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). For more information, see [Limited access](../limited-access.md).
+> Microsoft limits access to embedded content safety. You can apply for access through the Azure AI content safety [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). Instructions are provided upon successful completion of the limited access review process. For more information, see [Limited access](../limited-access.md).
 
 ## Platform requirements
 
-Embedded content safety is included with the content safety C++ SDK.
-
-**Choose your target environment**
+Embedded content safety is included with the Azure AI Content Safety C++ SDK.
 
-Embedded content safety only supports Windows right now. Contact your Microsoft account contact if you need to run embedded content safety on a different platform.
+### Choose your target environment
 
-# [Windows X64](#tab/windows-target)
+Embedded content safety only supports Windows. Contact your Microsoft account administrator if you need to run embedded content safety on a different platform.
 
 Requires Windows 10 or newer on x64 hardware.
 
 The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the content safety SDK.
 
----
 
 ## Limitations
 
-Embedded content safety is only available with C++ SDK. The other content safety SDKs, and REST APIs don't support embedded content safety.
+Embedded content safety is only available with the C++ SDK. The other Content Safety SDKs and REST APIs don't support embedded content safety.
 
 
 ## Embedded content safety SDK packages
 
 
-For C++ embedded applications, install following content safety SDK for C++ packages:
+For C++ embedded applications, install the following C++ packages:
 
 |Package |Description |
 | --------- | --------- |
 |[Azure.AI.ContentSafety.Extension.Embedded.Text](https://www.nuget.org/packages/Azure.AI.ContentSafety.Extension.Embedded.Text)|Required to run text analysis on device|
 |[Azure.AI.ContentSafety.Extension.Embedded.Image](https://www.nuget.org/packages/Azure.AI.ContentSafety.Extension.Embedded.Image)|Required to run image analysis on device|
 
 
-
-
 ## Models
 
-For embedded content safety, you need to download the content safety to your device. Microsoft limits access to embedded content safety. You can apply for access through the [embedded content safety limited access review](https://aka.ms/aacs-embedded-application). Instructions are provided upon successful completion of the limited access review process.
+For embedded content safety, you need to download the content safety models to your device.
 
-The embedded content safety supports [analyze text](../quickstart-text.md) and [analyze image](../quickstart-image.md) features. These features scan text or image content for sexual content, violence, hate, and self-harm with multiple severity levels. It should be noted that these embedded models have been optimized for on-device execution with less computational resources compared to the Azure API. Therefore, it's possible that the output generated from the embedded content safety model may vary from that of the Azure API.
+The embedded content safety supports [Analyze text](../quickstart-text.md) and [Analyze image](../quickstart-image.md) features. These features scan text or image content for sexual content, violence, hate, and self-harm with multiple severity levels.
 
+These embedded models have been optimized for on-device execution with less computational resources compared to the Azure API. Therefore, it's possible that the output generated from the embedded content safety model may vary from that of the Azure API.
 
-## Embedded content safety code samples
 
-Below is the ready to use embedded content safety samples. Follow the readme file to run the sample.
+## Code samples
 
-1. [C++ sample](https://github.com/Azure/azure-ai-content-safety-sdk)
+Below is the ready-to-use embedded content safety sample. Follow the readme file to run the sample.
 
+- [C++ sample](https://github.com/Azure/azure-ai-content-safety-sdk)
 
 
 ## Performance evaluations
 
-Embedded content safety models run fully on your target devices. Understanding the performance characteristics of these models on your devices hardware can be critical to delivering low latency experiences within your products and applications. This section provides information to help answer the question, "Is my device suitable to run embedded content safety for text analysis or image analysis?"
+Embedded content safety models run fully on your target devices. Understanding the performance characteristics of these models on your devices' hardware can be critical to delivering low latency experiences within your products and applications. This section provides information to help determine if your device is suitable to run embedded content safety for text analysis or image analysis.
 
 ### Factors that affect performance
-Device specifications – The specifications of your device play a key role in whether embedded content safety models can run without performance issues. CPU clock speed, architecture (for example, x64, ARM processor, etcetera), and memory can all affect model inference speed.
 
-CPU/GPU load – In most cases, your device is running other applications in parallel to the application where embedded content safety models are integrated. The amount of CPU/GPU load your device experiences when idle and at peak can also affect performance.
+**Device specifications** – The specifications of your device play a key role in whether embedded content safety models can run without performance issues. CPU clock speed, architecture (for example, x64, ARM processor, etcetera), and memory can all affect model inference speed.
+
+**CPU/GPU load** – In most cases, your device is running other applications in parallel to the application where embedded content safety models are integrated. The amount of CPU/GPU load your device experiences when idle and at peak can also affect performance.
 
 For example, if the device is under moderate to high CPU load from all other applications running on the device, it's possible to encounter performance issues for running embedded content safety in addition to the other applications, even with a powerful processor.
 
-Memory load – An embedded content safety text analysis process consumes about 900 MB of memory at runtime. If your device has less memory available for the embedded content safety process to use, frequent fallbacks to virtual memory and paging can introduce more latencies. This can affect both the real-time factor and user-perceived latency.
+**Memory load** – An embedded content safety text analysis process consumes about 900 MB of memory at runtime. If your device has less memory available for the embedded content safety process to use, frequent fallbacks to virtual memory and paging can introduce more latencies. This can affect both the real-time factor and user-perceived latency.
 
-### SDK parameters that can impact performance
+### SDK parameters that can affect performance
 
 The following SDK parameters can impact the inference time of the embedded content safety model.
 
 - `gpuEnabled` Set as **true** to enable GPU; otherwise CPU is used. Generally, inference time is shorter on GPU.
 - `numThreads` This parameter only works for CPU. It defines the number of threads to be used in a multi-threaded environment. We support a maximum of four threads.
 
-See next section for performance benchmark data on popular PC CPUs and GPUs.
-
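To make the effect of these two parameters concrete, here's a small illustrative C++ sketch. The `AnalysisOptions` struct and `effectiveThreads` function are hypothetical stand-ins, not the real limited-access SDK types; only the parameter names and the four-thread CPU limit come from the documentation above.

```cpp
#include <algorithm>

// Hypothetical stand-in for the SDK's analysis options; the real
// limited-access C++ SDK types may be named and shaped differently.
struct AnalysisOptions {
    bool gpuEnabled = false; // true: run inference on GPU; false: use CPU
    int numThreads = 1;      // CPU inference only; capped at 4 threads
};

// Resolve the thread count CPU inference would actually use:
// numThreads is ignored when the GPU is enabled, and is otherwise
// clamped to the documented maximum of four threads.
int effectiveThreads(const AnalysisOptions& opts) {
    if (opts.gpuEnabled) {
        return 1; // numThreads only applies to CPU inference
    }
    return std::clamp(opts.numThreads, 1, 4);
}
```

For example, requesting eight threads for CPU inference would be clamped to four, while the same request with `gpuEnabled` set to **true** leaves threading to the GPU path.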

 ### Performance benchmark data on popular CPUs and GPUs
 
-As stated above, there are multiple factors that impact the performance of embedded content safety model. We highly suggest you test it on your device and tweak the parameters to fit for your application's requirement.
+As stated above, there are multiple factors that impact the performance of an embedded content safety model. We highly recommend you test it on your device and tweak the parameters to fit your application's requirements.
 
 We also conduct performance benchmark tests on various popular PC CPUs and GPUs. Keep in mind that even with the same CPU, performance can vary depending on the CPU and memory load. The benchmark data provided should serve as a reference when considering if the embedded content safety can operate on your device. For optimal results, we advise testing on your intended device and in your specific application scenario.
 
 
-The [sample code](https://github.com/Azure/azure-ai-content-safety-sdk) includes code snippet to monitor performance metrics like memory, inference time.
+The [sample code](#code-samples) includes code snippets to monitor performance metrics like memory and inference time.
 
 #### [Text analysis performance](#tab/text)
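As a sketch of what such a latency measurement can look like, the snippet below times a single call with `std::chrono`. The `analyzeText` function here is a placeholder standing in for a real SDK inference call, not the actual API:

```cpp
#include <chrono>
#include <string>
#include <thread>

// Placeholder for an SDK inference call; the real embedded content
// safety API ships with the limited-access C++ SDK and looks different.
void analyzeText(const std::string& text) {
    (void)text;
    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // simulate work
}

// Measure the wall-clock latency of one inference call, in milliseconds.
long long measureInferenceMs(const std::string& text) {
    auto start = std::chrono::steady_clock::now();
    analyzeText(text);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}
```

Using `steady_clock` rather than `system_clock` avoids skew if the wall clock is adjusted mid-measurement; process memory usage would be read through an OS-specific API such as `GetProcessMemoryInfo` on Windows.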

@@ -166,6 +163,6 @@ The [sample code](https://github.com/Azure/azure-ai-content-safety-sdk) includes
 
 ---
 
-## Related Content
+## Related content
 
 - [Limited access to Content Safety](../limited-access.md)

articles/ai-services/content-safety/how-to/improve-performance.md

Lines changed: 4 additions & 4 deletions
@@ -6,16 +6,16 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: how-to
-ms.date: 09/18/2024
+ms.date: 03/26/2025
 ms.author: pafarley
 #customer intent: As a user, I want to improve the performance of Azure AI Content Safety so that I can ensure accurate content moderation.
 ---
 
 # Mitigate false results in Azure AI Content Safety
 
-This guide provides a step-by-step process for handling false positives and false negatives from Azure AI Content Safety models.
+This guide shows you how to handle false positives and false negatives from Azure AI Content Safety models.
 
-False positives are when the system incorrectly flags non-harmful content as harmful; false negatives are when harmful content is not flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.
+False positives occur when the system incorrectly flags non-harmful content as harmful; false negatives occur when harmful content is not flagged as harmful. Address these instances to ensure the integrity and reliability of your content moderation process, including responsible generative AI deployment.
 
 ## Prerequisites
 
@@ -24,7 +24,7 @@ False positives are when the system incorrectly flags non-harmful content as har
 
 ## Review and verification
 
-Conduct an initial assessment to confirm that the flagged content is really a false positive or false negative. This can involve:
+Conduct an initial assessment to confirm that you really have a false positive or false negative. This can involve:
 - Checking the context of the flagged content.
 - Comparing the flagged content against the content safety risk categories and severity definitions:
 - If you're using content safety in Azure OpenAI, see the [Azure OpenAI content filtering doc](/azure/ai-services/openai/concepts/content-filter).

articles/ai-services/content-safety/includes/code-indexer.md

Lines changed: 1 addition & 1 deletion
@@ -11,4 +11,4 @@ ms.author: pafarley
 
 
 > [!CAUTION]
-> The content safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.
+> The Content Safety service's code scanner/indexer is only current through April 6, 2023. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.

articles/ai-services/content-safety/index.yml

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ metadata:
 ms.topic: landing-page # Required
 author: PatrickFarley #Required; your GitHub user alias, with correct capitalization.
 ms.author: pafarley #Required; microsoft alias of author; optional team alias.
-ms.date: 09/24/2024 #Required; mm/dd/yyyy format.
+ms.date: 03/26/2025 #Required; mm/dd/yyyy format.
 # linkListType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | video | whats-new
 
 

articles/ai-services/content-safety/quickstart-multimodal.md

Lines changed: 5 additions & 4 deletions
@@ -6,7 +6,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: quickstart
-ms.date: 09/19/2024
+ms.date: 03/26/2025
 ms.author: pafarley
 # zone_pivot_groups: programming-languages-content-safety
 ---
@@ -15,7 +15,7 @@ ms.author: pafarley
 
 The Multimodal API analyzes materials containing both image content and text content to help make applications and services safer from harmful user-generated or AI-generated content. Analyzing an image and its associated text content together can preserve context and provide a more comprehensive understanding of the content.
 
-For more information on the way content is filtered, see the [Harm categories concept page](./concepts/harm-categories.md#multimodal-image-with-text-content). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
+For more information on how content is filtered, see the [Harm categories concept page](./concepts/harm-categories.md#multimodal-image-with-text-content). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview.
 
 > [!IMPORTANT]
 > This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability).
@@ -45,7 +45,7 @@ You can input your image by one of two methods: **local filestream** or **blob s
 - **Local filestream** (recommended): Encode your image to base64. You can use a website like [codebeautify](https://codebeautify.org/image-to-base64-converter) to do the encoding. Then save the encoded string to a temporary location.
 - **Blob storage URL**: Upload your image to an Azure Blob Storage account. Follow the [blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) to learn how to do this. Then open Azure Storage Explorer and get the URL to your image. Save it to a temporary location.
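The base64-encoding step can also be done locally instead of through a website. Below is a minimal, self-contained C++ encoder for illustration (read the image file into a byte buffer first); standard tools such as `base64` on Linux or `certutil -encode` on Windows work as well.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Minimal RFC 4648 base64 encoder, for illustration only.
std::string base64_encode(const std::vector<uint8_t>& data) {
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    // Encode each full 3-byte group as four 6-bit characters.
    while (i + 2 < data.size()) {
        uint32_t n = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += table[n & 63];
        i += 3;
    }
    // Pad the final 1- or 2-byte remainder with '=' characters.
    size_t rem = data.size() - i;
    if (rem == 1) {
        uint32_t n = data[i] << 16;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += "==";
    } else if (rem == 2) {
        uint32_t n = (data[i] << 16) | (data[i + 1] << 8);
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += '=';
    }
    return out;
}
```

For instance, encoding the bytes of `"hi"` yields `"aGk="`, matching the output of the standard `base64` utility.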

-### Analyze image with text
+### Analyze content
 
 Paste the command below into a text editor, and make the following changes.
 
@@ -98,7 +98,8 @@ The parameters in the request body are defined in this table:
 Open a command prompt window and run the cURL command.
 
 
-### Output
+### Interpret the API response
+
 
 You should see the image and text moderation results displayed as JSON data in the console. For example:

articles/ai-services/content-safety/quickstart-protected-material-code.md

Lines changed: 2 additions & 2 deletions
@@ -7,13 +7,13 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
 ms.topic: quickstart
-ms.date: 09/22/2024
+ms.date: 03/26/2025
 ms.author: pafarley
 ---
 
 # Quickstart: Protected material detection for code (preview)
 
-The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows code generation models to be used confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
+The Protected Material for Code feature provides a comprehensive solution for identifying AI outputs that match code from existing GitHub repositories. This feature allows you to use code generation models confidently, in a way that enhances transparency to end users and promotes compliance with organizational policies.
 
 [!INCLUDE [content-safety-code-indexer](./includes/code-indexer.md)]
 
0 commit comments
