Commit abf22bb

Merge pull request #264888 from PatrickFarley/content-safety-updates
Content safety updates
2 parents: d89924c + 2502709

11 files changed: +942 −199 lines

articles/ai-services/content-safety/how-to/use-blocklist.md

Lines changed: 832 additions & 129 deletions
Large diffs are not rendered by default.

articles/ai-services/content-safety/includes/quickstarts/csharp-quickstart-image.md

Lines changed: 13 additions & 9 deletions
````diff
@@ -11,6 +11,8 @@ ms.date: 07/04/2023
 ms.author: pafarley
 ---
 
+[Reference documentation](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/contentsafety/Azure.AI.ContentSafety) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0)
+
 ## Prerequisites
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -29,7 +31,7 @@ Open Visual Studio, and under **Get started** select **Create a new project**. S
 
 ### Install the client SDK
 
-Once you've created a new project, install the client SDK by right-clicking on the project solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens select **Browse**, check **Include prerelease**, and search for `Azure.AI.ContentSafety`. Select **Install**.
+Once you've created a new project, install the client SDK by right-clicking on the project solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens select **Browse** and search for `Azure.AI.ContentSafety`. Select **Install**.
 
 #### [CLI](#tab/cli)
 
@@ -60,7 +62,7 @@ Build succeeded.
 Within the application directory, install the Computer Vision client SDK for .NET with the following command:
 
 ```dotnet
-dotnet add package Azure.AI.ContentSafety --prerelease
+dotnet add package Azure.AI.ContentSafety
 ```
 
 ---
@@ -69,7 +71,7 @@ dotnet add package Azure.AI.ContentSafety --prerelease
 
 ## Analyze image content
 
-From the project directory, open the *Program.cs* file that was created previously. Paste in the following code:
+From the project directory, open the *Program.cs* file that was created previously. Paste in the following code.
 
 ```csharp
 using System;
@@ -89,8 +91,8 @@ namespace Azure.AI.ContentSafety.Dotnet.Sample
 
 // Example: analyze image
 
-string imagePath = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "Samples", "sample_data", "image.jpg");
-ImageData image = new ImageData() { Content = BinaryData.FromBytes(File.ReadAllBytes(imagePath)) };
+string imagePath = @"sample_data\image.png";
+ContentSafetyImageData image = new ContentSafetyImageData(BinaryData.FromBytes(File.ReadAllBytes(imagePath)));
 
 var request = new AnalyzeImageOptions(image);
 
@@ -105,10 +107,10 @@ namespace Azure.AI.ContentSafety.Dotnet.Sample
 throw;
 }
 
-Console.WriteLine("Hate severity: {0}", response.Value.HateResult?.Severity ?? 0);
-Console.WriteLine("SelfHarm severity: {0}", response.Value.SelfHarmResult?.Severity ?? 0);
-Console.WriteLine("Sexual severity: {0}", response.Value.SexualResult?.Severity ?? 0);
-Console.WriteLine("Violence severity: {0}", response.Value.ViolenceResult?.Severity ?? 0);
+Console.WriteLine("Hate severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == ImageCategory.Hate)?.Severity ?? 0);
+Console.WriteLine("SelfHarm severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == ImageCategory.SelfHarm)?.Severity ?? 0);
+Console.WriteLine("Sexual severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == ImageCategory.Sexual)?.Severity ?? 0);
+Console.WriteLine("Violence severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == ImageCategory.Violence)?.Severity ?? 0);
 }
 static void Main()
 {
@@ -118,6 +120,8 @@ namespace Azure.AI.ContentSafety.Dotnet.Sample
 }
 ```
 
+Create a _sample_data_ folder in your project directory, and add an _image.png_ file into it.
+
 #### [Visual Studio IDE](#tab/visual-studio)
 
 Build and run the application by selecting **Start Debugging** from the **Debug** menu at the top of the IDE window (or press **F5**).
````
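The substantive change in both C# quickstarts is the move from per-category properties (`HateResult`, `SelfHarmResult`, and so on) to a flat `CategoriesAnalysis` list queried with `FirstOrDefault(...)?.Severity ?? 0`. That lookup-with-default pattern can be sketched in Python; the `CategoryAnalysis` stand-in below is hypothetical, not the SDK's response type.

```python
# Sketch of the updated lookup pattern: find a category's entry in a flat
# categories-analysis list and fall back to 0 when that category is absent.
# CategoryAnalysis is a hypothetical stand-in, not the real SDK type.
from dataclasses import dataclass

@dataclass
class CategoryAnalysis:
    category: str
    severity: int

def severity_of(categories_analysis, category):
    """Mirror C#'s FirstOrDefault(...)?.Severity ?? 0."""
    match = next((a for a in categories_analysis if a.category == category), None)
    return match.severity if match else 0

analysis = [CategoryAnalysis("Hate", 2), CategoryAnalysis("Violence", 4)]
print("Hate severity:", severity_of(analysis, "Hate"))
print("Sexual severity:", severity_of(analysis, "Sexual"))  # absent category
```

Returning 0 for an absent category mirrors the C# null-coalescing fallback, so callers don't have to special-case categories the service omits from the response.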

articles/ai-services/content-safety/includes/quickstarts/csharp-quickstart-text.md

Lines changed: 14 additions & 6 deletions
````diff
@@ -11,6 +11,8 @@ ms.date: 07/04/2023
 ms.author: pafarley
 ---
 
+[Reference documentation](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/contentsafety/Azure.AI.ContentSafety) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0)
+
 ## Prerequisites
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -28,7 +30,7 @@ Open Visual Studio, and under **Get started** select **Create a new project**. S
 
 ### Install the client SDK
 
-Once you've created a new project, install the client SDK by right-clicking on the project solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens select **Browse**, check **Include prerelease**, and search for `Azure.AI.ContentSafety`. Select **Install**.
+Once you've created a new project, install the client SDK by right-clicking on the project solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens select **Browse**, and search for `Azure.AI.ContentSafety`. Select **Install**.
 
 #### [CLI](#tab/cli)
 
@@ -59,7 +61,7 @@ Build succeeded.
 Within the application directory, install the Computer Vision client SDK for .NET with the following command:
 
 ```dotnet
-dotnet add package Azure.AI.ContentSafety --prerelease
+dotnet add package Azure.AI.ContentSafety
 ```
 
 ---
@@ -101,10 +103,16 @@ namespace Azure.AI.ContentSafety.Dotnet.Sample
 throw;
 }
 
-Console.WriteLine("Hate severity: {0}", response.Value.HateResult?.Severity ?? 0);
-Console.WriteLine("SelfHarm severity: {0}", response.Value.SelfHarmResult?.Severity ?? 0);
-Console.WriteLine("Sexual severity: {0}", response.Value.SexualResult?.Severity ?? 0);
-Console.WriteLine("Violence severity: {0}", response.Value.ViolenceResult?.Severity ?? 0);
+Console.WriteLine("\nAnalyze text succeeded:");
+Console.WriteLine("Hate severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Hate)?.Severity ?? 0);
+Console.WriteLine("SelfHarm severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.SelfHarm)?.Severity ?? 0);
+Console.WriteLine("Sexual severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Sexual)?.Severity ?? 0);
+Console.WriteLine("Violence severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Violence)?.Severity ?? 0);
+
+}
+static void Main()
+{
+AnalyzeText();
 }
 }
 }
````

articles/ai-services/content-safety/includes/quickstarts/java-quickstart-image.md

Lines changed: 14 additions & 12 deletions
````diff
@@ -11,7 +11,7 @@ ms.date: 10/10/2023
 ms.author: pafarley
 ---
 
-[Reference documentation](/java/api/overview/azure/ai-contentsafety-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src) | [Artifact (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) | [Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src/samples/java/com/azure/ai/contentsafety)
+[Reference documentation](/java/api/overview/azure/ai-contentsafety-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src) | [Artifact (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/java/1.0.0)
 
 
 ## Prerequisites
@@ -68,7 +68,7 @@ repositories {
 mavenCentral()
 }
 dependencies {
-implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0-beta.1")
+implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0")
 }
 ```
 
@@ -79,17 +79,21 @@ dependencies {
 Open *ContentSafetyQuickstart.java* in your preferred editor or IDE and paste in the following code. Replace the `source` variable with the path to your sample image.
 
 ```java
+import com.azure.ai.contentsafety.ContentSafetyClient;
+import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
 import com.azure.ai.contentsafety.models.AnalyzeImageOptions;
 import com.azure.ai.contentsafety.models.AnalyzeImageResult;
-import com.azure.ai.contentsafety.models.ImageData;
-import com.azure.ai.contentsafety.*;
+import com.azure.ai.contentsafety.models.ContentSafetyImageData;
+import com.azure.ai.contentsafety.models.ImageCategoriesAnalysis;
 import com.azure.core.credential.KeyCredential;
+import com.azure.core.util.BinaryData;
 import com.azure.core.util.Configuration;
 
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 
+
 public class ContentSafetyQuickstart {
 public static void main(String[] args) throws IOException {
 
@@ -101,20 +105,18 @@ public class ContentSafetyQuickstart {
 .credential(new KeyCredential(key))
 .endpoint(endpoint).buildClient();
 
-ImageData image = new ImageData();
+ContentSafetyImageData image = new ContentSafetyImageData();
 String cwd = System.getProperty("user.dir");
+String source = "/src/samples/resources/image.png";
 
-// replace with your own sample image file path
-String source = "/src/resources/image.jpg";
-image.setContent(Files.readAllBytes(Paths.get(cwd, source)));
+image.setContent(BinaryData.fromBytes(Files.readAllBytes(Paths.get(cwd, source))));
 
 AnalyzeImageResult response =
 contentSafetyClient.analyzeImage(new AnalyzeImageOptions(image));
 
-System.out.println("Hate severity: " + response.getHateResult().getSeverity());
-System.out.println("SelfHarm severity: " + response.getSelfHarmResult().getSeverity());
-System.out.println("Sexual severity: " + response.getSexualResult().getSeverity());
-System.out.println("Violence severity: " + response.getViolenceResult().getSeverity());
+for (ImageCategoriesAnalysis result : response.getCategoriesAnalysis()) {
+System.out.println(result.getCategory() + " severity: " + result.getSeverity());
+}
 }
 }
 ```
````

articles/ai-services/content-safety/includes/quickstarts/java-quickstart-text.md

Lines changed: 9 additions & 7 deletions
````diff
@@ -11,7 +11,7 @@ ms.date: 10/10/2023
 ms.author: pafarley
 ---
 
-[Reference documentation](/java/api/overview/azure/ai-contentsafety-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src) |[Artifact (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) | [Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src/samples/java/com/azure/ai/contentsafety)
+[Reference documentation](/java/api/overview/azure/ai-contentsafety-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/contentsafety/azure-ai-contentsafety/src) | [Artifact (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/java/1.0.0)
 
 
 ## Prerequisites
@@ -67,7 +67,7 @@ repositories {
 mavenCentral()
 }
 dependencies {
-implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0-beta.1")
+implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0")
 }
 ```
 
@@ -84,12 +84,15 @@ Open *ContentSafetyQuickstart.java* in your preferred editor or IDE and paste in
 > The default maximum length for text submissions is **10K** characters.
 
 ```Java
+import com.azure.ai.contentsafety.ContentSafetyClient;
+import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
 import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
 import com.azure.ai.contentsafety.models.AnalyzeTextResult;
-import com.azure.ai.contentsafety.*;
+import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
 import com.azure.core.credential.KeyCredential;
 import com.azure.core.util.Configuration;
 
+
 public class ContentSafetyQuickstart {
 public static void main(String[] args) {
 
@@ -103,10 +106,9 @@ public class ContentSafetyQuickstart {
 
 AnalyzeTextResult response = contentSafetyClient.analyzeText(new AnalyzeTextOptions("<your text sample>"));
 
-System.out.println("Hate severity: " + response.getHateResult().getSeverity());
-System.out.println("SelfHarm severity: " + response.getSelfHarmResult().getSeverity());
-System.out.println("Sexual severity: " + response.getSexualResult().getSeverity());
-System.out.println("Violence severity: " + response.getViolenceResult().getSeverity());
+for (TextCategoriesAnalysis result : response.getCategoriesAnalysis()) {
+System.out.println(result.getCategory() + " severity: " + result.getSeverity());
+}
 }
 }
 ```
````
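Rather than naming each category, the updated Java samples (like the JavaScript ones) iterate over whatever the response's categories-analysis list contains. A minimal Python sketch of that iteration, with plain tuples standing in for the SDK's analysis objects:

```python
# Loop over a categories-analysis list and print one line per category,
# whatever the service returned. Plain tuples stand in for SDK types.
categories_analysis = [
    ("Hate", 0),
    ("SelfHarm", 0),
    ("Sexual", 2),
    ("Violence", 4),
]

lines = []
for category, severity in categories_analysis:
    lines.append(f"{category} severity: {severity}")

print("\n".join(lines))
```

Looping keeps the sample correct even if the service returns more or fewer categories than the four shown here.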

articles/ai-services/content-safety/includes/quickstarts/javascript-quickstart-image.md

Lines changed: 9 additions & 6 deletions
````diff
@@ -11,7 +11,7 @@ ms.date: 10/10/2023
 ms.author: pafarley
 ---
 
-[Reference documentation](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0-beta.1) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-content-safety) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest/samples) |
+[Reference documentation](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-content-safety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0) |
 
 
 ## Prerequisites
@@ -91,11 +91,14 @@ async function main() {
 if (isUnexpected(result)) {
 throw result;
 }
-
-console.log("Hate severity: ", result.body.hateResult?.severity);
-console.log("SelfHarm severity: ", result.body.selfHarmResult?.severity);
-console.log("Sexual severity: ", result.body.sexualResult?.severity);
-console.log("Violence severity: ", result.body.violenceResult?.severity);
+for (let i = 0; i < result.body.categoriesAnalysis.length; i++) {
+const imageCategoriesAnalysisOutput = result.body.categoriesAnalysis[i];
+console.log(
+imageCategoriesAnalysisOutput.category,
+" severity: ",
+imageCategoriesAnalysisOutput.severity
+);
+}
 }
 
 main().catch((err) => {
````

articles/ai-services/content-safety/includes/quickstarts/javascript-quickstart-text.md

Lines changed: 9 additions & 5 deletions
````diff
@@ -11,7 +11,7 @@ ms.date: 10/10/2023
 ms.author: pafarley
 ---
 
-[Reference documentation](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0-beta.1) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-content-safety) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest/samples) |
+[Reference documentation](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/contentsafety/ai-content-safety-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-content-safety) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0) |
 
 
 ## Prerequisites
@@ -89,10 +89,14 @@ async function main() {
 throw result;
 }
 
-console.log("Hate severity: ", result.body.hateResult?.severity);
-console.log("SelfHarm severity: ", result.body.selfHarmResult?.severity);
-console.log("Sexual severity: ", result.body.sexualResult?.severity);
-console.log("Violence severity: ", result.body.violenceResult?.severity);
+for (let i = 0; i < result.body.categoriesAnalysis.length; i++) {
+const textCategoriesAnalysisOutput = result.body.categoriesAnalysis[i];
+console.log(
+textCategoriesAnalysisOutput.category,
+" severity: ",
+textCategoriesAnalysisOutput.severity
+);
+}
 }
 
 main().catch((err) => {
````

articles/ai-services/content-safety/includes/quickstarts/python-quickstart-image.md

Lines changed: 18 additions & 9 deletions
````diff
@@ -11,6 +11,9 @@ ms.date: 05/03/2023
 ms.author: pafarley
 ---
 
+[Reference documentation](https://pypi.org/project/azure-ai-contentsafety/) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/contentsafety/azure-ai-contentsafety) | [Package (PyPI)](https://pypi.org/project/azure-ai-contentsafety/) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0) |
+
+
 ## Prerequisites
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
@@ -36,9 +39,11 @@ The following section walks through a sample request with the Python SDK.
 
 ```python
 import os
+
 from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData, ImageCategory
 from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
+from azure.core.exceptions import HttpResponseError
 
 def analyze_image():
 endpoint = os.environ.get('CONTENT_SAFETY_ENDPOINT')
@@ -65,15 +70,19 @@ The following section walks through a sample request with the Python SDK.
 print(e)
 raise
 
-if response.hate_result:
-print(f"Hate severity: {response.hate_result.severity}")
-if response.self_harm_result:
-print(f"SelfHarm severity: {response.self_harm_result.severity}")
-if response.sexual_result:
-print(f"Sexual severity: {response.sexual_result.severity}")
-if response.violence_result:
-print(f"Violence severity: {response.violence_result.severity}")
+hate_result = next(item for item in response.categories_analysis if item.category == ImageCategory.HATE)
+self_harm_result = next(item for item in response.categories_analysis if item.category == ImageCategory.SELF_HARM)
+sexual_result = next(item for item in response.categories_analysis if item.category == ImageCategory.SEXUAL)
+violence_result = next(item for item in response.categories_analysis if item.category == ImageCategory.VIOLENCE)
 
+if hate_result:
+print(f"Hate severity: {hate_result.severity}")
+if self_harm_result:
+print(f"SelfHarm severity: {self_harm_result.severity}")
+if sexual_result:
+print(f"Sexual severity: {sexual_result.severity}")
+if violence_result:
+print(f"Violence severity: {violence_result.severity}")
 
 if __name__ == "__main__":
 analyze_image()
````
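One caveat about the new Python sample: a bare `next(generator)` raises `StopIteration` when nothing matches, so the per-category lookups above would fail if the service omitted a category from `categories_analysis`. A defensive variant passes a `None` default; `SimpleNamespace` here is only a stand-in for the SDK's analysis items, not the real response type.

```python
# Defensive version of the quickstart's per-category lookup: pass a
# default to next() so a missing category yields None instead of
# raising StopIteration. SimpleNamespace stands in for the SDK items.
from types import SimpleNamespace

categories_analysis = [
    SimpleNamespace(category="Hate", severity=2),
    SimpleNamespace(category="Violence", severity=0),
]

hate_result = next(
    (item for item in categories_analysis if item.category == "Hate"), None
)
sexual_result = next(
    (item for item in categories_analysis if item.category == "Sexual"), None
)

if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if sexual_result:  # absent here, so nothing is printed
    print(f"Sexual severity: {sexual_result.severity}")
```

With the default in place, the `if result:` guards that follow in the sample actually get a chance to run for missing categories.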