Commit 20e0898

Merge pull request #268784 from mrbullwinkle/mrb_03_12_2024_release-dalle

[Release Branch] [Azure OpenAI] DALL-E GA

2 parents cbfbaf2 + 2540aa4, commit 20e0898

File tree: 12 files changed, +71 -57 lines

articles/ai-services/openai/concepts/content-filter.md

Lines changed: 16 additions & 17 deletions
@@ -1,7 +1,7 @@
 ---
 title: Azure OpenAI Service content filtering
 titleSuffix: Azure OpenAI
-description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services
+description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services.
 author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
@@ -20,22 +20,22 @@ Azure OpenAI Service includes a content filtering system that works alongside co
 
 The content filtering models for the hate, sexual, violence, and self-harm categories have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
 
-In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
 
 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
 
 ## Content filtering categories
 
 The content filtering system integrated in the Azure OpenAI Service contains:
 * Neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but isn't subject to filtering and isn't configurable.
-* Additional optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or match to known text or source code. The use of these models is optional, but use of protected material code model may be required for Customer Copyright Commitment coverage.
+* Other optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or match to known text or source code. The use of these models is optional, but use of protected material code model may be required for Customer Copyright Commitment coverage.
 
 ## Harm categories
 
 |Category|Description|
 |--------|-----------|
-| Hate and fairness |Hate and fairness-related harms refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or Identity groups on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity groups and expression, sexual orientation, religion, immigration status, ability status, personal appearance and body size. </br></br> Fairness is concerned with ensuring that AI systems treat all groups of people equitably without contributing to existing societal inequities. Similar to hate speech, fairness-related harms hinge upon disparate treatment of Identity groups.   |
-| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography and abuse.   |
+| Hate and fairness |Hate and fairness-related harms refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or Identity groups on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity groups and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. </br></br> Fairness is concerned with ensuring that AI systems treat all groups of people equitably without contributing to existing societal inequities. Similar to hate speech, fairness-related harms hinge upon disparate treatment of Identity groups.   |
+| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography, and abuse.   |
 | Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns and related entities, such as manufactures, associations, legislation, etc.   |
 | Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one’s body or kill oneself.|
 | Jailbreak risk | Jailbreak attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. Such attacks can vary from intricate role play to subtle subversion of the safety objective. |
@@ -55,7 +55,7 @@ The default content filtering configuration is set to filter at the medium sever
 
 | Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
 |-------------------|--------------------------|------------------------------|--------------|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
 | Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
 | High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
 | No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
@@ -68,7 +68,7 @@ Content filtering configurations are created within a Resource in Azure AI Studi
 
 ## Scenario details
 
-When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
+When the content filtering system detects harmful content, you receive either an error on the API call if the prompt was deemed inappropriate, or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
 
 - Prompts that are classified at a filtered category and severity level will return an HTTP 400 error.
 - Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to content_filter. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` will be updated.
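The behavior this hunk documents (HTTP 400 on a filtered prompt, `finish_reason` of `content_filter` on a filtered completion) is something client code has to branch on. A minimal sketch of that decision, run against a hand-built response payload rather than a live call (the payload shape follows the annotated examples later in this article; the labels are invented):

```python
def classify_result(response: dict) -> str:
    """Label a non-streaming completions response based on content filtering."""
    choice = response["choices"][0]
    if choice.get("finish_reason") == "content_filter":
        # Usually no content is returned; in rare longer responses a partial
        # result can be present, with finish_reason updated to content_filter.
        return "filtered_partial" if choice.get("text") else "filtered_empty"
    return "ok"

# A fully filtered completion with no returned content:
print(classify_result({"choices": [{"text": "", "finish_reason": "content_filter"}]}))
# → filtered_empty
# A normal completion:
print(classify_result({"choices": [{"text": "Hello!", "finish_reason": "stop"}]}))
# → ok
```

A prompt rejected outright surfaces as an HTTP 400 error from the API call itself, so it never reaches this helper; wrap the request in your SDK's error handling for that case.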
@@ -290,10 +290,11 @@ The table below outlines the various ways content filtering can appear:
 }
 ```
 
-## Annotations (preview)
+## Annotations
 
-### Main content filters
-When annotations are enabled as shown in the code snippet below, the following information is returned via the API for the main categories (hate and fairness, sexual, violence, and self-harm):
+### Content filters
+
+When annotations are enabled as shown in the code snippet below, the following information is returned via the API for the categories hate and fairness, sexual, violence, and self-harm:
 - content filtering category (hate, sexual, violence, self_harm)
 - the severity level (safe, low, medium or high) within each content category
 - filtering status (true or false).
@@ -302,7 +303,7 @@ When annotations are enabled as shown in the code snippet below, the following i
 
 Optional models can be enabled in annotate (returns information when content was flagged, but not filtered) or filter mode (returns information when content was flagged and filtered).
 
-When annotations are enabled as shown in the code snippet below, the following information is returned by the API for optional models jailbreak risk, protected material text and protected material code:
+When annotations are enabled as shown in the code snippet below, the following information is returned by the API for optional models: jailbreak risk, protected material text and protected material code:
 - category (jailbreak, protected_material_text, protected_material_code),
 - detected (true or false),
 - filtered (true or false).
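The three optional-model fields listed in this hunk (category, detected, filtered) can be read straight out of a `content_filter_results` object. A small sketch against a hand-built payload (the field names follow this article's description; the sample values are invented):

```python
def flagged_optional_models(content_filter_results: dict) -> list:
    """Return the optional-model categories whose 'detected' flag is true."""
    optional = ("jailbreak", "protected_material_text", "protected_material_code")
    return [name for name in optional
            if content_filter_results.get(name, {}).get("detected")]

sample = {
    "jailbreak": {"detected": True, "filtered": True},
    "protected_material_text": {"detected": False, "filtered": False},
    "hate": {"filtered": False, "severity": "safe"},
}
print(flagged_optional_models(sample))  # → ['jailbreak']
```

Because the optional models can run in annotate-only mode, checking `detected` rather than `filtered` catches content that was flagged but still returned.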
@@ -313,7 +314,7 @@ For the protected material code model, the following additional information is r
 
 When displaying code in your application, we strongly recommend that the application also displays the example citation from the annotations. Compliance with the cited license may also be required for Customer Copyright Commitment coverage.
 
-Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
+Annotations are currently available in the GA API version `2024-02-01` and in all preview versions starting from `2023-06-01-preview` for Completions and Chat Completions (GPT models). The following code snippet shows how to use annotations:
 
 # [OpenAI Python 0.28.1](#tab/python)
 
@@ -324,7 +325,7 @@ import os
 import openai
 openai.api_type = "azure"
 openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
+openai.api_version = "2023-06-01-preview" # API version required to use Annotations
 openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
 
 response = openai.Completion.create(
@@ -344,7 +345,6 @@ print(response)
   "choices": [
     {
       "content_filter_results": {
-        "custom_blocklists": [],
         "hate": {
          "filtered": false,
          "severity": "safe"
@@ -389,7 +389,6 @@ print(response)
   "prompt_filter_results": [
     {
       "content_filter_results": {
-        "custom_blocklists": [],
        "hate": {
          "filtered": false,
          "severity": "safe"
@@ -435,7 +434,7 @@ import os
 import openai
 openai.api_type = "azure"
 openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
+openai.api_version = "2023-06-01-preview" # API version required to use Annotations
 openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
 
 try:
@@ -589,7 +588,7 @@ violence : @{filtered=False; severity=safe}
 
 ---
 
-For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions please follow [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.
+For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions please follow [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using any preview API version starting from `2023-06-01-preview`, as well as the GA API version `2024-02-01`.
 
 ### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API

articles/ai-services/openai/concepts/models.md

Lines changed: 6 additions & 6 deletions
@@ -21,7 +21,7 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi
 | [GPT-4](#gpt-4-and-gpt-4-turbo-preview) | A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
 | [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand and generate natural language and code. |
 | [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. |
-| [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. |
+| [DALL-E](#dall-e-models) | A series of models that can generate original images from natural language. |
 | [Whisper](#whisper-models-preview) (Preview) | A series of models in preview that can transcribe and translate speech to text. |
 | [Text to speech](#text-to-speech-models-preview) (Preview) | A series of models in preview that can synthesize text to speech. |
 
@@ -67,9 +67,9 @@ The third generation embeddings models support reducing the size of the embeddin
 
 OpenAI's MTEB benchmark testing found that even when the third generation model's dimensions are reduced to less than `text-embeddings-ada-002` 1,536 dimensions performance remains slightly better.
 
-## DALL-E (Preview)
+## DALL-E
 
-The DALL-E models, currently in preview, generate images from text prompts that the user provides.
+The DALL-E models generate images from text prompts that the user provides. DALL-E 3 is generally available for use with the REST APIs. DALL-E 2 and DALL-E 3 with client SDKs are in preview.
 
 ## Whisper (Preview)
 
@@ -200,12 +200,12 @@ The following Embeddings models are available with [Azure Government](/azure/azu
 |--|--|
 |`text-embedding-ada-002` (version 2) |US Gov Virginia<br>US Gov Arizona |
 
-### DALL-E models (Preview)
+### DALL-E models
 
 | Model ID | Feature Availability | Max Request (characters) |
 | --- | --- | :---: |
-| dalle2 | East US | 1,000 |
-| dall-e-3 | Sweden Central | 4,000 |
+| dalle2 (preview) | East US | 1,000 |
+| dall-e-3 | East US, Australia East, Sweden Central | 4,000 |
 
 ### Fine-tuning models

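Since this commit moves DALL-E 3 to GA for the REST APIs with the `2024-02-01` API version, a request against that version looks roughly as follows. This sketch only assembles the URL and body rather than sending anything; the endpoint and deployment name are placeholders, and the route shape is my reading of the Azure OpenAI images/generations REST API, so verify it against the REST API reference:

```python
import json

def build_image_request(endpoint: str, deployment: str, prompt: str):
    """Assemble the URL and JSON body for a DALL-E 3 image generation call."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/images/generations?api-version=2024-02-01")
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"})
    return url, body

# Placeholder resource endpoint and deployment name:
url, body = build_image_request(
    "https://example-resource.openai.azure.com", "dalle3", "a watercolor lighthouse")
print(url)
print(body)
```

The actual POST would carry the body with an `api-key` header; the response returns image URLs rather than image bytes.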
articles/ai-services/openai/dall-e-quickstart.md

Lines changed: 12 additions & 8 deletions
@@ -24,6 +24,18 @@ zone_pivot_groups: openai-quickstart-dall-e
 
 ::: zone-end
 
+::: zone pivot="rest-api"
+
+[!INCLUDE [REST API quickstart](includes/dall-e-rest.md)]
+
+::: zone-end
+
+::: zone pivot="programming-language-python"
+
+[!INCLUDE [Python SDK quickstart](includes/dall-e-python.md)]
+
+::: zone-end
+
 ::: zone pivot="programming-language-csharp"
 
 [!INCLUDE [C# SDK quickstart](includes/dall-e-dotnet.md)]
@@ -48,20 +60,12 @@ zone_pivot_groups: openai-quickstart-dall-e
 
 ::: zone-end
 
-::: zone pivot="programming-language-python"
-
-[!INCLUDE [Python SDK quickstart](includes/dall-e-python.md)]
 
-::: zone-end
 
 ::: zone pivot="programming-language-powershell"
 
 [!INCLUDE [PowerShell quickstart](includes/dall-e-powershell.md)]
 
 ::: zone-end
 
-::: zone pivot="rest-api"
 
-[!INCLUDE [REST API quickstart](includes/dall-e-rest.md)]
-
-::: zone-end

articles/ai-services/openai/includes/dall-e-python.md

Lines changed: 5 additions & 5 deletions
@@ -22,10 +22,10 @@ Use this guide to get started generating images with the Azure OpenAI SDK for Py
 - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
 - Access granted to DALL-E in the desired Azure subscription.
 - <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>.
-- An Azure OpenAI resource created in the `SwedenCentral` region.
+- An Azure OpenAI resource created in the `EastUS`, `AustraliaEast`, or `SwedenCentral` region.
 - Then, you need to deploy a `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
 
-#### [DALL-E 2](#tab/dalle2)
+#### [DALL-E 2 (preview)](#tab/dalle2)
 
 - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
 - Access granted to DALL-E in the desired Azure subscription.
@@ -72,7 +72,7 @@ Install the OpenAI Python SDK by using the following command:
 pip install openai
 ```
 
-#### [DALL-E 2](#tab/dalle2)
+#### [DALL-E 2 (preview)](#tab/dalle2)
 
 > [!IMPORTANT]
 > The latest release of the [OpenAI Python library](https://pypi.org/project/openai/) does not currently support DALL-E 2 when used with Azure OpenAI. To access DALL-E 2 with Azure OpenAI use version `0.28.1`. Or, follow the [migration guide](/azure/ai-services/openai/how-to/migration?tabs=python%2Cdalle-fix) to use DALL-E 2 with OpenAI 1.x.
@@ -105,7 +105,7 @@ from PIL import Image
 import json
 
 client = AzureOpenAI(
-    api_version="2023-12-01-preview",
+    api_version="2024-02-01",
     api_key=os.environ["AZURE_OPENAI_API_KEY"],
     azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT']
 )
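After the client construction updated in this hunk, the quickstart's generation call returns a response whose first image location sits under `data[0].url`. A minimal parsing sketch over a hand-built payload (the sample values are invented; the field names follow the images/generations response shape):

```python
def first_image_url(response: dict) -> str:
    """Pull the first generated image URL out of an images/generations response."""
    return response["data"][0]["url"]

sample = {
    "created": 1700000000,
    "data": [{"url": "https://example.com/generated/lighthouse.png"}],
}
print(first_image_url(sample))  # → https://example.com/generated/lighthouse.png
```

The returned URL is time-limited, so the quickstart downloads the image promptly rather than storing the link.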
@@ -143,7 +143,7 @@ image.show()
 1. Change the value of `prompt` to your preferred text.
 1. Change the value of `model` to the name of your deployed DALL-E 3 model.
 
-#### [DALL-E 2](#tab/dalle2)
+#### [DALL-E 2 (preview)](#tab/dalle2)
 
 ```python
 import openai
