`articles/ai-services/openai/api-version-deprecation.md` (+10 −4)
@@ -5,7 +5,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 03/07/2024
+ms.date: 03/12/2024
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false
@@ -17,7 +17,7 @@ ms.custom:
 This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After April 2, 2024, the latest three preview APIs will remain supported while older APIs will no longer be supported.
 
 > [!NOTE]
-> The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases.
+> The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases. The `2023-10-01-preview` API will also remain supported at this time.
 
 ## Latest preview API release
@@ -28,13 +28,20 @@ This version contains support for all the latest Azure OpenAI features including:
 
 - [Embeddings `encoding_format` and `dimensions` parameters] [**Added in 2024-03-01-preview**] (see the sketch after this list)
 - [Assistants API](./assistants-reference.md). [**Added in 2024-02-15-preview**]
-- [DALL-E 3](./dall-e-quickstart.md). [**Added in 2023-12-01-preview**]
 - [Text to speech](./text-to-speech-quickstart.md). [**Added in 2024-02-15-preview**]
+- [DALL-E 3](./dall-e-quickstart.md). [**Added in 2023-12-01-preview**]
 - [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models. [**Added in 2023-10-01-preview**]
 - [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**]
 - [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**]
 - [Retrieval augmented generation with the on your data feature](./use-your-data-quickstart.md). [**Added in 2023-06-01-preview**]
 
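Since `encoding_format` and `dimensions` are request parameters rather than features you can see in a list, a short sketch may help. This is illustrative only: the endpoint, key, and deployment name are placeholders, and it assumes the `openai` Python library (v1+) pointed at a text-embedding-3 deployment.

```python
# Sketch: shortened embeddings via the dimensions parameter, available
# from api-version 2024-03-01-preview. All credentials are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-03-01-preview",
)

emb = client.embeddings.create(
    model="YOUR-EMBEDDING-DEPLOYMENT",  # e.g., a text-embedding-3-large deployment
    input="The quick brown fox",
    encoding_format="float",  # "float" or "base64"
    dimensions=256,           # request a shorter output vector
)
print(len(emb.data[0].embedding))  # -> 256
```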
+## Latest GA API release
+
+Azure OpenAI API version [2024-02-01](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
+is currently the latest GA API release. This API version is the replacement for the previous `2023-05-15` GA API release.
+
+This version contains support for the latest GA features like Whisper, DALL-E 3, fine-tuning, and on your data. Any preview features that were released after the `2023-12-01-preview` release, like Assistants, text to speech (TTS), and certain on your data data sources, are only supported in the latest preview API releases.
+
 ## Retiring soon
 
 On April 2, 2024, the following API preview releases will be retired and will stop accepting API requests:
@@ -43,7 +50,6 @@ On April 2, 2024, the following API preview releases will be retired and will stop accepting API requests:
 - 2023-07-01-preview
 - 2023-08-01-preview
 - 2023-09-01-preview
-- 2023-10-01-preview
 - 2023-12-01-preview
 
 To avoid service disruptions, you must update to use the latest preview version before the retirement date.
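What updating means in practice: every request carries an `api-version` query parameter, and that value alone selects the release. A minimal sketch with Python's `requests`, where the resource name, deployment name, and key are placeholders:

```python
# Sketch: pin the api-version explicitly; requests that still use a retired
# preview version (for example, 2023-08-01-preview) will be rejected after
# April 2, 2024. Endpoint, deployment, and key are placeholders.
import requests

url = (
    "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
    "YOUR-DEPLOYMENT/chat/completions"
)
resp = requests.post(
    url,
    params={"api-version": "2024-02-01"},  # latest GA release
    headers={"api-key": "YOUR-API-KEY"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(resp.status_code, resp.json())
```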
`articles/ai-services/openai/concepts/content-filter.md` (+17 −18)
@@ -1,7 +1,7 @@
 ---
 title: Azure OpenAI Service content filtering
 titleSuffix: Azure OpenAI
-description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services
+description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services.
 author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
@@ -14,28 +14,28 @@ manager: nitinme
 # Content filtering
 
 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](models.md#whisper-preview).
+> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](models.md#whisper).
 
 Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
 
 The content filtering models for the hate, sexual, violence, and self-harm categories have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
 
-In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
 
 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to consider in application design and implementation.
 
 ## Content filtering categories
 
 The content filtering system integrated in the Azure OpenAI Service contains:
 * Neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but isn't subject to filtering and isn't configurable.
-* Additional optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or match to known text or source code. The use of these models is optional, but use of protected material code model may be required for Customer Copyright Commitment coverage.
+* Other optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or a match to known text or source code. The use of these models is optional, but use of the protected material code model may be required for Customer Copyright Commitment coverage.
 
 ## Harm categories
 
 |Category|Description|
 |--------|-----------|
-| Hate and fairness |Hate and fairness-related harms refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or Identity groups on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity groups and expression, sexual orientation, religion, immigration status, ability status, personal appearance and body size. </br></br> Fairness is concerned with ensuring that AI systems treat all groups of people equitably without contributing to existing societal inequities. Similar to hate speech, fairness-related harms hinge upon disparate treatment of Identity groups. |
-| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography and abuse. |
+| Hate and fairness | Hate and fairness-related harms refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or Identity groups on the basis of certain differentiating attributes of these groups, including but not limited to race, ethnicity, nationality, gender identity groups and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. </br></br> Fairness is concerned with ensuring that AI systems treat all groups of people equitably without contributing to existing societal inequities. Similar to hate speech, fairness-related harms hinge upon disparate treatment of Identity groups. |
+| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography, and abuse. |
 | Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities, such as manufacturers, associations, legislation, and so on. |
 | Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or to kill oneself.|
 | Jailbreak risk | Jailbreak attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. Such attacks can vary from intricate role play to subtle subversion of the safety objective. |
@@ -55,7 +55,7 @@ The default content filtering configuration is set to filter at the medium severity…
 
 | Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
 |-------------------|--------------------------|------------------------------|--------------|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
 | Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
 | High | Yes | Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
 | No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
@@ -68,7 +68,7 @@ Content filtering configurations are created within a Resource in Azure AI Studio…
 
 ## Scenario details
 
-When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
+When the content filtering system detects harmful content, you receive either an error on the API call if the prompt was deemed inappropriate, or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, account for the scenarios where the content returned by the Completions API is filtered, which might result in incomplete content. How you act on this information is application specific. The behavior can be summarized in the following points (a handling sketch follows the list):
 
 - Prompts that are classified at a filtered category and severity level will return an HTTP 400 error.
 - Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to `content_filter`. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` value is updated.
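A hedged sketch of handling both outcomes with the `openai` Python library (v1+); `BadRequestError` is the library's wrapper for HTTP 400 responses, and the endpoint, key, and deployment name are placeholders:

```python
# Sketch: a filtered prompt surfaces as HTTP 400 (BadRequestError); a
# filtered completion comes back with finish_reason == "content_filter".
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

try:
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # placeholder deployment
        messages=[{"role": "user", "content": "An example user prompt"}],
    )
except BadRequestError as err:
    # The input prompt was classified at a filtered category and severity.
    print("Prompt rejected by the content filter:", err)
else:
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The completion was partially or fully withheld by the filter.
        print("Completion filtered; partial content:", choice.message.content)
    else:
        print(choice.message.content)
```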
@@ -290,10 +290,11 @@ The table below outlines the various ways content filtering can appear:
 }
 ```
 
-## Annotations (preview)
+## Annotations
 
-### Main content filters
-When annotations are enabled as shown in the code snippet below, the following information is returned via the API for the main categories (hate and fairness, sexual, violence, and self-harm):
+### Content filters
+
+When annotations are enabled as shown in the code snippet below, the following information is returned via the API for the categories hate and fairness, sexual, violence, and self-harm:
 - the severity level (safe, low, medium, or high) within each content category
 - filtering status (true or false); see the sketch after this list.
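For illustration, a sketch of reading those two fields from a chat completions response. Treat the field names as assumptions to verify against the reference: `content_filter_results` and the category keys follow the response shape the service documents, `response` is a result from the `openai` Python client (v1+), and `model_extra` is the pydantic mechanism that client uses to expose undeclared JSON fields.

```python
# Sketch: per-category severity and filtered status on a response choice.
# Field names are assumptions; verify against the API reference.
choice = response.choices[0]
results = (choice.model_extra or {}).get("content_filter_results", {})

for category in ("hate", "sexual", "violence", "self_harm"):
    info = results.get(category, {})
    print(f"{category}: severity={info.get('severity')}, filtered={info.get('filtered')}")
```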
@@ -302,7 +303,7 @@ When annotations are enabled as shown in the code snippet below, the following information…
 
 Optional models can be enabled in annotate mode (returns information when content was flagged, but not filtered) or filter mode (returns information when content was flagged and filtered).
 
-When annotations are enabled as shown in the code snippet below, the following information is returned by the API for optional models jailbreak risk, protected material text and protected material code:
+When annotations are enabled as shown in the code snippet below, the following information is returned by the API for the optional models: jailbreak risk, protected material text, and protected material code:
@@ -313,7 +314,7 @@ For the protected material code model, the following additional information is returned…
 
 When displaying code in your application, we strongly recommend that the application also displays the example citation from the annotations. Compliance with the cited license may also be required for Customer Copyright Commitment coverage.
 
-Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
+Annotations are currently available in the GA API version `2024-02-01` and in all preview versions starting from `2023-06-01-preview` for Completions and Chat Completions (GPT models). The following code snippet shows how to use annotations:
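The snippet itself isn't reproduced in this diff view. As a rough, hedged stand-in (endpoint, key, and deployment name are placeholders; the `prompt_filter_results` and `content_filter_results` names are from the service's documented response shape): annotations arrive automatically once a supported `api-version` is selected, with no extra request parameter.

```python
# Sketch: selecting a supported api-version is enough to receive
# annotations; placeholders throughout.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",  # or any preview version >= 2023-06-01-preview
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",
    messages=[{"role": "user", "content": "Example prompt"}],
)

# The raw JSON includes prompt_filter_results (for the prompt) and
# content_filter_results (per choice) annotation blocks.
print(response.model_dump_json(indent=2))
```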
@@ … @@
-For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions please follow [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.
+For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions, please follow the [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using any preview API version starting from `2023-06-01-preview`, as well as the GA API version `2024-02-01`.
 
 ### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API