Commit 7b376d6

Merge pull request #275896 from MicrosoftDocs/main

5/21 OOB publish

2 parents fe54a7d + 7f1d1d5

File tree

187 files changed: +2120 -917 lines changed

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
+---
+title: "Azure OpenAI account access"
+description: Azure OpenAI account access
+author: PatrickFarley
+manager: nitinme
+ms.service: azure-ai-content-safety
+ms.topic: include
+ms.date: 04/12/2024
+ms.author: pafarley
+---
+
+1. Enable Managed Identity for Azure AI Content Safety.
+
+   Navigate to your Azure AI Content Safety instance in the Azure portal. Find the **Identity** section under the **Settings** category. Enable the system-assigned managed identity. This action grants your Azure AI Content Safety instance an identity that can be recognized and used within Azure for accessing other resources.
+
+   :::image type="content" source="/azure/ai-services/content-safety/media/content-safety-identity.png" alt-text="Screenshot of a Content Safety identity resource in the Azure portal." lightbox="/azure/ai-services/content-safety/media/content-safety-identity.png":::
+
+1. Assign a role to the Managed Identity.
+
+   Navigate to your Azure OpenAI instance, and select **Add role assignment** to start the process of assigning an Azure OpenAI role to the Azure AI Content Safety identity.
+
+   :::image type="content" source="/azure/ai-services/content-safety/media/add-role-assignment.png" alt-text="Screenshot of adding role assignment in the Azure portal.":::
+
+   Choose the **User** or **Contributor** role.
+
+   :::image type="content" source="/azure/ai-services/content-safety/media/assigned-roles-simple.png" alt-text="Screenshot of the Azure portal with the Contributor and User roles displayed in a list." lightbox="/azure/ai-services/content-safety/media/assigned-roles-simple.png":::

articles/ai-services/openai/concepts/content-filter.md

Lines changed: 20 additions & 13 deletions
@@ -59,21 +59,26 @@ The content filtering system integrated in the Azure OpenAI Service contains:
 ## Configurability (preview)

-The default content filtering configuration is set to filter at the medium severity threshold for all four content harm categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low isn't filtered by the content filters. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
+The default content filtering configuration for the GPT model series is set to filter at the medium severity threshold for all four content harm categories (hate, violence, sexual, and self-harm) and applies to both prompts (text, multi-modal text/image) and completions (text). This means that content that is detected at severity level medium or high is filtered, while content detected at severity level low isn't filtered by the content filters. For DALL-E, the default severity threshold is set to low for both prompts (text) and completions (images), so content detected at severity levels low, medium, or high is filtered. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:

 | Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
 |-------------------|--------------------------|------------------------------|--------------|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
-| Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
-| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
-| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium, and high is filtered.|
+| Medium, high | Yes | Yes | Content detected at severity level low isn't filtered; content at medium and high is filtered.|
+| High | If approved<sup>1</sup>| If approved<sup>1</sup> | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. Requires approval<sup>1</sup>.|
+| No filters | If approved<sup>1</sup>| If approved<sup>1</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>1</sup>.|

-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control doesn't apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
+<sup>1</sup> For Azure OpenAI models, only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning off content filters. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
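Editor's aside: the severity thresholds in the table above can be read as a simple ordering. A minimal sketch (a local helper with hypothetical names, not part of any Azure SDK):

```javascript
// Severity levels in increasing order; "safe" means nothing was detected.
const SEVERITIES = ["safe", "low", "medium", "high"];

// Returns true if content at detectedSeverity is blocked under a
// configuration that filters at thresholdSeverity and above.
// A null threshold models the "No filters" row (approval required).
function isFiltered(detectedSeverity, thresholdSeverity) {
  if (thresholdSeverity === null) return false;
  return SEVERITIES.indexOf(detectedSeverity) >= SEVERITIES.indexOf(thresholdSeverity);
}

// Default GPT-series configuration filters at medium and above:
console.log(isFiltered("low", "medium"));  // false: low passes
console.log(isFiltered("high", "medium")); // true: high is blocked
```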
-Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
+This preview feature is available for the following Azure OpenAI models:
+* GPT model series (text)
+* GPT-4 Turbo Vision 2024-04-09 (multi-modal text/image)
+* DALL-E 2 and 3 (image)

 Content filtering configurations are created within a Resource in Azure AI Studio, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).

+Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
+
 ## Scenario details

 When the content filtering system detects harmful content, you receive either an error on the API call if the prompt was deemed inappropriate, or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
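Editor's aside: the branching described in the paragraph above can be sketched as follows. `classifyChoice` is a hypothetical helper name; the `finish_reason` field and the `content_filter` value are the API behavior the paragraph describes:

```javascript
// Hypothetical helper: decide how to treat a completion choice based on
// the finish_reason returned by the Completions API.
function classifyChoice(choice) {
  if (choice.finish_reason === "content_filter") {
    // Some of the completion was filtered; the content may be incomplete.
    return "filtered";
  }
  if (choice.finish_reason === "length") {
    return "truncated"; // hit max_tokens; not a filtering event
  }
  return "complete";
}

// Example objects shaped like Completions API choices:
console.log(classifyChoice({ finish_reason: "content_filter", text: "..." })); // "filtered"
console.log(classifyChoice({ finish_reason: "stop", text: "Hello" }));         // "complete"
```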
@@ -815,27 +820,29 @@ The escaped text in a chat completion context would read:
 ## Content streaming

-This section describes the Azure OpenAI content streaming experience and options. With approval, you have the option to receive content from the API as it's generated, instead of waiting for chunks of content that have been verified to pass your content filters.
+This section describes the Azure OpenAI content streaming experience and options. Customers have the option to receive content from the API as it's generated, instead of waiting for chunks of content that have been verified to pass your content filters.

 ### Default

 The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered, the content filtering system runs on the buffered content, and, depending on the content filtering configuration, content is either returned to the user if it doesn't violate the content filtering policy (Microsoft's default or a custom user configuration), or it's immediately blocked and a content filtering error is returned, without returning the harmful completion content. This process is repeated until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user. Content isn't returned token-by-token in this case, but in "content chunks" of the respective buffer size.

-### Asynchronous modified filter
+### Asynchronous Filter

-Customers who have been approved for modified content filters can choose the asynchronous modified filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for zero latency.
+Customers can choose the Asynchronous Filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for a fast streaming experience with zero latency associated with content safety.

 Customers must be aware that while the feature improves latency, it's a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.

 **Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend you consume annotations in your app and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.

 **Content filtering signal**: The content filtering error signal is delayed. In case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within a ~1,000-character window of the policy-violating content.
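Editor's aside: the trade-off above implies a consumption loop that renders tokens as they arrive but stops when the delayed filter signal lands. A minimal sketch, with plain objects standing in for stream chunks (the chunk shape here is an assumption, not the SDK's wire format):

```javascript
// Hypothetical sketch: consume a token stream under the Asynchronous Filter.
// Tokens are rendered immediately; a delayed content-filter signal stops the stream.
function consumeStream(chunks) {
  const output = [];
  for (const chunk of chunks) {
    if (chunk.error && chunk.error.code === "content_filter") {
      // The policy-violation signal arrived (guaranteed within a
      // ~1,000-character window); stop rendering and surface the error.
      return { output, stopped: true };
    }
    if (chunk.text) output.push(chunk.text); // token-by-token rendering
  }
  return { output, stopped: false };
}

const result = consumeStream([
  { text: "Hello" },
  { text: " world" },
  { error: { code: "content_filter" } },
]);
console.log(result.output.join("")); // "Hello world"
console.log(result.stopped);         // true
```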
-Approval for modified content filtering is required for access to the asynchronous modified filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Modified Filter** in the Streaming section.
+**Customer Copyright Commitment**: Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage.
+
+To enable the Asynchronous Filter in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.

 ### Comparison of content filtering modes

-| Compare | Streaming - Default | Streaming - Asynchronous Modified Filter |
+| Compare | Streaming - Default | Streaming - Asynchronous Filter |
 |---|---|---|
 |Status |GA |Public Preview |
 | Eligibility |All customers |Customers approved for modified content filtering |
@@ -925,7 +932,7 @@ data: {
 #### Sample response stream (passes filters)

-Below is a real chat completion response using the asynchronous modified filter. Note how the prompt annotations aren't changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens; they are instead associated with certain content filter offsets.
+Below is a real chat completion response using the Asynchronous Filter. Note how the prompt annotations aren't changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens; they are instead associated with certain content filter offsets.

 `{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "What is color?"}], "stream": true}`

articles/ai-services/openai/includes/javascript.md

Lines changed: 64 additions & 20 deletions
@@ -8,10 +8,13 @@ ms.service: azure-ai-openai
 ms.topic: include
 author: mrbullwinkle
 ms.author: mbullwin
-ms.date: 07/26/2023
+ms.date: 05/20/2024
 ---

-[Source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) | [Package (npm)](https://www.npmjs.com/package/@azure/openai) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai/samples)
+[Source code](https://github.com/openai/openai-node) | [Package (npm)](https://www.npmjs.com/package/openai) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/openai-azure-samples/sdk/openai/openai/samples/v1-beta/javascript)
+
+> [!NOTE]
+> This article has been updated to use the [latest OpenAI npm package](https://www.npmjs.com/package/openai), which now fully supports Azure OpenAI. If you're looking for code examples for the legacy Azure OpenAI JavaScript SDK, they're currently still [available in this repo](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai/samples/v1-beta/javascript).

 ## Prerequisites
@@ -30,19 +33,14 @@ ms.date: 07/26/2023
 [!INCLUDE [environment-variables](environment-variables.md)]

-In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it. Then run the `npm init` command to create a node application with a _package.json_ file.
-
-```console
-npm init
-```
+In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

 ## Install the client library

-Install the Azure OpenAI client library for JavaScript with npm:
+Install the required packages for JavaScript with npm from within the context of your new directory:

 ```console
-npm install @azure/openai
+npm install openai dotenv @azure/identity
 ```

 Your app's _package.json_ file will be updated with the dependencies.
@@ -55,34 +53,39 @@ Your app's _package.json_ file will be updated with the dependencies.
 Open a command prompt where you created the new project, and create a new file named Completion.js. Copy the following code into the Completion.js file.

 ```javascript
-const { OpenAIClient, AzureKeyCredential } = require("@azure/openai");
-const endpoint = process.env["AZURE_OPENAI_ENDPOINT"];
-const azureApiKey = process.env["AZURE_OPENAI_API_KEY"];
+const { AzureOpenAI } = require("openai");
+
+// Load the .env file if it exists
+const dotenv = require("dotenv");
+dotenv.config();
+
+// You will need to set these environment variables or edit the following values
+const endpoint = process.env["ENDPOINT"] || "<endpoint>";
+const apiKey = process.env["AZURE_API_KEY"] || "<api key>";
+const apiVersion = "2024-04-01-preview";
+const deployment = "gpt-35-turbo-instruct"; // The deployment name for your completions API model. The instruct model is the only new model that supports the legacy API.

 const prompt = ["When was Microsoft founded?"];

 async function main() {
   console.log("== Get completions Sample ==");

-  const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
-  const deploymentId = "gpt-35-turbo-instruct";
-  const result = await client.getCompletions(deploymentId, prompt);
+  const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });
+
+  const result = await client.completions.create({ prompt, model: deployment, max_tokens: 128 });

   for (const choice of result.choices) {
     console.log(choice.text);
   }
 }

 main().catch((err) => {
-  console.error("The sample encountered an error:", err);
+  console.error("Error occurred:", err);
 });

 module.exports = { main };
 ```

-> [!IMPORTANT]
-> For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). For more information about credential security, see the Azure AI services [security](../../security-features.md) article.
-
 Run the script with the following command:

 ```cmd
@@ -97,6 +100,47 @@ node.exe Completion.js
 Microsoft was founded on April 4, 1975.
 ```

+## Microsoft Entra ID
+
+> [!IMPORTANT]
+> The previous example demonstrates key-based authentication. After you've tested key-based authentication successfully, we recommend using the more secure [Microsoft Entra ID](/entra/fundamentals/whatis) for authentication, which is demonstrated in the next code sample. Getting started with Microsoft Entra ID requires some additional [prerequisites](https://www.npmjs.com/package/@azure/identity).
+
+```javascript
+const { AzureOpenAI } = require("openai");
+const { DefaultAzureCredential, getBearerTokenProvider } = require("@azure/identity");
+
+// Set AZURE_OPENAI_ENDPOINT to the endpoint of your
+// OpenAI resource. You can find this in the Azure portal.
+// Load the .env file if it exists
+require("dotenv/config");
+
+const prompt = ["When was Microsoft founded?"];
+
+async function main() {
+  console.log("== Get completions Sample ==");
+
+  const scope = "https://cognitiveservices.azure.com/.default";
+  const azureADTokenProvider = getBearerTokenProvider(new DefaultAzureCredential(), scope);
+  const deployment = "gpt-35-turbo-instruct";
+  const apiVersion = "2024-04-01-preview";
+  const client = new AzureOpenAI({ azureADTokenProvider, deployment, apiVersion });
+  const result = await client.completions.create({ prompt, model: deployment, max_tokens: 128 });
+
+  for (const choice of result.choices) {
+    console.log(choice.text);
+  }
+}
+
+main().catch((err) => {
+  console.error("Error occurred:", err);
+});
+
+module.exports = { main };
+```
+
+> [!NOTE]
+> If you receive the error *Error occurred: OpenAIError: The `apiKey` and `azureADTokenProvider` arguments are mutually exclusive; only one can be passed at a time*, you may need to remove a pre-existing environment variable for the API key from your system. Even though the Microsoft Entra ID code sample doesn't explicitly reference the API key environment variable, this error is still generated if one is present on the system executing the sample.
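Editor's aside: one way to clear a stray key variable before constructing the Entra ID client, as the note suggests. The variable name `AZURE_API_KEY` matches the earlier key-based sample and is an assumption about your environment:

```javascript
// Remove a pre-existing API key variable so the client doesn't pick it up
// alongside azureADTokenProvider (the two options are mutually exclusive).
if ("AZURE_API_KEY" in process.env) {
  delete process.env["AZURE_API_KEY"];
}
console.log("AZURE_API_KEY" in process.env); // false
```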
 > [!div class="nextstepaction"]
 > [I ran into an issue when running the code sample.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=JAVASCRIPT&Pillar=AOAI&&Product=gpt&Page=quickstart&Section=Create-application)

articles/ai-services/openai/index.yml

Lines changed: 1 addition & 1 deletion
@@ -137,7 +137,7 @@ landingContent:
       - linkListType: concept
         links:
           - text: Asynchronous content filtering
-            url: ./concepts/content-filter.md#asynchronous-modified-filter
+            url: ./concepts/content-filter.md#asynchronous-filter
           - text: Red teaming large language models (LLMs)
             url: ./concepts/red-teaming.md
           - text: System message templates