Commit 047612b

Merge branch 'main' of github.com:MicrosoftDocs/azure-ai-docs-pr into sdg-patches

2 parents: 62be931 + 3acd8c7

1,703 files changed: +24211, -15238 lines

.gitignore

Lines changed: 3 additions & 1 deletion
@@ -11,6 +11,8 @@ _repo.*/
 
 .openpublishing.buildcore.ps1
 
+*sec.endpointdlp
+
 # CoPilot instructions and prompts
 .github/copilot-instructions.md
-.github/prompts/*.md
+.github/prompts/*.md

.openpublishing.redirection.json

Lines changed: 35 additions & 0 deletions
@@ -120,6 +120,11 @@
       "redirect_url": "/azure/ai-services/agents/overview",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/openai/assistants-quickstart.md",
+      "redirect_url": "/azure/ai-services/agents/quickstart",
+      "redirect_document_id": true
+    },
     {
       "source_path_from_root": "/articles/ai-services/openai/how-to/use-your-data-securely.md",
       "redirect_url": "/azure/ai-services/openai/how-to/on-your-data-configuration",
@@ -284,6 +289,36 @@
       "source_path": "articles/ai-services/index.yml",
       "redirect_url": "/azure/ai-foundry",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-endpoint.md",
+      "redirect_url": "/azure/ai-services/speech-service/custom-avatar-create",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/migration-overview-neural-voice.md",
+      "redirect_url": "/azure/ai-services/speech-service/custom-neural-voice",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md",
+      "redirect_url": "/azure/ai-services/speech-service/custom-neural-voice",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md",
+      "redirect_url": "/azure/ai-services/speech-service/custom-neural-voice",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-foundry/quickstarts/hear-speak-playground.md",
+      "redirect_url": "/azure/ai-foundry/quickstarts/get-started-playground",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/language-service/tutorials/prompt-flow.md",
+      "redirect_url": "/azure/ai-services/language-service/tutorials/power-automate",
+      "redirect_document_id": false
     }
   ]
 }
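For orientation (not part of the commit): each entry in this file maps a retired article path to the URL readers should land on instead. A quick local sanity check of the new entries could look like the following sketch, which assumes the file's usual top-level "redirections" array and is run from the repository root:

# Illustrative sketch only: list every redirect and flag entries that carry the
# original document ID to the target (redirect_document_id: true).
import json
from pathlib import Path

data = json.loads(Path(".openpublishing.redirection.json").read_text(encoding="utf-8"))

for entry in data.get("redirections", []):
    source = entry.get("source_path_from_root") or entry.get("source_path")
    note = " (keeps document id)" if entry.get("redirect_document_id") else ""
    print(f"{source} -> {entry['redirect_url']}{note}")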

articles/ai-foundry/ai-services/content-safety-overview.md

Lines changed: 3 additions & 3 deletions
@@ -7,14 +7,14 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ignite-2024
 ms.topic: overview
-ms.date: 05/01/2025
+ms.date: 05/31/2025
 ms.author: pafarley
 author: PatrickFarley
 ---
 
 # Content Safety in the Azure AI Foundry portal
 
-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try it out** page in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
 
 ## Features
 
@@ -46,4 +46,4 @@ Refer to the [Content Safety overview](/azure/ai-services/content-safety/overvie
 
 ## Next step
 
-Get started using Azure AI Content Safety in [Azure AI Foundry portal](https://ai.azure.com) by following the [How-to guide](/azure/ai-services/content-safety/how-to/foundry).
+Get started using Azure AI Content Safety in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) by following the [How-to guide](/azure/ai-services/content-safety/how-to/foundry).
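The article above centers on the Content Safety APIs for detecting harmful content. As added context rather than part of the diff, a minimal sketch of calling the text-analysis API with the azure-ai-contentsafety Python package (the endpoint and key environment variable names are placeholders of ours) might look like this:

# Illustrative sketch, not part of the commit: screen one text sample with
# Azure AI Content Safety. Assumes the azure-ai-contentsafety package and two
# placeholder environment variables for your resource's endpoint and key.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Classify the text across the harm categories (hate, sexual, violence, self-harm).
result = client.analyze_text(AnalyzeTextOptions(text="Sample user input to screen."))

for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")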

articles/ai-foundry/concepts/ai-resources.md

Lines changed: 34 additions & 88 deletions
Large diffs are not rendered by default.

articles/ai-foundry/concepts/concept-model-distillation.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ description: Learn how to do distillation in Azure AI Foundry portal.
 manager: scottpolly
 ms.service: azure-ai-foundry
 ms.topic: how-to
-ms.date: 03/09/2025
+ms.date: 05/20/2025
 ms.reviewer: vkann
 reviewer: anshirga
 ms.author: ssalgado

articles/ai-foundry/concepts/concept-synthetic-data.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ description: Learn how to generate a synthetic dataset in Azure AI Foundry porta
 manager: scottpolly
 ms.service: azure-ai-foundry
 ms.topic: how-to
-ms.date: 03/11/2025
+ms.date: 05/20/2025
 ms.reviewer: vkann
 reviewer: anshirga
 ms.author: ssalgado

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 5 additions & 18 deletions
@@ -9,24 +9,24 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: conceptual
-ms.date: 04/29/2025
+ms.date: 05/31/2025
 ms.reviewer: eur
 ms.author: pafarley
 author: PatrickFarley
 ---
 
 # Content filtering in Azure AI Foundry portal
 
-[Azure AI Foundry](https://ai.azure.com) includes a content filtering system that works alongside core models and image generation models.
+[Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) includes a content filtering system that works alongside core models and image generation models.
 
 > [!IMPORTANT]
 > The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI in Azure AI Foundry Models. Learn more about the [Whisper model in Azure OpenAI](../../ai-services/openai/concepts/models.md).
 
 ## How it works
 
-The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
+The content filtering system is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md), and it works by running both the model prompt input and completion output through a set of classification models designed to detect and prevent the output of harmful content. Variations in API configurations and application design might affect completions and thus filtering behavior.
 
-With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later on). Models available through **standard deployments** have content filtering enabled by default. To learn more about the default content filter enabled for standard deployments, see [Content safety for Models Sold Directly by Azure ](model-catalog-content-safety.md).
+With Azure OpenAI model deployments, you can use the default content filter or create your own content filter (described later). Models available through **standard deployments** have content filtering enabled by default. To learn more about the default content filter enabled for standard deployments, see [Content safety for Models Sold Directly by Azure ](model-catalog-content-safety.md).
 
 ## Language support
 
@@ -73,20 +73,7 @@ You can also enable the following special output filters:
 
 ### Configurability (preview)
 
-The default content filtering configuration for the GPT model series is set to filter at the medium severity threshold for all four content harm categories (hate, violence, sexual, and self-harm) and applies to both prompts (text, multi-modal text/image) and completions (text). This means that content that is detected at severity level medium or high is filtered, while content detected at severity level low isn't filtered by the content filters. For DALL-E, the default severity threshold is set to low for both prompts (text) and completions (images), so content detected at severity levels low, medium, or high is filtered.
-
-The configurability feature allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
-
-| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
-|-------------------|--------------------------|------------------------------|--------------|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
-| Medium, high | Yes | Yes | Content detected at severity level low isn't filtered, content at medium and high is filtered.|
-| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. Requires approval<sup>1</sup>.|
-| No filters | If approved<sup>1</sup>| If approved<sup>1</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>1</sup>.|
-
-<sup>1</sup> For Azure OpenAI models, only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning off content filters. Apply for modified content filters via these forms: [Azure OpenAI Limited Access Review: Modified Content Filters](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu), and [Modified Abuse Monitoring](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu).
-
-Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/ai-code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
+[!INCLUDE [content-filter-configurability](../../ai-services/openai/includes/content-filter-configurability.md)]
 
 
 ## Related content
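The configurability section that this diff replaces with a shared include boils down to a per-category severity threshold: content detected at or above the configured threshold is filtered, and the loosest settings require approval. A rough sketch of that decision (our own illustration, not Azure SDK code) follows:

# Illustrative sketch only: how a configured severity threshold maps to a
# filter decision for a single harm category. The ordering below is ours; it is
# not an Azure API value.
from typing import Optional

_ORDER = {"low": 1, "medium": 2, "high": 3}

def is_filtered(detected_severity: str, threshold: Optional[str]) -> bool:
    """Return True when content at detected_severity is blocked.

    "low"    -> filters low, medium, and high (strictest setting)
    "medium" -> filters medium and high (the GPT-series default)
    "high"   -> filters high only (requires approval)
    None     -> no filtering (requires approval)
    """
    if threshold is None:
        return False
    return _ORDER[detected_severity] >= _ORDER[threshold]

# The default (medium) threshold blocks medium and high severity but passes low.
assert is_filtered("high", "medium") and not is_filtered("low", "medium")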
