
Commit e76affe

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-ai-docs-pr (branch live)
2 parents e5d5061 + aa40e51 commit e76affe

File tree

17 files changed: +79 -73 lines changed


articles/ai-foundry/foundry-models/how-to/use-blocklists.md

Lines changed: 3 additions & 6 deletions
@@ -7,31 +7,28 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 07/28/2025
+ms.date: 11/21/2025
 ms.author: pafarley
 author: PatrickFarley
 monikerRange: 'foundry-classic || foundry'
 ---

-# How to use blocklists with Foundry Models in Microsoft Foundry services
+# Use blocklists with Foundry Models in Microsoft Foundry

 The configurable content filters are sufficient for most content moderation needs. However, you might need to create custom blocklists in the [Microsoft Foundry portal](https://ai.azure.com/?cid=learnDocs) as part of your content filtering configurations to filter terms specific to your use case. This article shows how to create custom blocklists as part of your content filters in the [Foundry portal](https://ai.azure.com/?cid=learnDocs).

 ## Prerequisites

 * An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. For more information, see [Upgrade from GitHub Models to Foundry Models](../../model-inference/how-to/quickstart-github-models.md).
-
 * A Foundry services resource. For more information, see [Create a Foundry Services resource](../../../ai-services/multi-service-resource.md?context=/azure/ai-services/model-inference/context/context).
-
 * A Foundry project [connected to your Foundry services resource](../../model-inference/how-to/configure-project-connection.md).
-
 * A model deployment. For more information, see [Add and configure models to Foundry services](../../model-inference/how-to/create-model-deployments.md).

 > [!NOTE]
 > Blocklist (preview) support is limited to Azure OpenAI models.

 [!INCLUDE [use-blocklists](../../includes/use-blocklists.md)]

-## Next steps
+## Next step

 * [Configure content filtering](../../model-inference/how-to/configure-content-filters.md)
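
For readers following this change, it may help to see what a blocklist hit looks like at the API level. The sketch below is illustrative and not part of the commit: it assumes the openai Python package's AzureOpenAI client with placeholder endpoint, key, and deployment names, and relies on the documented behavior that filtered requests fail with HTTP 400 and error code `content_filter`.

```python
# Minimal sketch: detect a content-filter / blocklist rejection from Azure OpenAI.
# Endpoint, key, api_version, and deployment name below are placeholders.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-10-21",
)

try:
    response = client.chat.completions.create(
        model="<your-deployment>",  # placeholder deployment name
        messages=[{"role": "user", "content": "A prompt containing a blocked term"}],
    )
    print(response.choices[0].message.content)
except BadRequestError as err:
    # Azure OpenAI signals filtering with HTTP 400 and error code "content_filter".
    if err.code == "content_filter":
        print("Request was blocked by a content filter or custom blocklist.")
    else:
        raise
```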

articles/ai-foundry/openai/concepts/audio.md

Lines changed: 5 additions & 4 deletions
@@ -7,7 +7,7 @@ ms.author: pafarley
 ms.service: azure-ai-foundry
 ms.subservice: azure-ai-foundry-openai
 ms.topic: conceptual
-ms.date: 7/12/2025
+ms.date: 11/21/2025
 ms.custom: template-concept
 manager: nitinme
 ---
@@ -16,13 +16,14 @@ manager: nitinme

 [!INCLUDE [classic-banner](../../includes/classic-banner.md)]

-> [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI.

-Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs. The audio models are designed to handle a variety of tasks, including speech recognition, translation, and text to speech.
+Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs. The audio models are designed to handle a variety of tasks including speech recognition, translation, and text to speech.

 For information about the available audio models per region in Azure OpenAI, see the [audio models](models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint), [standard models by endpoint](models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint), and [global standard model availability](models.md?tabs=standard-audio#global-standard-model-availability) documentation.

+> [!IMPORTANT]
+> The content filtering system isn't applied to prompts and completions processed by audio models in Azure OpenAI, such as Whisper.
+
 ## GPT-4o audio Realtime API

 GPT real-time audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user. For more information on how to use GPT real-time audio, see the [GPT real-time audio quickstart](../realtime-audio-quickstart.md) and [how to use GPT-4o audio](../how-to/realtime-audio.md).
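
As context for the `audio` API surface this page describes, here's a minimal speech-recognition sketch, assuming the openai Python package's AzureOpenAI client and a Whisper deployment; the endpoint, key, file name, and deployment name are placeholders, not values from the commit.

```python
# Minimal sketch: transcribe a local audio file with an Azure OpenAI Whisper deployment.
# Endpoint, key, api_version, file name, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-10-21",
)

# Speech recognition: send the audio bytes to the transcriptions endpoint.
with open("speech.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper",  # the deployment name, not the model family name
        file=audio_file,
    )

print(transcription.text)
```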

articles/ai-foundry/openai/concepts/content-filter-personal-information.md

Lines changed: 4 additions & 2 deletions
@@ -3,7 +3,7 @@ title: Personally Identifiable Information (PII) Filter
 description: Learn about the Personally Identifiable Information (PII) filter for identifying and flagging known personal information in large language model outputs.
 author: ssalgadodev
 ms.author: ssalgado
-ms.date: 07/03/2025
+ms.date: 11/21/2025
 ms.topic: conceptual
 ms.service: azure-ai-openai
 monikerRange: 'foundry-classic || foundry'
@@ -24,4 +24,6 @@ There are many different types of PII, and you can specify which types you want

 ## Filtering modes

-The PII filter can be configured to operate in two modes. **Annotate** mode flags PII that's returned in the model output. **Annotate and Block** mode blocks the entire output if PII is detected. The filtering mode can be set for each PII category individually.
+The PII filter can be configured to operate in two modes.
+- **Annotate** mode flags PII that's returned in the model output.
+- **Annotate and Block** mode blocks the entire output if PII is detected. The filtering mode can be set for each PII category individually.
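
To make Annotate mode concrete, the sketch below (illustrative, not part of the commit) calls the chat completions REST endpoint and prints the per-choice `content_filter_results` annotations that Azure OpenAI returns; the endpoint, key, deployment name, and the exact key under which PII annotations appear are assumptions based on the general content-filter annotation format.

```python
# Minimal sketch: inspect content-filter annotations on an Azure OpenAI response.
# Endpoint, key, and deployment are placeholders; the PII annotation key is illustrative.
import requests

endpoint = "https://<your-resource>.openai.azure.com"   # placeholder
deployment = "<your-deployment>"                        # placeholder
url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2024-10-21"
headers = {"api-key": "<your-api-key>"}                 # placeholder

body = {"messages": [{"role": "user", "content": "Draft a bio for Jane Doe, jane@contoso.com"}]}
resp = requests.post(url, headers=headers, json=body).json()

# In Annotate mode the output is returned and flagged; in Annotate and Block mode
# a flagged category causes the whole output to be blocked instead.
for choice in resp.get("choices", []):
    annotations = choice.get("content_filter_results", {})
    print(annotations)  # per-category annotation results for this choice
```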

articles/ai-foundry/openai/how-to/dall-e.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ OpenAI's image generation models create images from user-provided text prompts a

 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
-- Deploy a `dall-e-3` or `gpt-image-1` series model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
+- Deploy a `dall-e-3` or `gpt-image-1`-series model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
 - GPT-image-1 models are newer and feature a number of improvements over DALL-E 3. They are available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).

 ## Overview
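
Once the prerequisites above are met, a first image generation call looks roughly like the sketch below, using the openai Python package's AzureOpenAI client; the endpoint, key, deployment name, and prompt are placeholders, not values from the commit.

```python
# Minimal sketch: generate an image with a dall-e-3 deployment on Azure OpenAI.
# Endpoint, key, api_version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-10-21",
)

result = client.images.generate(
    model="dall-e-3",  # the deployment name from the prerequisite above
    prompt="A watercolor painting of a lighthouse at dawn",
    n=1,
    size="1024x1024",
)

# dall-e-3 returns a hosted URL by default; gpt-image-1 returns base64 image data.
print(result.data[0].url)
```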

articles/ai-foundry/openai/how-to/responses.md

Lines changed: 1 addition & 1 deletion
@@ -1288,7 +1288,7 @@ Compared to the standalone Image API, the Responses API offers several advantage
 * **Flexible inputs**: Accept image File IDs as inputs, in addition to raw image bytes.

 > [!NOTE]
-> The image generation tool in the Responses API is only supported by the `gpt-image-1` series models. You can however call this model from this list of supported models - `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.
+> The image generation tool in the Responses API is only supported by the `gpt-image-1`-series models. You can however call this model from this list of supported models - `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.

 Use the Responses API if you want to build conversational image experiences with GPT Image.
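
For reference, a call to the Responses API image generation tool described in this note looks roughly like the sketch below. It assumes the openai Python package's Responses support and a preview Azure API version; the endpoint, key, model name, and output handling are assumptions, not part of the commit.

```python
# Minimal sketch: invoke the image_generation tool through the Responses API.
# Endpoint, key, api_version, and model name are placeholders.
import base64
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="preview",                                      # placeholder preview API version
)

response = client.responses.create(
    model="gpt-4.1-mini",  # one of the supported caller models listed in the note
    input="Generate an image of a red fox in the snow",
    tools=[{"type": "image_generation"}],  # generation is handled by a gpt-image-1-series model
)

# Generated images come back as base64-encoded image_generation_call output items.
for item in response.output:
    if item.type == "image_generation_call":
        with open("fox.png", "wb") as f:
            f.write(base64.b64decode(item.result))
```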

articles/ai-foundry/responsible-ai/speech-service/text-to-speech/concepts-disclosure-guidelines.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.author: pafarley
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: article
-ms.date: 12/03/2019
+ms.date: 11/21/2025
 ---

 # Disclosure design guidelines for synthetic voices
@@ -24,7 +24,7 @@ Disclosure is a means of letting people know they're interacting with or listeni

 The need to disclose the synthetic origins of a computer-generated voice is relatively new. In the past, computer-generated voices were obviously that—no one would ever mistake them for a real person. Every day, however, the realism of synthetic voices improves, and they become increasingly indistinguishable from human voices.

-## Goals
+## Design principles

 These are the principles to keep in mind when designing synthetic voice experiences:

articles/ai-foundry/responsible-ai/speech-service/text-to-speech/concepts-disclosure-patterns.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.author: pafarley
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: article
-ms.date: 12/03/2019
+ms.date: 11/21/2025
 ---

 # Disclosure design patterns for synthetic voices
@@ -16,7 +16,7 @@ ms.date: 12/03/2019

 Now that you've determined the right level of disclosure for your text to speech avatar experience, it's a good time to explore potential design patterns.

-## Overview
+## Design pattern overview

 There's a spectrum of disclosure design patterns you can apply to your synthetic voice experience. If the outcome of your disclosure assessment was 'High Disclosure', we recommend [explicit disclosure](#explicit-disclosure), which means communicating the origins of the synthetic voice outright. [Implicit disclosure](#implicit-disclosure) includes cues and interaction patterns that benefit voice experiences whether required disclosure levels are high or low.

articles/ai-services/computer-vision/quickstarts-sdk/client-library.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-vision
 ms.topic: quickstart
-ms.date: 09/26/2025
+ms.date: 11/21/2025
 ms.author: pafarley
 ms.devlang: csharp
 # ms.devlang: csharp, golang, java, javascript, python

0 commit comments
