Commit e0ca3ae

Merge pull request #7548 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-08 17:04 UTC
2 parents 2f0f904 + 01dd3ed commit e0ca3ae

29 files changed: +146 −98 lines

articles/ai-foundry/openai/concepts/default-safety-policies.md (1 addition, 1 deletion)

```diff
@@ -53,7 +53,7 @@ Text models in the Azure OpenAI can take in and generate both text and code. The

 ### Image generation models

-#### [GPT-image-1](#tab/gpt-image-1)
+#### [GPT-image-1 series](#tab/gpt-image-1)

 | Risk Category | Prompt/Completion | Severity Threshold |
 |---------------------------------------------------|------------------------|---------------------|
```
articles/ai-foundry/openai/how-to/content-filters.md (1 addition, 1 deletion)

```diff
@@ -26,7 +26,7 @@ Prompt shields and protected text and code models are optional and on by default
 > All customers have the ability to modify the content filters and configure the severity thresholds (low, medium, high). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.

 > [!IMPORTANT]
-> The GPT-image-1 model does not support content filtering configuration: only the default content filter is used.
+> The GPT-image-1 series models do not support content filtering configuration: only the default content filter is used.

 Content filters can be configured at the resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
```
articles/ai-foundry/openai/how-to/dall-e.md (25 additions, 3 deletions)

````diff
@@ -24,8 +24,24 @@ OpenAI's image generation models create images from user-provided text prompts a

 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
-- Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
-- GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
+- Deploy a `dall-e-3` or `gpt-image-1` series model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
+- GPT-image-1 models are newer and feature a number of improvements over DALL-E 3. They are available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
+
+## Overview
+
+- Use image generation via the [image generation API](https://int.ai.azure.com/doc/azure/ai-foundry/openai/dall-e-quickstart?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2) or the [responses API](/azure/ai-foundry/openai/how-to/responses?tabs=python-key)
+- Experiment with image generation in the [image playground](https://int.ai.azure.com/doc/azure/ai-foundry/openai/dall-e-quickstart?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2)
+- Explore [content filtering](https://int.ai.azure.com/doc/azure/ai-foundry/openai/concepts/content-filter?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2) and apply to opt out [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
+- Learn about [image generation tokens](https://int.ai.azure.com/doc/azure/ai-foundry/openai/overview?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2)
+
+| Aspect | GPT-Image-1 | DALL·E 3 |
+|--------|-------------|----------|
+| **Input / Output Modalities & Format** | Accepts **text + image** inputs; outputs images only in **base64** (no URL option). | Accepts **text (primary)** input; limited image editing inputs (with mask). Outputs as **URL or base64**. |
+| **Image Sizes / Resolutions** | 1024×1024, 1024×1536, 1536×1024 | 1024×1024, 1024×1792, 1792×1024 |
+| **Quality Options** | `low`, `medium`, `high` (default = high) | `standard`, `hd`; style options: `natural`, `vivid` |
+| **Number of Images per Request** | 1–10 images per request (`n` parameter) | Only **1 image** per request (`n` must be 1) |
+| **Editing (inpainting / variations)** | Yes — supports inpainting and variations with mask + prompt | Yes — supports inpainting and variations |
+| **Strengths** | Better **prompt fidelity**, realism, multimodal context use, strong in **editing instruction-following** | Strong at **prompt adherence**, natural text rendering, stylistic variety, coherent image generation |


 ## Call the image generation API

@@ -321,6 +337,9 @@ The Image Edit API enables you to modify existing images based on text prompts y
 > [!IMPORTANT]
 > The input image must be less than 50 MB in size and must be a PNG or JPG file.

+> [!IMPORTANT]
+> `gpt-image-1-mini` does not currently support image edits.
+
 Send a POST request to:

 ```

@@ -381,7 +400,10 @@ The *image* value indicates the image file you want to edit.

 The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.

-This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+
+> [!IMPORTANT]
+> Input fidelity is not supported by the `gpt-image-1-mini` model.


 #### Mask
````

articles/ai-foundry/openai/how-to/prompt-caching.md (2 additions, 2 deletions)

```diff
@@ -72,9 +72,9 @@ Prompt caching is supported for:

 |**Caching supported**|**Description**|**Supported models**|
 |--------|--------|--------|
-| **Messages** | The complete messages array: system, developer, user, and assistant content | `gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br>`gpt-realtime` (version 2025-08-28)<br> `o1` (version 2024-12-17) <br> `o3-mini` (version 2025-01-31) |
+| **Messages** | The complete messages array: system, developer, user, and assistant content | `gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br>`gpt-realtime` (version 2025-08-28)<br>`gpt-realtime-mini` (version 2025-10-06)<br>`o1` (version 2024-12-17) <br> `o3-mini` (version 2025-01-31) |
 | **Images** | Images included in user messages, both as links or as base64-encoded data. The detail parameter must be set the same across requests. | `gpt-4o`<br/>`gpt-4o-mini` <br> `o1` (version 2024-12-17) |
-| **Tool use** | Both the messages array and tool definitions. | `gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br>`gpt-realtime` (version 2025-08-28)<br> `o1` (version 2024-12-17) <br> `o3-mini` (version 2025-01-31) |
+| **Tool use** | Both the messages array and tool definitions. | `gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br>`gpt-realtime` (version 2025-08-28)<br>`gpt-realtime-mini` (version 2025-10-06)<br> `o1` (version 2024-12-17) <br> `o3-mini` (version 2025-01-31) |
 | **Structured outputs** | Structured output schema is appended as a prefix to the system message. | `gpt-4o`<br/>`gpt-4o-mini` <br> `o1` (version 2024-12-17) <br> `o3-mini` (version 2025-01-31) |

 To improve the likelihood of cache hits occurring, you should structure your requests such that repetitive content occurs at the beginning of the messages array.
```
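The cache-hit advice in the last context line of the diff (repetitive content first) can be sketched as a message builder. The system prompt text and function name here are placeholders, not from the article:

```python
# Minimal sketch of cache-friendly request structure: the static system
# prompt (and any fixed tool definitions) lead the messages array, while
# per-request user content comes last. SYSTEM_PROMPT is a placeholder.

SYSTEM_PROMPT = "You are a support assistant. Always answer in plain language."

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    # Requests sharing an identical prefix are more likely to hit the cache.
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_input}]
    )
```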

articles/ai-foundry/openai/how-to/realtime-audio-webrtc.md (2 additions, 1 deletion)

```diff
@@ -34,6 +34,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
 - `gpt-4o-mini-realtime-preview` (2024-12-17)
 - `gpt-4o-realtime-preview` (2024-12-17)
 - `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)

 You should use API version `2025-04-01-preview` in the URL for the Realtime API. The API version is included in the sessions URL.

@@ -45,7 +46,7 @@ Before you can use GPT real-time audio, you need:

 - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
 - An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section in this article. You can deploy the model from the [Azure AI Foundry model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section in this article. You can deploy the model from the [Azure AI Foundry model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.

 ## Connection and authentication
```

articles/ai-foundry/openai/how-to/realtime-audio-websockets.md (2 additions, 1 deletion)

```diff
@@ -31,6 +31,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
 - `gpt-4o-mini-realtime-preview` (2024-12-17)
 - `gpt-4o-realtime-preview` (2024-12-17)
 - `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)

 You should use API version `2025-04-01-preview` in the URL for the Realtime API.

@@ -42,7 +43,7 @@ Before you can use GPT real-time audio, you need:

 - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
 - An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.

 ## Connection and authentication
```

articles/ai-foundry/openai/how-to/realtime-audio.md (4 additions, 3 deletions)

```diff
@@ -29,6 +29,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
 - `gpt-4o-mini-realtime-preview` (2024-12-17)
 - `gpt-4o-realtime-preview` (2024-12-17)
 - `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)

 You should use API version `2025-04-01-preview` in the URL for the Realtime API.

@@ -40,10 +41,10 @@ Before you can use GPT real-time audio, you need:

 - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
 - An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.

 Here are some of the ways you can get started with the GPT Realtime API for speech and audio:
-- For steps to deploy and use the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
+- For steps to deploy and use the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
 - Try the [WebRTC via HTML and JavaScript example](./realtime-audio-webrtc.md#webrtc-example-via-html-and-javascript) to get started with the Realtime API via WebRTC.
 - [The Azure-Samples/aisearch-openai-rag-audio repo](https://github.com/Azure-Samples/aisearch-openai-rag-audio) contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT realtime API for audio.

@@ -281,7 +282,7 @@ A user might want to interrupt the assistant's response or ask the assistant to

 ## Image input

-The `gpt-realtime` model supports image input as part of the conversation. The model can ground responses in what the user is currently seeing. You can send images to the model as part of a conversation item. The model can then generate responses that reference the images.
+The `gpt-realtime` and `gpt-realtime-mini` models support image input as part of the conversation. The model can ground responses in what the user is currently seeing. You can send images to the model as part of a conversation item. The model can then generate responses that reference the images.

 The following example json body adds an image to the conversation:
```
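The image-input paragraph in the diff above can be illustrated with a sketch of the conversation item a client sends over the Realtime API. The event shape follows the `conversation.item.create` pattern but should be verified against the article's full JSON example; the base64 string is placeholder data:

```python
# Hedged sketch of a Realtime API conversation item carrying an image,
# per the image-input paragraph in the diff above. Verify the exact field
# names against the article's JSON example. IMAGE_B64 is placeholder data.

IMAGE_B64 = "iVBORw0KGgo="  # placeholder, not a real image

def build_image_item_event(prompt: str) -> dict:
    """Build a conversation.item.create event pairing text with an image."""
    return {
        "type": "conversation.item.create",
        "item": {
            "type": "message",
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image",
                 "image_url": f"data:image/png;base64,{IMAGE_B64}"},
            ],
        },
    }
```

A client would serialize this dict to JSON and send it over the WebSocket or WebRTC data channel before requesting a response.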

articles/ai-foundry/openai/how-to/responses.md (2 additions, 1 deletion)

```diff
@@ -58,6 +58,7 @@ The responses API is currently available in the following regions:
 - `gpt-4.1-nano` (Version: `2025-04-14`)
 - `gpt-4.1-mini` (Version: `2025-04-14`)
 - `gpt-image-1` (Version: `2025-04-15`)
+- `gpt-image-1-mini` (Version: `2025-10-06`)
 - `o1` (Version: `2024-12-17`)
 - `o3-mini` (Version: `2025-01-31`)
 - `o3` (Version: `2025-04-16`)

@@ -1268,7 +1269,7 @@ Compared to the standalone Image API, the Responses API offers several advantage
 * **Flexible inputs**: Accept image File IDs as inputs, in addition to raw image bytes.

 > [!NOTE]
-> The image generation tool in the Responses API is only supported by the `gpt-image-1` model. You can however call this model from this list of supported models - `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.
+> The image generation tool in the Responses API is only supported by the `gpt-image-1` series models. You can however call this model from this list of supported models - `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.

 Use the Responses API if you want to build conversational image experiences with GPT Image.
```
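The note in the diff above (the tool is restricted to `gpt-image-1` series models, is invoked from a mainline model, and does not stream) can be sketched as a request builder. The function is illustrative, not SDK code:

```python
# Illustrative request body for the Responses API image generation tool:
# a mainline model (here gpt-4.1, one of the supported callers the note
# lists) carries the conversation while the tool itself runs a
# gpt-image-1 series model. Streaming is left off because the note says
# the tool does not support it.

def build_image_tool_request(prompt: str, model: str = "gpt-4.1") -> dict:
    return {
        "model": model,
        "input": prompt,
        "tools": [{"type": "image_generation"}],
        "stream": False,  # partial-image streaming requires the Image API instead
    }
```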

articles/ai-foundry/openai/includes/dall-e-rest.md (1 addition, 1 deletion)

```diff
@@ -19,7 +19,7 @@ Use this guide to get started calling the Azure OpenAI in Azure AI Foundry Model
 - <a href="https://www.python.org/" target="_blank">Python 3.8 or later version</a>.
 - The following Python libraries installed: `os`, `requests`, `json`.
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
-- Then, you need to deploy a `gpt-image-1` or `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- Then, you need to deploy a `gpt-image-1`-series or `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).

 ## Setup
```
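The prerequisites in the diff above pair naturally with a short sketch of assembling the REST call with the listed libraries. The URL pattern and api-version value are assumptions to verify against the current API reference, and the deployment name in the usage example is a placeholder:

```python
import json

# Hedged sketch using the libraries the prerequisites list (`requests`,
# `json`, `os`). The URL pattern and api-version are assumptions; confirm
# them against the current Azure OpenAI reference before use.

API_VERSION = "2025-04-01-preview"  # placeholder; check the current value

def build_generation_request(endpoint: str, deployment: str, prompt: str) -> tuple[str, str]:
    """Return the request URL and JSON body for an image generation call.

    Send with: requests.post(url, headers={"api-key": key,
    "Content-Type": "application/json"}, data=body).
    """
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        f"/images/generations?api-version={API_VERSION}"
    )
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"})
    return url, body
```

Reading the endpoint and key from environment variables (via `os.environ`) rather than hardcoding them matches the setup the guide goes on to describe.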
