**articles/ai-foundry/openai/how-to/content-filters.md** (+1 −1)
@@ -26,7 +26,7 @@ Prompt shields and protected text and code models are optional and on by default
> All customers can modify the content filters and configure the severity thresholds (low, medium, high). Approval is required to turn the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Limited Access Review: Modified Content Filters](https://ncv.microsoft.com/uEfCgnITdR). At this time, it is not possible to become a managed customer.

> [!IMPORTANT]
-> The GPT-image-1 model does not support content filtering configuration: only the default content filter is used.
+> The GPT-image-1 series models do not support content filtering configuration: only the default content filter is used.

Content filters can be configured at the resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
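As a rough orientation, the sketch below shows what creating a filter configuration through the Azure management plane can look like. It's a hedged sketch, not this article's own procedure: the `raiPolicies` resource type, API version, and payload fields are assumptions to verify against the configurability documentation.

```python
# Hedged sketch: resource type ("raiPolicies"), api-version, and payload
# shape are assumptions, not confirmed by this article.
import requests

SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<aoai-resource>"
url = (
    "https://management.azure.com"
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.CognitiveServices/accounts/{ACCOUNT}"
    "/raiPolicies/my-filter-config?api-version=2024-10-01"
)

body = {
    "properties": {
        "basePolicyName": "Microsoft.Default",
        "contentFilters": [
            # One entry per category and direction; thresholds are the
            # low/medium/high severities described above.
            {"name": "Violence", "severityThreshold": "Medium",
             "blocking": True, "enabled": True, "source": "Prompt"},
        ],
    }
}

resp = requests.put(url, json=body, headers={"Authorization": "Bearer <arm-token>"})
resp.raise_for_status()  # the new configuration can then be attached to a deployment
```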
**articles/ai-foundry/openai/how-to/dall-e.md** (+25 −3)
@@ -24,8 +24,24 @@ OpenAI's image generation models create images from user-provided text prompts a
- An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
- An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
-- Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
-- GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
+- Deploy a `dall-e-3` or `gpt-image-1` series model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
+- GPT-image-1 series models are newer and feature a number of improvements over DALL-E 3. They're available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
+
+## Overview
+
+- Use image generation via the [image generation API](https://int.ai.azure.com/doc/azure/ai-foundry/openai/dall-e-quickstart?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2) or the [Responses API](/azure/ai-foundry/openai/how-to/responses?tabs=python-key).
+- Experiment with image generation in the [image playground](https://int.ai.azure.com/doc/azure/ai-foundry/openai/dall-e-quickstart?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2).
+- Explore [content filtering](https://int.ai.azure.com/doc/azure/ai-foundry/openai/concepts/content-filter?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2) and apply to opt out with [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
+- Learn about [image generation tokens](https://int.ai.azure.com/doc/azure/ai-foundry/openai/overview?tid=7f292395-a08f-4cc0-b3d0-a400b023b0d2).
+
+| Aspect | GPT-Image-1 | DALL·E 3 |
+|--------|-------------|----------|
+| **Input / Output Modalities & Format** | Accepts **text + image** inputs; outputs images only in **base64** (no URL option). | Accepts **text (primary)** input; limited image editing inputs (with mask). Outputs as **URL or base64**. |
+| **Number of Images per Request** | 1–10 images per request (`n` parameter). | Only **1 image** per request (`n` must be 1). |
+| **Editing (inpainting / variations)** | Yes — supports inpainting and variations with mask + prompt. | Yes — supports inpainting and variations. |
+| **Strengths** | Better **prompt fidelity**, realism, multimodal context use; strong at **editing instruction-following**. | Strong at **prompt adherence**, natural text rendering, stylistic variety, coherent image generation. |
## Call the image generation API
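As a quick orientation for this section, here's a minimal sketch of a generations call, reflecting the comparison table above (base64-only output for GPT-image-1, `n` for image count). The deployment name and `api-version` are placeholders; substitute the values from your own resource.

```python
# Minimal sketch of an image generation call; deployment name and
# api-version are placeholders to replace with your own values.
import base64
import os

import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource>.openai.azure.com
url = (
    f"{endpoint}/openai/deployments/gpt-image-1/images/generations"
    "?api-version=2025-04-01-preview"
)

resp = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={"prompt": "A watercolor fox in a pine forest", "n": 1, "size": "1024x1024"},
)
resp.raise_for_status()

# Per the comparison table, GPT-image-1 returns base64 only (no URL option).
image_bytes = base64.b64decode(resp.json()["data"][0]["b64_json"])
with open("fox.png", "wb") as f:
    f.write(image_bytes)
```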
@@ -321,6 +337,9 @@ The Image Edit API enables you to modify existing images based on text prompts y
> [!IMPORTANT]
> The input image must be less than 50 MB in size and must be a PNG or JPG file.

+> [!IMPORTANT]
+> `gpt-image-1-mini` does not currently support image edits.
+
Send a POST request to:
@@ -381,7 +400,10 @@ The *image* value indicates the image file you want to edit.
The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.

-This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+
+> [!IMPORTANT]
+> Input fidelity is not supported by the `gpt-image-1-mini` model.
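To make the parameter concrete, here's a hedged sketch of an edits request that sets `input_fidelity`. The endpoint path, `api-version`, and multipart field names are assumptions drawn from the surrounding REST examples; note the two restrictions above for `gpt-image-1-mini`.

```python
# Hedged sketch: endpoint path, api-version, and form field names are
# assumptions to check against the full Image Edit API reference.
import os

import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
url = (
    f"{endpoint}/openai/deployments/gpt-image-1/images/edits"
    "?api-version=2025-04-01-preview"
)

with open("portrait.png", "rb") as image_file:  # PNG or JPG, under 50 MB
    resp = requests.post(
        url,
        headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
        files={"image": image_file},
        data={
            "prompt": "Add round glasses. Change nothing else.",
            "input_fidelity": "high",  # preserve faces more accurately
        },
    )
resp.raise_for_status()
```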
|**Images**| Images included in user messages, both as links or as base64-encoded data. The detail parameter must be set the same across requests. |`gpt-4o`<br/>`gpt-4o-mini`<br/>`o1` (version 2024-12-17) |
-|**Tool use**| Both the messages array and tool definitions. |`gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br/>`gpt-realtime` (version 2025-08-28)<br/>`o1` (version 2024-12-17)<br/>`o3-mini` (version 2025-01-31) |
+|**Tool use**| Both the messages array and tool definitions. |`gpt-4o`<br/>`gpt-4o-mini`<br/>`gpt-4o-realtime-preview` (version 2024-12-17)<br/>`gpt-4o-mini-realtime-preview` (version 2024-12-17)<br/>`gpt-realtime` (version 2025-08-28)<br/>`gpt-realtime-mini` (version 2025-10-06)<br/>`o1` (version 2024-12-17)<br/>`o3-mini` (version 2025-01-31) |
|**Structured outputs**| Structured output schema is appended as a prefix to the system message. |`gpt-4o`<br/>`gpt-4o-mini`<br/>`o1` (version 2024-12-17)<br/>`o3-mini` (version 2025-01-31) |

To improve the likelihood of cache hits, structure your requests so that repetitive content occurs at the beginning of the messages array.
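As a sketch of that ordering: keep long, static content (system prompt, tool definitions) at the front and append only the variable user turn at the end, so repeated requests share a cacheable prefix. The client setup and deployment name here are placeholders.

```python
# Sketch: static, repeated content first; per-request content last.
from openai import AzureOpenAI

client = AzureOpenAI()  # endpoint, key, and api-version read from environment variables

STATIC_SYSTEM_PROMPT = (
    "You are a support agent for Contoso. "
    "<...long, unchanging policy and tool instructions...>"
)

def ask(question: str):
    return client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[
            # Identical across requests -> eligible shared prefix for caching.
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},
            # Varies per request -> keep at the end of the messages array.
            {"role": "user", "content": question},
        ],
    )
```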
**articles/ai-foundry/openai/how-to/realtime-audio-webrtc.md** (+2 −1)
@@ -34,6 +34,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
- `gpt-4o-mini-realtime-preview` (2024-12-17)
- `gpt-4o-realtime-preview` (2024-12-17)
- `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)

You should use API version `2025-04-01-preview` in the URL for the Realtime API. The API version is included in the sessions URL.
@@ -45,7 +46,7 @@ Before you can use GPT real-time audio, you need:
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section in this article. You can deploy the model from the [Azure AI Foundry model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section in this article. You can deploy the model from the [Azure AI Foundry model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.
**articles/ai-foundry/openai/how-to/realtime-audio-websockets.md** (+2 −1)
@@ -31,6 +31,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
- `gpt-4o-mini-realtime-preview` (2024-12-17)
- `gpt-4o-realtime-preview` (2024-12-17)
- `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)
You should use API version `2025-04-01-preview` in the URL for the Realtime API.
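For orientation, here's a hedged sketch of what that URL can look like in a WebSocket client. The path, query parameters, and header handling are assumptions to verify against the connection example in this article.

```python
# Hedged sketch: the URL shape ("/openai/realtime") and the api-key header
# are assumptions; check this article's connection example for exact form.
import asyncio
import json
import os

import websockets  # pip install websockets

async def main():
    url = (
        f"wss://{os.environ['AZURE_OPENAI_RESOURCE']}.openai.azure.com"
        "/openai/realtime?api-version=2025-04-01-preview"
        "&deployment=gpt-realtime-mini"  # your deployment name
    )
    headers = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}
    # Older websockets releases call this keyword extra_headers.
    async with websockets.connect(url, additional_headers=headers) as ws:
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": "Answer briefly."},
        }))
        print(await ws.recv())  # first server event, typically session.created

asyncio.run(main())
```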
@@ -42,7 +43,7 @@ Before you can use GPT real-time audio, you need:
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.
**articles/ai-foundry/openai/how-to/realtime-audio.md** (+4 −3)
@@ -29,6 +29,7 @@ The GPT real-time models are available for global deployments in [East US 2 and
- `gpt-4o-mini-realtime-preview` (2024-12-17)
- `gpt-4o-realtime-preview` (2024-12-17)
- `gpt-realtime` (version 2025-08-28)
+- `gpt-realtime-mini` (version 2025-10-06)

You should use API version `2025-04-01-preview` in the URL for the Realtime API.
@@ -40,10 +41,10 @@ Before you can use GPT real-time audio, you need:
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
-- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.
+- You need a deployment of the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in the Azure AI Foundry portal.

Here are some of the ways you can get started with the GPT Realtime API for speech and audio:
-- For steps to deploy and use the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, or `gpt-realtime` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
+- For steps to deploy and use the `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview`, `gpt-realtime`, or `gpt-realtime-mini` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
- Try the [WebRTC via HTML and JavaScript example](./realtime-audio-webrtc.md#webrtc-example-via-html-and-javascript) to get started with the Realtime API via WebRTC.
- [The Azure-Samples/aisearch-openai-rag-audio repo](https://github.com/Azure-Samples/aisearch-openai-rag-audio) contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT realtime API for audio.
@@ -281,7 +282,7 @@ A user might want to interrupt the assistant's response or ask the assistant to
## Image input

-The `gpt-realtime` model supports image input as part of the conversation. The model can ground responses in what the user is currently seeing. You can send images to the model as part of a conversation item. The model can then generate responses that reference the images.
+The `gpt-realtime` and `gpt-realtime-mini` models support image input as part of the conversation. The model can ground responses in what the user is currently seeing. You can send images to the model as part of a conversation item. The model can then generate responses that reference the images.

The following example JSON body adds an image to the conversation:
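Here's a hedged sketch of such an event. The `input_image` content type and data-URL field follow the Realtime API's published shape, but verify them against the full example in the article.

```python
# Hedged sketch of a conversation.item.create event that carries an image.
import base64
import json

with open("screenshot.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

event = {
    "type": "conversation.item.create",
    "item": {
        "type": "message",
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What am I looking at here?"},
            {"type": "input_image", "image_url": data_url},
        ],
    },
}

# Send over an established Realtime WebSocket connection, for example:
# await ws.send(json.dumps(event))
print(json.dumps(event)[:100] + "...")
```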
**articles/ai-foundry/openai/how-to/responses.md** (+2 −1)
@@ -58,6 +58,7 @@ The responses API is currently available in the following regions:
- `gpt-4.1-nano` (Version: `2025-04-14`)
- `gpt-4.1-mini` (Version: `2025-04-14`)
- `gpt-image-1` (Version: `2025-04-15`)
+- `gpt-image-1-mini` (Version: `2025-10-06`)
- `o1` (Version: `2024-12-17`)
- `o3-mini` (Version: `2025-01-31`)
- `o3` (Version: `2025-04-16`)
@@ -1268,7 +1269,7 @@ Compared to the standalone Image API, the Responses API offers several advantage
* **Flexible inputs**: Accept image File IDs as inputs, in addition to raw image bytes.

> [!NOTE]
-> The image generation tool in the Responses API is only supported by the `gpt-image-1` model. You can however call this model from this list of supported models: `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.
+> The image generation tool in the Responses API is only supported by the `gpt-image-1` series models. You can however call these models from this list of supported models: `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.

Use the Responses API if you want to build conversational image experiences with GPT Image.
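As a sketch of that pattern, a mainline model handles the conversation while the image generation tool produces the image. Client construction and the tool options here are assumptions based on the openai SDK; adapt them to your own deployments.

```python
# Hedged sketch of the Responses API image generation tool via the openai
# SDK's Azure client; model names are deployment placeholders.
import base64

from openai import AzureOpenAI

client = AzureOpenAI(api_version="preview")  # endpoint and key from environment

response = client.responses.create(
    model="gpt-4.1-mini",  # one of the supported caller models listed above
    input="Generate an image of a lighthouse at dawn.",
    tools=[{"type": "image_generation"}],  # fulfilled by a gpt-image-1 series model
)

# Image results arrive as base64 in image_generation_call outputs.
images = [o.result for o in response.output if o.type == "image_generation_call"]
if images:
    with open("lighthouse.png", "wb") as f:
        f.write(base64.b64decode(images[0]))
```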
**articles/ai-foundry/openai/includes/dall-e-rest.md** (+1 −1)
@@ -19,7 +19,7 @@ Use this guide to get started calling the Azure OpenAI in Azure AI Foundry Model
- <a href="https://www.python.org/" target="_blank">Python 3.8 or later version</a>.
- The following Python libraries installed: `os`, `requests`, `json`.
- An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
-- Then, you need to deploy a `gpt-image-1` or `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- Then, you need to deploy a `gpt-image-1`-series or `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).