articles/ai-foundry/openai/how-to/dall-e.md: 34 additions & 27 deletions
@@ -1,11 +1,11 @@
 ---
-title: How to use image generation models
+title: How to Use Image Generation Models from OpenAI
 titleSuffix: Azure OpenAI in Azure AI Foundry Models
-description: Learn how to generate and edit images with image models, and learn about the configuration options that are available.
+description: Learn how to generate and edit images using Azure OpenAI image generation models. Discover configuration options and start creating images today.
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 04/23/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -15,22 +15,26 @@ ms.custom:

 # How to use Azure OpenAI image generation models

-OpenAI's image generation models render images based on user-provided text prompts and optionally provided images. This guide demonstrates how to use the image generation models and configure their options through REST API calls.
+OpenAI's image generation models create images from user-provided text prompts and optional images. This article explains how to use these models, configure options, and benefit from advanced image generation capabilities in Azure.

 ## Prerequisites

 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
 - Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
 - GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).

-## Call the Image Generation API

-The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
+## Call the image generation API
+
+The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, start with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).

 - `<your_resource_name>` is the name of your Azure OpenAI resource.
 - `<your_deployment_name>` is the name of your DALL-E 3 or GPT-image-1 model deployment.
 - `<api_version>` is the version of the API you want to use. For example, `2025-04-01-preview`.

 **Required headers**:

 - `Content-Type`: `application/json`
 - `api-key`: `<your_API_key>`

 **Body**:

 The following is a sample request body. You specify a number of options, defined in later sections.
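The article's full sample body isn't shown in this diff. As a rough sketch only, a basic generation request might look like the following; the endpoint path, header names, and field values here are illustrative placeholders built from the bullets above, not the article's exact sample.

```bash
# Placeholders: replace <your_resource_name>, <your_deployment_name>, <api_version>, and <your_API_key>.
curl -X POST "https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deployment_name>/images/generations?api-version=<api_version>" \
  -H "Content-Type: application/json" \
  -H "api-key: <your_API_key>" \
  -d '{
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "size": "1024x1024",
    "n": 1
  }'
```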
@@ -122,7 +130,7 @@ The response from a successful image generation API call looks like the followin
 }
 ```
 > [!NOTE]
-> `response_format` parameter is not supported for GPT-image-1 which always returns base64-encoded images.
+> The `response_format` parameter isn't supported for GPT-image-1, which always returns base64-encoded images.

 #### [DALL-E 3](#tab/dalle-3)

@@ -144,7 +152,7 @@ The response from a successful image generation API call looks like the followin

 ### API call rejection

-Prompts and images are filtered based on our content policy, returning an error when a prompt or image is flagged.
+Prompts and images are filtered based on our content policy. The API returns an error when a prompt or image is flagged.

 If your prompt is flagged, the `error.code` value in the message is set to `contentFilter`. Here's an example:

@@ -172,9 +180,9 @@ It's also possible that the generated image itself is filtered. In this case, th
 }
 ```

-### Write text-to-image prompts
+### Write effective text-to-image prompts

-Your prompts should describe the content you want to see in the image, and the visual style of image.
+Your prompts should describe the content you want to see in the image and the visual style of the image.
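For example, a prompt like "A watercolor painting of a lighthouse at dawn, in soft pastel tones with loose brushstrokes" names both the subject and the visual style.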

 When you write prompts, consider that the Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).

@@ -197,7 +205,7 @@ Specify the size of the generated images. Must be one of `1024x1024`, `1024x1536

 #### Quality

-There are three options for image quality: `low`, `medium`, and `high`.Lower quality images can be generated faster.
+There are three options for image quality: `low`, `medium`, and `high`. Lower quality images can be generated faster.

 The default value is `high`.

@@ -207,22 +215,22 @@ You can generate between one and 10 images in a single API call. The default val

 #### User ID

-Use the *user* parameter to specify a unique identifier for the user making the request. This is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
+Use the *user* parameter to specify a unique identifier for the user making the request. This identifier is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.

 #### Output format

 Use the *output_format* parameter to specify the format of the generated image. Supported formats are `PNG` and `JPEG`. The default is `PNG`.

 > [!NOTE]
-> WEBP images are not supported in the Azure OpenAI in Azure AI Foundry Models.
+> WEBP images aren't supported in Azure OpenAI in Azure AI Foundry Models.

 #### Compression

 Use the *output_compression* parameter to specify the compression level for the generated image. Input an integer between `0` and `100`, where `0` is no compression and `100` is maximum compression. The default is `100`.

 #### Streaming

-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
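As an illustrative sketch of how the GPT-image-1 options above could be combined, the following request body uses the parameters described in this section; the values are arbitrary examples, and the exact casing and accepted values are defined by the API reference, not by this snippet.

```json
{
  "prompt": "A photorealistic red fox in a snowy forest",
  "n": 2,
  "size": "1024x1024",
  "quality": "medium",
  "user": "user-1234",
  "output_format": "jpeg",
  "output_compression": 80,
  "stream": true,
  "partial_images": 2
}
```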

 #### [DALL-E 3](#tab/dalle-3)

@@ -251,23 +259,23 @@ The default value is `vivid`.

 #### Quality

-There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images can be generated faster.
+There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images are faster to generate.

 The default value is `standard`.

 #### Number

-With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. If you need to generate multiple images at once, make parallel requests.
+With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. To generate multiple images at once, make parallel requests.

 #### Response format

-The format in which DALL-E 3 generated images are returned. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1 which always returns base64-encoded images.
+The format in which DALL-E 3 returns generated images. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1, which always returns base64-encoded images.

 ---

-## Call the Image Edit API
+## Call the image edit API

-The Image Edit API allows you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
+The Image Edit API enables you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.

 #### [GPT-image-1](#tab/gpt-image-1)

@@ -308,8 +316,7 @@ The following is a sample request body. You specify a number of options, defined
   -F "n=1" \
   -F "quality=high"
 ```

-### Output
+### API response output

 The response from a successful image editing API call looks like the following example. The `b64_json` field contains the output image data.

@@ -324,28 +331,28 @@ The response from a successful image editing API call looks like the following e
 }
 ```

-### Specify API options
+### Specify image edit API options

 The following API body parameters are available for image editing models, in addition to the ones available for image generation models.

-### Image
+#### Image

 The *image* value indicates the image file you want to edit.

 #### Input fidelity

-The *input_fidelity* parameter controls how much effort the model will exert to match the style and features, especially facial features, of input images
+The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.

-This allows you to make subtle edits to an image without altering unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.

 #### Mask

-The *mask* parameter is the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
+The *mask* parameter uses the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
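As a sketch only, an edit request that supplies a mask might look like the following; it assumes the edit call uses the same multipart pattern as the sample shown earlier, and the endpoint path and file names are illustrative placeholders.

```bash
# Placeholders: photo.png is the image to edit; mask.png marks the editable area with transparent pixels.
curl -X POST "https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deployment_name>/images/edits?api-version=<api_version>" \
  -H "api-key: <your_API_key>" \
  -F "image=@photo.png" \
  -F "mask=@mask.png" \
  -F "prompt=Replace the sky with a warm sunset"
```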

 #### Streaming

-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
articles/ai-foundry/openai/how-to/model-router.md: 11 additions & 15 deletions
@@ -4,7 +4,7 @@ description: Learn how to use the model router in Azure OpenAI to select the bes
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 08/12/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -14,9 +14,9 @@ ms.custom:

 # Use model router for Azure AI Foundry (preview)

-Model router for Azure AI Foundry is a deployable AI chat model that is trained to select the best large language model (LLM) to respond to a given prompt in real time. It uses a combination of preexisting models to provide high performance while saving on compute costs where possible, all packaged as a single model deployment. For more information on how model router works and its advantages and limitations, see the [Model router concepts guide](../concepts/model-router.md).
+Model router for Azure AI Foundry is a deployable AI chat model that selects the best large language model (LLM) to respond to a prompt in real time. It uses different preexisting models to deliver high performance and save compute costs, all in one model deployment. To learn more about how model router works, its advantages, and limitations, see the [Model router concepts guide](../concepts/model-router.md).

-You can access model router through the Completions API just as you would use a single base model like GPT-4. The steps are the same as in the [Chat completions guide](/azure/ai-foundry/openai/how-to/chatgpt).
+Use model router through the Completions API like you use a single base model such as GPT-4. Follow the same steps as in the [Chat completions guide](/azure/ai-foundry/openai/how-to/chatgpt).

 ## Deploy a model router model

@@ -26,17 +26,14 @@ You can access model router through the Completions API just as you would use a
 Model router is packaged as a single Azure AI Foundry model that you deploy. Follow the steps in the [resource deployment guide](/azure/ai-foundry/openai/how-to/create-resource). In the **Create new deployment** step, find `model-router` in the **Models** list. Select it, and then complete the rest of the deployment steps.

 > [!NOTE]
-> Consider that your deployment settings apply to all underlying chat models that model router uses.
-> - You don't need to deploy the underlying chat models separately. Model router works independently of your other deployed models.
-> - You select a content filter when you deploy the model router model (or you can apply a filter later). The content filter is applied to all content passed to and from the model router: you don't set content filters for each of the underlying chat models.
-> - Your tokens-per-minute rate limit setting is applied to all activity to and from the model router: you don't set rate limits for each of the underlying chat models.

-## Use model router in chats
+> Your deployment settings apply to all underlying chat models that model router uses.
+> - Don't deploy the underlying chat models separately. Model router works independently of your other deployed models.
+> - Select a content filter when you deploy the model router model or apply a filter later. The content filter applies to all content passed to and from the model router; don't set content filters for each underlying chat model.
+> - Your tokens-per-minute rate limit setting applies to all activity to and from the model router; don't set rate limits for each underlying chat model.
+
+## Use model router in chats

 You can use model router through the [chat completions API](/azure/ai-foundry/openai/chatgpt-quickstart) in the same way you'd use other OpenAI chat models. Set the `model` parameter to the name of your model router deployment, and set the `messages` parameter to the messages you want to send to the model.

-In the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs), you can navigate to your model router deployment on the **Models + endpoints** page and select it to enter the model playground. In the playground experience, you can enter messages and see the model's responses. Each response message shows which underlying model was selected to respond.
+In the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs), go to your model router deployment on the **Models + endpoints** page and select it to open the model playground. In the playground, enter messages and see the model's responses. Each response shows which underlying model the router selected.

 > [!IMPORTANT]
 > You can set the `Temperature` and `Top_P` parameters to the values you prefer (see the [concepts guide](/azure/ai-foundry/openai/concepts/prompt-engineering?tabs=chat#temperature-and-top_p-parameters)), but note that reasoning models (o-series) don't support these parameters. If model router selects a reasoning model for your prompt, it ignores the `Temperature` and `Top_P` input parameters.
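As a rough sketch of the chat completions call described above, the following request is illustrative only; the resource name, deployment name, and API version are placeholders, and the deployment name is whatever you chose when you deployed `model-router`.

```bash
# Placeholders: replace <your_resource_name>, <your_router_deployment_name>, <api_version>, and <your_API_key>.
curl -X POST "https://<your_resource_name>.openai.azure.com/openai/deployments/<your_router_deployment_name>/chat/completions?api-version=<api_version>" \
  -H "Content-Type: application/json" \
  -H "api-key: <your_API_key>" \
  -d '{
    "model": "<your_router_deployment_name>",
    "messages": [
      {"role": "user", "content": "Summarize the benefits of using a model router."}
    ]
  }'
```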
@@ -145,12 +142,11 @@ The JSON response you receive from a model router model is identical to the stan

 ### Monitor performance

-You can monitor the performance of your model router deployment in Azure monitor (AzMon) in the Azure portal.
+Monitor the performance of your model router deployment in Azure Monitor (AzMon) in the Azure portal.

-1. Go to the **Monitoring**-> **Metrics** page for your Azure OpenAI resource in the Azure portal.
+1. Go to the **Monitoring** > **Metrics** page for your Azure OpenAI resource in the Azure portal.
 1. Filter by the deployment name of your model router model.
-1. Optionally, split up the metrics by underlying models.
+1. Split the metrics by underlying models if needed.