Commit feda7d4

Merge pull request #6905 from PatrickFarley/freshness-pass
Freshness pass
2 parents: 1267500 + 91b8e86

14 files changed: 96 additions & 93 deletions

articles/ai-foundry/openai/how-to/dall-e.md

Lines changed: 34 additions & 27 deletions
````diff
@@ -1,11 +1,11 @@
 ---
-title: How to use image generation models
+title: How to Use Image Generation Models from OpenAI
 titleSuffix: Azure OpenAI in Azure AI Foundry Models
-description: Learn how to generate and edit images with image models, and learn about the configuration options that are available.
+description: Learn how to generate and edit images using Azure OpenAI image generation models. Discover configuration options and start creating images today.
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 04/23/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -15,22 +15,26 @@ ms.custom:
 
 # How to use Azure OpenAI image generation models
 
-OpenAI's image generation models render images based on user-provided text prompts and optionally provided images. This guide demonstrates how to use the image generation models and configure their options through REST API calls.
+OpenAI's image generation models create images from user-provided text prompts and optional images. This article explains how to use these models, configure options, and benefit from advanced image generation capabilities in Azure.
 
 
 ## Prerequisites
 
+
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
 - Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
 - GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
 
-## Call the Image Generation API
 
-The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
+## Call the image generation API
+
+
+The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, start with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
 
 
 #### [GPT-image-1](#tab/gpt-image-1)
+
 Send a POST request to:
 
 ```
@@ -41,14 +45,18 @@ https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deploymen
 **URL**:
 
 Replace the following values:
+
 - `<your_resource_name>` is the name of your Azure OpenAI resource.
 - `<your_deployment_name>` is the name of your DALL-E 3 or GPT-image-1 model deployment.
 - `<api_version>` is the version of the API you want to use. For example, `2025-04-01-preview`.
 
+
 **Required headers**:
+
 - `Content-Type`: `application/json`
 - `api-key`: `<your_API_key>`
 
+
 **Body**:
 
 The following is a sample request body. You specify a number of options, defined in later sections.
````
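
Taken together, the URL, headers, and body fields in this hunk add up to a request like the following sketch. The resource name `my-resource`, the deployment name `gpt-image-1`, and the `AZURE_OPENAI_API_KEY` environment variable are illustrative placeholders, not values from the article:

```bash
# Hypothetical image generation request; substitute your own resource,
# deployment, API version, and key before running.
curl -X POST \
  "https://my-resource.openai.azure.com/openai/deployments/gpt-image-1/images/generations?api-version=2025-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "size": "1024x1024",
    "quality": "medium",
    "n": 1
  }'
```
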
````diff
@@ -122,7 +130,7 @@ The response from a successful image generation API call looks like the followin
 }
 ```
 > [!NOTE]
-> `response_format` parameter is not supported for GPT-image-1 which always returns base64-encoded images.
+> The `response_format` parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 #### [DALL-E 3](#tab/dalle-3)
 
````
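
Since GPT-image-1 always returns base64-encoded images, one convenient way to save a result is to decode the `b64_json` field from the response. This is a minimal sketch, assuming `jq` and `base64` are installed and that the response carries the image in a `data[0].b64_json` field:

```bash
# Hypothetical one-liner: extract the first image and decode it to a file.
curl -s "https://my-resource.openai.azure.com/openai/deployments/gpt-image-1/images/generations?api-version=2025-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"prompt": "A corgi wearing sunglasses", "n": 1}' \
  | jq -r '.data[0].b64_json' | base64 --decode > generated.png
```
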

````diff
@@ -144,7 +152,7 @@ The response from a successful image generation API call looks like the followin
 
 ### API call rejection
 
-Prompts and images are filtered based on our content policy, returning an error when a prompt or image is flagged.
+Prompts and images are filtered based on our content policy. The API returns an error when a prompt or image is flagged.
 
 If your prompt is flagged, the `error.code` value in the message is set to `contentFilter`. Here's an example:
 
@@ -172,9 +180,9 @@ It's also possible that the generated image itself is filtered. In this case, th
 }
 ```
 
-### Write text-to-image prompts
+### Write effective text-to-image prompts
 
-Your prompts should describe the content you want to see in the image, and the visual style of image.
+Your prompts should describe the content you want to see in the image and the visual style of the image.
 
 When you write prompts, consider that the Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).
 
@@ -197,7 +205,7 @@ Specify the size of the generated images. Must be one of `1024x1024`, `1024x1536
 
 #### Quality
 
-There are three options for image quality: `low`, `medium`, and `high`.Lower quality images can be generated faster.
+There are three options for image quality: `low`, `medium`, and `high`. Lower quality images can be generated faster.
 
 The default value is `high`.
 
@@ -207,22 +215,22 @@ You can generate between one and 10 images in a single API call. The default val
 
 #### User ID
 
-Use the *user* parameter to specify a unique identifier for the user making the request. This is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
+Use the *user* parameter to specify a unique identifier for the user making the request. This identifier is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
 
 #### Output format
 
 Use the *output_format* parameter to specify the format of the generated image. Supported formats are `PNG` and `JPEG`. The default is `PNG`.
 
 > [!NOTE]
-> WEBP images are not supported in the Azure OpenAI in Azure AI Foundry Models.
+> WEBP images aren't supported in the Azure OpenAI in Azure AI Foundry Models.
 
 #### Compression
 
 Use the *output_compression* parameter to specify the compression level for the generated image. Input an integer between `0` and `100`, where `0` is no compression and `100` is maximum compression. The default is `100`.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
 
 
 #### [DALL-E 3](#tab/dalle-3)
````
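
As a sketch of how the GPT-image-1 options in this hunk combine, the following hypothetical request asks for a compressed JPEG with streamed partial images; the resource and deployment names are placeholders:

```bash
# Hypothetical request exercising output_format, output_compression,
# stream, and partial_images together.
curl -X POST \
  "https://my-resource.openai.azure.com/openai/deployments/gpt-image-1/images/generations?api-version=2025-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "prompt": "A city skyline at night, pixel art",
    "quality": "low",
    "output_format": "JPEG",
    "output_compression": 80,
    "stream": true,
    "partial_images": 2
  }'
```

With `stream` set to `true`, partial images arrive before the final one, so a client can render progress instead of waiting for the complete result.
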
````diff
@@ -251,23 +259,23 @@ The default value is `vivid`.
 
 #### Quality
 
-There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images can be generated faster.
+There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images are faster to generate.
 
 The default value is `standard`.
 
 #### Number
 
-With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. If you need to generate multiple images at once, make parallel requests.
+With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. To generate multiple images at once, make parallel requests.
 
 #### Response format
 
-The format in which DALL-E 3 generated images are returned. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1 which always returns base64-encoded images.
+The format in which DALL-E 3 returns generated images. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 ---
 
-## Call the Image Edit API
+## Call the image edit API
 
-The Image Edit API allows you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
+The Image Edit API enables you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
 
 
 #### [GPT-image-1](#tab/gpt-image-1)
````
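
For the DALL-E 3 tab, the options above might combine into a request like this sketch. The `style` field reflects the `vivid` default mentioned in the hunk header and is an assumption here, as are the resource and deployment names:

```bash
# Hypothetical DALL-E 3 request; n must be 1, and response_format
# may be "url" or "b64_json".
curl -X POST \
  "https://my-resource.openai.azure.com/openai/deployments/dall-e-3/images/generations?api-version=2025-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "prompt": "An impressionist painting of a rainy Paris street",
    "size": "1024x1024",
    "quality": "hd",
    "style": "vivid",
    "n": 1,
    "response_format": "b64_json"
  }'
```
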
````diff
@@ -308,8 +316,7 @@ The following is a sample request body. You specify a number of options, defined
   -F "n=1" \
   -F "quality=high"
 ```
-
-### Output
+### API response output
 
 The response from a successful image editing API call looks like the following example. The `b64_json` field contains the output image data.
 
@@ -324,28 +331,28 @@ The response from a successful image editing API call looks like the following e
 }
 ```
 
-### Specify API options
+### Specify image edit API options
 
 The following API body parameters are available for image editing models, in addition to the ones available for image generation models.
 
-### Image
+#### Image
 
 The *image* value indicates the image file you want to edit.
 
 #### Input fidelity
 
-The *input_fidelity* parameter controls how much effort the model will exert to match the style and features, especially facial features, of input images
+The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.
 
-This allows you to make subtle edits to an image without altering unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
 
 
 #### Mask
 
-The *mask* parameter is the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
+The *mask* parameter uses the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
 
 #### [DALL-E 3](#tab/dalle-3)
 
````
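
Assembling the edit options above into one multipart request gives a sketch like the following. The `images/edits` route, the file names, and the `input_fidelity=high` value are assumptions based on the parameter descriptions, not values quoted from the article:

```bash
# Hypothetical image edit request. The mask must be a PNG with the same
# dimensions as the input image; transparent pixels mark the editable region.
curl -X POST \
  "https://my-resource.openai.azure.com/openai/deployments/gpt-image-1/images/edits?api-version=2025-04-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -F "image=@photo.png" \
  -F "mask=@mask.png" \
  -F "prompt=Replace the sky with a dramatic sunset" \
  -F "input_fidelity=high" \
  -F "n=1" \
  -F "quality=high"
```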

articles/ai-foundry/openai/how-to/model-router.md

Lines changed: 11 additions & 15 deletions
````diff
@@ -4,7 +4,7 @@ description: Learn how to use the model router in Azure OpenAI to select the bes
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 08/12/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -14,9 +14,9 @@ ms.custom:
 
 # Use model router for Azure AI Foundry (preview)
 
-Model router for Azure AI Foundry is a deployable AI chat model that is trained to select the best large language model (LLM) to respond to a given prompt in real time. It uses a combination of preexisting models to provide high performance while saving on compute costs where possible, all packaged as a single model deployment. For more information on how model router works and its advantages and limitations, see the [Model router concepts guide](../concepts/model-router.md).
+Model router for Azure AI Foundry is a deployable AI chat model that selects the best large language model (LLM) to respond to a prompt in real time. It uses different preexisting models to deliver high performance and save compute costs, all in one model deployment. To learn more about how model router works, its advantages, and limitations, see the [Model router concepts guide](../concepts/model-router.md).
 
-You can access model router through the Completions API just as you would use a single base model like GPT-4. The steps are the same as in the [Chat completions guide](/azure/ai-foundry/openai/how-to/chatgpt).
+Use model router through the Completions API like you use a single base model such as GPT-4. Follow the same steps as in the [Chat completions guide](/azure/ai-foundry/openai/how-to/chatgpt).
 
 ## Deploy a model router model
 
````

````diff
@@ -26,17 +26,16 @@ You can access model router through the Completions API just as you would use a
 Model router is packaged as a single Azure AI Foundry model that you deploy. Follow the steps in the [resource deployment guide](/azure/ai-foundry/openai/how-to/create-resource). In the **Create new deployment** step, find `model-router` in the **Models** list. Select it, and then complete the rest of the deployment steps.
 
 > [!NOTE]
-> Consider that your deployment settings apply to all underlying chat models that model router uses.
-> - You don't need to deploy the underlying chat models separately. Model router works independently of your other deployed models.
-> - You select a content filter when you deploy the model router model (or you can apply a filter later). The content filter is applied to all content passed to and from the model router: you don't set content filters for each of the underlying chat models.
-> - Your tokens-per-minute rate limit setting is applied to all activity to and from the model router: you don't set rate limits for each of the underlying chat models.
+> Your deployment settings apply to all underlying chat models that model router uses.
+> - Don't deploy the underlying chat models separately. Model router works independently of your other deployed models.
+> - Select a content filter when you deploy the model router model or apply a filter later. The content filter applies to all content passed to and from the model router; don't set content filters for each underlying chat model.
+> - Your tokens-per-minute rate limit setting applies to all activity to and from the model router; don't set rate limits for each underlying chat model.
 
 ## Use model router in chats
 
 You can use model router through the [chat completions API](/azure/ai-foundry/openai/chatgpt-quickstart) in the same way you'd use other OpenAI chat models. Set the `model` parameter to the name of your model router deployment, and set the `messages` parameter to the messages you want to send to the model.
 
-In the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs), you can navigate to your model router deployment on the **Models + endpoints** page and select it to enter the model playground. In the playground experience, you can enter messages and see the model's responses. Each response message shows which underlying model was selected to respond.
-
+In the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs), go to your model router deployment on the **Models + endpoints** page and select it to open the model playground. In the playground, enter messages and see the model's responses. Each response shows which underlying model the router selected.
 
 > [!IMPORTANT]
 > You can set the `Temperature` and `Top_P` parameters to the values you prefer (see the [concepts guide](/azure/ai-foundry/openai/concepts/prompt-engineering?tabs=chat#temperature-and-top_p-parameters)), but note that reasoning models (o-series) don't support these parameters. If model router selects a reasoning model for your prompt, it ignores the `Temperature` and `Top_P` input parameters.
````
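
One way to exercise the deployment described above is a standard chat completions call against the router. This sketch assumes a deployment named `model-router`, the deployment-scoped route, and placeholder resource, key, and API version values:

```bash
# Hypothetical chat completions request to a model router deployment.
# The router selects the underlying model for each prompt; the response
# indicates which underlying model answered.
curl -X POST \
  "https://my-resource.openai.azure.com/openai/deployments/model-router/chat/completions?api-version=2024-10-21" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
    ]
  }'
```
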
````diff
@@ -145,12 +142,11 @@ The JSON response you receive from a model router model is identical to the stan
 
 ### Monitor performance
 
-You can monitor the performance of your model router deployment in Azure monitor (AzMon) in the Azure portal.
+Monitor the performance of your model router deployment in Azure Monitor (AzMon) in the Azure portal.
 
-1. Go to the **Monitoring** -> **Metrics** page for your Azure OpenAI resource in the Azure portal.
+1. Go to the **Monitoring** > **Metrics** page for your Azure OpenAI resource in the Azure portal.
 1. Filter by the deployment name of your model router model.
-1. Optionally, split up the metrics by underlying models.
-
+1. Split the metrics by underlying models if needed.
 
 ### Monitor costs
 
````

articles/ai-services/computer-vision/concept-brand-detection.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -7,7 +7,7 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 01/22/2025
+ms.date: 09/02/2025
 ms.author: pafarley
 ---
 
````

articles/ai-services/computer-vision/how-to/identity-detect-faces.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -10,7 +10,7 @@ ms.service: azure-ai-vision
 ms.subservice: azure-ai-face
 ms.update-cycle: 90-days
 ms.topic: how-to
-ms.date: 08/21/2025
+ms.date: 09/02/2025
 ms.author: pafarley
 ms.devlang: csharp
 ms.custom: devx-track-csharp
````
