**articles/ai-foundry/concepts/architecture.md** (7 additions, 3 deletions)
````diff
@@ -7,7 +7,7 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: concept-article
-ms.date: 07/22/2025
+ms.date: 09/03/2025
 ms.reviewer: deeikele
 ms.author: sgilley
 author: sdgilley
````
````diff
@@ -79,12 +79,16 @@ Users can optionally connect their own Azure Storage accounts. Foundry tools can
 * **Customer-Managed Key Encryption**:
 By default, Azure services use Microsoft-managed encryption keys to encrypt data in transit and at rest. Data is encrypted and decrypted using FIPS 140-2 compliant 256-bit AES encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
 
-When using customer-managed keys, your data on Microsoft-managed infrastructure is encrypted using your keys.
+* **Bring your own Key Vault**:
+By default, AI Foundry stores all API key-based connection secrets in a managed Azure Key Vault. Users who prefer to manage secrets themselves can connect their own key vault to the Foundry resource. One Azure Key Vault connection manages all project-level and resource-level connection secrets. Learn [how to set up an Azure Key Vault connection to AI Foundry](../how-to/set-up-key-vault-connection.md).
+
+When using customer-managed keys, your data on Microsoft-managed infrastructure is encrypted using your keys.
+
 To learn more about data encryption, see [customer-managed keys for encryption with Azure AI Foundry](encryption-keys-portal.md).
 
 ## Next steps
 
 * [Azure AI Foundry rollout across my organization](planning.md)
 * [Customer-managed keys for encryption with Azure AI Foundry](encryption-keys-portal.md)
 * [How to configure a private link for Azure AI Foundry](../how-to/configure-private-link.md)
-* [Bring-your-own resources with the Agent service](../agents/how-to/use-your-own-resources.md)
+* [Bring-your-own resources with the Agent service](../agents/how-to/use-your-own-resources.md)
````
**articles/ai-foundry/how-to/connections-add.md** (2 additions, 1 deletion)
````diff
@@ -59,6 +59,7 @@ Here's a table of some of the available connection types in Azure AI Foundry por
 | Custom ||| Custom connections allow you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets or cases where you wouldn't need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you have to manage authentication on your own. |
 | Serverless Model | ✅ || Serverless Model connections allow you to use serverless API deployments. |
 | Azure Databricks | ✅ | | Azure Databricks connector allows you to connect your Azure AI Foundry Agents to Azure Databricks to access workflows and Genie Spaces during runtime. It supports three connection types - __Jobs__, __Genie__, and __Other__. You can pick the Job or Genie space you want associated with this connection while setting up the connection in the Foundry UI. You can also use the Other connection type and allow your agent to access workspace operations in Azure Databricks. Authentication is handled through Microsoft Entra ID for users or service principals. For examples of using this connector, see [Jobs](https://github.com/Azure-Samples/AI-Foundry-Connections/blob/main/src/samples/python/sample_agent_adb_job.py) and [Genie](https://github.com/Azure-Samples/AI-Foundry-Connections/blob/main/src/samples/python/sample_agent_adb_genie.py). Note: Usage of this connection is available only via the Foundry SDK in code and is integrated into agents as a FunctionTool (see the samples above for details). Usage of this connection in AI Foundry Playground is currently not supported.|
+| Azure Key Vault| ✅ || Azure service for securely storing and accessing secrets. AI Foundry stores connection details in a managed Azure Key Vault if no Key Vault connection is created. Users who prefer to manage their secrets themselves can bring their own Azure Key Vault via a connection. (See [limitations](#limits)) |
 
 ## Agent knowledge tool connections
 
````
````diff
@@ -167,4 +168,4 @@ For more on how to set private endpoints to your connected resources, see the fo
 ## Related content
 
 - [How to create vector indexes](../how-to/index-add.md)
-- [How to configure a managed network](configure-managed-network.md)
+- [How to configure a managed network](configure-managed-network.md)
````
**articles/ai-foundry/openai/how-to/dall-e.md** (34 additions, 27 deletions)
````diff
@@ -1,11 +1,11 @@
 ---
-title: How to use image generation models
+title: How to Use Image Generation Models from OpenAI
 titleSuffix: Azure OpenAI in Azure AI Foundry Models
-description: Learn how to generate and edit images with image models, and learn about the configuration options that are available.
+description: Learn how to generate and edit images using Azure OpenAI image generation models. Discover configuration options and start creating images today.
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 04/23/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
````
````diff
@@ -15,22 +15,26 @@ ms.custom:
 
 # How to use Azure OpenAI image generation models
 
-OpenAI's image generation models render images based on user-provided text prompts and optionally provided images. This guide demonstrates how to use the image generation models and configure their options through REST API calls.
+OpenAI's image generation models create images from user-provided text prompts and optional images. This article explains how to use these models, configure options, and benefit from advanced image generation capabilities in Azure.
 
 
 ## Prerequisites
 
+
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
 - Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
 - GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
 
-## Call the Image Generation API
 
-The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
+## Call the image generation API
+
+
+The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, start with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
````
````diff
 - `<your_resource_name>` is the name of your Azure OpenAI resource.
 - `<your_deployment_name>` is the name of your DALL-E 3 or GPT-image-1 model deployment.
 - `<api_version>` is the version of the API you want to use. For example, `2025-04-01-preview`.
 
+
 **Required headers**:
+
 - `Content-Type`: `application/json`
 - `api-key`: `<your_API_key>`
 
+
 **Body**:
 
 The following is a sample request body. You specify a number of options, defined in later sections.
````
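The JSON request body itself is unchanged by this PR, so the diff collapses it. For orientation, here is a minimal Python sketch of the call that this section documents. The URL shape, the headers, and the `prompt`/`n`/`size` fields follow the fragments above; the `AZURE_OPENAI_KEY` environment variable and the prompt text are hypothetical placeholders, and field support can vary by API version.

```python
# Minimal sketch of the image generation call described above (hedged:
# fields beyond "prompt", "n", and "size" may vary by API version).
import os
import requests

resource = "<your_resource_name>"      # your Azure OpenAI resource
deployment = "<your_deployment_name>"  # your dall-e-3 or gpt-image-1 deployment
api_version = "2025-04-01-preview"     # example version from the article

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/images/generations?api-version={api_version}"
)
headers = {
    "Content-Type": "application/json",         # required header per the article
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # required header per the article
}
body = {
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "n": 1,               # number of images; DALL-E 3 requires 1
    "size": "1024x1024",  # one of the supported sizes
}

response = requests.post(url, headers=headers, json=body, timeout=120)
response.raise_for_status()
print(response.json())
```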
````diff
@@ -122,7 +130,7 @@ The response from a successful image generation API call looks like the followin
 }
 ```
 > [!NOTE]
-> `response_format` parameter is not supported for GPT-image-1 which always returns base64-encoded images.
+> The `response_format` parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 #### [DALL-E 3](#tab/dalle-3)
 
````
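Because GPT-image-1 always returns base64-encoded images (the note this hunk rewrites), a decoding step is needed to get files on disk. A minimal sketch, assuming the documented response shape with a `data` array of objects carrying `b64_json`:

```python
# Decode base64-encoded images from the API response into PNG files.
# Assumes the response JSON has the shape {"data": [{"b64_json": "..."}]}.
import base64

def save_images(response_json: dict, prefix: str = "generated") -> None:
    for i, item in enumerate(response_json["data"]):
        image_bytes = base64.b64decode(item["b64_json"])
        filename = f"{prefix}_{i}.png"
        with open(filename, "wb") as f:
            f.write(image_bytes)
        print(f"Saved {filename}")
```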
````diff
@@ -144,7 +152,7 @@ The response from a successful image generation API call looks like the followin
 
 ### API call rejection
 
-Prompts and images are filtered based on our content policy, returning an error when a prompt or image is flagged.
+Prompts and images are filtered based on our content policy. The API returns an error when a prompt or image is flagged.
 
 If your prompt is flagged, the `error.code` value in the message is set to `contentFilter`. Here's an example:
 
````
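A caller usually wants to distinguish this rejection from other failures. Here's a hedged sketch that branches on the `contentFilter` code described above; the exact error envelope (`{"error": {"code": ...}}`) is taken from the article's example and may differ across API versions.

```python
# Sketch: distinguish content-policy rejections from other API errors.
# Assumes the error body looks like {"error": {"code": "contentFilter", ...}}.
import requests

def generate_or_report(url: str, headers: dict, body: dict) -> dict | None:
    response = requests.post(url, headers=headers, json=body, timeout=120)
    if response.status_code != 200:
        error = response.json().get("error", {})
        if error.get("code") == "contentFilter":
            print("Prompt or output was flagged by the content policy.")
        else:
            print(f"API error: {error.get('code')}: {error.get('message')}")
        return None
    return response.json()
```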
````diff
@@ -172,9 +180,9 @@ It's also possible that the generated image itself is filtered. In this case, th
 }
 ```
 
-### Write text-to-image prompts
+### Write effective text-to-image prompts
 
-Your prompts should describe the content you want to see in the image, and the visual style of image.
+Your prompts should describe the content you want to see in the image and the visual style of the image.
 
 When you write prompts, consider that the Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).
 
````
````diff
@@ -197,7 +205,7 @@ Specify the size of the generated images. Must be one of `1024x1024`, `1024x1536
 
 #### Quality
 
-There are three options for image quality: `low`, `medium`, and `high`.Lower quality images can be generated faster.
+There are three options for image quality: `low`, `medium`, and `high`. Lower quality images can be generated faster.
 
 The default value is `high`.
 
````
````diff
@@ -207,22 +215,22 @@ You can generate between one and 10 images in a single API call. The default val
 
 #### User ID
 
-Use the *user* parameter to specify a unique identifier for the user making the request. This is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
+Use the *user* parameter to specify a unique identifier for the user making the request. This identifier is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
 
 #### Output format
 
 Use the *output_format* parameter to specify the format of the generated image. Supported formats are `PNG` and `JPEG`. The default is `PNG`.
 
 > [!NOTE]
-> WEBP images are not supported in the Azure OpenAI in Azure AI Foundry Models.
+> WEBP images aren't supported in Azure OpenAI in Azure AI Foundry Models.
 
 #### Compression
 
 Use the *output_compression* parameter to specify the compression level for the generated image. Input an integer between `0` and `100`, where `0` is no compression and `100` is maximum compression. The default is `100`.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
 
 
 #### [DALL-E 3](#tab/dalle-3)
````
````diff
@@ -251,23 +259,23 @@ The default value is `vivid`.
 
 #### Quality
 
-There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images can be generated faster.
+There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images are faster to generate.
 
 The default value is `standard`.
 
 #### Number
 
-With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. If you need to generate multiple images at once, make parallel requests.
+With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. To generate multiple images at once, make parallel requests.
 
 #### Response format
 
-The format in which DALL-E 3 generated images are returned. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1 which always returns base64-encoded images.
+The format in which DALL-E 3 returns generated images. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 ---
 
-## Call the Image Edit API
+## Call the image edit API
 
-The Image Edit API allows you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
+The Image Edit API enables you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
 
 
 #### [GPT-image-1](#tab/gpt-image-1)
````
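The parallel-requests advice in the `#### Number` change above maps naturally onto a thread pool. A minimal sketch, assuming the URL and headers are built the same way as in the generation example earlier:

```python
# Sketch: generate several DALL-E 3 images by issuing parallel n=1 requests,
# since DALL-E 3 rejects n > 1 in a single call.
from concurrent.futures import ThreadPoolExecutor
import requests

def generate_one(url: str, headers: dict, prompt: str) -> dict:
    body = {"prompt": prompt, "n": 1, "size": "1024x1024"}  # n must be 1
    response = requests.post(url, headers=headers, json=body, timeout=120)
    response.raise_for_status()
    return response.json()

def generate_many(url: str, headers: dict, prompt: str, count: int) -> list[dict]:
    with ThreadPoolExecutor(max_workers=count) as pool:
        futures = [pool.submit(generate_one, url, headers, prompt)
                   for _ in range(count)]
        return [f.result() for f in futures]
```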
````diff
@@ -308,8 +316,7 @@ The following is a sample request body. You specify a number of options, defined
   -F "n=1" \
   -F "quality=high"
 ```
-
-### Output
+### API response output
 
 The response from a successful image editing API call looks like the following example. The `b64_json` field contains the output image data.
 
````
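The curl command above sends multipart form data (the `-F` flags). An equivalent Python sketch follows; the `/images/edits` path and the field names are inferred from that curl fragment and the surrounding text, so treat them as assumptions to verify against your API version.

```python
# Sketch of the multipart image edit request (hedged: the path and field
# names are inferred from the curl fragment above).
import os
import requests

resource = "<your_resource_name>"
deployment = "<your_deployment_name>"
api_version = "2025-04-01-preview"

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/images/edits?api-version={api_version}"
)
headers = {"api-key": os.environ["AZURE_OPENAI_KEY"]}  # hypothetical env var

with open("input.png", "rb") as image_file:
    response = requests.post(
        url,
        headers=headers,
        files={"image": image_file},  # the image to edit
        data={"prompt": "Add a red scarf", "n": "1", "quality": "high"},
        timeout=120,
    )
response.raise_for_status()
```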
````diff
@@ -324,28 +331,28 @@ The response from a successful image editing API call looks like the following e
 }
 ```
 
-### Specify API options
+### Specify image edit API options
 
 The following API body parameters are available for image editing models, in addition to the ones available for image generation models.
 
-### Image
+#### Image
 
 The *image* value indicates the image file you want to edit.
 
 #### Input fidelity
 
-The *input_fidelity* parameter controls how much effort the model will exert to match the style and features, especially facial features, of input images
+The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.
 
-This allows you to make subtle edits to an image without altering unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
 
 
 #### Mask
 
-The *mask* parameter is the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
+The *mask* parameter uses the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
````
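A common stumbling block with the mask described in this hunk is producing a PNG that matches the input image's dimensions with alpha-zero pixels over the editable area. A minimal sketch using Pillow (the library choice is an assumption; `input.png` and the box coordinates are hypothetical):

```python
# Sketch: build an edit mask with Pillow. Fully transparent pixels (alpha 0)
# mark the region the model may edit; the mask must be a PNG with the same
# dimensions as the input image.
from PIL import Image

def make_mask(input_path: str, box: tuple[int, int, int, int],
              out_path: str = "mask.png") -> None:
    width, height = Image.open(input_path).size
    mask = Image.new("RGBA", (width, height), (0, 0, 0, 255))  # opaque = keep
    left, top, right, bottom = box
    hole = Image.new("RGBA", (right - left, bottom - top), (0, 0, 0, 0))
    mask.paste(hole, (left, top))  # transparent = editable region
    mask.save(out_path)            # PNG, same size as the input image

make_mask("input.png", box=(100, 100, 400, 400))  # hypothetical coordinates
```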