
Commit d87afef
Merge branch 'main' into release-aug-2025-search
2 parents 7c367f5 + 9615d22

32 files changed: +317 −297 lines

.github/policies/disallow-edits.yml (3 additions, 3 deletions)

@@ -62,15 +62,15 @@ configuration:
       - isActivitySender:
           user: laujan
       - isActivitySender:
-          user: patrickfarley
+          user: PatrickFarley
       - isActivitySender:
-          user: heidisteen
+          user: HeidiSteen
       - isActivitySender:
           user: haileytap
       then:
       - addReply:
           reply: >-
-            @${issueAuthor} - Please don't sign off on this PR. The area owners will sign off once they've reviewed your contribution.
+            ${issueAuthor} - Please don't sign off on this PR. The area owners will sign off once they've reviewed your contribution.
       - mentionUsers:
           mentionees:
           - eric-urban

articles/ai-foundry/agents/how-to/tools/function-calling.md (2 additions, 1 deletion)

@@ -23,7 +23,7 @@ Azure AI Agents supports function calling, which allows you to describe the stru
 ### Usage support
 
-|Azure AI foundry support | Python SDK | C# SDK | JavaScript SDK | REST API | Basic agent setup | Standard agent setup |
+|Azure AI foundry support | Python SDK | C# SDK | JavaScript SDK | REST API | Basic agent setup | Standard agent setup |
 |---------|---------|---------|---------|---------|---------|---------|
 | | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |

@@ -381,6 +381,7 @@ Finally, clean up the created resources by deleting the thread and the agent.
 client.Threads.DeleteThread(threadId: thread.Id);
 // Delete the agent definition
 client.Administration.DeleteAgent(agentId: agent.Id);
+```
 
 ::: zone-end

articles/ai-foundry/concepts/architecture.md (7 additions, 3 deletions)

@@ -7,7 +7,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: concept-article
-ms.date: 07/22/2025
+ms.date: 09/03/2025
 ms.reviewer: deeikele
 ms.author: sgilley
 author: sdgilley

@@ -79,12 +79,16 @@ Users can optionally connect their own Azure Storage accounts. Foundry tools can
 * **Customer-Managed Key Encryption**:
   By default, Azure services use Microsoft-managed encryption keys to encrypt data in transit and at rest. Data is encrypted and decrypted using FIPS 140-2 compliant 256-bit AES encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
 
-  When using customer-managed keys, your data on Microsoft-managed infrastructure is encrypted using your keys.
+* **Bring your own Key Vault**:
+  By default, AI Foundry stores all API key-based connection secrets in a managed Azure Key Vault. Users who prefer to manage these secrets themselves can connect their own key vault to the Foundry resource. One Azure Key Vault connection manages all project-level and resource-level connection secrets. Learn [how to set up an Azure Key Vault connection to AI Foundry](../how-to/set-up-key-vault-connection.md).
+
+  When using customer-managed keys, your data on Microsoft-managed infrastructure is encrypted using your keys.
 
 To learn more about data encryption, see [customer-managed keys for encryption with Azure AI Foundry](encryption-keys-portal.md).
 
 ## Next steps
 
 * [Azure AI Foundry rollout across my organization](planning.md)
 * [Customer-managed keys for encryption with Azure AI Foundry](encryption-keys-portal.md)
 * [How to configure a private link for Azure AI Foundry](../how-to/configure-private-link.md)
-* [Bring-your-own resources with the Agent service](../agents/how-to/use-your-own-resources.md)
+* [Bring-your-own resources with the Agent service](../agents/how-to/use-your-own-resources.md)

articles/ai-foundry/how-to/connections-add.md (2 additions, 1 deletion)

@@ -59,6 +59,7 @@ Here's a table of some of the available connection types in Azure AI Foundry por
 | Custom | | | Custom connections allow you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets or cases where you wouldn't need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you have to manage authentication on your own. |
 | Serverless Model || | Serverless Model connections allow you to create a serverless API deployment. |
 | Azure Databricks | ✅ | | The Azure Databricks connector allows you to connect your Azure AI Foundry Agents to Azure Databricks to access workflows and Genie Spaces during runtime. It supports three connection types: __Jobs__, __Genie__, and __Other__. You can pick the Job or Genie space you want associated with this connection while setting up the connection in the Foundry UI. You can also use the Other connection type and allow your agent to access workspace operations in Azure Databricks. Authentication is handled through Microsoft Entra ID for users or service principals. For examples of using this connector, see [Jobs](https://github.com/Azure-Samples/AI-Foundry-Connections/blob/main/src/samples/python/sample_agent_adb_job.py) and [Genie](https://github.com/Azure-Samples/AI-Foundry-Connections/blob/main/src/samples/python/sample_agent_adb_genie.py). Note: Usage of this connection is available only via the Foundry SDK in code and is integrated into agents as a FunctionTool (see the samples above for details). Usage of this connection in the AI Foundry Playground is currently not supported. |
+| Azure Key Vault || | Azure service for securely storing and accessing secrets. AI Foundry stores connection details in a managed Azure Key Vault if no Key Vault connection is created. Users who prefer to manage their secrets themselves can bring their own Azure Key Vault via a connection. (See [limitations](#limits).) |
 
 ## Agent knowledge tool connections

@@ -167,4 +168,4 @@ For more on how to set private endpoints to your connected resources, see the fo
 ## Related content
 
 - [How to create vector indexes](../how-to/index-add.md)
-- [How to configure a managed network](configure-managed-network.md)
+- [How to configure a managed network](configure-managed-network.md)

articles/ai-foundry/openai/concepts/models.md (1 addition, 1 deletion)

@@ -321,7 +321,7 @@ Details about maximum request tokens and training data are available in the foll
 |`gpt-4o-realtime-preview` (2025-06-03) <br> GPT-4o audio | Audio model for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | October 2023 |
 |`gpt-4o-realtime-preview` (2024-12-17) <br> GPT-4o audio | Audio model for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | October 2023 |
 |`gpt-4o-mini-realtime-preview` (2024-12-17) <br> GPT-4o audio | Audio model for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | October 2023 |
-|`gpt-4o-realtime` (2025-08-28) <br> GPT-4o audio | Audio model for real-time audio processing. |Input: 28,672 <br> Output: 4,096 | October 2023 |
+|`gpt-realtime` (2025-08-28) <br> GPT-4o audio | Audio model for real-time audio processing. |Input: 28,672 <br> Output: 4,096 | October 2023 |
 
 To compare the availability of GPT-4o audio models across all regions, refer to the [models table](#global-standard-model-availability).

articles/ai-foundry/openai/how-to/dall-e.md (34 additions, 27 deletions)

@@ -1,11 +1,11 @@
 ---
-title: How to use image generation models
+title: How to Use Image Generation Models from OpenAI
 titleSuffix: Azure OpenAI in Azure AI Foundry Models
-description: Learn how to generate and edit images with image models, and learn about the configuration options that are available.
+description: Learn how to generate and edit images using Azure OpenAI image generation models. Discover configuration options and start creating images today.
 author: PatrickFarley
 ms.author: pafarley
 manager: nitinme
-ms.date: 04/23/2025
+ms.date: 09/02/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -15,22 +15,26 @@ ms.custom:
 
 # How to use Azure OpenAI image generation models
 
-OpenAI's image generation models render images based on user-provided text prompts and optionally provided images. This guide demonstrates how to use the image generation models and configure their options through REST API calls.
+OpenAI's image generation models create images from user-provided text prompts and optional images. This article explains how to use these models, configure options, and benefit from advanced image generation capabilities in Azure.
 
 ## Prerequisites
 
+
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
 - An Azure OpenAI resource created in a supported region. See [Region availability](/azure/ai-foundry/openai/concepts/models#model-summary-table-and-region-availability).
 - Deploy a `dall-e-3` or `gpt-image-1` model with your Azure OpenAI resource. For more information on deployments, see [Create a resource and deploy a model with Azure OpenAI](/azure/ai-foundry/openai/how-to/create-resource).
 - GPT-image-1 is the newer model and features a number of improvements over DALL-E 3. It's available in limited access: apply for access with [this form](https://aka.ms/oai/gptimage1access).
 
-## Call the Image Generation API
-
-The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, we recommend starting with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
+## Call the image generation API
+
+The following command shows the most basic way to use an image model with code. If this is your first time using these models programmatically, start with the [quickstart](/azure/ai-foundry/openai/dall-e-quickstart).
 
 #### [GPT-image-1](#tab/gpt-image-1)
+
 Send a POST request to:
 
 ```
@@ -41,14 +45,18 @@ https://<your_resource_name>.openai.azure.com/openai/deployments/<your_deploymen
 **URL**:
 
 Replace the following values:
+
 - `<your_resource_name>` is the name of your Azure OpenAI resource.
 - `<your_deployment_name>` is the name of your DALL-E 3 or GPT-image-1 model deployment.
 - `<api_version>` is the version of the API you want to use. For example, `2025-04-01-preview`.
 
+
 **Required headers**:
+
 - `Content-Type`: `application/json`
 - `api-key`: `<your_API_key>`
 
+
 **Body**:
 
 The following is a sample request body. You specify a number of options, defined in later sections.
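The URL, headers, and body pieces described in this hunk can be assembled client-side before sending the POST request. A minimal Python sketch (not part of the diff; the resource name, deployment name, and key below are placeholder assumptions):

```python
import json

def build_image_gen_request(resource_name, deployment_name, api_version,
                            api_key, prompt, size="1024x1024", quality="high"):
    """Return (url, headers, body) for a POST to the images/generations
    endpoint, mirroring the URL template and required headers above."""
    url = (
        f"https://{resource_name}.openai.azure.com/openai/deployments/"
        f"{deployment_name}/images/generations?api-version={api_version}"
    )
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"prompt": prompt, "size": size, "quality": quality, "n": 1})
    return url, headers, body

# Placeholder values; substitute your own resource, deployment, and key.
url, headers, body = build_image_gen_request(
    "my-resource", "gpt-image-1", "2025-04-01-preview",
    "<your_API_key>", "A watercolor fox in a forest")
```

You would then send `body` with any HTTP client, for example via `curl` as shown in this article.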
@@ -122,7 +130,7 @@ The response from a successful image generation API call looks like the followin
 }
 ```
 > [!NOTE]
-> `response_format` parameter is not supported for GPT-image-1 which always returns base64-encoded images.
+> The `response_format` parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 #### [DALL-E 3](#tab/dalle-3)
@@ -144,7 +152,7 @@ The response from a successful image generation API call looks like the followin
 
 ### API call rejection
 
-Prompts and images are filtered based on our content policy, returning an error when a prompt or image is flagged.
+Prompts and images are filtered based on our content policy. The API returns an error when a prompt or image is flagged.
 
 If your prompt is flagged, the `error.code` value in the message is set to `contentFilter`. Here's an example:
@@ -172,9 +180,9 @@ It's also possible that the generated image itself is filtered. In this case, th
 }
 ```
 
-### Write text-to-image prompts
+### Write effective text-to-image prompts
 
-Your prompts should describe the content you want to see in the image, and the visual style of image.
+Your prompts should describe the content you want to see in the image and the visual style of the image.
 
 When you write prompts, consider that the Image APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).
@@ -197,7 +205,7 @@ Specify the size of the generated images. Must be one of `1024x1024`, `1024x1536
 
 #### Quality
 
-There are three options for image quality: `low`, `medium`, and `high`.Lower quality images can be generated faster.
+There are three options for image quality: `low`, `medium`, and `high`. Lower quality images can be generated faster.
 
 The default value is `high`.
@@ -207,22 +215,22 @@ You can generate between one and 10 images in a single API call. The default val
 
 #### User ID
 
-Use the *user* parameter to specify a unique identifier for the user making the request. This is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
+Use the *user* parameter to specify a unique identifier for the user making the request. This identifier is useful for tracking and monitoring usage patterns. The value can be any string, such as a user ID or email address.
 
 #### Output format
 
 Use the *output_format* parameter to specify the format of the generated image. Supported formats are `PNG` and `JPEG`. The default is `PNG`.
 
 > [!NOTE]
-> WEBP images are not supported in the Azure OpenAI in Azure AI Foundry Models.
+> WEBP images aren't supported in the Azure OpenAI in Azure AI Foundry Models.
 
 #### Compression
 
 Use the *output_compression* parameter to specify the compression level for the generated image. Input an integer between `0` and `100`, where `0` is no compression and `100` is maximum compression. The default is `100`.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
 
 #### [DALL-E 3](#tab/dalle-3)
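The GPT-image-1 option ranges documented in this hunk (quality, n, output_format, output_compression, stream with partial_images) can be checked client-side before a request is sent. A hedged sketch; the helper name and the idea of pre-validation are our own, not part of the API:

```python
def validate_gpt_image_options(opts):
    """Raise ValueError if an option falls outside the ranges documented
    for GPT-image-1; otherwise return opts unchanged."""
    if opts.get("quality", "high") not in ("low", "medium", "high"):
        raise ValueError("quality must be low, medium, or high")
    if not 1 <= opts.get("n", 1) <= 10:
        raise ValueError("n must be between 1 and 10")
    if opts.get("output_format", "PNG").upper() not in ("PNG", "JPEG"):
        raise ValueError("output_format must be PNG or JPEG (WEBP is unsupported)")
    if not 0 <= opts.get("output_compression", 100) <= 100:
        raise ValueError("output_compression must be between 0 and 100")
    if opts.get("stream") and not 1 <= opts.get("partial_images", 1) <= 3:
        raise ValueError("partial_images must be 1-3 when streaming")
    return opts
```

Rejecting a bad payload locally avoids a round trip that the service would refuse anyway.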
@@ -251,23 +259,23 @@ The default value is `vivid`.
 
 #### Quality
 
-There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images can be generated faster.
+There are two options for image quality: `hd` and `standard`. The hd option creates images with finer details and greater consistency across the image. Standard images are faster to generate.
 
 The default value is `standard`.
 
 #### Number
 
-With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. If you need to generate multiple images at once, make parallel requests.
+With DALL-E 3, you can't generate more than one image in a single API call: the `n` parameter must be set to *1*. To generate multiple images at once, make parallel requests.
 
 #### Response format
 
-The format in which DALL-E 3 generated images are returned. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1 which always returns base64-encoded images.
+The format in which DALL-E 3 returns generated images. Must be one of `url` or `b64_json`. This parameter isn't supported for GPT-image-1, which always returns base64-encoded images.
 
 ---
 
-## Call the Image Edit API
+## Call the image edit API
 
-The Image Edit API allows you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
+The Image Edit API enables you to modify existing images based on text prompts you provide. The API call is similar to the image generation API call, but you also need to provide an input image.
 
 #### [GPT-image-1](#tab/gpt-image-1)
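The parallel-request pattern that this hunk prescribes for DALL-E 3 (`n` must be 1, so several images mean several requests) can be sketched as below. `generate_one` is a placeholder for the actual REST call, not a real SDK function:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt: str) -> str:
    # Placeholder: in practice, POST one images/generations request here
    # with n=1 and return the resulting image URL or b64_json payload.
    return f"image-for:{prompt}"

def generate_many(prompt: str, count: int) -> list[str]:
    """Fan out `count` single-image DALL-E 3 requests concurrently."""
    with ThreadPoolExecutor(max_workers=count) as pool:
        return list(pool.map(generate_one, [prompt] * count))
```

Threads are a reasonable fit here because each request spends its time waiting on the network, not on the CPU.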
@@ -308,8 +316,7 @@ The following is a sample request body. You specify a number of options, defined
 -F "n=1" \
 -F "quality=high"
 ```
-
-### Output
+### API response output
 
 The response from a successful image editing API call looks like the following example. The `b64_json` field contains the output image data.
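The `b64_json` payload described in this hunk can be decoded and written to disk in a few lines. A sketch; the field names follow the sample response in the article, while the helper name and output path are our own:

```python
import base64

def save_first_image(response: dict, path: str = "output.png") -> str:
    """Decode the first b64_json entry in a parsed response and write
    the image bytes to `path`."""
    image_bytes = base64.b64decode(response["data"][0]["b64_json"])
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path
```

The same helper works for generation and edit responses, since both return images under `data[*].b64_json` for GPT-image-1.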
@@ -324,28 +331,28 @@ The response from a successful image editing API call looks like the following e
 }
 ```
 
-### Specify API options
+### Specify image edit API options
 
 The following API body parameters are available for image editing models, in addition to the ones available for image generation models.
 
-### Image
+#### Image
 
 The *image* value indicates the image file you want to edit.
 
 #### Input fidelity
 
-The *input_fidelity* parameter controls how much effort the model will exert to match the style and features, especially facial features, of input images
+The *input_fidelity* parameter controls how much effort the model puts into matching the style and features, especially facial features, of input images.
 
-This allows you to make subtle edits to an image without altering unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
+This parameter lets you make subtle edits to an image without changing unrelated areas. When you use high input fidelity, faces are preserved more accurately than in standard mode.
 
 #### Mask
 
-The *mask* parameter is the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
+The *mask* parameter uses the same type as the main *image* input parameter. It defines the area of the image that you want the model to edit, using fully transparent pixels (alpha of zero) in those areas. The mask must be a PNG file and have the same dimensions as the input image.
 
 #### Streaming
 
-Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they are generated. This provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
+Use the *stream* parameter to enable streaming responses. When set to `true`, the API returns partial images as they're generated. This feature provides faster visual feedback for users and improves perceived latency. Set the *partial_images* parameter to control how many partial images are generated (1-3).
 
 #### [DALL-E 3](#tab/dalle-3)
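The mask requirement in the hunk above (a PNG with the same dimensions as the input image) can be verified client-side by reading the width and height straight from the PNG IHDR chunk. A hedged sketch; this pre-upload check is our own idea, not part of the API:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data: bytes) -> tuple:
    """Return (width, height) parsed from a PNG byte string."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk: 8-byte signature, 4-byte chunk
    # length, 4-byte chunk type, then big-endian width and height.
    return struct.unpack(">II", data[16:24])

def mask_matches_image(image_bytes: bytes, mask_bytes: bytes) -> bool:
    """True when both inputs are PNGs with identical dimensions."""
    return png_dimensions(image_bytes) == png_dimensions(mask_bytes)
```

Running this check before the multipart upload catches dimension mismatches without a round trip to the service.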