@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's new stateful Responses API.
author: mrbullwinkle
ms.author: mbullwin
manager: nitinme
- ms.date: 05/25/2025
+ ms.date: 06/20/2025
ms.service: azure-ai-openai
ms.topic: include
ms.custom:
@@ -33,8 +33,10 @@ The responses API is currently available in the following regions:
- francecentral
- japaneast
- norwayeast
+ - polandcentral
- southindia
- swedencentral
+ - switzerlandnorth
- uaenorth
- uksouth
- westus
@@ -58,9 +60,12 @@ Not every model is available in the regions supported by the responses API. Chec
> Not currently supported:
> - The web search tool
> - Fine-tuned models
- > - Image generation via streaming. Coming soon.
+ > - Image generation using multi-turn editing and streaming. Coming soon.
> - Images can't be uploaded as a file and then referenced as input. Coming soon.
- > - There's a known issue with performance when background mode is used with streaming. The issue is expected to be resolved soon.
+ >
+ > There's a known issue with the following:
+ > - PDF as an input file is not yet supported.
+ > - Performance when background mode is used with streaming. The issue is expected to be resolved soon.

### Reference documentation
@@ -1071,7 +1076,6 @@ The Responses API enables image generation as part of conversations and multi-st

Compared to the standalone Image API, the Responses API offers several advantages:

- * **Multi-turn editing**: Iteratively refine and edit images using natural language prompts.
* **Streaming**: Display partial image outputs during generation to improve perceived latency.
* **Flexible inputs**: Accept image File IDs as inputs, in addition to raw image bytes.
@@ -1081,7 +1085,6 @@ Compared to the standalone Image API, the Responses API offers several advantage
Use the Responses API if you want to:

* Build conversational image experiences with GPT Image.
- * Enable iterative image editing through multi-turn prompts.
* Stream partial image results during generation for a smoother user experience.

### Generate an image
@@ -1121,57 +1124,6 @@ if image_data:
        f.write(base64.b64decode(image_base64))
```

- You can perform multi-turn image generation by using the output of image generation in subsequent calls or just using the `previous_response_id`.
-
- ```python
- from openai import AzureOpenAI
- from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-
- token_provider = get_bearer_token_provider(
-     DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
- )
-
- client = AzureOpenAI(
-     base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
-     azure_ad_token_provider = token_provider,
-     api_version = "preview",
-     default_headers = {"x-ms-oai-image-generation-deployment": "YOUR-GPT-IMAGE1-DEPLOYMENT-NAME"}
- )
-
- image_data = [
-     output.result
-     for output in response.output
-     if output.type == "image_generation_call"
- ]
-
- if image_data:
-     image_base64 = image_data[0]
-
-     with open("cat_and_otter.png", "wb") as f:
-         f.write(base64.b64decode(image_base64))
-
-
- # Follow up
-
- response_followup = client.responses.create(
-     model = "gpt-4.1-mini",
-     previous_response_id = response.id,
-     input = "Now make it look realistic",
-     tools = [{"type": "image_generation"}],
- )
-
- image_data_followup = [
-     output.result
-     for output in response_followup.output
-     if output.type == "image_generation_call"
- ]
-
- if image_data_followup:
-     image_base64 = image_data_followup[0]
-     with open("cat_and_otter_realistic.png", "wb") as f:
-         f.write(base64.b64decode(image_base64))
- ```
-
### Streaming

You can stream partial images using the Responses API. The `partial_images` parameter can be used to receive 1-3 partial images
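
A minimal sketch of consuming those partial images is shown below. It is not part of this change; it reuses the client setup and placeholder names from the examples above and assumes the `image_generation` tool accepts a `partial_images` count, with partial results arriving as `response.image_generation_call.partial_image` events carrying `partial_image_b64` and `partial_image_index` fields. Verify these names against the current Responses API reference before relying on them.

```python
import base64

from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    azure_ad_token_provider = token_provider,
    api_version = "preview",
    default_headers = {"x-ms-oai-image-generation-deployment": "YOUR-GPT-IMAGE1-DEPLOYMENT-NAME"}
)

# Request a streamed response and ask for up to 2 partial images
# (the partial_images tool option is an assumption here).
stream = client.responses.create(
    model = "gpt-4.1-mini",
    input = "Draw a watercolor painting of an otter",
    stream = True,
    tools = [{"type": "image_generation", "partial_images": 2}],
)

# Save each partial image as it arrives; the highest index is the most complete render.
for event in stream:
    if event.type == "response.image_generation_call.partial_image":  # assumed event name
        image_bytes = base64.b64decode(event.partial_image_b64)
        with open(f"otter_partial_{event.partial_image_index}.png", "wb") as f:
            f.write(image_bytes)
```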