
Commit 887275b

Merge pull request #7049 from mrbullwinkle/mrb_09_12_2025_partial_image_support
[Azure OpenAI] Image gen streaming + partial image support update
2 parents 9e8a788 + bc3b212 commit 887275b

2 files changed (+40 −44 lines)


articles/ai-foundry/openai/how-to/dall-e.md

Lines changed: 38 additions & 2 deletions
@@ -71,8 +71,6 @@ The following is a sample request body. You specify a number of options, defined
}
```

-
-
#### [DALL-E 3](#tab/dalle-3)

Send a POST request to:
@@ -147,8 +145,46 @@ The response from a successful image generation API call looks like the following:
]
}
```
+
---

+### Streaming
+
+You can stream image generation requests to `gpt-image-1` by setting the `stream` parameter to `true` and the `partial_images` parameter to a value between 0 and 3.
+
+```python
+import base64
+from openai import OpenAI
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+
+token_provider = get_bearer_token_provider(
+    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)
+
+client = OpenAI(
+    base_url="https://RESOURCE-NAME-HERE/openai/v1/",
+    api_key=token_provider,
+    default_headers={"x-ms-oai-image-generation-deployment": "gpt-image-1", "api_version": "preview"}
+)
+
+stream = client.images.generate(
+    model="gpt-image-1",
+    prompt="A cute baby sea otter",
+    n=1,
+    size="1024x1024",
+    stream=True,
+    partial_images=2
+)
+
+for event in stream:
+    if event.type == "image_generation.partial_image":
+        idx = event.partial_image_index
+        image_base64 = event.b64_json
+        image_bytes = base64.b64decode(image_base64)
+        with open(f"river{idx}.png", "wb") as f:
+            f.write(image_bytes)
+
+```
+

### API call rejection

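The added example above saves only the partial previews. Here is a minimal, self-contained sketch that also captures the finished image, reusing the client configuration from the added block and assuming the stream ends with an `image_generation.completed` event whose `b64_json` field carries the final image (that event name and field follow the OpenAI image-streaming event shape and are an assumption for the Azure preview surface):

```python
import base64

from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# Same client configuration as the added example above.
client = OpenAI(
    base_url="https://RESOURCE-NAME-HERE/openai/v1/",
    api_key=token_provider,
    default_headers={"x-ms-oai-image-generation-deployment": "gpt-image-1", "api_version": "preview"},
)

stream = client.images.generate(
    model="gpt-image-1",
    prompt="A cute baby sea otter",
    size="1024x1024",
    stream=True,
    partial_images=2,
)

for event in stream:
    if event.type == "image_generation.partial_image":
        # Progressive previews; partial_image_index orders them (0 = earliest).
        with open(f"otter_partial_{event.partial_image_index}.png", "wb") as f:
            f.write(base64.b64decode(event.b64_json))
    elif event.type == "image_generation.completed":
        # Assumed terminal event carrying the full-quality image in b64_json.
        with open("otter_final.png", "wb") as f:
            f.write(base64.b64decode(event.b64_json))
```

With `partial_images=2`, this writes two preview files before the final image.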
articles/ai-foundry/openai/how-to/responses.md

Lines changed: 2 additions & 42 deletions
@@ -1265,14 +1265,9 @@ Compared to the standalone Image API, the Responses API offers several advantages:
* **Flexible inputs**: Accept image File IDs as inputs, in addition to raw image bytes.

> [!NOTE]
-> The image generation tool in the Responses API is only supported by the `gpt-image-1` model. You can however call this model from this list of supported models - `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`.
+> The image generation tool in the Responses API is only supported by the `gpt-image-1` model. However, you can call this model from the following supported models: `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o3`, and `gpt-5` series models.<br><br>The Responses API image generation tool does not currently support streaming mode. To use streaming mode and generate partial images, call the [image generation API](./dall-e.md) directly outside of the Responses API.

-Use the Responses API if you want to:
-
-* Build conversational image experiences with GPT Image.
-* Stream partial image results during generation for a smoother user experience.
-
-Generate an image
+Use the Responses API if you want to build conversational image experiences with GPT Image.


```python
@@ -1309,41 +1304,6 @@
f.write(base64.b64decode(image_base64))
```

-### Streaming
-
-You can stream partial images using Responses API. The `partial_images` can be used to receive 1-3 partial images
-
-```python
-from openai import AzureOpenAI
-from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-
-token_provider = get_bearer_token_provider(
-    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
-)
-
-client = AzureOpenAI(
-    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
-    azure_ad_token_provider=token_provider,
-    api_version="preview",
-    default_headers={"x-ms-oai-image-generation-deployment":"YOUR-GPT-IMAGE1-DEPLOYMENT-NAME"}
-)
-
-stream = client.responses.create(
-    model="gpt-4.1",
-    input="Draw a gorgeous image of a river made of white owl feathers, snaking its way through a serene winter landscape",
-    stream=True,
-    tools=[{"type": "image_generation", "partial_images": 2}],
-)
-
-for event in stream:
-    if event.type == "response.image_generation_call.partial_image":
-        idx = event.partial_image_index
-        image_base64 = event.partial_image_b64
-        image_bytes = base64.b64decode(image_base64)
-        with open(f"river{idx}.png", "wb") as f:
-            f.write(image_bytes)
-```
-
## Reasoning models

For examples of how to use reasoning models with the responses API see the [reasoning models guide](./reasoning.md#reasoning-summary).
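Because the Responses API image generation tool no longer documents streaming, the path that remains is a plain, non-streaming tool call. The following is a minimal sketch that mirrors the `AzureOpenAI` client setup from the removed example; the output parsing (an `image_generation_call` output item carrying a base64 `result`) follows the generate-image sample retained in responses.md, and the resource and deployment placeholders are illustrative:

```python
import base64

from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# Client setup mirrors the removed streaming example.
client = AzureOpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    azure_ad_token_provider=token_provider,
    api_version="preview",
    default_headers={"x-ms-oai-image-generation-deployment": "YOUR-GPT-IMAGE1-DEPLOYMENT-NAME"},
)

# Non-streaming tool call: the image comes back once generation finishes.
response = client.responses.create(
    model="gpt-4.1",
    input="Draw a gorgeous image of a river made of white owl feathers, snaking its way through a serene winter landscape",
    tools=[{"type": "image_generation"}],
)

# Each image_generation_call output item carries the base64 image in `result`.
image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    with open("river.png", "wb") as f:
        f.write(base64.b64decode(image_data[0]))
```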
