
Commit c5feb12

Merge pull request #6775 from mrbullwinkle/mrb_08_26_2025_v1_008
[Release Branch] [Azure OpenAI] v1 updates
2 parents ce89298 + ee77449 commit c5feb12

File tree

1 file changed: 14 additions, 107 deletions


articles/ai-foundry/openai/how-to/responses.md

Lines changed: 14 additions & 107 deletions
@@ -1168,23 +1168,18 @@ print(f"Final status: {response.status}\nOutput:\n{response.output_text}")
 You can cancel an in-progress background task using the `cancel` endpoint. Canceling is idempotent—subsequent calls will return the final response object.
 
 ```bash
-curl -X POST https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses/resp_1234567890/cancel?api-version=preview \
+curl -X POST https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses/resp_1234567890/cancel \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN"
 ```
 
 ```python
-from openai import AzureOpenAI
-from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-
-token_provider = get_bearer_token_provider(
-    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
-)
+import os
+from openai import OpenAI
 
-client = AzureOpenAI(
-    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
-    azure_ad_token_provider=token_provider,
-    api_version="preview"
+client = OpenAI(
+    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
+    api_key=os.getenv("AZURE_OPENAI_API_KEY")
 )
 
 response = client.responses.cancel("resp_1234567890")
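Because cancellation is idempotent, a caller can safely retry `cancel` and always end up holding the same final response object. A minimal client-side sketch of that property (the `FakeResponses` stub and `cancel_with_retry` helper below are illustrative stand-ins, not the real SDK):

```python
# Sketch of idempotent cancellation: repeated calls converge on one final
# state. FakeResponses simulates the service; it is NOT the openai SDK.

class FakeResponses:
    def __init__(self):
        self.status = "in_progress"
        self.cancel_calls = 0

    def cancel(self, response_id):
        self.cancel_calls += 1
        # The first call transitions the task; later calls return the
        # same final object unchanged (idempotent).
        if self.status == "in_progress":
            self.status = "cancelled"
        return {"id": response_id, "status": self.status}

def cancel_with_retry(responses, response_id, attempts=3):
    """Call cancel up to `attempts` times; the result is stable."""
    result = None
    for _ in range(attempts):
        result = responses.cancel(response_id)
    return result

responses = FakeResponses()
final = cancel_with_retry(responses, "resp_1234567890")
print(final["status"], responses.cancel_calls)  # cancelled 3
```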
@@ -1197,7 +1192,7 @@ print(response.status)
 To stream a background response, set both `background` and `stream` to true. This is useful if you want to resume streaming later in case of a dropped connection. Use the sequence_number from each event to track your position.
 
 ```bash
-curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?api-version=preview \
+curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
   -d '{
@@ -1210,17 +1205,12 @@ curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?api-version
 ```
 
 ```python
-from openai import AzureOpenAI
-from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-
-token_provider = get_bearer_token_provider(
-    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
-)
+import os
+from openai import OpenAI
 
-client = AzureOpenAI(
-    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
-    azure_ad_token_provider=token_provider,
-    api_version="preview"
+client = OpenAI(
+    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
+    api_key=os.getenv("AZURE_OPENAI_API_KEY")
 )
 
 # Fire off an async response but also start streaming immediately
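The sequence_number bookkeeping the docs describe — remember the last event you processed so a dropped stream can be resumed with `starting_after` — can be sketched without touching the API at all. The dict-shaped events and the `consume` helper below are illustrative assumptions, not SDK types:

```python
# Sketch: track the last sequence_number seen so that a dropped stream
# can be resumed from that cursor. Events are simulated as plain dicts.

def consume(events, cursor=None, fail_after=None):
    """Process events after `cursor`; return (text, last sequence_number).

    `fail_after` simulates a dropped connection after N new events.
    """
    text, seen = [], 0
    for event in events:
        if cursor is not None and event["sequence_number"] <= cursor:
            continue  # already processed before the drop
        text.append(event["delta"])
        cursor = event["sequence_number"]
        seen += 1
        if fail_after is not None and seen >= fail_after:
            break  # connection dropped here
    return "".join(text), cursor

events = [{"sequence_number": i, "delta": w}
          for i, w in enumerate(["Hello", ", ", "world", "!"], start=1)]

first, cursor = consume(events, fail_after=2)  # drop after two events
rest, _ = consume(events, cursor=cursor)       # resume past the cursor
print(first + rest)  # Hello, world!
```

The second call is the client-side analogue of re-requesting the response with `starting_after` set to the saved cursor.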
@@ -1249,7 +1239,7 @@ for event in stream:
 ### Resume streaming from a specific point
 
 ```bash
-curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses/resp_1234567890?stream=true&starting_after=42&api-version=2025-04-01-preview \
+curl "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses/resp_1234567890?stream=true&starting_after=42" \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN"
 ```
@@ -1261,7 +1251,7 @@ When using the Responses API in stateless mode — either by setting `store` to
 To retain reasoning items across turns, add `reasoning.encrypted_content` to the `include` parameter in your request. This ensures that the response includes an encrypted version of the reasoning trace, which can be passed along in future requests.
 
 ```bash
-curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?api-version=preview \
+curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
   -d '{
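In stateless mode, "passed along in future requests" means you copy the reasoning output items (with their `encrypted_content`) back into the next request's `input` yourself. A sketch over plain dicts — the item shapes mirror Responses API output, but `build_next_input` is a hypothetical helper, not part of the SDK:

```python
# Sketch: thread encrypted reasoning items forward across stateless turns.
# Responses are simulated as lists of dicts shaped like output items.

def build_next_input(previous_output, user_message):
    """Carry forward reasoning + message items, then append the new turn."""
    carried = [item for item in previous_output
               if item["type"] in ("reasoning", "message")]
    return carried + [{"role": "user", "content": user_message}]

previous_output = [
    {"type": "reasoning",
     "encrypted_content": "gAAAA...opaque-ciphertext...",  # returned only
     "summary": []},                                       # with include=
    {"type": "message", "role": "assistant",               # ["reasoning.
     "content": "The answer is 42."},                      #  encrypted_content"]
]

next_input = build_next_input(previous_output, "Why 42?")
print(len(next_input), next_input[0]["type"])  # 3 reasoning
```

The encrypted blob stays opaque to the client; the service decrypts it on the next turn, so no reasoning state needs to be stored server-side.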
@@ -1362,89 +1352,6 @@ for event in stream:
     f.write(image_bytes)
 ```
 
-
-### Edit images
-
-```python
-from openai import AzureOpenAI
-from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-import base64
-
-client = AzureOpenAI(
-    base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
-    azure_ad_token_provider=token_provider,
-    api_version="preview",
-    default_headers={"x-ms-oai-image-generation-deployment":"YOUR-GPT-IMAGE1-DEPLOYMENT-NAME"}
-)
-
-def create_file(file_path):
-    with open(file_path, "rb") as file_content:
-        result = client.files.create(
-            file=file_content,
-            purpose="vision",
-        )
-    return result.id
-
-def encode_image(file_path):
-    with open(file_path, "rb") as f:
-        base64_image = base64.b64encode(f.read()).decode("utf-8")
-    return base64_image
-
-prompt = """Generate a photorealistic image of a gift basket on a white background
-labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
-containing all the items in the reference pictures."""
-
-base64_image1 = encode_image("image1.png")
-base64_image2 = encode_image("image2.png")
-file_id1 = create_file("image3.png")
-file_id2 = create_file("image4.png")
-
-response = client.responses.create(
-    model="gpt-4.1",
-    input=[
-        {
-            "role": "user",
-            "content": [
-                {"type": "input_text", "text": prompt},
-                {
-                    "type": "input_image",
-                    "image_url": f"data:image/jpeg;base64,{base64_image1}",
-                },
-                {
-                    "type": "input_image",
-                    "image_url": f"data:image/jpeg;base64,{base64_image2}",
-                },
-                {
-                    "type": "input_image",
-                    "file_id": file_id1,
-                },
-                {
-                    "type": "input_image",
-                    "file_id": file_id2,
-                }
-            ],
-        }
-    ],
-    tools=[{"type": "image_generation"}],
-)
-
-image_generation_calls = [
-    output
-    for output in response.output
-    if output.type == "image_generation_call"
-]
-
-image_data = [output.result for output in image_generation_calls]
-
-if image_data:
-    image_base64 = image_data[0]
-    with open("gift-basket.png", "wb") as f:
-        f.write(base64.b64decode(image_base64))
-else:
-    print(response.output.content)
-```
-
-
 ## Reasoning models
 
 For examples of how to use reasoning models with the responses API see the [reasoning models guide](./reasoning.md#reasoning-summary).
