articles/ai-services/openai/how-to/responses.md (+94 −21)
@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's new stateful Responses API.
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 03/21/2025
+ms.date: 04/23/2025
 author: mrbullwinkle
 ms.author: mbullwin
 ms.custom: references_regions
@@ -56,9 +56,9 @@ Not every model is available in the regions supported by the responses API. Chec
 > - Structured outputs
 > - tool_choice
 > - image_url pointing to an internet address
-> - The web search tool is also not supported, and is not part of the `2025-03-01-preview` API.
+> - The web search tool is also not supported, and isn't part of the `2025-03-01-preview` API.
 >
-> There is also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.
+> There's also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround, set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.
            "text": "It looks like you're testing out how this works! How can I assist you today?",
            "type": "output_text"
          }
        ],
        "role": "assistant",
-       "status": null,
+       "status": "completed",
        "type": "message"
      }
    ],
-   "output_text": "Great! How can I help you today?",
-   "parallel_tool_calls": null,
+   "parallel_tool_calls": true,
    "temperature": 1.0,
-   "tool_choice": null,
+   "tool_choice": "auto",
    "tools": [],
    "top_p": 1.0,
    "max_output_tokens": null,
    "previous_response_id": null,
-   "reasoning": null,
+   "reasoning": {
+     "effort": null,
+     "generate_summary": null,
+     "summary": null
+   },
+   "service_tier": null,
    "status": "completed",
-   "text": null,
-   "truncation": null,
+   "text": {
+     "format": {
+       "type": "text"
+     }
+   },
+   "truncation": "disabled",
    "usage": {
-     "input_tokens": 20,
-     "output_tokens": 11,
+     "input_tokens": 12,
+     "input_tokens_details": {
+       "cached_tokens": 0
+     },
+     "output_tokens": 18,
      "output_tokens_details": {
        "reasoning_tokens": 0
      },
-     "total_tokens": 31
+     "total_tokens": 30
    },
    "user": null,
-   "reasoning_effort": null
+   "store": true
  }
```

---
Unlike the chat completions API, the responses API is asynchronous. More complex requests may not be completed by the time an initial response is returned by the API. This is similar to how the Assistants API handles [thread/run status](/azure/ai-services/openai/how-to/assistant#retrieve-thread-status).

Note in the response output that the response object contains a `status` field, which can be monitored to determine when the response is complete. `status` can contain a value of `completed`, `failed`, `in_progress`, or `incomplete`.
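Since `in_progress` is the only state in that list that isn't terminal, polling code can read more clearly with a small helper. The function below is a hypothetical convenience, not part of the SDK:

```python
# Statuses after which a Responses API response will no longer change.
# "in_progress" is the only non-terminal state listed above.
TERMINAL_STATUSES = {"completed", "failed", "incomplete"}

def is_terminal(status: str) -> bool:
    """Return True once the response has finished processing."""
    return status in TERMINAL_STATUSES
```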
### Retrieve an individual response status

In the previous Python examples we created a variable `response_id` and set it equal to the `response.id` of our `client.responses.create()` call. We can then pass that ID to `client.responses.retrieve()` to pull the current status of our response.
Depending on the complexity of your request, it isn't uncommon for an initial response to have a status of `in_progress` with message output not yet generated. In that case you can create a loop to monitor the status of the response in code. The example below is for demonstration purposes only and is intended to be run in a Jupyter notebook. It assumes you have already run the two previous Python examples, so the Azure OpenAI client and `retrieve_response` are already defined:
```python
import time
from IPython.display import clear_output

start_time = time.time()

status = retrieve_response.status

# Poll until the response reaches a terminal state.
while status not in ["completed", "failed", "incomplete"]:
    time.sleep(2)
    retrieve_response = client.responses.retrieve(retrieve_response.id)
    status = retrieve_response.status
    clear_output(wait=True)
    print(f"Status: {status} ({time.time() - start_time:.0f} seconds elapsed)")
```
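Outside of a notebook, the same pattern can be factored into a reusable helper. This is a sketch rather than SDK code: `fetch` is a placeholder for a call such as `lambda: client.responses.retrieve(response_id)`, and the status values are the ones listed earlier in this article:

```python
import time

def poll_until_done(fetch, interval=2.0, timeout=120.0):
    """Repeatedly call fetch() until the returned object's status is terminal.

    fetch is any zero-argument callable returning an object with a .status
    attribute, for example: lambda: client.responses.retrieve(response_id).
    Raises TimeoutError if the response is still in progress after `timeout`.
    """
    deadline = time.time() + timeout
    while True:
        response = fetch()
        if response.status in ("completed", "failed", "incomplete"):
            return response
        if time.time() >= deadline:
            raise TimeoutError(f"response still '{response.status}' after {timeout}s")
        time.sleep(interval)
```

Injecting the fetch callable keeps the helper easy to test with a stub and independent of any particular client object.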
-This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we will force a screenshot to be taken for each iteration.
+This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll do this in a loop after each step, allowing the model to see whether the command it tried to execute was successful, which then lets it adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we'll force a screenshot to be taken for each iteration.
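The encoding step itself is ordinary standard-library work. As a minimal sketch (the function name and the source of the bytes are assumptions, not code from this article), raw image bytes such as those returned by Playwright's `page.screenshot()` can be converted like this:

```python
import base64

def encode_screenshot(png_bytes: bytes) -> str:
    """Return raw screenshot bytes as a base64 string for the image payload."""
    return base64.b64encode(png_bytes).decode("utf-8")
```

The resulting string can then be embedded in the image input sent back to the model on each iteration of the loop.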