`articles/ai-services/openai/how-to/responses.md` (+21 −94: 21 additions, 94 deletions)
````diff
@@ -5,7 +5,7 @@ description: Learn how to use Azure OpenAI's new stateful Responses API.
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 04/23/2025
+ms.date: 03/21/2025
 author: mrbullwinkle
 ms.author: mbullwin
 ms.custom: references_regions
````
````diff
@@ -56,9 +56,9 @@ Not every model is available in the regions supported by the responses API. Chec
 > - Structured outputs
 > - tool_choice
 > - image_url pointing to an internet address
-> - The web search tool is also not supported, and isn't part of the `2025-03-01-preview` API.
+> - The web search tool is also not supported, and is not part of the `2025-03-01-preview` API.
 >
-> There's also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.
+> There is also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.
````
"text": "It looks like you're testing out how this works! How can I assist you today?",
170
+
"text": "Great! How can I help you today?",
194
171
"type": "output_text"
195
172
}
196
173
],
197
174
"role": "assistant",
198
-
"status": "completed",
175
+
"status": null,
199
176
"type": "message"
200
177
}
201
178
],
202
-
"parallel_tool_calls": true,
179
+
"output_text": "Great! How can I help you today?",
180
+
"parallel_tool_calls": null,
203
181
"temperature": 1.0,
204
-
"tool_choice": "auto",
182
+
"tool_choice": null,
205
183
"tools": [],
206
184
"top_p": 1.0,
207
185
"max_output_tokens": null,
208
186
"previous_response_id": null,
209
-
"reasoning": {
210
-
"effort": null,
211
-
"generate_summary": null,
212
-
"summary": null
213
-
},
214
-
"service_tier": null,
187
+
"reasoning": null,
215
188
"status": "completed",
216
-
"text": {
217
-
"format": {
218
-
"type": "text"
219
-
}
220
-
},
221
-
"truncation": "disabled",
189
+
"text": null,
190
+
"truncation": null,
222
191
"usage": {
223
-
"input_tokens": 12,
224
-
"input_tokens_details": {
225
-
"cached_tokens": 0
226
-
},
227
-
"output_tokens": 18,
192
+
"input_tokens": 20,
193
+
"output_tokens": 11,
228
194
"output_tokens_details": {
229
195
"reasoning_tokens": 0
230
196
},
231
-
"total_tokens": 30
197
+
"total_tokens": 31
232
198
},
233
199
"user": null,
234
-
"store": true
200
+
"reasoning_effort": null
235
201
}
236
202
```
237
203
238
204
---
239
205
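The diff above replaces the richer response payload (populated `parallel_tool_calls`, structured `text.format`, `store`) with one that carries a top-level `output_text` convenience field alongside the nested message content. A minimal sketch of pulling the assistant text out of a payload shaped like the "+" side of the diff — the literal below is a trimmed, illustrative reconstruction, not real API output:

```python
import json

# Trimmed-down payload using field names and values from the "+" side
# of the diff above (illustrative, not a complete response object).
payload = json.loads("""
{
  "output": [
    {
      "content": [
        {"text": "Great! How can I help you today?", "type": "output_text"}
      ],
      "role": "assistant",
      "status": null,
      "type": "message"
    }
  ],
  "output_text": "Great! How can I help you today?",
  "status": "completed",
  "usage": {"input_tokens": 20, "output_tokens": 11, "total_tokens": 31}
}
""")

# The top-level convenience field and the nested content item should agree.
first_message = payload["output"][0]
text = first_message["content"][0]["text"]
print(text)                              # Great! How can I help you today?
print(payload["status"])                 # completed
print(payload["usage"]["total_tokens"])  # 31
```

Note that both the per-message `status` and the top-level `status` appear in the payload; per the surrounding article text, it is the top-level one that signals overall completion.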
240
````diff
-Unlike the chat completions API, the responses API is asynchronous. More complex requests may not be completed by the time that an initial response is returned by the API. This is similar to how the Assistants API handles [thread/run status](/azure/ai-services/openai/how-to/assistant#retrieve-thread-status).
-
-Note in the response output that the response object contains a `status` which can be monitored to determine when the response is finally complete. `status` can contain a value of `completed`, `failed`, `in_progress`, or `incomplete`.
-
-### Retrieve an individual response status
-
-In the previous Python examples we created a variable `response_id` and set it equal to the `response.id` of our `client.responses.create()` call. We can then pass `response_id` to `client.responses.retrieve()` to pull the current status of our response.
-Depending on the complexity of your request it isn't uncommon to have an initial response with a status of `in_progress` with message output not yet generated. In that case you can create a loop to monitor the status of the response with code. The example below is for demonstration purposes only and is intended to be run in a Jupyter notebook. This code assumes you have already run the two previous Python examples and the Azure OpenAI client as well as `retrieve_response` have already been defined:
-
-```python
-import time
-from IPython.display import clear_output
-
-start_time = time.time()
-
-status = retrieve_response.status
-
-while status not in ["completed", "failed", "incomplete"]:
````
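The removed snippet cuts off at the `while` line in this excerpt. A minimal sketch of how such a polling loop typically completes, with the actual `client.responses.retrieve` call injected as a plain callable so the shape can be demonstrated without a live endpoint (the fake client, the `resp_123` id, and the timeout parameter are all illustrative assumptions, not the article's code):

```python
import time

def poll_response_status(retrieve, response_id, interval=2.0, timeout=60.0):
    """Poll retrieve(response_id) until the response reaches a terminal status.

    `retrieve` stands in for client.responses.retrieve; the assumption is
    that the returned object exposes a `.status` attribute whose terminal
    values are "completed", "failed", or "incomplete".
    """
    start_time = time.time()
    response = retrieve(response_id)
    while response.status not in ["completed", "failed", "incomplete"]:
        if time.time() - start_time > timeout:
            raise TimeoutError(f"response {response_id} still {response.status}")
        time.sleep(interval)
        response = retrieve(response_id)
    return response

# Stubbed demo: the fake client reports in_progress twice, then completed.
class _FakeRetrieve:
    def __init__(self):
        self.calls = 0
    def __call__(self, response_id):
        self.calls += 1
        status = "in_progress" if self.calls < 3 else "completed"
        return type("R", (), {"status": status})()

fake = _FakeRetrieve()
final = poll_response_status(fake, "resp_123", interval=0.01)
print(final.status)  # completed
```

Injecting the retrieve callable keeps the loop testable; in a notebook you would pass the real client method and, as the removed text notes, optionally clear and reprint output on each iteration.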
````diff
-This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we'll force a screenshot to be taken for each iteration.
+This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we will force a screenshot to be taken for each iteration.
````
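The paragraph above describes capturing browser state and base64-encoding it for the model. The encoding half is independent of any browser library, so it can be sketched on its own; here the actual capture (for example, Playwright's `page.screenshot()`, which returns PNG bytes) is stubbed with fake bytes:

```python
import base64

def encode_screenshot(png_bytes: bytes) -> str:
    """Turn raw screenshot bytes (e.g. from a browser automation library's
    screenshot call) into the base64 string sent to the model as image input."""
    return base64.b64encode(png_bytes).decode("ascii")

# PNG files start with an 8-byte signature; fake bytes suffice for a demo.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
b64 = encode_screenshot(fake_png)
print(b64[:8])  # iVBORw0K
```

The `iVBORw0K` prefix is simply the base64 encoding of the PNG signature, which is why base64-encoded screenshots always begin with it.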