
Commit 8056fac: update
1 parent 8cb126e


articles/ai-services/openai/how-to/responses.md

Lines changed: 6 additions & 6 deletions
@@ -56,9 +56,9 @@ Not every model is available in the regions supported by the responses API. Chec
 > - Structured outputs
 > - tool_choice
 > - image_url pointing to an internet address
-> - The web search tool is also not supported, and is not part of the `2025-03-01-preview` API.
+> - The web search tool is also not supported, and isn't part of the `2025-03-01-preview` API.
 >
-> There is also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.
+> There's also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.


 ### Reference documentation
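
For the OCR workaround called out in this hunk (setting image detail to `high`), here is a minimal sketch of where that setting lives in a Responses API request. The surrounding client, deployment name, and base64 string are assumptions for illustration, not part of this commit:

```python
# Hypothetical fragment: the `detail` field on an image content part is where the
# "set image detail to high" workaround goes. `base64_image` is assumed to already exist.
image_part = {
    "type": "input_image",
    "image_url": f"data:image/png;base64,{base64_image}",
    "detail": "high",  # temporary workaround for the known OCR/vision issue
}
```
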
@@ -239,7 +239,7 @@ Response Status: completed

 Unlike the chat completions API, the responses API is asynchronous. More complex requests may not be completed by the time that an initial response is returned by the API. This is similar to how the Assistants API handles [thread/run status](/azure/ai-services/openai/how-to/assistant#retrieve-thread-status).

-Note in the response ouput that the response object contains a `status` which can be monitored to determine when the response is finally complete. `status` can contain a value of `completed`, `failed`, `in_progress`, or `incomplete`.
+Note in the response output that the response object contains a `status` which can be monitored to determine when the response is finally complete. `status` can contain a value of `completed`, `failed`, `in_progress`, or `incomplete`.

 ### Retrieve an individual response status

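
To illustrate the `status` values this hunk describes, here is a minimal sketch that assumes an existing Azure OpenAI `client`; the deployment name and prompt are placeholders:

```python
# Minimal sketch, assuming an AzureOpenAI `client` is already defined.
response = client.responses.create(
    model="gpt-4o",  # replace with your model deployment name
    input="Summarize the plot of Hamlet in two sentences.",
)

# Because the API is asynchronous, the object may come back before generation finishes.
print(response.status)  # completed | failed | in_progress | incomplete
if response.status == "completed":
    print(response.output_text)
```
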
@@ -253,7 +253,7 @@ print(retrieve_response.status)

 ### Monitor response status

-Depending on the complexity of your request it is not uncommon to have an initial response with a status of `in_progress` with message output not yet generated. In that case you can create a loop to monitor the status of the response with code. The example below is for demonstration purposes only and is intended to be run in a Jupyter notebook. This code assumes you have already run the two previous Python examples and the client as well as `retrieve_response` have already been defined:
+Depending on the complexity of your request it isn't uncommon to have an initial response with a status of `in_progress` with message output not yet generated. In that case you can create a loop to monitor the status of the response with code. The example below is for demonstration purposes only and is intended to be run in a Jupyter notebook. This code assumes you have already run the two previous Python examples and the Azure OpenAI client as well as `retrieve_response` have already been defined:

 ```python
 import time
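
The hunk context cuts the article's own example off after `import time`. As a rough sketch of the kind of polling loop the paragraph describes, assuming `client` and `retrieve_response` are already defined as the text states:

```python
# Rough polling sketch; not the article's exact example, which is truncated by the hunk.
import time

while retrieve_response.status == "in_progress":
    time.sleep(2)  # brief pause between checks
    retrieve_response = client.responses.retrieve(retrieve_response.id)
    print(f"Current status: {retrieve_response.status}")

print(retrieve_response.model_dump_json(indent=2))
```
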
@@ -678,7 +678,7 @@ print(response.model_dump_json(indent=2))

 ## Image input

-There is a known issue with image url based image input. Currently only base64 encoded images are supported.
+There's a known issue with image url based image input. Currently only base64 encoded images are supported.

 ### Image url

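
Since this hunk notes that only base64-encoded images currently work, one possible workaround is to download the image yourself and pass it as a data URL. The URL, file type, and deployment name below are illustrative assumptions:

```python
# Hypothetical workaround: fetch the image and send it base64-encoded,
# since image_url values pointing to an internet address aren't accepted yet.
import base64
import requests

image_bytes = requests.get("https://example.com/sample.png", timeout=30).content
image_b64 = base64.b64encode(image_bytes).decode("utf-8")

response = client.responses.create(
    model="gpt-4o",  # your model deployment name
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Describe this image."},
                {"type": "input_image", "image_url": f"data:image/png;base64,{image_b64}"},
            ],
        }
    ],
)
print(response.output_text)
```
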
@@ -958,7 +958,7 @@ async def take_screenshot(page):
     return last_successful_screenshot
 ```

-This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we will force a screenshot to be taken for each iteration.
+This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we'll force a screenshot to be taken for each iteration.

 ### Model response processing

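
To make the "screenshot after every step" idea concrete, here is a loop-shape sketch. Everything except `take_screenshot(page)` is a hypothetical placeholder standing in for the article's own helpers, not code from this commit:

```python
# Hypothetical loop shape only: extract_next_action, execute_action, and
# send_screenshot_to_model stand in for the article's real helpers.
async def run_agent_loop(page, response, max_steps=10):
    for _ in range(max_steps):
        action = extract_next_action(response)        # hypothetical helper
        if action is None:                            # model signaled it is done
            break
        await execute_action(page, action)            # hypothetical helper
        screenshot_b64 = await take_screenshot(page)  # defined earlier in the article
        # Feed the screenshot back so the model can verify the effect of its action.
        response = await send_screenshot_to_model(response, screenshot_b64)  # hypothetical helper
    return response
```
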