
Commit 3bd938d

Update gpt-with-vision.md
For Computer vision with images it is dataSources not data_sources
1 parent b8b41b7 commit 3bd938d

File tree

1 file changed (+7 −7 lines)


articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 7 additions & 7 deletions
@@ -271,7 +271,7 @@ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
 
 The format is similar to that of the chat completions API for GPT-4, but the message content can be an array containing strings and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
 
-You must also include the `enhancements` and `data_sources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
+You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
 
 > [!IMPORTANT]
 > Remember to set a `"max_tokens"` value, or the return output will be cut off.
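As a sanity check on the corrected key name, here is a minimal Python sketch of the request body the paragraph above describes. The endpoint, key, and image URL are placeholder assumptions, not values from the article; the point is the structure, in particular the camelCase `dataSources` key sitting alongside the snake_case `max_tokens`.

```python
import json

# Sketch of the Vision-enhancement request body described above.
# The endpoint, key, and image URL below are placeholder assumptions.
body = {
    "enhancements": {
        # Both enhancement features expose a boolean "enabled" flag.
        "ocr": {"enabled": True},
        "grounding": {"enabled": True},
    },
    # Note: camelCase "dataSources", not snake_case "data_sources".
    "dataSources": [
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "https://<your-cv-resource>.cognitiveservices.azure.com/",
                "key": "<your-computer-vision-key>",
            },
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    # Always set max_tokens, or the returned output may be cut off.
    "max_tokens": 100,
}

# Serialized payload for the POST request.
payload = json.dumps(body)
```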
@@ -287,7 +287,7 @@ You must also include the `enhancements` and `data_sources` objects. `enhancemen
             "enabled": true
         }
     },
-    "data_sources": [
+    "dataSources": [
     {
         "type": "AzureComputerVision",
         "parameters": {
@@ -323,11 +323,11 @@ You must also include the `enhancements` and `data_sources` objects. `enhancemen
 
 #### [Python](#tab/python)
 
-You call the same method as in the previous step, but include the new *extra_body* parameter. It contains the `enhancements` and `data_sources` fields.
+You call the same method as in the previous step, but include the new *extra_body* parameter. It contains the `enhancements` and `dataSources` fields.
 
 `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` field, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service.
 
-`data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVision"` and a `parameters` field. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
+`dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVision"` and a `parameters` field. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
 
 > [!IMPORTANT]
 > Remember to set a `"max_tokens"` value, or the return output will be cut off.
@@ -352,7 +352,7 @@ response = client.chat.completions.create(
         ] }
     ],
     extra_body={
-        "data_sources": [
+        "dataSources": [
         {
             "type": "AzureComputerVision",
             "parameters": {
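The corrected shape of the `extra_body` dict in the Python snippet above can be sketched as follows. The Computer Vision endpoint and key are placeholders, and this is an illustration of the structure the diff establishes rather than a verbatim excerpt; the OpenAI Python client passes `extra_body` through to the request body unchanged.

```python
# Sketch of the extra_body argument for the SDK call shown above.
# The endpoint and key values are placeholders, not real credentials.
extra_body = {
    "enhancements": {
        "ocr": {"enabled": True},
        "grounding": {"enabled": True},
    },
    # camelCase "dataSources", as this commit corrects.
    "dataSources": [
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "https://<your-cv-resource>.cognitiveservices.azure.com/",
                "key": "<your-computer-vision-key>",
            },
        }
    ],
}

# The dict is then passed straight through, e.g.:
# response = client.chat.completions.create(..., extra_body=extra_body)
```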
@@ -583,7 +583,7 @@ To use a User assigned identity on your Azure AI Services resource, follow these
             "enabled": true
         }
     },
-    "data_sources": [
+    "dataSources": [
     {
         "type": "AzureComputerVisionVideoIndex",
         "parameters": {
@@ -616,7 +616,7 @@ To use a User assigned identity on your Azure AI Services resource, follow these
     }
 ```
 
-The request includes the `enhancements` and `data_sources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. `data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVisionVideoIndex"` and a `parameters` property which contains your AI Vision and video information.
+The request includes the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVisionVideoIndex"` and a `parameters` property which contains your AI Vision and video information.
 1. Fill in all the `<placeholder>` fields above with your own information: enter the endpoint URLs and keys of your OpenAI and AI Vision resources where appropriate, and retrieve the video index information from the earlier step.
 1. Send the POST request to the API endpoint. It should contain your OpenAI and AI Vision credentials, the name of your video index, and the ID and SAS URL of a single video.
 
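For the video case, the entry in `dataSources` can be sketched in Python as below. Only the `type` value (`"AzureComputerVisionVideoIndex"`) and the existence of a `parameters` object are confirmed by the text above; the individual field names inside `parameters` (`computerVisionBaseUrl`, `computerVisionApiKey`, `indexName`, `videoUrls`) and all values are hypothetical placeholders for illustration.

```python
# Hedged sketch of the video dataSources entry described above.
# Field names inside "parameters" and all values are hypothetical
# placeholders; only "type" and the presence of "parameters" come
# from the surrounding text.
video_data_source = {
    "type": "AzureComputerVisionVideoIndex",
    "parameters": {
        "computerVisionBaseUrl": "https://<your-cv-resource>.cognitiveservices.azure.com/computervision",
        "computerVisionApiKey": "<your-computer-vision-key>",
        "indexName": "<your-video-index-name>",
        # SAS URL of a single video, retrieved in the earlier step.
        "videoUrls": ["https://<storage-account>.blob.core.windows.net/videos/<clip>.mp4?<sas-token>"],
    },
}
```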