Commit 221c01e

Merge pull request #120648 from imharvol/patch-1
Rename dataSources to data_sources in gpt-with-vision.md
2 parents: 2dfe4ad + f095fbc

File tree: 1 file changed, +14 −14 lines
articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 14 additions & 14 deletions
````diff
@@ -271,7 +271,7 @@ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
 
 The format is similar to that of the chat completions API for GPT-4, but the message content can be an array containing strings and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
 
-You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
+You must also include the `enhancements` and `data_sources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
 
 > [!IMPORTANT]
 > Remember to set a `"max_tokens"` value, or the return output will be cut off.
````
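The request body described above can be sketched as a Python dict (my own illustration, not part of the commit; every angle-bracketed value is a placeholder):

```python
# Sketch of a request body using the renamed "data_sources" key.
# All angle-bracketed values are placeholders, not values from the docs.
payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                # either an HTTP(S) URL or a base-64-encoded image
                {"type": "image_url", "image_url": {"url": "<image-URL>"}},
            ],
        }
    ],
    "enhancements": {
        "ocr": {"enabled": True},        # request the OCR service
        "grounding": {"enabled": True},  # request object detection/grounding
    },
    "data_sources": [  # renamed from "dataSources" by this commit
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "<computer-vision-endpoint>",
                "key": "<computer-vision-key>",
            },
        }
    ],
    # always set max_tokens, or the returned output will be cut off
    "max_tokens": 100,
}
```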
````diff
@@ -287,7 +287,7 @@ You must also include the `enhancements` and `dataSources` objects. `enhancement
             "enabled": true
         }
     },
-    "dataSources": [
+    "data_sources": [
         {
             "type": "AzureComputerVision",
             "parameters": {
````
````diff
@@ -323,11 +323,11 @@ You must also include the `enhancements` and `dataSources` objects. `enhancement
 
 #### [Python](#tab/python)
 
-You call the same method as in the previous step, but include the new *extra_body* parameter. It contains the `enhancements` and `dataSources` fields.
+You call the same method as in the previous step, but include the new *extra_body* parameter. It contains the `enhancements` and `data_sources` fields.
 
 `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` field, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service.
 
-`dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVision"` and a `parameters` field. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource. R
+`data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVision"` and a `parameters` field. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource. R
 
 > [!IMPORTANT]
 > Remember to set a `"max_tokens"` value, or the return output will be cut off.
````
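As a rough sketch (placeholder endpoint and key, not values from this commit), the *extra_body* argument described above could be built as a plain dict before being passed to the client:

```python
# Sketch: the extra_body dict with the renamed "data_sources" field.
# In real use it would be passed as:
#   client.chat.completions.create(..., extra_body=extra_body)
extra_body = {
    "data_sources": [  # renamed from "dataSources" by this commit
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "<computer-vision-endpoint>",  # placeholder
                "key": "<computer-vision-key>",            # placeholder
            },
        }
    ],
    "enhancements": {
        "ocr": {"enabled": True},
        "grounding": {"enabled": True},
    },
}
```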
````diff
@@ -352,7 +352,7 @@ response = client.chat.completions.create(
         ] }
     ],
     extra_body={
-        "dataSources": [
+        "data_sources": [
             {
                 "type": "AzureComputerVision",
                 "parameters": {
````
````diff
@@ -583,7 +583,7 @@ To use a User assigned identity on your Azure AI Services resource, follow these
             "enabled": true
         }
     },
-    "dataSources": [
+    "data_sources": [
         {
             "type": "AzureComputerVisionVideoIndex",
             "parameters": {
````
````diff
@@ -616,15 +616,15 @@ To use a User assigned identity on your Azure AI Services resource, follow these
 }
 ```
 
-The request includes the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVisionVideoIndex"` and a `parameters` property which contains your AI Vision and video information.
+The request includes the `enhancements` and `data_sources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. `data_sources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVisionVideoIndex"` and a `parameters` property which contains your AI Vision and video information.
 1. Fill in all the `<placeholder>` fields above with your own information: enter the endpoint URLs and keys of your OpenAI and AI Vision resources where appropriate, and retrieve the video index information from the earlier step.
 1. Send the POST request to the API endpoint. It should contain your OpenAI and AI Vision credentials, the name of your video index, and the ID and SAS URL of a single video.
 
 #### [Python](#tab/python)
 
-In your Python script, call the client's **create** method as in the previous sections, but include the *extra_body* parameter. Here, it contains the `enhancements` and `dataSources` fields. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `video` field, which has a boolean `enabled` property. Use this to request the video retrieval service.
+In your Python script, call the client's **create** method as in the previous sections, but include the *extra_body* parameter. Here, it contains the `enhancements` and `data_sources` fields. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `video` field, which has a boolean `enabled` property. Use this to request the video retrieval service.
 
-`dataSources` represents the external resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVisionVideoIndex"` and a `parameters` field.
+`data_sources` represents the external resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVisionVideoIndex"` and a `parameters` field.
 
 Set the `computerVisionBaseUrl` and `computerVisionApiKey` to the endpoint URL and access key of your Computer Vision resource. Set `indexName` to the name of your video index. Set `videoUrls` to a list of SAS URLs of your videos.
 
````
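Under the same assumptions (placeholder values only; the parameter names are taken from the surrounding text), the video-retrieval *extra_body* might be sketched as:

```python
# Sketch: extra_body for video retrieval with the renamed "data_sources" key.
# All angle-bracketed values are placeholders.
extra_body = {
    "data_sources": [
        {
            "type": "AzureComputerVisionVideoIndex",
            "parameters": {
                "computerVisionBaseUrl": "<computer-vision-endpoint>/computervision",
                "computerVisionApiKey": "<computer-vision-key>",
                "indexName": "<video-index-name>",
                "videoUrls": ["<video-SAS-URL>"],  # a list of SAS URLs
            },
        }
    ],
    "enhancements": {
        "video": {"enabled": True}  # request the video retrieval service
    },
}
```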
````diff
@@ -648,7 +648,7 @@ response = client.chat.completions.create(
         ] }
     ],
     extra_body={
-        "dataSources": [
+        "data_sources": [
            {
                "type": "AzureComputerVisionVideoIndex",
                "parameters": {
````
````diff
@@ -672,12 +672,12 @@ print(response)
 ---
 
 > [!IMPORTANT]
-> The `"dataSources"` object's content varies depending on which Azure resource type and authentication method you're using. See the following reference:
+> The `"data_sources"` object's content varies depending on which Azure resource type and authentication method you're using. See the following reference:
 >
 > #### [Azure OpenAI resource](#tab/resource)
 >
 > ```json
-> "dataSources": [
+> "data_sources": [
 >     {
 >         "type": "AzureComputerVisionVideoIndex",
 >         "parameters": {
````
````diff
@@ -692,7 +692,7 @@ print(response)
 > #### [Azure AIServices resource + SAS authentication](#tab/resource-sas)
 >
 > ```json
-> "dataSources": [
+> "data_sources": [
 >     {
 >         "type": "AzureComputerVisionVideoIndex",
 >         "parameters": {
````
````diff
@@ -705,7 +705,7 @@ print(response)
 > #### [Azure AIServices resource + Managed Identities](#tab/resource-mi)
 >
 > ```json
-> "dataSources": [
+> "data_sources": [
 >     {
 >         "type": "AzureComputerVisionVideoIndex",
 >         "parameters": {
````

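Because this commit only renames the documented key, request bodies written against the older docs may still carry the camelCase name. A small migration helper could be sketched as follows (`rename_data_sources` is my own hypothetical name, not part of the docs or any SDK):

```python
def rename_data_sources(body: dict) -> dict:
    """Hypothetical migration helper (not from the docs or any SDK):
    return a shallow copy of a request body with the old camelCase
    "dataSources" key renamed to "data_sources"."""
    migrated = dict(body)
    if "dataSources" in migrated and "data_sources" not in migrated:
        migrated["data_sources"] = migrated.pop("dataSources")
    return migrated

# Example: a body written against the pre-rename docs.
old_body = {"dataSources": [{"type": "AzureComputerVision"}], "max_tokens": 100}
new_body = rename_data_sources(old_body)
```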