
Commit cac8af7

Merge pull request #267281 from PatrickFarley/openai-gptnext
OpenAI: add managed identity support
2 parents 2ccf4f9 + c6ab7e6 commit cac8af7

File tree

2 files changed (+86 −2 lines)


articles/ai-services/computer-vision/reference-video-search.md

Lines changed: 1 addition & 0 deletions

@@ -332,6 +332,7 @@ Represents the create ingestion request model for the JSON document.
 | videos | [ [IngestionDocumentRequestModel](#ingestiondocumentrequestmodel) ] | Gets or sets the list of video document ingestion requests in the JSON document. | No |
 | moderation | boolean | Gets or sets the moderation flag, indicating if the content should be moderated. | No |
 | generateInsightIntervals | boolean | Gets or sets the interval generation flag, indicating if insight intervals should be generated. | No |
+| documentAuthenticationKind | string | Gets or sets the authentication kind that is to be used for downloading the documents.<br> _Enum:_ `"none"`, `"managedIdentity"` | No |
 | filterDefectedFrames | boolean | Frame filter flag indicating frames will be evaluated and all defected (e.g. blurry, lowlight, overexposure) frames will be filtered out. | No |
 | includeSpeechTranscript | boolean | Gets or sets the transcript generation flag, indicating if transcript should be generated. | No |
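To see the new field in context, here's a minimal sketch of a create-ingestion call that sets `documentAuthenticationKind`. The endpoint path, API version, and surrounding payload shape are assumptions based on this reference's request model, not part of the commit:

```python
import requests

# Placeholder values; swap in your own resource, index, and blob names.
endpoint = "https://<your_vision_resource>.cognitiveservices.azure.com"
key = "<your_computer_vision_key>"
index_name = "my-video-index"
ingestion_name = "my-ingestion"

# Ask the service to download the videos with the resource's managed
# identity instead of relying on SAS-authenticated URLs.
body = {
    "videos": [
        {
            "mode": "add",
            "documentId": "video-001",
            "documentUrl": "https://<storage_account>.blob.core.windows.net/videos/demo.mp4",
        }
    ],
    "moderation": False,
    "generateInsightIntervals": False,
    "documentAuthenticationKind": "managedIdentity",
}

response = requests.put(
    f"{endpoint}/computervision/retrieval/indexes/{index_name}/ingestions/{ingestion_name}",
    params={"api-version": "2023-05-01-preview"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
response.raise_for_status()
```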

articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 85 additions & 2 deletions
@@ -251,7 +251,7 @@ The **Optical character recognition (OCR)** integration allows the model to prod
 The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes.
 
 > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
 
 > [!CAUTION]
 > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
@@ -445,14 +445,52 @@ GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored
 Follow these steps to set up a video retrieval system and integrate it with your AI chat model.
 
 > [!IMPORTANT]
-> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
 
 > [!CAUTION]
 > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
 
 > [!TIP]
 > If you prefer, you can carry out the following steps using a Jupyter notebook instead: [Video chat completions notebook](https://github.com/Azure-Samples/azureai-samples/blob/main/scenarios/GPT-4V/video/video_chatcompletions_example_restapi.ipynb).
 
+### Upload videos to Azure Blob Storage
+
+You need to upload your videos to an Azure Blob Storage container. [Create a new storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount) if you don't have one already.
+
+After your videos are uploaded, you can get their SAS URLs, which you use to access them in later steps.
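As a concrete example, here's one way to upload a video and mint a read-only SAS URL with the `azure-storage-blob` package. The account, container, and file names are placeholders, not values from this commit:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

# Placeholder names; swap in your own account, container, and file.
account_name = "<storage_account_name>"
account_key = "<storage_account_key>"
container_name = "videos"
blob_name = "demo.mp4"

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)

# Upload the local video file to the container.
with open(blob_name, "rb") as data:
    service.get_blob_client(container_name, blob_name).upload_blob(
        data, overwrite=True
    )

# Mint a read-only SAS token that expires in 24 hours.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)
video_sas_url = (
    f"https://{account_name}.blob.core.windows.net/"
    f"{container_name}/{blob_name}?{sas_token}"
)
print(video_sas_url)
```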
+
+#### Ensure proper read access
+
+Depending on your authentication method, you might need to take extra steps to grant access to the Azure Blob Storage container. If you're using an Azure AI Services resource instead of an Azure OpenAI resource, you need to use a managed identity to grant it **read** access to Azure Blob Storage (a scripted equivalent is sketched after the tabbed steps below):
+
+#### [System-assigned identity](#tab/system-assigned)
+
+Enable a system-assigned identity on your Azure AI Services resource by following these steps:
+1. From your AI Services resource in the Azure portal, select **Resource Management** > **Identity** and toggle the status to **ON**.
+1. Assign **Storage Blob Data Reader** access to the AI Services resource: from the **Identity** page, select **Azure role assignments**, and then **Add role assignment** with the following settings:
+    - Scope: Storage
+    - Subscription: {your subscription}
+    - Resource: {select the Azure Blob Storage resource}
+    - Role: Storage Blob Data Reader
+1. Save your settings.
+
+#### [User-assigned identity](#tab/user-assigned)
+
+To use a user-assigned identity on your Azure AI Services resource, follow these steps:
+1. Create a new managed identity resource in the Azure portal.
+1. Navigate to the new resource, then to **Azure role assignments**.
+1. Add a **New role assignment** with the following settings:
+    - Scope: Storage
+    - Subscription: {your subscription}
+    - Resource: {select the Azure Blob Storage resource}
+    - Role: Storage Blob Data Reader
+1. Save your new configuration.
+1. Navigate to your AI Services resource's **Identity** page.
+1. Select the **User assigned** tab, then select **+ Add** to add the newly created managed identity.
+1. Save your configuration.
+
+---
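If you'd rather script the role assignment than click through the portal, the sketch below does the equivalent with the `azure-mgmt-authorization` package. The subscription, resource group, storage account, and principal IDs are placeholders; the GUID is Azure's published role definition ID for Storage Blob Data Reader. Treat the exact SDK surface as an assumption to verify against your installed version:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription_id>"
# Resource ID of the storage account that holds the videos.
storage_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource_group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage_account>"
)
# Principal ID shown on the AI Services resource's Identity page
# (or the user-assigned identity's principal ID).
principal_id = "<identity_principal_id>"

# Built-in role definition ID for Storage Blob Data Reader.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=storage_scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id=principal_id,
        principal_type="ServicePrincipal",
    ),
)
```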
+
 ### Create a video retrieval index
 
 1. Get an Azure AI Vision resource in the same region as the Azure OpenAI resource you're using.
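The index-creation step this section begins can also be sketched as a REST call like the following. The path, API version, and feature list are assumptions drawn from the Video Retrieval preview API rather than from this diff:

```python
import requests

# Placeholder values for the Azure AI Vision resource and index name.
vision_endpoint = "https://<your_vision_resource>.cognitiveservices.azure.com"
vision_key = "<your_computer_vision_key>"
index_name = "my-video-index"

response = requests.put(
    f"{vision_endpoint}/computervision/retrieval/indexes/{index_name}",
    params={"api-version": "2023-05-01-preview"},
    headers={"Ocp-Apim-Subscription-Key": vision_key},
    # Index both visual and speech features so chat queries can match
    # on-screen content as well as the transcript.
    json={
        "features": [
            {"name": "vision", "domain": "surveillance"},
            {"name": "speech"},
        ]
    },
)
response.raise_for_status()
print(response.json())
```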
@@ -633,6 +671,51 @@ print(response)
 ```
 ---
 
+> [!IMPORTANT]
+> The `"dataSources"` object's content varies depending on which Azure resource type and authentication method you're using. See the following reference:
+>
+> #### [Azure OpenAI resource](#tab/resource)
+>
+> ```json
+> "dataSources": [
+>     {
+>         "type": "AzureComputerVisionVideoIndex",
+>         "parameters": {
+>             "endpoint": "<your_computer_vision_endpoint>",
+>             "computerVisionApiKey": "<your_computer_vision_key>",
+>             "indexName": "<name_of_your_index>",
+>             "videoUrls": ["<your_video_SAS_URL>"]
+>         }
+>     }],
+> ```
+>
+> #### [Azure AI Services resource + SAS authentication](#tab/resource-sas)
+>
+> ```json
+> "dataSources": [
+>     {
+>         "type": "AzureComputerVisionVideoIndex",
+>         "parameters": {
+>             "indexName": "<name_of_your_index>",
+>             "videoUrls": ["<your_video_SAS_URL>"]
+>         }
+>     }],
+> ```
+>
+> #### [Azure AI Services resource + managed identity](#tab/resource-mi)
+>
+> ```json
+> "dataSources": [
+>     {
+>         "type": "AzureComputerVisionVideoIndex",
+>         "parameters": {
+>             "indexName": "<name_of_your_index>",
+>             "documentAuthenticationKind": "managedIdentity"
+>         }
+>     }],
+> ```
+> ---
+
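Putting the pieces together, a managed-identity request might look like the sketch below. The extensions endpoint, `enhancements` block, and `acv_document_id` content type are assumptions based on the video chat completions examples this article references; only the `dataSources` shape comes from this commit:

```python
import requests

# Placeholder values for the Azure OpenAI resource and deployment.
openai_endpoint = "https://<your_openai_resource>.openai.azure.com"
openai_key = "<your_openai_key>"
deployment = "<your_gpt4_vision_deployment>"

payload = {
    # The managed-identity variant of the dataSources object shown above.
    "dataSources": [
        {
            "type": "AzureComputerVisionVideoIndex",
            "parameters": {
                "indexName": "<name_of_your_index>",
                "documentAuthenticationKind": "managedIdentity",
            },
        }
    ],
    "enhancements": {"video": {"enabled": True}},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {"type": "acv_document_id", "acv_document_id": "<your_video_id>"},
                {"type": "text", "text": "Describe what happens in this video."},
            ],
        },
    ],
    "max_tokens": 200,
}

response = requests.post(
    f"{openai_endpoint}/openai/deployments/{deployment}/extensions/chat/completions",
    params={"api-version": "2023-12-01-preview"},
    headers={"api-key": openai_key},
    json=payload,
)
print(response.json())
```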
 ### Output
 
 The chat responses you receive from the model should include information about the video. The API response should look like the following.
