articles/search/search-get-started-portal-import-vectors.md (9 additions, 7 deletions)
@@ -46,9 +46,9 @@ Use an embedding model on an Azure AI platform in the [same region as Azure AI S
 | Provider | Supported models |
 |---|---|
 |[Azure OpenAI Service](https://aka.ms/oai/access)| text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small.|
-|[Azure AI Foundry model catalog](/azure/ai-studio/what-is-ai-studio)| Azure, Cohere, and Facebook embedding models.|
-|[Azure AI services multi-service account](/azure/ai-services/multi-service-resource)|[Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) for image and text vectorization. Azure AI Vision multimodal is available in selected regions. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list. Depending on how you [attach the multi-service resource](cognitive-search-attach-cognitive-services.md), the account might need to be in the same region as Azure AI Search. |
+|[Azure AI Foundry model catalog](/azure/ai-studio/what-is-ai-studio)|For text: <br>Cohere-embed-v3-english <br>Cohere-embed-v3-multilingual <br>For images: <br>Facebook-DinoV2-Image-Embeddings-ViT-Base <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant|
+|[Azure AI services multi-service account](/azure/ai-services/multi-service-resource)|[Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) for image and text vectorization, [available in selected regions](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp). Depending on how you [attach the multi-service resource](cognitive-search-attach-cognitive-services.md), the multi-service account might need to be in the same region as Azure AI Search. |
 
 If you use the Azure OpenAI Service, the endpoint must have an associated [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains). A custom subdomain is an endpoint that includes a unique name (for example, `https://hereismyuniquename.cognitiveservices.azure.com`). If the service was created through the Azure portal, this subdomain is automatically generated as part of your service setup. Ensure that your service includes a custom subdomain before using it with the Azure AI Search integration.
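Not part of the diff itself: a minimal sketch of how you might sanity-check that an endpoint uses a custom subdomain before wiring it into the wizard. The heuristic (matching the `*.cognitiveservices.azure.com` and `*.openai.azure.com` host suffixes) is an assumption based on the example endpoint above, not an exhaustive rule.

```python
from urllib.parse import urlparse

def has_custom_subdomain(endpoint: str) -> bool:
    """Heuristic: a custom-subdomain endpoint looks like
    https://<unique-name>.cognitiveservices.azure.com (or *.openai.azure.com),
    whereas regional endpoints look like https://<region>.api.cognitive.microsoft.com."""
    host = urlparse(endpoint).hostname or ""
    return host.endswith(".cognitiveservices.azure.com") or host.endswith(".openai.azure.com")

print(has_custom_subdomain("https://hereismyuniquename.cognitiveservices.azure.com"))  # True
print(has_custom_subdomain("https://westus.api.cognitive.microsoft.com"))              # False
```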
@@ -202,13 +202,15 @@ After you finish these steps, you should be able to select the Azure AI Vision v
 
 ### [Azure AI Foundry model catalog](#tab/model-catalog)
 
-The wizard supports Azure, Cohere, and Facebook embedding models in the Azure AI Foundry model catalog, but it doesn't currently support the OpenAI CLIP model. Internally, the wizard calls the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
+The wizard supports Azure, Cohere, and Facebook embedding models in the Azure AI Foundry model catalog, but it doesn't currently support the OpenAI CLIP models. Internally, the wizard calls the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
 
 1. For the model catalog, you should have an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource), a [hub in Azure AI Foundry portal](/azure/ai-studio/how-to/create-projects), and a [project](/azure/ai-studio/how-to/create-projects). Hubs and projects having the same name can share connection information and permissions.
 
-1. Deploy a supported embedding model to the model catalog in your project.
+1. Deploy an embedding model to the model catalog in your project.
 
-1. For role-based connections, create two role assignments: one for Azure AI Search, and another for the AI Foundry project. Assign the [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control) role for embeddings and vectorization.
+1. Select **Models + Endpoints**, and then select **Deploy a model**. Choose **Deploy base model**.
+
+1. Filter by inference task set to *Embeddings*.
+
+1. Deploy one of the [supported embedding models](#supported-embedding-models).
 
 ---
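The wizard's connection to the catalog goes through the AML skill mentioned above. The following sketch shows the rough shape of such a skill definition; the `@odata.type` matches the AML skill reference, but the skill name, endpoint URI, and field mappings are illustrative placeholders, so verify them against the AML skill documentation before use.

```python
import json

# Hypothetical AML skill definition; name, uri, and mappings are placeholders.
aml_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
    "name": "catalog-embedding-skill",
    "uri": "https://my-endpoint.westus.inference.ml.azure.com/score",
    "key": "<endpoint-key>",
    "inputs": [
        {"name": "text", "source": "/document/chunk"},
    ],
    "outputs": [
        {"name": "embedding", "targetName": "vector"},
    ],
}
print(json.dumps(aml_skill, indent=2))
```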
@@ -321,7 +323,7 @@ Chunking is built in and nonconfigurable. The effective settings are:
 
 + For Azure OpenAI, choose an existing deployment of text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small.
 
-+ For AI Foundry catalog, choose an existing deployment of an Azure, Cohere, and Facebook embedding model.
++ For AI Foundry catalog, choose an existing deployment of an Azure or Cohere embedding model.
 
 + For AI Vision multimodal embeddings, select the account.
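As a side note to the Azure OpenAI bullet above, an existing deployment is addressed by its deployment name in the embeddings REST route. A small helper sketches that route; the `api-version` value here is an assumption, so check the Azure OpenAI REST reference for the version you target.

```python
def embeddings_url(endpoint: str, deployment: str,
                   api_version: str = "2023-05-15") -> str:
    # Shape of the Azure OpenAI embeddings REST route; api_version is an
    # assumed default, not a recommendation.
    return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
            f"/embeddings?api-version={api_version}")

print(embeddings_url("https://hereismyuniquename.openai.azure.com",
                     "text-embedding-3-small"))
```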
 | Azure AI Vision | multimodal 4.0 <sup>1</sup> |[AzureAIVision](cognitive-search-skill-vision-vectorize.md)|[AzureAIVision](vector-search-vectorizer-ai-services-vision.md)|
-| Azure AI Foundry model catalog |OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32, <br>OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336, <br>Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual |[AML](cognitive-search-aml-skill.md) <sup>2</sup> |[Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)|
+| Azure AI Foundry model catalog | Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual |[AML](cognitive-search-aml-skill.md) <sup>2</sup> |[Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)|
 <sup>1</sup> Supports image and text vectorization.
articles/search/vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md (3 additions, 7 deletions)
@@ -8,12 +8,12 @@ ms.service: azure-ai-search
 ms.custom:
   - build-2024
 ms.topic: reference
-ms.date: 08/05/2024
+ms.date: 12/03/2024
 ---
 
 # Azure AI Foundry model catalog vectorizer
 
-> [!IMPORTANT]
+> [!IMPORTANT]
 > This vectorizer is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-05-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-Preview&preserve-view=true) supports this feature.
 
 The **Azure AI Foundry model catalog** vectorizer connects to an embedding model that was deployed via [the Azure AI Foundry model catalog](/azure/ai-studio/how-to/model-catalog) to an Azure Machine Learning endpoint. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
@@ -27,7 +27,7 @@ Parameters are case-sensitive. Which parameters you choose to use depends on wha
 
 | Parameter name | Description |
 |--------------------|-------------|
 |`uri`| (Required) The [URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md) to which the _JSON_ payload is sent. Only the **https** URI scheme is allowed. |
-|`modelName`| (Required) The model ID from the AI Foundry model catalog that is deployed at the provided endpoint. Currently supported values are <ul><li>OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32 </li><li>OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336 </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</ul> |
+|`modelName`| (Required) The model ID from the AI Foundry model catalog that is deployed at the provided endpoint. Currently supported models are <ul><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</ul> |
 |`key`| (Required for [key authentication](#WhatParametersToUse)) The [key for the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). |
 |`resourceId`| (Required for [token authentication](#WhatParametersToUse)). The Azure Resource Manager resource ID of the AML online endpoint. It should be in the format subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/onlineendpoints/{endpoint_name}. |
 |`region`| (Optional for [token authentication](#WhatParametersToUse)). The [region](https://azure.microsoft.com/global-infrastructure/regions/) the AML online endpoint is deployed in. Needed if the region is different from the region of the search service. |
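To see how the parameters in the table above fit together, here is a sketch of a vectorizer entry for an index definition using key authentication. The `kind` value and the `amlParameters` wrapper are assumptions based on the preview index schema, and the name, URI, and model choice are illustrative placeholders; verify the exact shape against the 2024-05-01-preview REST reference.

```python
import json

# Hypothetical vectorizer definition; name, uri, and key are placeholders.
vectorizer = {
    "name": "catalog-vectorizer",
    "kind": "aml",
    "amlParameters": {
        "uri": "https://my-endpoint.westus.inference.ml.azure.com/score",
        "key": "<endpoint-key>",          # key authentication
        "modelName": "Cohere-embed-v3-english",
    },
}
print(json.dumps(vectorizer, indent=2))
```

For token authentication you would drop `key` and supply `resourceId` (and `region` if it differs from the search service) instead.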
@@ -51,8 +51,6 @@ Which vector query types are supported by the AI Foundry model catalog vectorize