
Commit 93c4433

check formatting

1 parent 7dc32a7 commit 93c4433

4 files changed: +15 −19 lines

articles/search/search-get-started-portal-import-vectors.md

Lines changed: 9 additions & 7 deletions

@@ -46,9 +46,9 @@ Use an embedding model on an Azure AI platform in the [same region as Azure AI S
 
 | Provider | Supported models |
 |---|---|
-| [Azure OpenAI Service](https://aka.ms/oai/access) | text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small. |
-| [Azure AI Foundry model catalog](/azure/ai-studio/what-is-ai-studio) | Azure, Cohere, and Facebook embedding models. |
-| [Azure AI services multi-service account](/azure/ai-services/multi-service-resource) | [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) for image and text vectorization. Azure AI Vision multimodal is available in selected regions. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list. Depending on how you [attach the multi-service resource](cognitive-search-attach-cognitive-services.md), the account might need to be in the same region as Azure AI Search. |
+| [Azure OpenAI Service](https://aka.ms/oai/access) | text-embedding-ada-002 <br>text-embedding-3-large <br>text-embedding-3-small |
+| [Azure AI Foundry model catalog](/azure/ai-studio/what-is-ai-studio) | For text: <br>Cohere-embed-v3-english <br>Cohere-embed-v3-multilingual <br>For images: <br>Facebook-DinoV2-Image-Embeddings-ViT-Base <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant |
+| [Azure AI services multi-service account](/azure/ai-services/multi-service-resource) | [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) for image and text vectorization, [available in selected regions](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp). Depending on how you [attach the multi-service resource](cognitive-search-attach-cognitive-services.md), the multi-service account might need to be in the same region as Azure AI Search. |
 
 If you use the Azure OpenAI Service, the endpoint must have an associated [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains). A custom subdomain is an endpoint that includes a unique name (for example, `https://hereismyuniquename.cognitiveservices.azure.com`). If the service was created through the Azure portal, this subdomain is automatically generated as part of your service setup. Ensure that your service includes a custom subdomain before using it with the Azure AI Search integration.
 
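The custom-subdomain requirement in the context above can be sanity-checked before wiring an endpoint into the wizard. A minimal sketch (the helper name is made up, and the endpoint URLs are hypothetical examples, not real services):

```python
from urllib.parse import urlparse

def has_custom_subdomain(endpoint: str) -> bool:
    """Heuristic: a custom subdomain endpoint embeds a unique resource name,
    e.g. https://<unique-name>.cognitiveservices.azure.com or
    https://<unique-name>.openai.azure.com, rather than a shared regional
    host such as https://westus.api.cognitive.microsoft.com."""
    host = urlparse(endpoint).hostname or ""
    return host.endswith((".cognitiveservices.azure.com", ".openai.azure.com"))

# Hypothetical endpoints for illustration.
print(has_custom_subdomain("https://hereismyuniquename.cognitiveservices.azure.com"))  # True
print(has_custom_subdomain("https://westus.api.cognitive.microsoft.com"))              # False
```

This is only a client-side heuristic; the portal remains the authoritative place to confirm the subdomain exists.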
@@ -202,13 +202,15 @@ After you finish these steps, you should be able to select the Azure AI Vision v
 
 ### [Azure AI Foundry model catalog](#tab/model-catalog)
 
-The wizard supports Azure, Cohere, and Facebook embedding models in the Azure AI Foundry model catalog, but it doesn't currently support the OpenAI CLIP model. Internally, the wizard calls the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
+The wizard supports Azure, Cohere, and Facebook embedding models in the Azure AI Foundry model catalog, but it doesn't currently support the OpenAI CLIP models. Internally, the wizard calls the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
 
 1. For the model catalog, you should have an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource), a [hub in Azure AI Foundry portal](/azure/ai-studio/how-to/create-projects), and a [project](/azure/ai-studio/how-to/create-projects). Hubs and projects having the same name can share connection information and permissions.
 
-1. Deploy a supported embedding model to the model catalog in your project.
+1. Deploy an embedding model to the model catalog in your project.
 
-1. For role-based connections, create two role assignments: one for Azure AI Search, and another for the AI Foundry project. Assign the [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control) role for embeddings and vectorization.
+1. Select **Models + Endpoints**, and then select **Deploy a model**. Choose **Deploy base model**.
+1. Filter by inference task set to *Embeddings*.
+1. Deploy one of the [supported embedding models](#supported-embedding-models).
 
 ---
 
@@ -321,7 +323,7 @@ Chunking is built in and nonconfigurable. The effective settings are:
 
 + For Azure OpenAI, choose an existing deployment of text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small.
 
-+ For AI Foundry catalog, choose an existing deployment of an Azure, Cohere, and Facebook embedding model.
++ For AI Foundry catalog, choose an existing deployment of an Azure or Cohere embedding model.
 
 + For AI Vision multimodal embeddings, select the account.
 
articles/search/tutorial-rag-build-solution-models.md

Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@ ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: tutorial
 ms.custom: references_regions
-ms.date: 10/25/2024
+ms.date: 12/03/2024
 
 ---
 
@@ -66,7 +66,7 @@ Azure AI Search provides skill and vectorizer support for the following embeddin
 |--------|------------------|-------|------------|
 | Azure OpenAI | text-embedding-ada-002, <br>text-embedding-3-large, <br>text-embedding-3-small | [AzureOpenAIEmbedding](cognitive-search-skill-azure-openai-embedding.md) | [AzureOpenAIEmbedding](vector-search-vectorizer-azure-open-ai.md) |
 | Azure AI Vision | multimodal 4.0 <sup>1</sup> | [AzureAIVision](cognitive-search-skill-vision-vectorize.md) | [AzureAIVision](vector-search-vectorizer-ai-services-vision.md) |
-| Azure AI Foundry model catalog | OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32, <br>OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336, <br>Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual | [AML](cognitive-search-aml-skill.md) <sup>2</sup> | [Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) |
+| Azure AI Foundry model catalog | Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual | [AML](cognitive-search-aml-skill.md) <sup>2</sup> | [Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) |
 
 <sup>1</sup> Supports image and text vectorization.
 
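The skill/vectorizer pairings in the revised table above can be captured in a small lookup for pipeline validation. This is an illustrative sketch only (the dictionary and names below are made up for this example, not an SDK API):

```python
# Maps each embedding model from the table above to its (skill, vectorizer)
# pair. Illustrative only; names mirror the table, not a real SDK surface.
SKILL_VECTORIZER = {
    "text-embedding-ada-002": ("AzureOpenAIEmbedding", "AzureOpenAIEmbedding"),
    "text-embedding-3-large": ("AzureOpenAIEmbedding", "AzureOpenAIEmbedding"),
    "text-embedding-3-small": ("AzureOpenAIEmbedding", "AzureOpenAIEmbedding"),
    "multimodal 4.0": ("AzureAIVision", "AzureAIVision"),
    "Facebook-DinoV2-Image-Embeddings-ViT-Base": ("AML", "Azure AI Foundry model catalog"),
    "Facebook-DinoV2-Image-Embeddings-ViT-Giant": ("AML", "Azure AI Foundry model catalog"),
    "Cohere-embed-v3-english": ("AML", "Azure AI Foundry model catalog"),
    "Cohere-embed-v3-multilingual": ("AML", "Azure AI Foundry model catalog"),
}

skill, vectorizer = SKILL_VECTORIZER["Cohere-embed-v3-english"]
print(skill)  # AML
```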
articles/search/vector-search-integrated-vectorization-ai-studio.md

Lines changed: 1 addition & 3 deletions

@@ -8,7 +8,7 @@ ms.service: azure-ai-search
 ms.custom:
 - build-2024
 ms.topic: how-to
-ms.date: 10/29/2024
+ms.date: 12/03/2024
 ---
 
 # How to implement integrated vectorization using models from Azure AI Foundry
@@ -105,8 +105,6 @@ The URI and key are generated when you deploy the model from the catalog. For mo
 
 This AML skill payload works with the following models from AI Foundry:
 
-+ OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32
-+ OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336
 + Facebook-DinoV2-Image-Embeddings-ViT-Base
 + Facebook-DinoV2-Image-Embeddings-ViT-Giant
 
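For context, a skeletal AML skill definition targeting one of the remaining models might look like the sketch below (a Python dict standing in for the skillset JSON). The overall shape follows the AML skill reference, but treat the input/output names as assumptions, and note that the `uri` and `key` values are placeholders:

```python
import json

# Sketch of an AML skill for a DinoV2 image-embedding deployment.
# The uri/key values are placeholders; verify the exact payload shape
# against the AML skill reference (cognitive-search-aml-skill.md).
aml_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
    "description": "Vectorize images with Facebook-DinoV2-Image-Embeddings-ViT-Base",
    "uri": "https://contoso.westus.inference.ml.azure.com/score",  # placeholder
    "key": "<AML-endpoint-key>",  # placeholder; resourceId is the token-auth alternative
    "timeout": "PT30S",
    "context": "/document/normalized_images/*",
    "inputs": [
        # Input name is illustrative; match it to your deployment's schema.
        {"name": "image", "source": "/document/normalized_images/*"},
    ],
    "outputs": [
        {"name": "response", "targetName": "vector"},
    ],
}
print(json.dumps(aml_skill, indent=2))
```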
articles/search/vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md

Lines changed: 3 additions & 7 deletions

@@ -8,12 +8,12 @@ ms.service: azure-ai-search
 ms.custom:
 - build-2024
 ms.topic: reference
-ms.date: 08/05/2024
+ms.date: 12/03/2024
 ---
 
 # Azure AI Foundry model catalog vectorizer
 
-> [!IMPORTANT] 
+> [!IMPORTANT]
 > This vectorizer is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-05-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-Preview&preserve-view=true) supports this feature.
 
 The **Azure AI Foundry model catalog** vectorizer connects to an embedding model that was deployed via [the Azure AI Foundry model catalog](/azure/ai-studio/how-to/model-catalog) to an Azure Machine Learning endpoint. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
@@ -27,7 +27,7 @@ Parameters are case-sensitive. Which parameters you choose to use depends on wha
 | Parameter name | Description |
 |--------------------|-------------|
 | `uri` | (Required) The [URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md) to which the _JSON_ payload is sent. Only the **https** URI scheme is allowed. |
-| `modelName` | (Required) The model ID from the AI Foundry model catalog that is deployed at the provided endpoint. Currently supported values are <ul><li>OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32 </li><li>OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336 </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</ul> |
+| `modelName` | (Required) The model ID from the AI Foundry model catalog that is deployed at the provided endpoint. Currently supported models are <ul><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</ul> |
 | `key` | (Required for [key authentication](#WhatParametersToUse)) The [key for the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). |
 | `resourceId` | (Required for [token authentication](#WhatParametersToUse)). The Azure Resource Manager resource ID of the AML online endpoint. It should be in the format subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/onlineendpoints/{endpoint_name}. |
 | `region` | (Optional for [token authentication](#WhatParametersToUse)). The [region](https://azure.microsoft.com/global-infrastructure/regions/) the AML online endpoint is deployed in. Needed if the region is different from the region of the search service. |
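Putting the parameters above together, a key-authenticated vectorizer entry might look like the following sketch (a Python dict standing in for the index JSON). The `kind` and `amlParameters` property names are assumptions to verify against the vectorizer reference, and the `uri` and `key` values are placeholders:

```python
import json

# Sketch of an AI Foundry model catalog vectorizer using key authentication.
# "aml" / "amlParameters" are assumed property names from the vectorizer
# reference; uri and key are placeholders, not real credentials.
vectorizer = {
    "name": "my-catalog-vectorizer",
    "kind": "aml",
    "amlParameters": {
        "uri": "https://contoso.westus.inference.ml.azure.com/score",  # placeholder
        "key": "<AML-endpoint-key>",  # placeholder
        "modelName": "Cohere-embed-v3-english",
    },
}
print(json.dumps(vectorizer, indent=2))
```

For token authentication, `key` would be replaced by `resourceId` (and optionally `region`), per the table above.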
@@ -51,8 +51,6 @@ Which vector query types are supported by the AI Foundry model catalog vectorize
 
 | `modelName` | Supports `text` query | Supports `imageUrl` query | Supports `imageBinary` query |
 |--------------------|-------------|-------------|-------------|
-| OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32 | X | X | X |
-| OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336 | X | X | X |
 | Facebook-DinoV2-Image-Embeddings-ViT-Base | | X | X |
 | Facebook-DinoV2-Image-Embeddings-ViT-Giant | | X | X |
 | Cohere-embed-v3-english | X | | |
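The support matrix above lends itself to a quick client-side lookup when validating queries. A sketch, limited to the rows visible in this hunk (the table continues beyond it; names are illustrative, not an SDK API):

```python
# Which vector query types each model accepts, mirroring the rows shown
# in the support matrix above.
QUERY_SUPPORT = {
    "Facebook-DinoV2-Image-Embeddings-ViT-Base": {"imageUrl", "imageBinary"},
    "Facebook-DinoV2-Image-Embeddings-ViT-Giant": {"imageUrl", "imageBinary"},
    "Cohere-embed-v3-english": {"text"},
}

def supports(model: str, query_type: str) -> bool:
    """Return True if the deployed model accepts the given vector query type."""
    return query_type in QUERY_SUPPORT.get(model, set())

print(supports("Cohere-embed-v3-english", "text"))                   # True
print(supports("Facebook-DinoV2-Image-Embeddings-ViT-Base", "text")) # False
```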
@@ -64,8 +62,6 @@ The expected field dimensions for a field configured with an AI Foundry model ca
 
 | `modelName` | Expected dimensions |
 |--------------------|-------------|
-| OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32 | 512 |
-| OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336 | 768 |
 | Facebook-DinoV2-Image-Embeddings-ViT-Base | 768 |
 | Facebook-DinoV2-Image-Embeddings-ViT-Giant | 1536 |
 | Cohere-embed-v3-english | 1024 |
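The dimensions above can guard an index field definition before deployment. A sketch using only the rows visible in this hunk (the table's remaining rows are not shown here, so they are omitted; the helper is illustrative, not an SDK API):

```python
# Expected vector dimensions per model, from the rows shown above.
EXPECTED_DIMENSIONS = {
    "Facebook-DinoV2-Image-Embeddings-ViT-Base": 768,
    "Facebook-DinoV2-Image-Embeddings-ViT-Giant": 1536,
    "Cohere-embed-v3-english": 1024,
}

def check_field_dimensions(model: str, field_dimensions: int) -> None:
    """Raise ValueError if a vector field's dimensions don't match the model."""
    expected = EXPECTED_DIMENSIONS[model]
    if field_dimensions != expected:
        raise ValueError(
            f"{model} emits {expected}-dimensional vectors, "
            f"but the field is configured for {field_dimensions}."
        )

check_field_dimensions("Cohere-embed-v3-english", 1024)  # passes silently
```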
