articles/search/search-get-started-portal-image-search.md (4 additions, 2 deletions)
@@ -7,7 +7,7 @@ ms.author: haileytapia
ms.service: azure-ai-search
ms.update-cycle: 90-days
ms.topic: quickstart
- ms.date: 06/11/2025
+ ms.date: 07/16/2025
ms.custom:
  - references_regions
---
@@ -52,7 +52,7 @@ For content embedding, you can choose either image verbalization (followed by te
| Method | Description | Supported models |
|--|--|--|
| Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
- | Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual |
+ | Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>embed-v-4-0 <sup>5</sup> |

<sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
@@ -62,6 +62,8 @@ For content embedding, you can choose either image verbalization (followed by te
<sup>4</sup> `phi-4` is only available to Azure AI Foundry projects.

+ <sup>5</sup> The Azure portal doesn't support `embed-v-4-0` for vectorization, so don't use it for this quickstart. Instead, use the [AML skill](cognitive-search-aml-skill.md) to programmatically specify this model. You can then use the portal to view and manage the skillset.

### Public endpoint requirements

All of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
articles/search/search-get-started-portal-import-vectors.md (4 additions, 2 deletions)
@@ -10,7 +10,7 @@ ms.custom:
  - build-2024
  - ignite-2024
ms.topic: quickstart
- ms.date: 06/11/2025
+ ms.date: 07/16/2025
---

# Quickstart: Vectorize text in the Azure portal
@@ -49,7 +49,7 @@ For integrated vectorization, you must use one of the following embedding models
|--|--|
|[Azure OpenAI in Azure AI Foundry Models](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
|[Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> | For text and images: [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>4</sup></li> |
- |[Azure AI Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry)| For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br>For images:<br>Facebook-DinoV2-Image-Embeddings-ViT-Base<br>Facebook-DinoV2-Image-Embeddings-ViT-Giant |
+ |[Azure AI Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry)| For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br>For images:<br>Facebook-DinoV2-Image-Embeddings-ViT-Base<br>Facebook-DinoV2-Image-Embeddings-ViT-Giant<br><br>For text and images:<br>embed-v-4-0 <sup>5</sup> |

<sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
@@ -59,6 +59,8 @@ For integrated vectorization, you must use one of the following embedding models
<sup>4</sup> The Azure AI Vision multimodal embedding model is available in [select regions](/azure/ai-services/computer-vision/overview-image-analysis#region-availability).

+ <sup>5</sup> The Azure portal doesn't support `embed-v-4-0` for vectorization, so don't use it for this quickstart. Instead, use the [AML skill](cognitive-search-aml-skill.md) to programmatically specify this model. You can then use the portal to view and manage the skillset.

### Public endpoint requirements

For the purposes of this quickstart, all of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
articles/search/search-how-to-integrated-vectorization.md (2 additions, 2 deletions)
@@ -7,7 +7,7 @@ author: haileytap
ms.author: haileytapia
ms.service: azure-ai-search
ms.topic: how-to
- ms.date: 06/11/2025
+ ms.date: 07/16/2025
---

# Set up integrated vectorization in Azure AI Search using REST
@@ -48,7 +48,7 @@ For integrated vectorization, you must use one of the following embedding models
|--|--|
|[Azure OpenAI in Azure AI Foundry Models](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
|[Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-services-resource-for-azure-ai-search-skills) <sup>3</sup> | For text and images: [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>4</sup></li> |
- <!--| [Azure AI Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry) | For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br>For images:<br>Facebook-DinoV2-Image-Embeddings-ViT-Base<br>Facebook-DinoV2-Image-Embeddings-ViT-Giant|-->
+ <!--| [Azure AI Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry) | For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br>For images:<br>Facebook-DinoV2-Image-Embeddings-ViT-Base<br>Facebook-DinoV2-Image-Embeddings-ViT-Giant<br>For text and images:<br>embed-v-4-0|-->

<sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.

| Azure AI Vision | multimodal 4.0 <sup>1</sup> |[AzureAIVision](cognitive-search-skill-vision-vectorize.md)|[AzureAIVision](vector-search-vectorizer-ai-services-vision.md)|
- | Azure AI Foundry model catalog | Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual|[AML](cognitive-search-aml-skill.md) <sup>2</sup> |[Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)|
+ | Azure AI Foundry model catalog | Facebook-DinoV2-Image-Embeddings-ViT-Base, <br>Facebook-DinoV2-Image-Embeddings-ViT-Giant, <br>Cohere-embed-v3-english, <br>Cohere-embed-v3-multilingual, <br>embed-v-4-0 <sup>1, 2</sup> |[AML](cognitive-search-aml-skill.md) <sup>3</sup> |[Azure AI Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)|

<sup>1</sup> Supports image and text vectorization.

- <sup>2</sup> Deployed models in the model catalog are accessed over an AML endpoint. We use the existing AML skill for this connection.
+ <sup>2</sup> At this time, you can only specify `embed-v-4-0` programmatically through the [AML skill](cognitive-search-aml-skill.md), not through the Azure portal. However, you can use the portal to view and manage the skillset afterward.
+ <sup>3</sup> Deployed models in the model catalog are accessed over an AML endpoint. We use the existing AML skill for this connection.

You can use other models besides the ones listed here. For more information, see [Use non-Azure models for embeddings](#use-non-azure-models-for-embeddings) in this article.

- |[`aml`](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)| Facebook-DinoV2-Image-Embeddings, Cohere-embed-v3 |[Azure AI Foundry model catalog](vector-search-integrated-vectorization-ai-studio.md)|[AML skill](cognitive-search-aml-skill.md)|
+ |[`aml`](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md)| Facebook-DinoV2-Image-Embeddings, Cohere-embed-v3, embed-v-4-0 <sup>1</sup>|[Azure AI Foundry model catalog](vector-search-integrated-vectorization-ai-studio.md)|[AML skill](cognitive-search-aml-skill.md)|
|[`aiServicesVision`](vector-search-vectorizer-ai-services-vision.md)|[Multimodal embeddings 4.0 API](/azure/ai-services/computer-vision/concept-image-retrieval)| Azure AI Vision (through an Azure AI services multi-service account) |[Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md)|
|[`customWebApi`](vector-search-vectorizer-custom-web-api.md)| Any embedding model | Hosted externally |[Custom Web API skill](cognitive-search-custom-skill-web-api.md)|

+ <sup>1</sup> At this time, you can only specify `embed-v-4-0` programmatically through the [AML skill](cognitive-search-aml-skill.md), not through the Azure portal. However, you can use the portal to view and manage the skillset afterward.

## Try a vectorizer with sample data

The [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) reads files from Azure Blob storage, creates an index with chunked and vectorized fields, and adds a vectorizer. By design, the vectorizer that's created by the wizard is set to the same embedding model used to index the blob content.
articles/search/vector-search-integrated-vectorization-ai-studio.md (10 additions, 13 deletions)
@@ -8,7 +8,7 @@ ms.service: azure-ai-search
ms.custom:
  - build-2024
ms.topic: how-to
- ms.date: 07/07/2025
+ ms.date: 07/16/2025
---

# Use embedding models from Azure AI Foundry model catalog for integrated vectorization
@@ -35,15 +35,13 @@ After the model is deployed, you can use it for [integrated vectorization](vecto
Integrated vectorization and the [Import and vectorize data wizard](search-import-data-portal.md) support the following embedding models in the model catalog:

- For text embeddings:
-
- + Cohere-embed-v3-english
- + Cohere-embed-v3-multilingual
-
- For image embeddings:
-
- + Facebook-DinoV2-Image-Embeddings-ViT-Base
- + Facebook-DinoV2-Image-Embeddings-ViT-Giant
+ | Embedding type | Supported models |
+ |--|--|
+ | Text | Cohere-embed-v3-english, Cohere-embed-v3-multilingual |
+ | Multimodal (text and image) | embed-v-4-0 <sup>1</sup> |
+
+ <sup>1</sup> At this time, you can only specify `embed-v-4-0` programmatically through the [AML skill](cognitive-search-aml-skill.md), not through the Azure portal. However, you can use the portal to view and manage the skillset afterward.

## Deploy an embedding model from the Azure AI Foundry model catalog
@@ -178,15 +176,14 @@ This AML skill payload works with the following text embedding models from Azure
  + Cohere-embed-v3-english
  + Cohere-embed-v3-multilingual
+ + embed-v-4-0

It assumes that you're chunking your content using the Text Split skill and therefore your text to be vectorized is in the `/document/pages/*` path. If your text comes from a different path, update all references to the `/document/pages/*` path accordingly.

You must add the `/v1/embed` path onto the end of the URL that you copied from your Azure AI Foundry deployment. You might also change the values for the `input_type`, `truncate` and `embedding_types` inputs to better fit your use case. For more information on the available options, review the [Cohere Embed API reference](/azure/ai-foundry/how-to/deploy-models-cohere-embed).

The URI and key are generated when you deploy the model from the catalog. For more information about these values, see [How to deploy Cohere Embed models with Azure AI Foundry](/azure/ai-foundry/how-to/deploy-models-cohere-embed).

- Note that image URIs aren't supported by this integration at this time.
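
To ground the payload discussion, here's a minimal sketch of an AML skill that targets a Cohere embedding deployment. It isn't the article's exact sample: the endpoint URI and key are placeholders, the input names are taken from the Cohere Embed API (`texts`, `input_type`, `truncate`, `embedding_types`), and the output name is an assumption, so verify all of them against your deployment before use.

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
  "description": "Vectorize chunked text with a Cohere embedding model",
  "context": "/document/pages/*",
  "uri": "https://<your-cohere-deployment>.<region>.models.ai.azure.com/v1/embed",
  "key": "<your-endpoint-key>",
  "timeout": "PT60S",
  "inputs": [
    { "name": "texts", "source": "=[$(/document/pages/*)]" },
    { "name": "input_type", "source": "='search_document'" },
    { "name": "truncate", "source": "='NONE'" },
    { "name": "embedding_types", "source": "=['float']" }
  ],
  "outputs": [
    { "name": "embeddings", "targetName": "embeddings" }
  ]
}
```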
@@ -220,9 +217,9 @@ Note that image URIs aren't supported by this integration at this time.
}
```

- In addition, the output of the Cohere model isn't the embeddings array directly, but rather a JSON object that contains it. You need to select it appropriately when mapping it to the index definition via `indexProjections` or `outputFieldMappings`. Here's a sample `indexProjections` payload that would allow you to do implement this mapping.
+ In addition, the output of the Cohere model isn't the embeddings array directly, but rather a JSON object that contains it. You need to select it appropriately when mapping it to the index definition via `indexProjections` or `outputFieldMappings`. Here's a sample `indexProjections` payload that would allow you to implement this mapping.

- If you selected a different `embedding_types` in your skill definition that you have to change `float` in the `source` path to the appropriate type that you did select instead.
+ If you selected a different `embedding_types` in your skill definition, change `float` in the `source` path to the type you selected.
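
As a rough sketch of that mapping, the selector below reads the `float` embeddings out of the enriched document. The index name, field names, and especially the vector `source` path are assumptions that depend on the skill's `outputs` and your index schema, so adjust them to match your skillset.

```json
"indexProjections": {
  "selectors": [
    {
      "targetIndexName": "<your-index>",
      "parentKeyFieldName": "parent_id",
      "sourceContext": "/document/pages/*",
      "mappings": [
        { "name": "chunk", "source": "/document/pages/*" },
        { "name": "text_vector", "source": "/document/pages/*/embeddings/float/0" }
      ]
    }
  ]
}
```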
articles/search/vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md (5 additions, 3 deletions)
@@ -8,13 +8,13 @@ ms.service: azure-ai-search
ms.custom:
  - build-2024
ms.topic: reference
- ms.date: 12/03/2024
+ ms.date: 07/16/2025
---

# Azure AI Foundry model catalog vectorizer

> [!IMPORTANT]
- > This vectorizer is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-05-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-Preview&preserve-view=true) supports this feature.
+ > This vectorizer is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). To use this feature, we recommend the latest preview version of [Indexes - Create Or Update](/rest/api/searchservice/indexes/create-or-update) (REST API).

The **Azure AI Foundry model catalog** vectorizer connects to an embedding model that was deployed via [the Azure AI Foundry model catalog](/azure/ai-foundry/how-to/model-catalog-overview) to an Azure Machine Learning endpoint. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
@@ -27,7 +27,7 @@ Parameters are case-sensitive. Which parameters you choose to use depends on wha
| Parameter name | Description |
|--------------------|-------------|
|`uri`| (Required) The [URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md) to which the _JSON_ payload is sent. Only the **https** URI scheme is allowed. |
- |`modelName`| (Required) The model ID from the Azure AI Foundry model catalog that is deployed at the provided endpoint. Supported models are: <ul><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</ul> |
+ |`modelName`| (Required) The model ID from the Azure AI Foundry model catalog that is deployed at the provided endpoint. Supported models are:<p><ul><li>Facebook-DinoV2-Image-Embeddings-ViT-Base </li><li>Facebook-DinoV2-Image-Embeddings-ViT-Giant </li><li>Cohere-embed-v3-english </li><li>Cohere-embed-v3-multilingual</li><li>Cohere-embed-v4</li></ul> |
|`key`| (Required for [key authentication](#WhatParametersToUse)) The [key for the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). |
|`resourceId`| (Required for [token authentication](#WhatParametersToUse)). The Azure Resource Manager resource ID of the AML online endpoint. It should be in the format subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/onlineendpoints/{endpoint_name}. |
|`region`| (Optional for [token authentication](#WhatParametersToUse)). The [region](https://azure.microsoft.com/global-infrastructure/regions/) the AML online endpoint is deployed in. Needed if the region is different from the region of the search service. |
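
Putting these parameters together, a vectorizer entry in an index definition might look like the following sketch. It assumes key authentication against a hypothetical Cohere deployment; the endpoint URI and key are placeholders, and you should verify the payload against the preview API version you use.

```json
"vectorizers": [
  {
    "name": "my-catalog-vectorizer",
    "kind": "aml",
    "amlParameters": {
      "uri": "https://<your-deployment>.<region>.models.ai.azure.com/v1/embed",
      "key": "<your-endpoint-key>",
      "modelName": "Cohere-embed-v3-english"
    }
  }
]
```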
@@ -55,6 +55,7 @@ Which vector query types are supported by the Azure AI Foundry model catalog vec
| Facebook-DinoV2-Image-Embeddings-ViT-Giant || X | X |
| Cohere-embed-v3-english | X |||
| Cohere-embed-v3-multilingual | X |||
+ | Cohere-embed-v4 | X | X | X |

## Expected field dimensions
@@ -66,6 +67,7 @@ The expected field dimensions for a vector field configured with an Azure AI Fou