
Commit 628cde0

Merge pull request #7556 from haileytap/feedback
[Azure Search] Update models and resources in VS and MM portal quickstarts
2 parents: c0e0e81 + 3f49690

File tree: 2 files changed (+86, −59 lines)


articles/search/search-get-started-portal-image-search.md

Lines changed: 25 additions & 15 deletions
@@ -6,7 +6,7 @@ author: haileytap
 ms.author: haileytapia
 ms.service: azure-ai-search
 ms.topic: quickstart
-ms.date: 09/12/2025
+ms.date: 10/08/2025
 ms.custom:
 - references_regions
 ---
@@ -46,20 +46,30 @@ For content extraction, you can choose either default extraction via Azure AI Se

 ### Supported embedding methods

-For content embedding, you can choose either image verbalization (followed by text vectorization) or multimodal embeddings. Deployment instructions for the models are provided in a [later section](#deploy-models). The following table describes both embedding methods.
+For content embedding, choose one of the following methods:

-| Method | Description | Supported models |
++ **Image verbalization:** Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.
+
++ **Multimodal embeddings:** Uses an embedding model to directly vectorize both text and images.
+
+The following table lists the supported providers and models for each method. Deployment instructions for the models are provided in a [later section](#deploy-models).
+
+| Provider | Models for image verbalization | Models for multimodal embeddings |
 |--|--|--|
-| Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
+| [Azure OpenAI in Azure AI Foundry Models resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | LLMs:<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) | Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> | Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
+| [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>4</sup> | Embedding model: [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> | [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |

 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.

-<sup>2</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
+<sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
+
+<sup>3</sup> To use this model in the wizard, you must [deploy it as a serverless API deployment](/azure/ai-foundry/how-to/deploy-models-serverless).

-<sup>3</sup> `phi-4` is only available to Azure AI Foundry projects.
+<sup>4</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.

-<sup>4</sup> The Azure portal doesn't support `embed-v-4-0` for vectorization, so don't use it for this quickstart. Instead, use the [AML skill](cognitive-search-aml-skill.md) or [Azure AI Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) to programmatically specify this model. You can then use the portal to manage the skillset or vectorizer.
+<sup>5</sup> The Azure AI Vision multimodal embeddings APIs are available in [select regions](/azure/ai-services/computer-vision/overview-image-analysis#region-availability).

 ### Public endpoint requirements

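Editor's note: footnote 1's custom-subdomain requirement and the embeddings deployments in the new table are easy to sanity-check in code. The following is a minimal sketch, not part of the commit; the resource name and default `api-version` are illustrative assumptions, and the request is only composed, never sent.

```python
from urllib.parse import urlparse

def build_embeddings_request(endpoint: str, deployment: str, text: str,
                             api_version: str = "2024-02-01"):
    """Compose (url, payload) for an Azure OpenAI embeddings call.

    Per footnote 1, the endpoint must be a custom subdomain
    (https://<name>.openai.azure.com), not a regional shared endpoint.
    """
    host = urlparse(endpoint).hostname or ""
    if not host.endswith(".openai.azure.com"):
        raise ValueError(f"not a custom-subdomain endpoint: {endpoint}")
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/embeddings?api-version={api_version}")
    return url, {"input": text}

# Hypothetical resource plus a deployment of one of the listed models.
url, payload = build_embeddings_request(
    "https://my-unique-name.openai.azure.com",
    "text-embedding-3-small",
    "sample text")
```

Validating the hostname up front surfaces the subdomain problem at configuration time rather than as an authentication failure later.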
@@ -91,7 +101,7 @@ On your Azure AI Search service:

 1. [Configure a system-assigned managed identity](search-how-to-managed-identities.md#create-a-system-managed-identity).

-1. [Assign the following roles](search-security-rbac.md) to yourself:
+1. [Assign the following roles](search-security-rbac.md) to yourself.

 + **Search Service Contributor**

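Editor's note: the role assignments above are often scripted rather than clicked through. A hedged sketch follows, not part of the commit: it only composes an `az role assignment create` command, the principal and scope are placeholder assumptions, and **Search Service Contributor** is the one role visible in this hunk.

```python
def role_assignment_cmd(role: str, assignee: str, scope: str) -> str:
    # Compose (not run) an Azure CLI command granting `role` on `scope`.
    return ("az role assignment create "
            f'--role "{role}" --assignee "{assignee}" --scope "{scope}"')

# Placeholder principal and scope; substitute your own values.
cmd = role_assignment_cmd(
    "Search Service Contributor",
    "user@example.com",
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Search/searchServices/<service>")
```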
@@ -101,7 +111,7 @@ On your Azure AI Search service:

 ### [**Azure Storage**](#tab/storage-perms)

-Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to these storage containers, which you create in the next section.
+Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to the storage containers you create in the next section.

 On your Azure Storage account:

@@ -126,9 +136,9 @@ On your Azure OpenAI resource:
 The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).

 > [!NOTE]
-> If you're using a hub-based project for multimodal embeddings, skip this step. The wizard requires key-based authentication in this scenario.
+> If you're using a hub-based project, skip this step. Hub-based projects support API keys instead of managed identities for authentication.

-On your Azure AI Foundry project:
+On your Azure AI Foundry resource:

 + Assign **Azure AI Project Manager** to your [search service identity](search-how-to-managed-identities.md#create-a-system-managed-identity).

@@ -269,7 +279,7 @@ To use the skills for image verbalization:

 1. Select your Azure subscription, resource, and LLM deployment.

-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.

 1. Select the checkbox that acknowledges the billing effects of using these resources.

@@ -281,7 +291,7 @@ To use the skills for image verbalization:

 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).

-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.

 1. Select the checkbox that acknowledges the billing effects of using these resources.

@@ -307,7 +317,7 @@ To use the skills for multimodal embeddings:

 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).

-1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.

 1. Select the checkbox that acknowledges the billing effects of using this resource.
