articles/search/search-get-started-portal-image-search.md
13 additions & 8 deletions
@@ -7,7 +7,7 @@ ms.author: haileytapia
 ms.service: azure-ai-search
 ms.update-cycle: 90-days
 ms.topic: quickstart
-ms.date: 07/16/2025
+ms.date: 07/22/2025
 ms.custom:
   - references_regions
 ---
@@ -52,7 +52,7 @@ For content embedding, you can choose either image verbalization (followed by te
 | Method | Description | Supported models |
 |--|--|--|
 | Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
+| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |

 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.

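Whichever embedding method you pick, the wizard configures a vectorizer on the resulting index, so query text can be embedded at query time. Here's a minimal Python sketch of querying such an index with the `azure-search-documents` package; the endpoint, query key, index name, and field names are hypothetical.

```python
# Minimal sketch: query an index produced by the wizard.
# The endpoint, API key, index name, and field names below are hypothetical.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",  # hypothetical
    index_name="my-image-index",                               # hypothetical
    credential=AzureKeyCredential("<query-api-key>"),
)

# The query text is vectorized by the vectorizer configured on the index,
# so the same call works whether the content vectors came from image
# verbalization or from multimodal embeddings.
results = client.search(
    search_text="diagram of the payment workflow",
    vector_queries=[
        VectorizableTextQuery(
            text="diagram of the payment workflow",
            k_nearest_neighbors=5,
            fields="content_embedding",  # hypothetical vector field name
        )
    ],
    select=["content_text"],  # hypothetical text field name
    top=5,
)

for result in results:
    print(result["@search.score"], result.get("content_text", "")[:80])
```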
@@ -128,6 +128,9 @@ On your Azure OpenAI resource:

 The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).

+> [!NOTE]
+> If you're using a hub-based project for multimodal embeddings, skip this step. The wizard requires keys instead of managed identities for authentication in that scenario.
+
 On your Azure AI Foundry project:

 + Assign **Azure AI Project Manager** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
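The role assignment targets the search service's system-assigned managed identity. Below is a minimal sketch of retrieving that identity's principal ID with the `azure-mgmt-search` package, assuming hypothetical subscription, resource group, and service names; the role assignment itself can then be made in the portal as described above.

```python
# Minimal sketch: look up the principal ID of the search service's
# system-assigned managed identity (names below are hypothetical).
from azure.identity import DefaultAzureCredential
from azure.mgmt.search import SearchManagementClient

subscription_id = "<subscription-id>"   # hypothetical
resource_group = "my-resource-group"    # hypothetical
service_name = "my-search-service"      # hypothetical

client = SearchManagementClient(DefaultAzureCredential(), subscription_id)
service = client.services.get(resource_group, service_name)

# principal_id is the identity you assign the Azure AI Project Manager role to.
if service.identity:
    print(service.identity.principal_id)
else:
    print("No system-assigned identity; enable it on the search service first.")
```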
@@ -197,7 +200,7 @@ Azure AI Search requires a connection to a data source for content ingestion and

 To connect to your data:

-1. On the **Connect to your data** page, specify your Azure subscription.
+1. On the **Connect to your data** page, select your Azure subscription.

 1. Select the storage account and container to which you uploaded the sample data.

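The container you select must already hold the sample documents. A minimal upload sketch with the `azure-storage-blob` package, assuming hypothetical storage account, container, and local folder names:

```python
# Minimal sketch: upload local sample files to the blob container the wizard
# reads from (account, container, and folder names are hypothetical).
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # hypothetical
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("sample-docs")  # hypothetical container

for path in Path("./sample-data").glob("*.pdf"):  # hypothetical local folder
    with path.open("rb") as data:
        container.upload_blob(name=path.name, data=data, overwrite=True)
        print(f"Uploaded {path.name}")
```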
@@ -233,7 +236,7 @@ To use the Document Layout skill:

 :::image type="content" source="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png" alt-text="Screenshot of the wizard page with Azure AI Document Intelligence selected for content extraction." border="true" lightbox="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png":::

-1. Specify your Azure subscription and multi-service resource.
+1. Select your Azure subscription and multi-service resource.

 1. For the authentication type, select **System assigned identity**.

@@ -267,7 +270,7 @@ To use the skills for image verbalization:

 1. For the kind, select your LLM provider: **Azure OpenAI** or **AI Foundry Hub catalog models**.

-1. Specify your Azure subscription, resource, and LLM deployment.
+1. Select your Azure subscription, resource, and LLM deployment.

 1. For the authentication type, select **System assigned identity**.

@@ -279,7 +282,7 @@ To use the skills for image verbalization:

 1. For the kind, select your model provider: **Azure OpenAI**, **AI Foundry Hub catalog models**, or **AI Vision vectorization**.

-1. Specify your Azure subscription, resource, and embedding model deployment.
+1. Select your Azure subscription, resource, and embedding model deployment (if applicable).

 1. For the authentication type, select **System assigned identity**.

@@ -305,7 +308,9 @@ To use the skills for multimodal embeddings:

 If Azure AI Vision is unavailable, make sure your search service and multi-service resource are both in a [region that supports the Azure AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval).

-1. Specify your Azure subscription, resource, and embedding model deployment.
+1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
+
+1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.

 1. Select the checkbox that acknowledges the billing effects of using this resource.

@@ -321,7 +326,7 @@ The next step is to send images extracted from your documents to Azure Storage.

 To store the extracted images:

-1. On the **Image output** page, specify your Azure subscription.
+1. On the **Image output** page, select your Azure subscription.

 1. Select the storage account and blob container you created to store the images.
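After the wizard finishes, one way to confirm the image output is to list the blobs written to that container. A minimal sketch with the `azure-storage-blob` package, again with hypothetical account and container names:

```python
# Minimal sketch: list the images the wizard's skillset wrote to the
# image output container (account and container names are hypothetical).
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # hypothetical
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("extracted-images")  # hypothetical

for blob in container.list_blobs():
    print(blob.name, blob.size)
```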
0 commit comments