---
title: Multimodal search concepts and guidance in Azure AI Search
titleSuffix: Azure AI Search
description: Learn what multimodal search is, how Azure AI Search supports it for text + image content, and where to find detailed concepts, tutorials, and samples.
ms.service: azure-ai-search
ms.topic: conceptual
ms.date: 05/10/2025
author: gmndrg
ms.author: gimondra
---
# Multimodal search in Azure AI Search
Multimodal search is the ability to ingest, understand, and retrieve documents that contain both text and images, so you can run queries that span modalities, such as using a text query to find information that appears only in a complex image. In practice, an application that uses multimodal search can answer a question such as "What is the process to have an HR form approved?" even when the only authoritative description of the workflow lives inside a diagram embedded in a PDF file.

Diagrams, scanned forms, screenshots, and infographics often contain the decisive details that make or break an answer. Multimodal search helps close that gap by bringing visual content into the same retrieval pipeline, so your AI agent doesn't overlook a critical image and your users can trace every answer back to its original source.

Building a robust multimodal pipeline typically involves several moving parts: extracting inline images and page text, describing images in natural language, embedding both modalities into a common vector space, storing extracted images for later display, preserving the order of the information as it appears in the document, and finally executing hybrid queries that combine keyword and vector search with semantic ranking.

Azure AI Search simplifies the construction of a multimodal pipeline through a guided experience in the Azure portal:

1. [Azure portal multimodal functionality](search-get-started-portal-image-search.md): The step-by-step multimodal functionality in the "Import and vectorize data" wizard accepts document inputs, applies data extraction and enrichment settings, and produces a fully operational index that contains page text, references to inline embedded images, and vector embeddings.
2. [Reference GitHub multimodal RAG sample application](https://aka.ms/azs-multimodal-sample-app-repo): A companion repository on GitHub with end-to-end sample code that demonstrates how a [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md) application consumes the multimodal index and renders both textual citations and associated image snippets in the response. The repository also provides an end-to-end, code-ready app deployment if you prefer a code-only approach to data ingestion and processing.
## Functionality enabling multimodality
The functionality behind the multimodality option in the "Import and vectorize data" wizard is powered by managed, configurable AI skills and the Azure AI Search knowledge store (a minimal skillset sketch follows this list):

+ [Document Intelligence layout skill](cognitive-search-skill-document-intelligence-layout.md) and [Document extraction skill](cognitive-search-skill-document-extraction.md) obtain page text, inline images, and structural metadata. The Document extraction skill doesn't support polygon extraction or page number extraction, and the range of supported file types may vary. To ensure optimal alignment with your specific use case, check each skill's documentation for detailed information on compatibility and capabilities.
+ [Split skill](cognitive-search-skill-textsplit.md) chunks the extracted text for use by downstream functionality in the pipeline (such as the embedding skills).
+ [Gen AI prompt skill](cognitive-search-skill-genai-prompt.md) uses a large language model (LLM) to verbalize images, producing concise natural-language descriptions suitable for text search and embedding.
+ Text/image (or multimodal) embedding skills create embeddings for text and images, enabling similarity and hybrid retrieval. You can call [Azure OpenAI](cognitive-search-skill-azure-openai-embedding.md), [AI Foundry](cognitive-search-aml-skill.md), or [AI Vision](cognitive-search-skill-vision-vectorize.md) embedding models natively.
+ [Knowledge store](knowledge-store-concept-intro.md) stores extracted images that can be returned directly to client applications. When you use the "Import and vectorize data" wizard with the multimodality option, an image's location is stored directly in the index, enabling convenient retrieval at query time.
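
The wizard assembles these skills into a skillset for you, but seeing a stripped-down definition can make the data flow clearer. The following is a minimal sketch, not the wizard's generated output: it wires a document extraction skill, a split skill, and an Azure OpenAI text embedding skill together through the REST API, and it omits image verbalization, image embeddings, and the knowledge store projection. The service URL, keys, deployment name, API version, and target names are placeholder assumptions.

```python
import requests

SEARCH_ENDPOINT = "https://<your-service>.search.windows.net"  # assumption: your service URL
API_VERSION = "2024-07-01"                                     # assumption: a GA API version
HEADERS = {"Content-Type": "application/json", "api-key": "<admin-key>"}

skillset = {
    "name": "multimodal-sketch-skillset",
    "description": "Sketch: extract text and images, chunk the text, embed the chunks.",
    "skills": [
        {
            # Cracks the source document into page text and normalized images.
            "@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill",
            "context": "/document",
            "parsingMode": "default",
            "dataToExtract": "contentAndMetadata",
            "configuration": {"imageAction": "generateNormalizedImages"},
            "inputs": [{"name": "file_data", "source": "/document/file_data"}],
            "outputs": [
                {"name": "content", "targetName": "extracted_content"},
                {"name": "normalized_images", "targetName": "normalized_images"},
            ],
        },
        {
            # Chunks the extracted text so each piece fits the embedding model's input.
            "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
            "context": "/document",
            "textSplitMode": "pages",
            "maximumPageLength": 2000,
            "pageOverlapLength": 200,
            "inputs": [{"name": "text", "source": "/document/extracted_content"}],
            "outputs": [{"name": "textItems", "targetName": "chunks"}],
        },
        {
            # Embeds each text chunk; a multimodal or vision embedding skill could be
            # added alongside this one to vectorize the normalized images as well.
            "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
            "context": "/document/chunks/*",
            "resourceUri": "https://<your-openai-resource>.openai.azure.com",  # assumption
            "deploymentId": "text-embedding-3-large",                          # assumption
            "modelName": "text-embedding-3-large",
            "apiKey": "<azure-openai-key>",                                    # assumption
            "inputs": [{"name": "text", "source": "/document/chunks/*"}],
            "outputs": [{"name": "embedding", "targetName": "text_vector"}],
        },
    ],
}

resp = requests.put(
    f"{SEARCH_ENDPOINT}/skillsets/{skillset['name']}",
    params={"api-version": API_VERSION},
    headers=HEADERS,
    json=skillset,
)
resp.raise_for_status()
print("Skillset created or updated:", resp.status_code)
```

An indexer then maps the enriched outputs (text chunks, vectors, and image references) into index fields; the wizard generates those field mappings automatically.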
## Selecting an ingestion skill
A multimodal pipeline begins by cracking each source document into chunks of text, inline images, and associated metadata. Azure AI Search provides two built-in skills for this step. Both enable text and image extraction, but they differ in the layout detail and metadata they return, and in how their billing works.

| Characteristic | Document Intelligence layout skill | Document extraction skill |
|----------------|------------------------------------|---------------------------|
| Location metadata extraction (page, bounding polygon) | Yes | No |
| Data-extraction billing | Billed according to [Document Intelligence layout-model pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/). | Image extraction is billed as outlined in the [Azure AI Search pricing page](https://azure.microsoft.com/pricing/details/search/). |
| Recommended scenarios | RAG pipelines and agent workflows that need precise page numbers, on-page highlights, or diagram overlays in client apps. | Rapid prototyping or production pipelines where exact position or detailed layout information isn't required. |

You can also call [Content Understanding](/azure/ai-services/content-understanding/concepts/retrieval-augmented-generation) directly for multimodal content extraction by using a [custom skill](cognitive-search-custom-skill-web-api.md), because Content Understanding isn't yet supported natively in Azure AI Search.
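
If you take the custom-skill route, the skill definition itself is a standard Web API skill that points at an endpoint you host, such as an Azure Function that wraps Content Understanding. The sketch below, in the same dictionary style as the earlier skillset sketch, shows only the skill entry; the wrapper endpoint, its URL, and the output field name are hypothetical.

```python
# Hedged sketch: a custom Web API skill that forwards each extracted image to a
# hypothetical, self-hosted wrapper around Content Understanding. Add it to the
# "skills" array of a skillset like the one shown earlier.
content_understanding_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
    "description": "Calls a self-hosted wrapper around Content Understanding.",
    "context": "/document/normalized_images/*",
    "uri": "https://<your-function-app>.azurewebsites.net/api/analyze-image",  # hypothetical endpoint
    "httpMethod": "POST",
    "timeout": "PT1M",
    "batchSize": 4,
    "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
    # The output name must match a key in the JSON payload your wrapper returns.
    "outputs": [{"name": "analysis", "targetName": "image_analysis"}],
}
```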
## Choosing an embedding strategy: image verbalization or direct embeddings

Retrieving knowledge from images can follow two complementary paths in Azure AI Search. Understanding the distinctions helps you align cost, latency, and answer quality with the needs of your application.

### Image verbalization followed by text embeddings

With this method, the Gen AI prompt skill invokes an LLM during ingestion to create a concise natural-language description of each extracted image, for example "Five-step HR access workflow that begins with manager approval." The description is stored as text and embedded alongside the surrounding document text. Because the image is now expressed in language, Azure AI Search can:
- Interpret the relationships and entities shown in a diagram.
- Supply ready-made captions that an LLM can cite verbatim in a response.
- Return relevant snippets for RAG applications and AI agent scenarios with grounded data.

The added semantic depth entails an LLM call for every image and a marginal increase in indexing time.
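
To make the verbalization step concrete, here's a minimal, standalone sketch of what the skill does conceptually: it sends one image to a vision-capable chat model through the Azure OpenAI Python SDK and asks for a description suitable for indexing. This calls the model directly rather than through the managed Gen AI prompt skill, and the endpoint, API version, deployment name, and file name are assumptions.

```python
import base64

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # assumption
    api_key="<api-key>",
    api_version="2024-06-01",  # assumption
)

# Read a diagram extracted from a document and encode it for the request.
with open("hr-approval-diagram.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: name of a vision-capable chat deployment
    max_tokens=150,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this diagram in two sentences for a search index. "
                            "Name the entities and the order of the steps.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

# The returned description is what gets stored and embedded as ordinary text.
print(response.choices[0].message.content)
```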
### Direct vision–text embeddings

A second option is to pass the document's extracted images and text to a multimodal embedding model that produces vector representations in the same vector space. Configuration is straightforward, and no LLM is required at indexing time. Direct embeddings are well suited to visual similarity and "find-me-something-that-looks-like-this" scenarios.

Because the representation is purely mathematical, it doesn't convey why two images are related, and it offers the LLM no ready context for citations or detailed explanations.
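
For intuition about what sharing a vector space means, here's a hedged sketch that calls the Azure AI Vision multimodal embeddings REST API directly (outside any skillset) to embed one image and one text query, then scores them with cosine similarity. The endpoint, key, API version, and model version are assumptions to check against the current AI Vision documentation.

```python
import math

import requests

VISION_ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # assumption
PARAMS = {"api-version": "2024-02-01", "model-version": "2023-04-15"}           # assumptions
KEY_HEADER = {"Ocp-Apim-Subscription-Key": "<vision-key>"}

# Embed an image (binary request body).
with open("product-photo.png", "rb") as f:
    img_resp = requests.post(
        f"{VISION_ENDPOINT}/computervision/retrieval:vectorizeImage",
        params=PARAMS,
        headers={**KEY_HEADER, "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
img_resp.raise_for_status()
img_vector = img_resp.json()["vector"]

# Embed a text query into the same vector space.
txt_resp = requests.post(
    f"{VISION_ENDPOINT}/computervision/retrieval:vectorizeText",
    params=PARAMS,
    headers={**KEY_HEADER, "Content-Type": "application/json"},
    json={"text": "red running shoes on a white background"},
)
txt_resp.raise_for_status()
txt_vector = txt_resp.json()["vector"]

# Cosine similarity: higher means the text query is a better match for the image.
dot = sum(a * b for a, b in zip(img_vector, txt_vector))
norm = math.sqrt(sum(a * a for a in img_vector)) * math.sqrt(sum(b * b for b in txt_vector))
print("cosine similarity:", dot / norm)
```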
### Combining both approaches

Many solutions need both encoding paths. Diagrams, flow charts, and other explanation-rich visuals are verbalized so that semantic information is available for RAG and AI agent grounding. Screenshots, product photos, or artwork are embedded directly for efficient similarity search. You can customize your Azure AI Search index and indexer skillset pipeline so that it stores the two sets of vectors and retrieves them side by side.
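
As an illustration of querying the two sets of vectors side by side, here's a minimal hybrid-query sketch that uses the azure-search-documents Python SDK. The index name, field names (text_vector, image_vector, title, image_path), and the presence of vectorizers on both vector fields (which the wizard configures) are assumptions; adjust them to match your own index.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

# Assumption: an index produced by the wizard, with vectorizers on both vector fields.
client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="multimodal-index",
    credential=AzureKeyCredential("<query-key>"),
)

question = "What is the process to have an HR form approved?"

results = client.search(
    search_text=question,  # keyword component of the hybrid query
    vector_queries=[
        # Matches embeddings of page text and image verbalizations.
        VectorizableTextQuery(text=question, k_nearest_neighbors=5, fields="text_vector"),
        # Matches direct image embeddings stored in a separate field.
        VectorizableTextQuery(text=question, k_nearest_neighbors=5, fields="image_vector"),
    ],
    top=5,
)

for doc in results:
    # Assumption: the index stores a title and the knowledge-store image path.
    print(doc["title"], doc.get("image_path"))
```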
## Tutorials and samples

To help you get started with multimodal search in Azure AI Search, here's a collection of tutorials and samples that demonstrate how to create and optimize multimodal indexes using Azure capabilities.

| Tutorial or sample | Description |
|--------------------|-------------|
| [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md) | Create and test a multimodal index in the Azure portal using the wizard and Search Explorer. |
| [Tutorial: Image verbalization + document extraction](tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md) | Extract text and images, verbalize diagrams, and embed the resulting descriptions and text into a searchable index. |
| [Tutorial: Multimodal embeddings + document extraction](tutorial-multimodal-indexing-with-embedding-and-doc-extraction.md) | Use a vision-text model to embed both text and images directly, enabling visual-similarity search over scanned PDFs. |
| [Tutorial: Image verbalization + layout skill](tutorial-multimodal-index-image-verbalization-skill.md) | Apply layout-aware chunking and diagram verbalization, capture location metadata, and store cropped images for precise citations and page highlights. |
| [Tutorial: Multimodal embeddings + layout skill](tutorial-multimodal-index-embeddings-skill.md) | Combine layout-aware chunking with unified embeddings for hybrid semantic and keyword search that returns exact hit locations. |
| [Sample app: Multimodal RAG GitHub repository](https://aka.ms/azs-multimodal-sample-app-repo) | End-to-end RAG application code with multimodal capabilities that surfaces both text snippets and image annotations, ideal for jump-starting enterprise copilots. |