---
title: Multimodal search concepts and guidance in Azure AI Search
titleSuffix: Azure AI Search
description: Learn what multimodal search is, how Azure AI Search supports it for text + image content, and where to find detailed concepts, tutorials, and samples.
ms.service: azure-ai-search
ms.topic: conceptual
ms.date: 05/11/2025
author: gmndrg
ms.author: gimondra
---

# Multimodal search in Azure AI Search

Multimodal search refers to the ability to ingest, understand, and retrieve content across multiple data types, including text, images, and other modalities such as video and audio. In Azure AI Search, multimodal search natively supports the ingestion of documents containing text and images, and the retrieval of their content, enabling users to perform searches that combine these modalities. In practice, this capability means an application using multimodal search can answer a question such as, "What is the process to have an HR form approved?" even when the only authoritative description of the workflow lives inside an embedded diagram of a PDF file.

Diagrams, scanned forms, screenshots, and infographics often contain the decisive details that make or break an answer. Multimodal search helps close the gap by integrating visual content into the same retrieval pipeline as text. This approach reduces the likelihood that your AI agent or RAG application might overlook important images and enables your users to trace every provided answer back to its original source.

Building a robust multimodal pipeline typically involves several key steps. These steps include extracting inline images and page text, describing images in natural language, embedding both text and images into a shared vector space, and storing the images for later use as annotations. Multimodal search also requires preserving the order of information as it appears in the document and executing [hybrid queries](hybrid-search-overview.md) that combine [full text search](search-lucene-query-architecture.md) with [vector search](vector-search-overview.md) and [semantic ranking](semantic-search-overview.md).
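
At query time, those pieces come together in a single hybrid request. The following minimal sketch shows one way to issue such a query with the `azure-search-documents` Python package, assuming a recent package version and an index with integrated vectorization configured; the index name, field names, and semantic configuration name are illustrative placeholders, not values the service creates for you.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="multimodal-index",  # hypothetical index created by the wizard
    credential=AzureKeyCredential("<query-api-key>"),
)

results = search_client.search(
    # Keyword (BM25) portion of the hybrid query.
    search_text="What is the process to have an HR form approved?",
    vector_queries=[
        # The service vectorizes the query text with the index's configured vectorizer.
        VectorizableTextQuery(
            text="HR form approval process",
            k_nearest_neighbors=50,
            fields="content_vector",  # hypothetical vector field name
        )
    ],
    query_type="semantic",                  # apply semantic ranking to the fused results
    semantic_configuration_name="default",  # assumed semantic configuration name
    select=["chunk", "image_path"],
    top=5,
)

for doc in results:
    print(doc["@search.score"], doc["chunk"][:120])
```

Keyword scoring, vector similarity, and the semantic reranker each contribute to the final ranking of the results that come back.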

Azure AI Search simplifies the construction of a multimodal pipeline through a guided experience in the Azure portal:

1. [Azure portal multimodal functionality](search-get-started-portal-image-search.md): The step-by-step multimodal functionality in the "Import and vectorize data" wizard helps you configure your data source and extraction and enrichment settings, and generates a multimodal index containing text, embedded image references, and vector embeddings.
1. [Reference GitHub multimodal RAG application sample](https://aka.ms/azs-multimodal-sample-app-repo): A companion GitHub repository with sample code. The sample demonstrates how a [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md) application consumes a multimodal index and renders both textual citations and associated image snippets in the response. The repository also showcases the full process of data ingestion and indexing through code, providing developers with a programmatic alternative to the Azure portal wizard.

## Functionality enabling multimodality

The functionality behind the "Import and vectorize data" wizard's multimodality option is powered by managed, configurable AI skills and the Azure AI Search knowledge store (a trimmed skillset sketch follows this list):

+ [Document Intelligence layout skill](cognitive-search-skill-document-intelligence-layout.md) and [document extraction skill](cognitive-search-skill-document-extraction.md) obtain page text, inline images, and structural metadata. The document extraction skill doesn't support polygon extraction or page number extraction, and the range of supported file types may vary between the two skills. To ensure optimal alignment with your specific use case, check each skill's documentation for detailed information on compatibility and capabilities.
+ [Split skill](cognitive-search-skill-textsplit.md) chunks the extracted text for use by the rest of the pipeline, such as the embedding skills.
+ [GenAI prompt skill](cognitive-search-skill-genai-prompt.md) verbalizes images by using a large language model (LLM), producing concise natural-language descriptions suitable for text search and embedding.
+ Text/image (or multimodal) embedding skills create embeddings for text and images, enabling similarity and hybrid retrieval. You can call [Azure OpenAI](cognitive-search-skill-azure-openai-embedding.md), [AI Foundry](cognitive-search-aml-skill.md), or [AI Vision](cognitive-search-skill-vision-vectorize.md) embedding models natively.
+ [Knowledge store](knowledge-store-concept-intro.md) stores extracted images that can be returned directly to client applications. When you use the "Import and vectorize data" wizard with the multimodality option, an image's location is stored directly in the index, enabling convenient retrieval at query time.
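
The wizard assembles these skills into a skillset for you. As a rough illustration of the shape of that definition, the following sketch creates a skillset with just the split and Azure OpenAI embedding steps through the REST API; the service endpoint, keys, skillset name, Azure OpenAI resource, deployment, and field paths are placeholder assumptions, and the wizard-generated skillset includes more skills than shown here.

```python
import requests

SEARCH_ENDPOINT = "https://<your-service>.search.windows.net"  # placeholder
ADMIN_KEY = "<admin-api-key>"                                  # placeholder

skillset = {
    "name": "multimodal-skillset",
    "description": "Chunk extracted text and embed it with Azure OpenAI",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
            "context": "/document",
            "textSplitMode": "pages",
            "maximumPageLength": 2000,
            "pageOverlapLength": 200,
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "textItems", "targetName": "pages"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
            "context": "/document/pages/*",
            # Uses the search service's managed identity unless an apiKey is supplied.
            "resourceUri": "https://<your-openai-resource>.openai.azure.com",
            "deploymentId": "text-embedding-3-large",
            "modelName": "text-embedding-3-large",
            "inputs": [{"name": "text", "source": "/document/pages/*"}],
            "outputs": [{"name": "embedding", "targetName": "text_vector"}],
        },
    ],
}

response = requests.put(
    f"{SEARCH_ENDPOINT}/skillsets/multimodal-skillset",
    params={"api-version": "2024-07-01"},
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
    json=skillset,
)
response.raise_for_status()
```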

## Selecting an ingestion skill

A multimodal pipeline begins by cracking each source document into chunks of text, inline images, and associated metadata. Azure AI Search provides two built-in skills for this step. Both enable text and image extraction, but they differ in the layout detail and metadata they return, and in how billing works. The following table compares them, and a configuration sketch for the document extraction skill appears after the table.

| Characteristic | Document Intelligence layout skill | Document extraction skill |
|----------------|------------------------------------|---------------------------|
| Location metadata extraction (page, bounding polygon) | Yes | No |
| Data-extraction billing | Billed according to [Document Intelligence layout-model pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/). | Image extraction is billed as outlined in the [Azure AI Search pricing page](https://azure.microsoft.com/pricing/details/search/). |
| Recommended scenarios | RAG pipelines and agent workflows that need precise page numbers, on-page highlights, or diagram overlays in client apps. | Rapid prototyping or production pipelines where the exact position or detailed layout information isn't required. |
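
For reference, here's a minimal sketch of a document extraction skill entry configured to emit normalized images along with text, expressed as the JSON you place in a skillset's `skills` array. The image dimensions and target names are illustrative assumptions, and the indexer that runs this skill needs `allowSkillsetToReadFileData` enabled so that `/document/file_data` is populated.

```python
# Hedged sketch of a document extraction skill entry for a skillset definition.
document_extraction_skill = {
    "@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill",
    "context": "/document",
    "parsingMode": "default",
    "dataToExtract": "contentAndMetadata",
    "configuration": {
        # Emit inline images as normalized images alongside the extracted text.
        "imageAction": "generateNormalizedImages",
        "normalizedImageMaxWidth": 2000,   # illustrative maximum dimensions
        "normalizedImageMaxHeight": 2000,
    },
    "inputs": [
        # file_data is available when the indexer sets allowSkillsetToReadFileData to true.
        {"name": "file_data", "source": "/document/file_data"}
    ],
    "outputs": [
        {"name": "content", "targetName": "extracted_content"},
        {"name": "normalized_images", "targetName": "normalized_images"},
    ],
}
```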

Support for [Content Understanding](/azure/ai-services/content-understanding/concepts/retrieval-augmented-generation) isn't yet native to Azure AI Search, but you can call it directly for multimodal content extraction through a [custom skill](cognitive-search-custom-skill-web-api.md).
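
A custom skill is a Web API skill that posts enriched documents to an endpoint you host. The sketch below assumes a hypothetical Azure Function (`https://<your-function-app>.azurewebsites.net/api/extract`) that wraps Content Understanding and returns extracted text and images; the wrapper service and its output names are assumptions for illustration, not part of the product.

```python
# Hedged sketch of a custom Web API skill entry that delegates extraction to a
# hypothetical wrapper service around Content Understanding.
content_understanding_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
    "description": "Calls a wrapper service around Content Understanding",
    "context": "/document",
    "uri": "https://<your-function-app>.azurewebsites.net/api/extract",  # hypothetical endpoint
    "httpMethod": "POST",
    "timeout": "PT230S",
    "batchSize": 1,
    "inputs": [
        {"name": "file_data", "source": "/document/file_data"}
    ],
    "outputs": [
        # Output names depend entirely on what your wrapper returns.
        {"name": "text", "targetName": "extracted_text"},
        {"name": "images", "targetName": "extracted_images"},
    ],
}
```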

## Choosing an embedding strategy: image verbalization or direct embeddings

Retrieving knowledge from images can follow two complementary paths in Azure AI Search. Understanding the distinctions helps you align cost, latency, and answer quality with the needs of your application.

### Image verbalization followed by text embeddings

With this method, the GenAI prompt skill invokes an LLM during ingestion to create a concise natural-language description of each extracted image, for example, "Five-step HR access workflow that begins with manager approval." The description is stored as text and embedded alongside the surrounding document text. Because the image is now expressed in language, Azure AI Search can:

- Interpret the relationships and entities shown in a diagram.
- Supply ready-made captions that an LLM can cite verbatim in a response.
- Return relevant snippets for RAG applications and AI agent scenarios with grounded data.

The added semantic depth entails an LLM call for every image and a marginal increase in indexing time.
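
Conceptually, verbalization boils down to one multimodal chat completion per image. The sketch below illustrates the idea with a direct Azure OpenAI call outside the managed pipeline; in the wizard, the GenAI prompt skill performs this step for you. The endpoint, deployment name, file name, and prompt are placeholder assumptions.

```python
import base64

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # placeholder
    api_key="<azure-openai-key>",                                      # placeholder
    api_version="2024-06-01",
)

# Read an extracted image; the file name is purely illustrative.
with open("hr-workflow-diagram.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # your multimodal chat deployment name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this diagram in two or three sentences for a search index."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)

# The returned description is what gets stored and embedded as text.
print(response.choices[0].message.content)
```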

### Direct vision–text embeddings

A second option is to pass the extracted images and text from the document to a multimodal embedding model that produces vector representations in the same vector space. Configuration is straightforward, and no LLM is required at indexing time. Direct embeddings are well suited to visual similarity and "find-me-something-that-looks-like-this" scenarios.

Because the representation is purely mathematical, it doesn't convey why two images are related, and it offers the LLM no ready context for citations or detailed explanations.
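
To make the shared vector space concrete, the following sketch calls the Azure AI Vision multimodal embeddings REST API to embed an image and a text query into comparable vectors. The endpoint, API and model versions, and file name are assumptions to verify against the current AI Vision documentation; in the managed pipeline, the AI Vision vectorize skill handles this step.

```python
import requests

VISION_ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
VISION_KEY = "<vision-key>"                                                     # placeholder
PARAMS = {"api-version": "2024-02-01", "model-version": "2023-04-15"}           # verify current versions

# Embed an image into the shared vector space.
with open("product-photo.png", "rb") as f:
    image_response = requests.post(
        f"{VISION_ENDPOINT}/computervision/retrieval:vectorizeImage",
        params=PARAMS,
        headers={"Ocp-Apim-Subscription-Key": VISION_KEY, "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
image_vector = image_response.json()["vector"]

# Embed a text query into the same space so it can be compared to image vectors.
text_response = requests.post(
    f"{VISION_ENDPOINT}/computervision/retrieval:vectorizeText",
    params=PARAMS,
    headers={"Ocp-Apim-Subscription-Key": VISION_KEY, "Content-Type": "application/json"},
    json={"text": "five-step HR access workflow diagram"},
)
text_vector = text_response.json()["vector"]
```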

### Combining both approaches

Many solutions need both encoding paths. Diagrams, flow charts, and other explanation-rich visuals are verbalized so that semantic information is available for RAG and AI agent grounding. Screenshots, product photos, or artwork are embedded directly for efficient similarity search. You can customize your Azure AI Search index and indexer skillset pipeline so that it stores the two sets of vectors and retrieves them side by side, as sketched below.
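
One way to hold both sets of vectors is to define two vector fields in the index: one for embeddings of text and verbalized images, and one for direct image embeddings. The field names, dimensions, and profile names in this sketch are illustrative assumptions made with the `azure-search-documents` Python package; match them to the embedding models and vector search profiles you actually configure.

```python
from azure.search.documents.indexes.models import (
    SearchField,
    SearchFieldDataType,
    SearchableField,
    SimpleField,
)

fields = [
    SimpleField(name="id", type=SearchFieldDataType.String, key=True),
    SearchableField(name="chunk", type=SearchFieldDataType.String),   # page text or image verbalization
    SimpleField(name="image_path", type=SearchFieldDataType.String),  # knowledge store image location
    SearchField(
        name="content_vector",  # embeddings of text and verbalized images
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=1536,             # depends on the text embedding model
        vector_search_profile_name="text-profile",
    ),
    SearchField(
        name="image_vector",  # direct multimodal image embeddings
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=1024,             # depends on the multimodal embedding model
        vector_search_profile_name="image-profile",
    ),
]
```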

## Tutorials and samples

To help you get started with multimodal search in Azure AI Search, here's a collection of tutorials and samples that demonstrate how to create and optimize multimodal indexes using Azure AI Search capabilities.

| Tutorial / sample | Description |
|-------------------|-------------|
| [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md) | Create and test a multimodal index in the Azure portal using the wizard and Search Explorer. |
| [Tutorial: Image verbalization + document extraction](tutorial-multimodal-indexing-with-image-verbalization-and-doc-extraction.md) | Extract text and images, verbalize diagrams, and embed the resulting descriptions and text into a searchable index. |
| [Tutorial: Multimodal embeddings + document extraction](tutorial-multimodal-indexing-with-embedding-and-doc-extraction.md) | Use a vision-text model to embed both text and images directly, enabling visual-similarity search over scanned PDFs. |
| [Tutorial: Image verbalization + layout skill](tutorial-multimodal-index-image-verbalization-skill.md) | Apply layout-aware chunking and diagram verbalization, capture location metadata, and store cropped images for precise citations and page highlights. |
| [Tutorial: Multimodal embeddings + layout skill](tutorial-multimodal-index-embeddings-skill.md) | Combine layout-aware chunking with unified embeddings for hybrid semantic and keyword search that returns exact hit locations. |
| [Sample app: Multimodal RAG GitHub repository](https://aka.ms/azs-multimodal-sample-app-repo) | End-to-end RAG application code with multimodal capabilities that surfaces both text snippets and image annotations, ideal for jump-starting enterprise copilots. |