articles/search/multimodal-search-overview.md (13 additions, 15 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Multimodal search concepts and guidance in Azure AI Search
+title: Multimodal search concepts and guidance
 titleSuffix: Azure AI Search
 description: Learn what multimodal search is, how Azure AI Search supports it for text + image content, and where to find detailed concepts, tutorials, and samples.
 ms.service: azure-ai-search
@@ -19,19 +19,19 @@ Building a robust multimodal pipeline typically involves several key steps. Thes
 
 Azure AI Search simplifies the construction of a multimodal pipeline through a guided experience in the Azure portal:
 
-1.[Azure portal multimodal functionality](search-get-started-portal-image-search.md): The step-by-step multimodal functionality in the "Import and vectorize data" wizard helps configure your data source, extraction and enrichment settings, and generate a multimodal index containing text, embedded image references, and vector embeddings.
-1.[Reference GitHub multimodal RAG application sample](https://aka.ms/azs-multimodal-sample-app-repo): A companion GitHub repository with sample code. The sample demonstrates how a [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md) application consumes a multimodal index and renders both textual citations and associated image snippets in the response. The repository also showcases the full process of data ingestion and indexing through code, providing developers with a programmatic alternative to the Azure portal wizard.
-
++ [Azure portal multimodal functionality](search-get-started-portal-image-search.md): The step-by-step multimodal functionality in the **Import and vectorize data** wizard helps configure your data source, extraction and enrichment settings, and generate a multimodal index containing text, embedded image references, and vector embeddings.
+
++ [Reference GitHub multimodal RAG application sample](https://aka.ms/azs-multimodal-sample-app-repo): A companion GitHub repository with sample code. The sample demonstrates how a [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md) application consumes a multimodal index and renders both textual citations and associated image snippets in the response. The repository also showcases the full process of data ingestion and indexing through code, providing developers with a programmatic alternative to the Azure portal wizard.
+
 ## Functionality enabling multimodality
 
-The functionality behind the "Import and vectorize data" wizard's multimodality option is powered by managed, configurable AI skills and the Azure Search knowledge store:
+The functionality behind the **Import and vectorize data** wizard's multimodality option is powered by managed, configurable AI skills and the Azure Search knowledge store:
 
 + [Document Intelligence layout skill](cognitive-search-skill-document-intelligence-layout.md) and [document extraction skill](cognitive-search-skill-document-extraction.md) obtain page text, inline images, and structural metadata. The Document Extraction skill doesn't support polygon extraction or page number extraction. Also, the range of supported file types may vary. To ensure optimal alignment with your specific use case, check each skill documentation for detailed information on compatibility and capabilities.
 + [Split skill](cognitive-search-skill-textsplit.md) chunks the extracted text for utilization in the remaining pipeline functionality (such as embedding skills).
 + [Gen AI prompt skill](cognitive-search-skill-genai-prompt.md) verbalizes images, producing concise natural-language descriptions suitable for text search and embedding using a Large Language Model (LLM).
 + Text/image (or multimodal) embedding skills create embeddings for text and images, enabling similarity and hybrid retrieval. You can call [Azure OpenAI](cognitive-search-skill-azure-openai-embedding.md), [AI Foundry](cognitive-search-aml-skill.md), or [AI Vision](cognitive-search-skill-vision-vectorize.md) embedding models natively.
-+ [Knowledge store](knowledge-store-concept-intro.md) stores extracted images that can be returned directly to client applications. When you use the 'Import and vectorize data' wizard with the multimodality option, an image's location is stored directly within the index, enabling convenient retrieval at a query time.
-
++ [Knowledge store](knowledge-store-concept-intro.md) stores extracted images that can be returned directly to client applications. When you use the **Import and vectorize data** wizard with the multimodal option, an image's location is stored directly within the index, enabling convenient retrieval at a query time.
 
 ## Selecting an ingestion skill
 
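To make the wizard's skill chain concrete, here's a minimal sketch of a skillset that wires the Split skill into an Azure OpenAI embedding skill through the Search REST API from Python. The service, skillset, deployment, and field names are illustrative assumptions, and the GenAI Prompt, extraction, and knowledge store pieces that the wizard also configures are omitted; treat it as a sketch of the shape, not the wizard's exact output.

```python
import os
import requests

# Illustrative names; substitute your own search service, skillset, and Azure OpenAI details.
service = os.environ["SEARCH_SERVICE_NAME"]            # assumption, e.g. "my-search-service"
endpoint = f"https://{service}.search.windows.net"
headers = {"Content-Type": "application/json", "api-key": os.environ["SEARCH_ADMIN_KEY"]}

skillset = {
    "name": "multimodal-demo-skillset",                # assumption
    "description": "Chunk extracted text, then embed each chunk (sketch only).",
    "skills": [
        {
            # Split skill: break document text into ~2,000-character pages.
            "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
            "textSplitMode": "pages",
            "maximumPageLength": 2000,
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "textItems", "targetName": "chunks"}],
        },
        {
            # Azure OpenAI embedding skill: vectorize each chunk.
            "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
            "context": "/document/chunks/*",
            "resourceUri": "https://my-openai-resource.openai.azure.com",  # assumption
            "deploymentId": "text-embedding-3-large",                      # assumption
            "modelName": "text-embedding-3-large",
            "inputs": [{"name": "text", "source": "/document/chunks/*"}],
            "outputs": [{"name": "embedding", "targetName": "chunk_vector"}],
        },
    ],
}

resp = requests.put(
    f"{endpoint}/skillsets/{skillset['name']}",
    params={"api-version": "2024-07-01"},
    headers=headers,
    json=skillset,
)
resp.raise_for_status()
print("Skillset created or updated:", resp.status_code)
```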
@@ -43,13 +43,15 @@ A multimodal pipeline begins by cracking each source document into chunks of tex
 | Data-extraction billing | Billed according to [Document Intelligence layout-model pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/). | Image extraction is billed as outlined in the [Azure AI Search pricing page](https://azure.microsoft.com/pricing/details/search/). |
 | Recommended scenarios | RAG pipelines and agent workflows that need precise page numbers, on-page highlights, or diagram overlays in client apps. | Rapid prototyping or production pipelines where the exact position or detailed layout information isn't required. |
 
-You can also call directly [Content Understanding](/azure/ai-services/content-understanding/concepts/retrieval-augmented-generation) for multimodality content extraction purposes using a [custom skill](cognitive-search-custom-skill-web-api.md) since it isn't supported natively yet in Azure AI Search.
+You can also call directly [Content Understanding](/azure/ai-services/content-understanding/concepts/retrieval-augmented-generation) for multimodal content extraction purposes using a [custom skill](cognitive-search-custom-skill-web-api.md) since it isn't supported natively yet in Azure AI Search.
 
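Because there's no built-in Content Understanding skill yet, that call goes through the custom Web API skill. A hedged sketch of the skill entry follows; the endpoint points at a wrapper service you'd host yourself, and the input and output names are assumptions rather than a documented Content Understanding contract.

```python
# Sketch of a custom Web API skill entry inside a skillset's "skills" array.
# The URI is an assumed wrapper (for example, an Azure Function) that calls
# Content Understanding; adjust inputs/outputs to whatever your wrapper returns.
content_understanding_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
    "name": "call-content-understanding",
    "description": "Send each document to a Content Understanding wrapper service.",
    "uri": "https://my-function-app.azurewebsites.net/api/analyze",  # assumption
    "httpMethod": "POST",
    "timeout": "PT90S",
    "batchSize": 1,
    "context": "/document",
    "inputs": [
        {"name": "fileUrl", "source": "/document/metadata_storage_path"},
    ],
    "outputs": [
        {"name": "extractedText", "targetName": "cu_text"},
        {"name": "extractedImages", "targetName": "cu_images"},
    ],
}
```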
 ## Choosing an embedding strategy: image verbalization or direct embeddings
+
 Retrieving knowledge from images can follow two complementary paths in Azure AI Search. Understanding the distinctions helps you align cost, latency, and answer quality with the needs of your application.
 
 ### Image verbalization followed by text embeddings
-With this method, the Gen AI prompt skill invokes an LLM during ingestion to create a concise natural-language description of each extracted image—for example "Five-step HR access workflow that begins with manager approval." The description is stored as text and embedded alongside the surrounding document text. Because the image is now expressed in language, Azure AI Search can:
+
+With this method, the GenAI Prompt skill invokes an LLM during ingestion to create a concise natural-language description of each extracted image—for example "Five-step HR access workflow that begins with manager approval." The description is stored as text and embedded alongside the surrounding document text. Because the image is now expressed in language, Azure AI Search can:
 
 - Interpret the relationships and entities shown in a diagram.
 - Supply ready-made captions that an LLM can cite verbatim in a response.
@@ -58,13 +60,14 @@ With this method, the Gen AI prompt skill invokes an LLM during ingestion to cre
 The added semantic depth entails an LLM call for every image and a marginal increase in indexing time.
 
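One practical payoff of verbalization is that an ordinary hybrid query can now surface diagrams by their descriptions. A minimal sketch with the `azure-search-documents` Python SDK follows; the index name, field names, and the query-time vectorizer it relies on are assumptions about how the index was built.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

# Assumed index and field names produced by the wizard or one of the tutorials.
client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",   # assumption
    index_name="multimodal-demo-index",                        # assumption
    credential=AzureKeyCredential("<query-api-key>"),
)

question = "HR access approval workflow"

# Hybrid query: keyword search over chunk text plus a vector query that the
# service vectorizes at query time (requires a vectorizer on the vector field).
results = client.search(
    search_text=question,
    vector_queries=[
        VectorizableTextQuery(text=question, k_nearest_neighbors=5, fields="chunk_vector")
    ],
    select=["chunk", "image_path"],
    top=5,
)

for doc in results:
    print(doc["chunk"][:120], "->", doc.get("image_path"))
```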
 ### Direct multimodal embeddings
+
 A second option is to pass the document extracted images and text to a multimodal embedding model that produces vector representations in the same vector space. Configuration is straightforward and no LLM is required at indexing time. Direct embeddings are well suited to visual similarity and “find-me-something-that-looks-like-this” scenarios.
 
 Because the representation is purely mathematical, it doesn't convey why two images are related, and it offers the LLM no ready context for citations or detailed explanations.
 
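On the indexing side, the direct path typically means pointing extracted images at a multimodal embedding skill. A sketch of an AI Vision vectorization skill entry is shown below; the image source path and target field name are assumptions, and the property names should be checked against the vision vectorize skill reference.

```python
# Sketch of an Azure AI Vision multimodal embedding skill entry for images.
# /document/normalized_images/* exists when the indexer's imageAction is set
# to generate normalized images; "image_vector" is an assumed target name.
vision_embedding_skill = {
    "@odata.type": "#Microsoft.Skills.Vision.VectorizeSkill",
    "name": "embed-images",
    "context": "/document/normalized_images/*",
    "modelVersion": "2023-04-15",
    "inputs": [
        {"name": "image", "source": "/document/normalized_images/*"},
    ],
    "outputs": [
        {"name": "vector", "targetName": "image_vector"},
    ],
}
```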
 ### Combining both approaches
-Many solutions need both encoding paths. Diagrams, flow charts, and other explanation-rich visuals are verbalized so that semantic information is available for RAG and AI agent grounding. Screenshots, product photos, or artwork are embedded directly for efficient similarity search. You can customize your Azure AI Search index and indexer skillset pipeline so it can store the two sets of vectors and retrieve them side by side.
 
+Many solutions need both encoding paths. Diagrams, flow charts, and other explanation-rich visuals are verbalized so that semantic information is available for RAG and AI agent grounding. Screenshots, product photos, or artwork are embedded directly for efficient similarity search. You can customize your Azure AI Search index and indexer skillset pipeline so it can store the two sets of vectors and retrieve them side by side.
 
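Storing the two sets of vectors side by side is mostly an index-schema decision: one vector field for text and verbalized-image embeddings, another for direct image embeddings, each tied to its own vector search profile. A sketch of the relevant field definitions follows; names, dimensions, and profiles are assumptions that depend on the embedding models you choose.

```python
# Sketch of index fields that hold both embedding types side by side.
# Dimensions depend on the models (for example, 3072 for text-embedding-3-large);
# verify against the models you deploy.
fields = [
    {"name": "id", "type": "Edm.String", "key": True, "filterable": True},
    {"name": "chunk", "type": "Edm.String", "searchable": True},
    {"name": "image_path", "type": "Edm.String", "retrievable": True},
    {
        "name": "chunk_vector",                 # text + verbalized-image embeddings
        "type": "Collection(Edm.Single)",
        "searchable": True,
        "dimensions": 3072,                     # assumption
        "vectorSearchProfile": "text-embedding-profile",
    },
    {
        "name": "image_vector",                 # direct multimodal image embeddings
        "type": "Collection(Edm.Single)",
        "searchable": True,
        "dimensions": 1024,                     # assumption
        "vectorSearchProfile": "image-embedding-profile",
    },
]
```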
 ## Tutorials and samples
 
@@ -78,8 +81,3 @@ To help you get started with multimodal search in Azure AI Search, here's a coll
 |[Tutorial: Image verbalization + layout skill](tutorial-multimodal-index-image-verbalization-skill.md)| Apply layout-aware chunking and diagram verbalization, capture location metadata, and store cropped images for precise citations and page highlights. |
 |[Tutorial: Multimodal embeddings + layout skill](tutorial-multimodal-index-embeddings-skill.md)| Combine layout-aware chunking with unified embeddings for hybrid semantic + keyword search that returns exact hit locations. |
 |[Sample app: Multimodal RAG GitHub repository](https://aka.ms/azs-multimodal-sample-app-repo)| An end-to-end RAG application code with multimodal capabilities that surfaces both text snippets and image annotations—ideal for jump-starting enterprise copilots. |

articles/search/search-agentic-retrieval-concept.md (3 additions, 3 deletions)
@@ -9,7 +9,7 @@ ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: concept-article
 ms.custom: references_regions
-ms.date: 05/08/2025
+ms.date: 05/15/2025
 ---
 
 # Agentic retrieval in Azure AI Search
@@ -18,11 +18,11 @@ ms.date: 05/08/2025
 
 In Azure AI Search, *agentic retrieval* is a new parallel query processing architecture that uses conversational language models to generate multiple subqueries for a single retrieval request, incorporating conversation history and semantic ranking to produce high-quality grounding data for custom chat and generative AI solutions that include agents.
 
-Programmatically, agentic retrieval is supported through a new Knowledge Agents object (also known as a search agent) in the 2025-05-01-preview data plane REST API and in Azure SDK prerelease packages that provide the feature. An agent's retrieval response is designed for downstream consumption by other agents and chat apps based on generative AI.
+Programmatically, agentic retrieval is supported through a new Knowledge Agents object (also known as a search agent) in the 2025-05-01-preview data plane REST API and in Azure SDK prerelease packages that provide the feature. An agent's retrieval response is designed for downstream consumption by other agents and chat apps.
 
 ## Why use agentic retrieval
 
-You should use agentic retrieval when you want to customize a chat experience with high quality inputs that include your proprietary data.
+You should use agentic retrieval when you want to send data to an agent or customize a chat experience with high quality inputs that include your proprietary data.
 
 The *agentic* aspect is a reasoning step in query planning processing that's performed by a supported large language model (LLM) that you provide. The LLM is tasked with designing multiple subqueries based on: user questions, chat history, and parameters on the request. The subqueries target your indexed documents (plain text and vectors) in Azure AI Search.
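As a rough sketch of that programmatic surface, a retrieval request sends conversation-style messages to a knowledge agent, which plans subqueries and returns grounding content. Everything below (agent name, endpoint, and message shape) is an assumption to verify against the 2025-05-01-preview REST reference.

```python
import os
import requests

endpoint = "https://my-search-service.search.windows.net"   # assumption
agent = "demo-knowledge-agent"                               # assumption: an existing knowledge agent
headers = {"Content-Type": "application/json", "api-key": os.environ["SEARCH_API_KEY"]}

# Conversation history plus the new question; the agent's LLM plans subqueries from this.
body = {
    "messages": [
        {"role": "assistant", "content": [{"type": "text", "text": "You answer questions using only indexed content."}]},
        {"role": "user", "content": [{"type": "text", "text": "What changed in the Q3 security baseline?"}]},
    ]
}

resp = requests.post(
    f"{endpoint}/agents/{agent}/retrieve",
    params={"api-version": "2025-05-01-preview"},
    headers=headers,
    json=body,
)
resp.raise_for_status()
print(resp.json())   # grounding payload intended for a downstream chat model or agent
```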

articles/search/search-document-level-access-overview.md (18 additions, 15 deletions)
@@ -15,26 +15,31 @@ Azure AI Search offers support for document-level access control, enabling organ
 
 Document-level access helps restrict content visibility to authorized users, based on predefined access rules. Azure AI Search supports this functionality through multiple approaches, providing flexibility for integration.
 
-## Overview of document-level access control features
+## Feature overview
 
-Azure AI Search provides document-level access control in the following ways:
-
-### Native support for integration with Microsoft Entra-based POSIX-style Access Control List (ACL) systems (preview)
+Azure AI Search provides two approaches for document-level access control: native support for permission inheritance (applies to Azure Data Lake Storage (ADLS) Gen2) and security trimming.
+
+### Security trimming via filters
+
+For scenarios where native ACL and RBAC integration isn't supported, Azure AI Search enables [security trimming using query filters](search-security-trimming-for-azure-search.md). By creating a field in the index to represent user or group identities, you can use the filters to include or exclude documents from query results based on those identities. This approach is useful for systems with custom access models or non-Microsoft Entra-based security frameworks.
+
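In practice, the trimming filter is an OData expression built from the caller's identities at query time. A short sketch with the `azure-search-documents` Python SDK follows, assuming a filterable `group_ids` collection field in the index; the field and group values are illustrative.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",   # assumption
    index_name="secured-docs",                                  # assumption
    credential=AzureKeyCredential("<query-api-key>"),
)

# Group IDs resolved for the signed-in user (for example, from Microsoft Graph).
user_groups = ["group-a-guid", "group-b-guid"]
groups_csv = ",".join(user_groups)

# search.in matches documents whose group_ids collection contains any of the caller's groups.
trimming_filter = f"group_ids/any(g: search.in(g, '{groups_csv}'))"

results = client.search(search_text="quarterly roadmap", filter=trimming_filter)
for doc in results:
    print(doc["id"])
```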
+### Native support for POSIX-like ACL permissions (preview)
+
+Through Microsoft Entra ID, the [ADLS Gen2 access control model](/azure/storage/blobs/data-lake-storage-access-control-model) supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). In Azure AI Search using the newest preview APIs, you can flow these permission through to a search index and queries.
+
+ADLS Gen2 provides ACLs in a format that works well for this approach, but you can use any data source that provides permission data in the same format.
 
-#### Retrieving permissions metadata during data ingestion process
+#### Retrieve permissions metadata during data ingestion process
+
 Azure AI Search enables you to push document permissions directly into the search index alongside the content, enabling consistent application of access rules at query time. This capability is achieved in two ways:
 
-- Use the [REST API](/rest/api/searchservice/operation-groups) or supported SDKs to [push documents and their associated permission metadata](search-index-access-control-lists-and-rbac-push-api.md)into the search index. This approach is ideal for systems with [Microsoft Entra](/Entra/fundamentals/what-is-Entra)-based [Access Control Lists (ACLs)](/azure/storage/blobs/data-lake-storage-access-control) and [Role-based access control (RBAC) roles](/azure/role-based-access-control/overview), such as [Azure Data Lake Storage (ADLS) Gen2](/azure/storage/blobs/data-lake-storage-introduction). By embedding ACLs and RBAC container metadata within the index, developers can reduce the need for custom security trimming logic during query execution.
+- Use the [REST API](/rest/api/searchservice/operation-groups) or supported SDKs to [push documents and their associated permission metadata](search-index-access-control-lists-and-rbac-push-api.md) into the search index. This approach is ideal for systems with [Microsoft Entra](/Entra/fundamentals/what-is-Entra)-based [Access Control Lists (ACLs)](/azure/storage/blobs/data-lake-storage-access-control) and [Role-based access control (RBAC) roles](/azure/role-based-access-control/overview), such as [Azure Data Lake Storage (ADLS) Gen2](/azure/storage/blobs/data-lake-storage-introduction). By embedding ACLs and RBAC container metadata within the index, developers can reduce the need for custom security trimming logic during query execution.
 
--For [built-in ADLS Gen2 indexers](search-indexer-access-control-lists-and-role-based-access.md), you can use the preview REST API with the permission filter options to flow existing ACLs and RBAC permissions to your search index. This indexer pulls ACLs and RBAC roles at container level during the data ingestion process, enabling a low/no-code workflow for managing document-level permissions.
+- For [built-in ADLS Gen2 indexers](search-indexer-access-control-lists-and-role-based-access.md), you can use the preview REST API with the permission filter options to flow existing ACLs and RBAC permissions to your search index. This indexer pulls ACLs and RBAC roles at container level during the data ingestion process, enabling a low/no-code workflow for managing document-level permissions.
 
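The push pattern in the first bullet above is an ordinary document upload in which each document carries its permission metadata as index fields. A simplified sketch with the Python SDK follows; it uses an illustrative `group_ids` field rather than the exact preview ACL and RBAC field names, which are covered in the linked push-API article.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",   # assumption
    index_name="secured-docs",                                  # assumption
    credential=AzureKeyCredential("<admin-api-key>"),
)

# Each document carries its own permission metadata; here, a simple group list.
documents = [
    {
        "id": "doc-001",
        "content": "FY25 compensation guidelines ...",
        "group_ids": ["hr-leads-guid"],          # illustrative ACL-style field
    },
    {
        "id": "doc-002",
        "content": "Public holiday calendar ...",
        "group_ids": ["all-employees-guid"],
    },
]

result = client.upload_documents(documents=documents)
print([(r.key, r.succeeded) for r in result])
```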
-#### Enforcing document-level permissions at query time
-With native [token-based querying](https://aka.ms/azs-query-preserving-permissions), Azure AI Search validates a user's [Microsoft Entra token](/Entra/identity/devices/concept-tokens-microsoft-Entra-id) to enforce ACLs and RBAC roles automatically. This functionality helps trim result sets to include only documents the user is authorized to access. You can achieve automatic trimming by attaching the user's Microsoft Entra token to your query request.
+#### Enforce document-level permissions at query time
 
-
-### Security trimming via filters
-
-For scenarios where native ACL and RBAC integration isn't supported, Azure AI Search enables [security trimming using query filters](search-security-trimming-for-azure-search.md). By creating a field in the index to represent user or group identities, you can use the filters to include or exclude documents from query results based on those identities. This approach is useful for systems with custom access models or non-Microsoft Entra-based security frameworks.
+With native [token-based querying](https://aka.ms/azs-query-preserving-permissions), Azure AI Search validates a user's [Microsoft Entra token](/Entra/identity/devices/concept-tokens-microsoft-Entra-id) to enforce ACLs and RBAC roles automatically. This functionality helps trim result sets to include only documents the user is authorized to access. You can achieve automatic trimming by attaching the user's Microsoft Entra token to your query request. For more information, see [Query-Time ACL and RBAC enforcement in Azure AI Search](search-query-access-control-rbac-enforcement.md).
 
 ## Benefits of document-level access control
 
@@ -54,8 +59,6 @@ To help you dive deeper into document-level access control in Azure AI Search, h
 |**Index ADLS Gen2 permissions metadata using built-in indexers**|[Index permissions using ADLS Gen2 indexer](search-indexer-access-control-lists-and-role-based-access.md)|
 |**Query using Microsoft Entra token-based permissions**|[Query using Microsoft Entra token-based permissions](https://aka.ms/azs-query-preserving-permissions)|
 |**Security trimming via filters**|[Security trimming via filters](search-security-trimming-for-azure-search.md)|