In Azure AI Search, *agentic retrieval* is a new parallel query processing architecture that uses conversational language models to generate multiple subqueries for a single retrieval request, incorporating conversation history and semantic ranking to produce high-quality grounding data for custom chat and generative AI solutions that include agents.
Programmatically, agentic retrieval is supported through a new Knowledge Agents object (also known as a search agent) in the 2025-05-01-preview data plane REST API and in Azure SDK prerelease packages that provide the feature. An agent's retrieval response is designed for downstream consumption by other agents and chat apps based on generative AI.
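For orientation, here's a minimal Python sketch of building the request for creating a knowledge agent with the preview REST API. The endpoint path, property names, and values are assumptions modeled on the 2025-05-01-preview surface, not a verified contract; check the REST reference before using them.

```python
import json

# Sketch only: URL shape and property names are assumptions based on the
# 2025-05-01-preview data plane REST API.
service = "my-search-service"        # hypothetical service name
agent_name = "my-knowledge-agent"    # hypothetical agent name

url = (
    f"https://{service}.search.windows.net/agents/{agent_name}"
    "?api-version=2025-05-01-preview"
)

payload = {
    "name": agent_name,
    "targetIndexes": [
        {"indexName": "my-index", "defaultRerankerThreshold": 2.5}
    ],
    "models": [
        {
            "kind": "azureOpenAI",
            "azureOpenAIParameters": {
                "resourceUri": "https://my-aoai.openai.azure.com",
                "deploymentId": "gpt-4o-mini",   # gpt-4o or gpt-4.1 series
                "modelName": "gpt-4o-mini",
            },
        }
    ],
}

body = json.dumps(payload)  # PUT this body with an api-key or bearer token
print(url)
```

Creating the agent is a one-time setup step; retrieval requests then target the agent by name.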
## Why use agentic retrieval
Agentic retrieval invokes the entire query processing pipeline multiple times for each retrieval request.
## Agentic retrieval architecture
Agentic retrieval is designed for a conversational search experience that includes an LLM. An important part of agentic retrieval is how the LLM breaks down an initial query into subqueries, which are more effective at locating the best matches in your index.
:::image type="content" source="media/agentic-retrieval/agentic-retrieval-architecture.png" alt-text="Diagram of agentic retrieval workflow using an example query." lightbox="media/agentic-retrieval/agentic-retrieval-architecture.png" :::
Agentic retrieval has these components:
| Component | Resource | Usage |
|-----------|----------|-------|
| LLM (gpt-4o and gpt-4.1 series) | Azure OpenAI | The LLM has two functions. First, it formulates subqueries for the query plan and sends them back to the search agent. Second, after the query executes, the LLM receives grounding data from the query response and uses it for answer formulation. |
| Search index | Azure AI Search | Contains plain text and vector content, a semantic configuration, and other elements as needed. |
| Search agent | Azure AI Search | Connects to your LLM, providing parameters and inputs to build a query plan. |
| Retrieval engine | Azure AI Search | Executes on the LLM-generated query plan and other parameters, returning a rich response that includes content and query plan metadata. Queries are keyword, vector, and hybrid. Results are merged and ranked. |
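As a sketch of the LLM's second function in the table above (answer formulation), a chat app might fold the agent's unified response string into the prompt it sends to a gpt-4o or gpt-4.1 series deployment. The variable names and system prompt below are illustrative, not part of any API.

```python
# Illustrative only: the grounding string here stands in for the unified
# response string returned by the retrieval engine.
retrieval_response = (
    '[{"ref_id": 0, "content": "Auroras occur when charged particles ..."}]'
)

messages = [
    {
        "role": "system",
        "content": (
            "Answer using only the sources below, and cite ref_id.\n\n"
            "Sources:\n" + retrieval_response
        ),
    },
    {"role": "user", "content": "What causes the aurora borealis?"},
]
# messages would then be sent to the deployment's chat completions endpoint.
print(len(messages))
```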
## How to get started
You must use the preview REST APIs or a prerelease Azure SDK package that provides the functionality. At this time, there's no Azure portal or Azure AI Foundry portal support.
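For example, with only the Python standard library you can shape a retrieval call like the following. The endpoint path, message format, and header names are assumptions about the preview REST API; confirm them against the REST reference before sending real requests.

```python
import json
import urllib.request

# Sketch only: path and body shape are assumptions based on the
# 2025-05-01-preview data plane REST API.
service = "my-search-service"      # hypothetical service name
agent = "my-knowledge-agent"       # hypothetical agent name
url = (
    f"https://{service}.search.windows.net/agents/{agent}/retrieve"
    "?api-version=2025-05-01-preview"
)

body = json.dumps({
    "messages": [
        {"role": "user",
         "content": [{"type": "text", "text": "Why do auroras have colors?"}]}
    ]
}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json", "api-key": "<query-key>"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here.
print(req.get_method(), req.full_url)
```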
---

`articles/search/search-how-to-index-logic-apps-indexers.md`
After the wizard completes, you have the following components:

| Component | Resource | Description |
|-----------|----------|------------|
| Search index | Azure AI Search | Contains indexed content from a supported Logic Apps connector. The index schema is a default index created by the wizard. You can add extra elements, such as a scoring profile or semantic configuration, but you can't change existing fields. You view, manage, and access the search index in Azure AI Search. |
| Logic app resource and workflow | Azure Logic Apps | You can view the running workflow, or you can open the designer in Azure Logic Apps to edit it, just as you would if you'd started from Azure Logic Apps directly. You can edit and extend the workflow, but exercise caution so that you don't break the indexing pipeline. |
| Logic app templates | Azure Logic Apps | Up to two templates created per workflow: one for on-demand indexing, and a second template for scheduled indexing. You can modify the indexing schedule in the **Index multiple documents** step of the workflow. |
## Prerequisites
Currently, the public preview has these limitations:
+ The search index is generated using a fixed schema (document ID, content, and vectorized content), with text extraction only. You can [modify the index](#modify-existing-objects) as long as the update doesn't affect existing fields.
+ Vectorization supports text embedding only.
+ Deletion detection isn't supported. You must manually [delete orphaned documents](search-howto-reindex.md#delete-orphan-documents) from the index.
+ Duplicate documents in the search index are a known issue in this preview. Consider deleting objects and starting over if this becomes an issue.
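For reference, the fixed schema might look roughly like the following index fragment. The field names, types, and dimensions here are assumptions for illustration only; the wizard's actual defaults may differ.

```json
{
  "name": "my-logic-app-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "content", "type": "Edm.String", "searchable": true },
    {
      "name": "content_vector",
      "type": "Collection(Edm.Single)",
      "searchable": true,
      "dimensions": 1536,
      "vectorSearchProfile": "default-profile"
    }
  ]
}
```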
Follow these steps to create a Logic Apps workflow for indexing content in Azure AI Search.
1. In **Connect to your data**, provide a name prefix used for the search index and workflow. Having a common name helps you manage them together.
1. Specify the indexing frequency. If you choose scheduled indexing, a template that includes a scheduling option is used to create the workflow. You can modify the indexing schedule in the **Index multiple documents** step of the workflow after it's created.
1. Select the authentication type that the logic app workflow uses to connect to the search engine and start the indexing process. The workflow can connect using [Azure AI Search API keys](search-security-api-keys.md), or the wizard can create a role assignment that grants permissions to the Logic Apps system-assigned managed identity, assuming one exists.
You can make the following updates to a workflow without breaking indexing:
+ Modify **List files in folder** to change the number of documents sent to indexing.
+ Modify **Chunk Text** to vary token inputs. The recommended token size is 512 tokens for most scenarios.
+ Modify **Chunk Text** to add a page overlap length.
+ Modify the **Index multiple documents** step to control indexing frequency if you chose scheduled indexing in the wizard.
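To get a feel for how chunk size and overlap interact, here's a naive Python sketch. It splits a list of items rather than counting tokens, so it only approximates the Text Split skill's behavior with a 512-token page size and an overlap length.

```python
def chunk(items, size=512, overlap=64):
    """Fixed-size chunking with overlap (naive approximation of the
    Text Split skill; real splitting counts tokens or characters)."""
    step = size - overlap
    return [items[i:i + size] for i in range(0, max(len(items) - overlap, 1), step)]

words = [f"w{i}" for i in range(1200)]
chunks = chunk(words)
# Consecutive chunks share `overlap` items at their boundary.
print(len(chunks), len(chunks[0]), len(chunks[-1]))
```

Larger overlap values preserve more context across chunk boundaries at the cost of more chunks, and therefore more embedding calls and index storage.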
In the Logic Apps designer, review the workflow and each step in the indexing pipeline. The workflow specifies document extraction, default document chunking ([Text Split skill](cognitive-search-skill-textsplit.md)), embedding ([Azure OpenAI embedding skill](cognitive-search-skill-azure-openai-embedding.md)), output field mappings, and finally indexing.