
Commit fe7ca3f

fixed validation warning
1 parent 22b924f commit fe7ca3f

2 files changed: +7 -6 lines changed

articles/search/agentic-retrieval-how-to-create-pipeline.md

Lines changed: 4 additions & 3 deletions
@@ -20,10 +20,11 @@ This article describes an approach or pattern for building a solution that uses
 
 :::image type="content" source="media/agentic-retrieval/agent-to-agent-pipeline.svg" alt-text="Diagram of Azure AI Search integration with Azure AI Agent service." lightbox="media/agentic-retrieval/agent-to-agent-pipeline.png" :::
 
-To run the code for this tutorial, download the [agentic-retrieval-pipeline-example](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example) Python sample on GitHub.
-
 This exercise differs from the [Agentic Retrieval Quickstart](search-get-started-agentic-retrieval.md) in how it uses Azure AI Agent to retrieve data from the index, and how it uses an agent tool for orchestration. If you want to understand the retrieval pipeline in its simplest form, begin with the quickstart.
 
+> [!TIP]
+> To run the code for this tutorial, download the [agentic-retrieval-pipeline-example](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example) Python sample on GitHub.
+
 ## Prerequisites
 
 The following resources are required for this design pattern:
@@ -325,7 +326,7 @@ You can also delete individual objects:
 
 + [Delete a knowledge source](agentic-knowledge-source-how-to-search-index.md#delete-a-knowledge-source)
 
-+ [Delete an index](search-how-to-manage-index#delete-an-index)
++ [Delete an index](search-how-to-manage-index.md#delete-an-index)
 
 ## Related content

articles/search/retrieval-augmented-generation-overview.md

Lines changed: 3 additions & 3 deletions
@@ -35,7 +35,7 @@ You can choose between two approaches for RAG workloads: agentic retrieval, or t
 > [!NOTE]
 > New to copilot and RAG concepts? Watch [Vector search and state of the art retrieval for Generative AI apps](https://www.youtube.com/watch?v=lSzc1MJktAo).
 
-## Option 1: Modern RAG with Agentic Retrieval
+## Modern RAG with Agentic Retrieval
 
 Azure AI Search now provides **agentic retrieval**, a specialized pipeline designed specifically for RAG patterns. This approach uses large language models to intelligently break down complex user queries into focused subqueries, executes them in parallel, and returns structured responses optimized for chat completion models.
 
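As an illustrative aside, separate from the diff itself: the behavior that the added paragraph attributes to agentic retrieval (LLM-planned subqueries executed in parallel) can be pictured with a rough conceptual sketch. This is not the agentic retrieval API; the endpoint, index, field names, and the `plan_subqueries` helper are hypothetical placeholders, and the managed pipeline performs the planning, parallel execution, and merging for you.

```python
# Conceptual sketch only: agentic retrieval plans focused subqueries with an LLM and
# runs them in parallel against the index. The names below (endpoint, index,
# plan_subqueries) are hypothetical; the real pipeline manages these steps for you.
from concurrent.futures import ThreadPoolExecutor

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="<your-index>",                                     # placeholder
    credential=AzureKeyCredential("<your-query-key>"),             # placeholder
)

def plan_subqueries(user_query: str) -> list[str]:
    # Hypothetical stand-in for the LLM query-planning step; a real planner would
    # decompose a complex question into several focused subqueries.
    return [user_query]

def run_subquery(subquery: str) -> list[dict]:
    # Each subquery is an ordinary search request against the index.
    results = search_client.search(search_text=subquery, top=5)
    return [dict(doc) for doc in results]

user_query = "budget hotels near the waterfront with free parking"
with ThreadPoolExecutor() as pool:
    hits = [hit for batch in pool.map(run_subquery, plan_subqueries(user_query)) for hit in batch]
# `hits` would then be deduplicated and formatted into a grounding payload for the chat model.
```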
@@ -51,9 +51,9 @@ You need new objects for this pipeline: one or more knowledge sources, a knowled
 
 For new RAG implementations, we recommend starting with [agentic retrieval](agentic-retrieval-overview.md). For existing solutions, consider migrating to take advantage of improved accuracy and context understanding.
 
-## Option 2: Classic RAG pattern for Azure AI Search
+## Classic RAG pattern for Azure AI Search
 
-A RAG solution can be implemented on Azure AI Search using the original query execution environment. This approach is faster and simpler with fewer components, and depending on your application requirements it can be the best choice. There's no LLM query planning or LLM integration in the query pipeline. Your application sends a single query request to Azure AI Search, the search engine executes the query and returns search results. There's no query execution details in the response, and citations are built into the response only if you have fields in your index that provide a parent document name or page.
+A RAG solution can be implemented on Azure AI Search using the original query execution architecture. With this approach, your application sends a single query request to Azure AI Search, the search engine processes the request, and returns search results to the caller. There's no side trip to an LLM for query planning, and no LLM integration in the query pipeline. There are no query execution details in the response, and citations are built into the response only if you have fields in your index that provide a parent document name or page. This approach is faster and simpler with fewer components. Depending on your application requirements, it can be the best choice.
 
 A high-level summary of classic RAG pattern built on Azure AI Search looks like this:
 
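For context on the single-request flow that the rewritten paragraph describes, here's a minimal sketch of the classic pattern using the Azure AI Search and Azure OpenAI Python clients. It's an illustration only; the endpoint, key, index, field, and deployment names are placeholders and aren't part of the changed files.

```python
# Minimal classic RAG sketch: one query to Azure AI Search, results passed to a chat model.
# Endpoints, keys, index/field names, and the deployment name are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="<your-index>",
    credential=AzureKeyCredential("<your-query-key>"),
)
openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-openai-key>",
    api_version="2024-06-01",
)

question = "Which hotels have free parking near the airport?"

# Single query request; the search engine executes it and returns results directly.
results = search_client.search(search_text=question, top=5)
sources = "\n".join(f"- {doc.get('<content-field>')}" for doc in results)

# Ground the chat model on the retrieved content.
response = openai_client.chat.completions.create(
    model="<your-chat-deployment>",
    messages=[
        {"role": "system", "content": "Answer using only the provided sources."},
        {"role": "user", "content": f"{question}\n\nSources:\n{sources}"},
    ],
)
print(response.choices[0].message.content)
```

Compared with agentic retrieval, there's no query-planning step; the application owns prompt construction and citation formatting.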