
Commit 357509e

chbradsh and changliu2 authored
Update scenarios/evaluate/Supported_Evaluation_Metrics/RAG_Evaluation/README.md
Co-authored-by: changliu2 <99364750+changliu2@users.noreply.github.com>
1 parent f97f16e · commit 357509e

File tree

1 file changed: +1, -1 lines changed

  • scenarios/evaluate/Supported_Evaluation_Metrics/RAG_Evaluation


scenarios/evaluate/Supported_Evaluation_Metrics/RAG_Evaluation/README.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ To support RAG quality output, it’s important to evaluate the following aspect
 
 This tutorial includes two notebooks as best practices to cover these important evaluation aspects:
 
-- [Evaluating a RAG retrieval system end to end](https://aka.ms/knowledge-agent-eval-sample): Complex queries are a common scenario for advanced RAG retrieval systems. In both principle and practice, [agentic RAG](aka.ms/agentRAG) is an advanced RAG pattern compared to traditional RAG patterns in agentic scenarios. By using the Agentic Retrieval API in Azure AI Search in Azure AI Foundry, we observe [up to 40% better relevance for complex queries than our baselines](https://techcommunity.microsoft.com/blog/Azure-AI-Services-blog/up-to-40-better-relevance-for-complex-queries-with-new-agentic-retrieval-engine/4413832/). After onboarding to agentic retrieval, use evaluating the end-to-end RAG system with [Groundedness](http://aka.ms/groundedness-doc) and [Relevance](http://aka.ms/relevance-doc) evaluators.
+- [Evaluate and Optimize a RAG retrieval system end to end](https://aka.ms/knowledge-agent-eval-sample): Complex queries are a common scenario for advanced RAG retrieval systems. In both principle and practice, [agentic RAG](aka.ms/agentRAG) is an advanced RAG pattern compared to traditional RAG patterns in agentic scenarios. By using the Agentic Retrieval API in Azure AI Search in Azure AI Foundry, we observe [up to 40% better relevance for complex queries than our baselines](https://techcommunity.microsoft.com/blog/Azure-AI-Services-blog/up-to-40-better-relevance-for-complex-queries-with-new-agentic-retrieval-engine/4413832/). After onboarding to agentic retrieval, it's a best practice to evaluate the end-to-end response of the RAG system with [Groundedness](http://aka.ms/groundedness-doc) and [Relevance](http://aka.ms/relevance-doc) evaluators. With the ability to assess end-to-end quality for one set of RAG parameters, you can perform a "parameter sweep" over another set to fine-tune and optimize the parameters of the agentic retrieval pipeline.
 
 - [Parameter Sweep: evaluating and optimizing RAG document retrieval quality](https://aka.ms/doc-retrieval-sample): Document retrieval quality is a common bottleneck in RAG workflows. To address this, one best practice is to optimize your RAG search parameters according to your enterprise data. For advanced scenarios where you can curate ground-truth relevance labels for document retrieval results (commonly called qrels), it’s a best practice to "sweep" and optimize the parameters by evaluating the document retrieval quality using golden metrics such as [NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain).
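The end-to-end evaluation the updated bullet describes maps to a short loop in the `azure-ai-evaluation` Python SDK. Here is a minimal sketch, assuming an Azure OpenAI judge deployment; the endpoint, key, deployment name, and sample data are placeholders, and the notebook linked above remains the full walkthrough:

```python
from azure.ai.evaluation import GroundednessEvaluator, RelevanceEvaluator

# Placeholder configuration for the LLM judge -- substitute your own values.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "<your-gpt-deployment>",
}

groundedness = GroundednessEvaluator(model_config)
relevance = RelevanceEvaluator(model_config)

# One query/response pair from the RAG pipeline, plus the retrieved context.
query = "What is the standard warranty period?"
context = "Contoso devices ship with a 24-month limited hardware warranty."
response = "The standard warranty is 24 months."

# Each evaluator returns a dict containing a 1-5 score and a reason string.
print(groundedness(query=query, response=response, context=context))
print(relevance(query=query, response=response))
```

Scoring the same query set under each parameter configuration is what makes the "parameter sweep" mentioned above comparable across settings.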
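For the document-retrieval notebook, the NDCG metric it cites can be sketched directly from its definition: compare the discounted cumulative gain of the ranking each search configuration returns against the ideal ordering of the same qrels. The configuration names and relevance labels below are invented for illustration:

```python
import math

def dcg(relevances: list[float]) -> float:
    # Discounted cumulative gain with linear gain:
    # sum of rel_i / log2(i + 1) over 1-indexed ranks i.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_labels: list[float], k: int) -> float:
    # Normalize observed DCG by the DCG of the ideal (descending) ordering.
    ideal = dcg(sorted(ranked_labels, reverse=True)[:k])
    return dcg(ranked_labels[:k]) / ideal if ideal > 0 else 0.0

# Hypothetical sweep: qrels for the top-5 documents returned by two
# search parameter configurations, in ranked order (higher = more relevant).
sweep = {
    "hybrid_semantic": [3, 2, 3, 0, 1],
    "vector_only": [2, 3, 0, 1, 0],
}
for config, labels in sweep.items():
    print(f"{config}: NDCG@5 = {ndcg_at_k(labels, k=5):.3f}")
```

Sweeping a parameter grid then reduces to recomputing NDCG@k per configuration and keeping the best-scoring one; in practice you might use a library implementation such as `sklearn.metrics.ndcg_score` instead of the hand-rolled version here.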
