`articles/ai-foundry/concepts/evaluation-evaluators/agent-evaluators.md`
Agents emit messages, and providing the above inputs typically requires parsing messages and extracting the relevant information. If you're building agents using Azure AI Agent Service, we provide native integration for evaluation that directly takes their agent messages. To learn more, see an [end-to-end example of evaluating agents in Azure AI Agent Service](https://aka.ms/e2e-agent-eval-sample).
Besides `IntentResolution`, `ToolCallAccuracy`, and `TaskAdherence`, which are specific to agentic workflows, you can also assess other quality as well as safety aspects of your agentic workflows using our comprehensive suite of built-in evaluators. We support this list of evaluators for Azure AI agent messages from our converter:

We show examples of `IntentResolution`, `ToolCallAccuracy`, and `TaskAdherence` here. For other evaluators with Azure AI agent message support, see more examples in [evaluating Azure AI agents](../../how-to/develop/agent-evaluate-sdk.md#evaluate-azure-ai-agents).
## Model configuration for AI-assisted evaluators
For reference in the following code snippets, the AI-assisted evaluators use a model configuration for the LLM-judge:
> We recommend using `o3-mini` for a balance of reasoning capability and cost efficiency.
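As a minimal sketch, such a model configuration might look like the following. The environment variable names and the fallback placeholder values are illustrative assumptions, not required names; substitute your own endpoint, key, and deployment.

```python
import os

# Hypothetical model configuration for the LLM-judge used by AI-assisted evaluators.
# Endpoint, key, and deployment are placeholders read from environment variables.
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY", "<your-api-key>"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT", "o3-mini"),
}
```

The same `model_config` dictionary can then be passed to each AI-assisted evaluator's constructor.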
## Intent resolution
`IntentResolutionEvaluator` measures how well the system identifies and understands a user's request, including how well it scopes the user's intent, asks clarifying questions, and reminds end users of its scope of capabilities. A higher score means better identification of user intent.
## Tool call accuracy
`ToolCallAccuracyEvaluator` measures an agent's ability to select appropriate tools and to extract and process the correct parameters from previous steps of the agentic workflow. It detects whether each tool call made is accurate (binary) and reports the average score, which can be interpreted as a pass rate across the tool calls made.
> [!NOTE]
> `ToolCallAccuracyEvaluator` only supports Azure AI Agent's Function Tool evaluation; it doesn't support Built-in Tool evaluation. The agent messages must include at least one actual Function Tool call to be evaluated.
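The aggregation described above, binary accuracy per tool call averaged into a pass rate, can be sketched in plain Python. The judgment values below are hypothetical stand-ins for the evaluator's per-call output:

```python
# Hypothetical per-tool-call binary judgments (1 = accurate, 0 = inaccurate),
# standing in for the evaluator's per-call output.
tool_call_judgments = [1, 1, 0, 1]

# The reported score is the mean of the binary judgments:
# a pass rate across the tool calls made.
pass_rate = sum(tool_call_judgments) / len(tool_call_judgments)
print(pass_rate)  # → 0.75
```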
## Coherence
`CoherenceEvaluator` measures the logical and orderly presentation of ideas in a response, allowing the reader to easily follow and understand the writer's train of thought. A coherent response directly addresses the question with clear connections between sentences and paragraphs, using appropriate transitions and a logical sequence of ideas. Higher scores mean better coherence.
## Retrieval
Retrieval quality is very important given its upstream role in RAG: if the retrieval quality is poor and the response requires corpus-specific knowledge, there's less chance your LLM gives you a satisfactory answer. `RetrievalEvaluator` measures the **textual quality** of retrieval results with an LLM without requiring ground truth (also known as query relevance judgment), which provides value compared to `DocumentRetrievalEvaluator`, which measures `ndcg`, `xdcg`, `fidelity`, and other classical information retrieval metrics that require ground truth. This metric focuses on how relevant the context chunks (encoded as a string) are to address a query and how the most relevant context chunks are surfaced at the top of the list.
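As an aside, the classical ranking metrics mentioned above follow standard definitions. Here's a minimal NDCG sketch over hypothetical graded relevance labels, as an illustration of the standard formula rather than the library's implementation:

```python
import math

def dcg(labels):
    # Discounted cumulative gain with the common 2^rel - 1 gain function.
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(labels))

def ndcg(ranked_labels):
    # Normalize by the DCG of the ideal (descending) ordering.
    ideal = dcg(sorted(ranked_labels, reverse=True))
    return dcg(ranked_labels) / ideal if ideal > 0 else 0.0

# Hypothetical relevance labels (0-4) in the order the system ranked the documents.
print(round(ndcg([4, 2, 3, 1, 0]), 3))  # → 0.975
```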
```python
from azure.ai.evaluation import DocumentRetrievalEvaluator

# These query_relevance_label values are given by your human or LLM judges.
retrieval_ground_truth = [
    {
        "document_id": "1",
        "query_relevance_label": 4
    },
    {
        "document_id": "2",
        "query_relevance_label": 2
    },
    {
        "document_id": "3",
        "query_relevance_label": 3
    },
    {
        "document_id": "4",
        "query_relevance_label": 1
    },
    {
        "document_id": "5",
        "query_relevance_label": 0
    },
]
# The min and max of the label scores are inputs to the document retrieval evaluator.
ground_truth_label_min = 0
ground_truth_label_max = 4

# These relevance scores come from your search retrieval system.
```
The numerical score is on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
All numerical scores have `high_is_better=True`, except for `holes` and `holes_ratio`, which have `high_is_better=False`. Given a numerical threshold (default 3), we also output "pass" if the score >= threshold (or <= threshold for the metrics where lower is better), or "fail" otherwise.
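The pass/fail rule, passing when the score meets the threshold in the metric's better direction, can be sketched in plain Python (an illustration of the rule, not the library's code):

```python
def pass_fail(score, threshold=3, high_is_better=True):
    # Metrics where higher is better pass at or above the threshold;
    # metrics like holes and holes_ratio pass at or below it.
    if high_is_better:
        return "pass" if score >= threshold else "fail"
    return "pass" if score <= threshold else "fail"

print(pass_fail(4))  # → pass
print(pass_fail(2))  # → fail
print(pass_fail(0.1, threshold=0.2, high_is_better=False))  # → pass
```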
## Similarity
`SimilarityEvaluator` measures the degree of semantic similarity between the generated text and its ground truth with respect to a query. Compared to other text-similarity metrics that require ground truth, this metric focuses on the semantics of a response (instead of simple overlap in tokens or n-grams) and also considers the broader context of a query.
`articles/ai-foundry/concepts/model-benchmarks.md`
Whenever you find a model to your liking, you can select it and zoom into the **Detailed benchmarking results** of the model within the model catalog. If satisfied with the model, you can deploy it, try it in the playground, or evaluate it on your data. The leaderboards support benchmarking across text language models (large language models (LLMs) and small language models (SLMs)) and embedding models.
Model benchmarks assess LLMs and SLMs across the following categories: quality, performance, and cost. In addition, we assess the quality of embedding models using standard benchmarks. The leaderboards are updated regularly as better and more unsaturated benchmarks are onboarded, and as new models are added to the model catalog.
## Quality benchmarks of language models
Azure AI assesses the quality of LLMs and SLMs using accuracy scores from standard benchmarks.
Quality index is provided on a scale of zero to one. Higher values of quality index are better. The datasets included in quality index are:
| Accuracy | Accuracy scores are available at the dataset and the model levels. At the dataset level, the score is the average value of an accuracy metric computed over all examples in the dataset. The accuracy metric used is `exact-match` in all cases, except for the _HumanEval_ and _MBPP_ datasets that use a `pass@1` metric. Exact match compares model generated text with the correct answer according to the dataset, reporting one if the generated text matches the answer exactly and zero otherwise. The `pass@1` metric measures the proportion of model solutions that pass a set of unit tests in a code generation task. At the model level, the accuracy score is the average of the dataset-level accuracies for each model. |
Accuracy scores are provided on a scale of zero to one. Higher values are better.
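The two aggregation levels described above can be sketched in plain Python. The example pairs and dataset scores below are hypothetical, purely to illustrate exact match and the averaging:

```python
def exact_match(generated, answer):
    # 1 if the generated text matches the reference answer exactly, else 0.
    return 1 if generated == answer else 0

# Dataset-level accuracy: average exact-match over all examples (hypothetical data).
examples = [("Paris", "Paris"), ("4", "4"), ("Berlin", "Bern")]
dataset_accuracy = sum(exact_match(g, a) for g, a in examples) / len(examples)

# Model-level accuracy: average of the dataset-level accuracies (hypothetical values).
dataset_accuracies = [dataset_accuracy, 0.8, 0.9]
model_accuracy = sum(dataset_accuracies) / len(dataset_accuracies)
```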
## Safety benchmarks of language models
Safety benchmarks use a standard metric, Attack Success Rate, to measure how vulnerable language models are to attacks in biosecurity, cybersecurity, and chemical security. Currently, the [Weapons of Mass Destruction Proxy (WMDP) benchmark](https://www.wmdp.ai/) is used to assess hazardous knowledge in language models. The lower the Attack Success Rate, the safer the model response.
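Attack Success Rate is a simple ratio: the fraction of attack attempts that succeed. A sketch with hypothetical outcomes:

```python
# Hypothetical attack outcomes: True means the attack elicited hazardous content.
attack_outcomes = [False, True, False, False, False, False, False, False, True, False]

# Attack Success Rate: fraction of attacks that succeeded. Lower is safer.
attack_success_rate = sum(attack_outcomes) / len(attack_outcomes)
print(attack_success_rate)  # → 0.2
```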
All model endpoints are benchmarked with the default Azure AI Content Safety filters enabled in their default configuration. These safety filters detect and block [content harm categories](../../ai-services/content-safety/concepts/harm-categories.md) for violence, self-harm, sexual content, and hate and unfairness, but don't specifically cover cybersecurity, biosecurity, or chemical security.
## Performance benchmarks of language models
## Quality benchmarks of embedding models
The quality index of embedding models is defined as the averaged accuracy scores of a comprehensive set of standard benchmark datasets targeting Information Retrieval, Document Clustering, and Summarization tasks.
See more details in accuracy score definitions specific to each dataset:
Benchmark results originate from public datasets that are commonly used for language model evaluation. In most cases, the data is hosted in GitHub repositories maintained by the creators or curators of the data. Azure AI evaluation pipelines download data from their original sources, extract prompts from each example row, generate model responses, and then compute relevant accuracy metrics.
Prompt construction follows best practices for each dataset, as specified by the paper introducing the dataset and industry standards. In most cases, each prompt contains several _shots_, that is, several examples of complete questions and answers to prime the model for the task. The evaluation pipelines create shots by sampling questions and answers from a portion of the data held out from evaluation.
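As an illustrative sketch of the shot-sampling step (not the actual pipeline code), a few-shot prompt built from a hypothetical held-out split might look like:

```python
import random

# Hypothetical held-out Q/A pairs, used only for shot construction.
held_out = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "Shakespeare"),
]

def build_prompt(question, num_shots=2, seed=0):
    # Sample shots from the held-out split, then append the evaluation question.
    rng = random.Random(seed)
    shots = rng.sample(held_out, num_shots)
    lines = [f"Q: {q}\nA: {a}" for q, a in shots]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

print(build_prompt("What is the capital of Italy?"))
```

The prompt ends with an unanswered question; the model's completion is then compared against the reference answer with the dataset's accuracy metric.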