The numerical score is on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default of 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and additional fields can help you understand why the score is high or low.
```python
{
    "intent_resolution": 5,
    "intent_resolution_result": "pass",
    "intent_resolution_threshold": 3,
    "intent_resolution_reason": "..."  # explanation of how well the agent resolved the user's intent
}
```
If you're building agents outside of Azure AI Agent Service, this evaluator accepts a schema typical for agent messages. To learn more, see our sample notebook for [Intent Resolution](https://aka.ms/intentresolution-sample).
## Tool call accuracy
`ToolCallAccuracyEvaluator` measures the accuracy and efficiency of tool calls made by an agent in a run. It provides a 1-5 score based on:

- The relevance and helpfulness of the tool invoked.
- The correctness of parameters used in tool calls.
- The counts of missing or excessive calls.
> [!NOTE]
> `ToolCallAccuracyEvaluator` only supports Azure AI Agent's Function Tool evaluation and doesn't support Built-in Tool evaluation. To be evaluated, the agent run must make at least one Function Tool call and no Built-in Tool calls.
### Tool call accuracy example
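
The following is a minimal sketch of invoking the evaluator on a single run. It assumes `model_config` is defined as earlier in this article, and that `query`, `tool_calls`, and `tool_definitions` follow the shapes shown later in the [Agent tool calls and definitions](#agent-tool-calls-and-definitions) section.

```python
from azure.ai.evaluation import ToolCallAccuracyEvaluator

tool_call_accuracy = ToolCallAccuracyEvaluator(model_config=model_config, threshold=3)  # 3 is the documented default threshold

result = tool_call_accuracy(
    query=query,                        # the user query or the full agent message list
    tool_calls=tool_calls,              # tool calls made by the agent in the run
    tool_definitions=tool_definitions,  # definitions of the tools available to the agent
)
print(result)
```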
### Tool call accuracy output
The numerical score is on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default of 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and tool call detail fields can help you understand why the score is high or low.
```python
{
    "tool_call_accuracy": 5,
    "tool_call_accuracy_result": "pass",
    "tool_call_accuracy_threshold": 3,
    "details": {
        "tool_calls_made_by_agent": 1,
        "correct_tool_calls_made_by_agent": 1,
        "per_tool_call_details": [
            {
                "tool_name": "fetch_weather",
                "total_calls_required": 1,
                "correct_calls_made_by_agent": 1,
                "correct_tool_percentage": 1.0,
                "tool_call_errors": 0,
                "tool_success_result": "pass"
            }
        ],
        "excess_tool_calls": {
            "total": 0,
            "details": []
        },
        "missing_tool_calls": {
            "total": 0,
            "details": []
        }
    }
}
```
### Task adherence output
The numerical score is on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default of 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
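
A sample output might look like the following; the score of 5 and the reason text are illustrative for the weather-fetching example used throughout this article.

```python
{
    "task_adherence": 5,
    "task_adherence_result": "pass",
    "task_adherence_threshold": 3,
    "task_adherence_reason": "The response accurately follows the instructions, fetches the correct weather information, and relays it back to the user without any errors or omissions."
}
```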
Agents can use tools. Here's an example of creating custom tools that you intend the agent to use:
```python
from azure.ai.projects.models import FunctionTool, ToolSet
from typing import Set, Callable, Any
import json
# Define a custom Python function.
def fetch_weather(location: str) -> str:
    # Mock implementation for illustration; a real tool would call a weather API.
    mock_weather_data = {"Seattle": "Sunny, 25°C", "London": "Cloudy, 18°C"}
    weather = mock_weather_data.get(location, "Weather data not available for this location.")
    return json.dumps({"weather": weather})
```

For complex tasks that require refined reasoning for the evaluation, we recommend a strong reasoning model like `o3-mini`, or the o-series mini models released afterwards, which offer a balance of reasoning performance and cost efficiency.
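
For reference, a model configuration for these AI-assisted evaluators might be set up as in the following sketch; the environment variable names are assumptions, and `AzureOpenAIModelConfiguration` comes from the `azure-ai-evaluation` package.

```python
import os
from azure.ai.evaluation import AzureOpenAIModelConfiguration

# Assumed environment variables pointing at your Azure OpenAI deployment of a reasoning model.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_deployment=os.environ["MODEL_DEPLOYMENT_NAME"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```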
We set up a list of quality and safety evaluators in `quality_evaluators` and `safety_evaluators` and reference them in [evaluating multiple agent runs or a thread](#evaluate-multiple-agent-runs-or-threads).
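
As a sketch of what these lists might look like (the exact setup lives earlier in the full article; `model_config`, `azure_ai_project`, and `credential` are assumed to be defined):

```python
from azure.ai.evaluation import (
    IntentResolutionEvaluator,
    TaskAdherenceEvaluator,
    ToolCallAccuracyEvaluator,
    ContentSafetyEvaluator,
)

# Quality evaluators keyed by class name, each backed by the reasoning model config.
quality_evaluators = {
    evaluator_cls.__name__: evaluator_cls(model_config=model_config)
    for evaluator_cls in [IntentResolutionEvaluator, TaskAdherenceEvaluator, ToolCallAccuracyEvaluator]
}

# Safety evaluators take an Azure AI project and a credential instead of a model config.
safety_evaluators = {
    "content_safety": ContentSafetyEvaluator(azure_ai_project=azure_ai_project, credential=credential)
}
```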
"intent_resolution_result": "pass", # pass because 5 > 3 the threshold
249
249
"intent_resolution_threshold": 3,
250
+
"intent_resolution_result": "pass", # pass because 5 > 3 the threshold
250
251
"intent_resolution_reason": "The assistant correctly understood the user's request to fetch the weather in Seattle. It used the appropriate tool to get the weather information and provided a clear and accurate response with the current weather conditions in Seattle. The response fully resolves the user's query with all necessary information."
"task_adherence_result": "pass", # pass because 5 > 3 the threshold
255
255
"task_adherence_threshold": 3,
256
+
"task_adherence_result": "pass", # pass because 5 > 3 the threshold
256
257
"task_adherence_reason": "The response accurately follows the instructions, fetches the correct weather information, and relays it back to the user without any errors or omissions."
257
258
}
258
259
{
259
260
"tool_call_accuracy": 5, # a score between 1-5, higher is better
260
-
"tool_call_accuracy_result": "pass", # pass because 1.0 > 0.8 the threshold
261
261
"tool_call_accuracy_threshold": 3,
262
+
"tool_call_accuracy_result": "pass", # pass because 5 > 3 the threshold
262
263
"details": { ... } # helpful details for debugging the tool calls made by the agent
263
264
}
264
265
```
If you're using agents outside Azure AI Foundry Agent Service, you can still evaluate them.
Agents typically emit messages to interact with a user or other agents. Our built-in evaluators can accept simple data types such as strings in `query`, `response`, and `ground_truth` according to the [single-turn data input requirements](./evaluate-sdk.md#data-requirements-for-built-in-evaluators). However, it can be a challenge to extract these simple data types from agent messages, due to the complex interaction patterns of agents and framework differences. For example, a single user query can trigger a long list of agent messages, typically with multiple tool calls invoked.
As illustrated in the following example, we enable agent message support specifically for the built-in evaluators `IntentResolutionEvaluator`, `ToolCallAccuracyEvaluator`, and `TaskAdherenceEvaluator` to evaluate these aspects of an agentic workflow. These evaluators take `tool_calls` or `tool_definitions` as parameters unique to agents.
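
A minimal sketch of that pattern follows, assuming `model_config` is defined as above; the query and response strings are illustrative.

```python
from azure.ai.evaluation import IntentResolutionEvaluator

intent_resolution = IntentResolutionEvaluator(model_config=model_config)

result = intent_resolution(
    query="What are the opening hours of the Eiffel Tower?",
    response="Opening hours of the Eiffel Tower are 9:00 AM to 11:00 PM.",
)
print(result)
```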
See the following output (reference [Output format](#output-format) for details):

```python
{
    "intent_resolution": 5,
    "intent_resolution_result": "pass",
    "intent_resolution_threshold": 3,
    "intent_resolution_reason": "The response provides the opening hours of the Eiffel Tower, which directly addresses the user's query. The information is clear, accurate, and complete, fully resolving the user's intent."
}
```
### Agent tool calls and definitions
See the following examples of `tool_calls` and `tool_definitions` for `ToolCallAccuracyEvaluator`:
```python
# Illustrative tool_calls and tool_definitions for the fetch_weather example;
# the exact values come from your agent run and tool schema.
tool_calls = [{
    "type": "tool_call",
    "tool_call_id": "call_CUdbkBfvVBla2YP3p24uhElJ",
    "name": "fetch_weather",
    "arguments": {"location": "Seattle"}
}]

tool_definitions = [{
    "name": "fetch_weather",
    "description": "Fetches the weather information for the specified location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The location to fetch weather for."}
        }
    }
}]

from azure.ai.evaluation import ToolCallAccuracyEvaluator

tool_call_accuracy = ToolCallAccuracyEvaluator(model_config) # reuse the config defined above
result = tool_call_accuracy(query="How is the weather in Seattle?", tool_calls=tool_calls, tool_definitions=tool_definitions)
```

See the following output (reference [Output format](#output-format) for details):

```python
{
    "tool_call_accuracy": 5,
    "tool_call_accuracy_result": "pass",
    "tool_call_accuracy_threshold": 3,
    "details": { ... } # helpful details for debugging the tool calls made by the agent
}
```
### Agent message schema
In agent message format, `query` and `response` are a list of OpenAI-style messages. Specifically, `query` carries the past agent-user interactions leading up to the last user query and requires the system message (of the agent) on top of the list; and `response` carries the last message of the agent in response to the last user query.
The expected input format for the evaluators is a Python list of messages as follows:
```
[
    {
        "role": "system" | "user" | "assistant" | "tool",
        "createdAt": "ISO 8601 timestamp",  // Optional for 'system'
        "run_id": "string",                 // Optional, only for assistant/tool in tool call context
        "tool_call_id": "string",           // Optional, only for tool/tool_result
        "name": "string",                   // Present if it's a tool call
        "arguments": { ... },               // Parameters passed to the tool (if tool call)
        "content": [
            {
                "type": "text" | "tool_call" | "tool_result",
                "text": "string",           // if type == text
                "tool_call_id": "string",   // if type == tool_call
                "name": "string",           // tool name if type == tool_call
                "arguments": { ... },       // tool args if type == tool_call
                "tool_result": { ... }      // result if type == tool_result
            }
        ]
    }
]
```

Sample query and response objects:

```python
query = [
    {
        "role": "system",
        "content": "You are an AI assistant interacting with Azure Maps services to serve user requests."
    },
    {
        "createdAt": "2025-04-25T23:55:43Z",
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Find the address for coordinates 41.8781,-87.6298."
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:45Z",
        "run_id": "run_DGE8RWPS8A9SmfCg61waRx9u",
        "role": "assistant",
        "content": [
            {
                "type": "tool_call",
                "tool_call_id": "call_nqNyhOFRw4FmF50jaCCq2rDa",
                "name": "azure_maps_reverse_address_search",
                "arguments": {
                    "lat": "41.8781",
                    "lon": "-87.6298"
                }
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:47Z",
        "run_id": "run_DGE8RWPS8A9SmfCg61waRx9u",
        "tool_call_id": "call_nqNyhOFRw4FmF50jaCCq2rDa",
        "role": "tool",
        "content": [
            {
                "type": "tool_result",
                "tool_result": {
                    "address": "300 South Federal Street, Chicago, IL 60604",
                    "position": {
                        "lat": "41.8781",
                        "lon": "-87.6298"
                    }
                }
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:48Z",
        "run_id": "run_DGE8RWPS8A9SmfCg61waRx9u",
        "role": "assistant",
        "content": [
            {
                "type": "text",
                "text": "The address for the coordinates 41.8781, -87.6298 is 300 South Federal Street, Chicago, IL 60604."
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:50Z",
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What timezone corresponds to 41.8781,-87.6298?"
            }
        ]
    },
]

response = [
    {
        "createdAt": "2025-04-25T23:55:52Z",
        "run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
        "role": "assistant",
        "content": [
            {
                "type": "tool_call",
                "tool_call_id": "call_qi2ug31JqzDuLy7zF5uiMbGU",
                "name": "azure_maps_timezone",
                "arguments": {
                    "lat": 41.878100000000003,
                    "lon": -87.629800000000003
                }
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:54Z",
        "run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
        "tool_call_id": "call_qi2ug31JqzDuLy7zF5uiMbGU",
        "role": "tool",
        "content": [
            {
                "type": "tool_result",
                "tool_result": {
                    "ianaId": "America/Chicago",
                    "utcOffset": None,
                    "abbreviation": None,
                    "isDaylightSavingTime": None
                }
            }
        ]
    },
    {
        "createdAt": "2025-04-25T23:55:55Z",
        "run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
        "role": "assistant",
        "content": [
            {
                "type": "text",
                "text": "The timezone for the coordinates 41.8781, -87.6298 is America/Chicago."
            }
        ]
    }
]
```

> [!NOTE]
> The evaluator emits a warning when `query` (the conversation history up to the current run) or the agent `response` (the response to that query) can't be parsed because its format isn't the expected one.
Finally, here's an example of evaluating the agent messages with `ToolCallAccuracyEvaluator`.
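
The sketch below assumes `model_config` is defined as earlier in the article; the `tool_definitions` entry for the Azure Maps timezone tool is illustrative, not the service's actual schema.

```python
from azure.ai.evaluation import ToolCallAccuracyEvaluator

# Illustrative definition for the tool the agent called in `response`.
tool_definitions = [{
    "name": "azure_maps_timezone",
    "description": "Gets the timezone for a pair of coordinates.",
    "parameters": {
        "type": "object",
        "properties": {
            "lat": {"type": "number", "description": "Latitude."},
            "lon": {"type": "number", "description": "Longitude."}
        }
    }
}]

tool_call_accuracy = ToolCallAccuracyEvaluator(model_config=model_config)

# Pass the agent messages directly; the evaluator extracts the tool calls from `response`.
result = tool_call_accuracy(
    query=query,
    response=response,
    tool_definitions=tool_definitions,
)
print(result)
```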