## Intent resolution
`IntentResolutionEvaluator` measures how well the system identifies and understands a user's request, including how well it scopes the user's intent, asks clarifying questions, and reminds end users of its scope of capabilities. Higher score means better identification of user intent.
### Intent resolution example
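Here's a minimal sketch of calling the evaluator on a query and response pair. The judge `model_config` and the query/response strings are illustrative placeholders rather than a specific sample:

```python
import os
from azure.ai.evaluation import IntentResolutionEvaluator

# assumed judge configuration (see the model configuration examples later in this article)
model_config = {
    "azure_deployment": os.getenv("AZURE_DEPLOYMENT"),
    "api_key": os.getenv("AZURE_API_KEY"),
    "azure_endpoint": os.getenv("AZURE_ENDPOINT"),
    "api_version": os.getenv("AZURE_API_VERSION"),
}

intent_resolution = IntentResolutionEvaluator(model_config=model_config)

# illustrative single-turn query and response
result = intent_resolution(
    query="What are the opening hours of the Eiffel Tower?",
    response="Opening hours of the Eiffel Tower are 9:00 AM to 11:00 PM.",
)
print(result)
```

The numerical score is on a Likert scale (integer 1 to 5) and a higher score is better.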
If you're building agents outside of Azure AI Foundry Agent Service, this evaluator accepts a schema typical for agent messages. To learn more, see our sample notebook for [Intent Resolution](https://aka.ms/intentresolution-sample).
## Tool call accuracy
`ToolCallAccuracyEvaluator` measures how accurately an agent selects and uses tools, including:

- the correctness of parameters used in tool calls;
- the counts of missing or excessive calls.
#### Tool call evaluation support
`ToolCallAccuracyEvaluator` supports evaluation in Azure AI Foundry Agent Service for the following tools:

- File Search
- Azure AI Search
- Bing Grounding
- Bing Custom Search
- SharePoint Grounding
- Code Interpreter
- Fabric Data Agent
- OpenAPI
- Function Tool (user-defined tools)
However, if a non-supported tool is used in the agent run, it outputs a "pass" and a reason that evaluating the invoked tool(s) isn't supported, for ease of filtering out these cases. It's recommended that you wrap non-supported tools as user-defined tools to enable evaluation.
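For example, one way to follow that recommendation is to expose the capability behind a plain Python function and describe it as a user-defined tool for the evaluator. The function name, parameters, and description below are illustrative, not part of any SDK:

```python
# Illustrative user-defined (Function) tool that wraps capability you'd otherwise
# get from a non-supported built-in tool; plug in your own retrieval or API logic.
def lookup_company_news(company_name: str) -> str:
    """Returns recent news headlines for the given company from your own data source."""
    return "Contoso announces quarterly earnings..."  # placeholder result

# Matching definition to pass to ToolCallAccuracyEvaluator via `tool_definitions`
lookup_company_news_definition = {
    "name": "lookup_company_news",
    "description": "Returns recent news headlines for the given company.",
    "parameters": {
        "type": "object",
        "properties": {
            "company_name": {
                "type": "string",
                "description": "The company to fetch news headlines for."
            }
        }
    }
}
```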
### Tool call accuracy example
```python
from azure.ai.evaluation import ToolCallAccuracyEvaluator

# assumed setup: instantiate the evaluator with a judge model configuration (model_config),
# as shown in the model configuration examples elsewhere in this article
tool_call_accuracy = ToolCallAccuracyEvaluator(model_config=model_config)

# evaluate a full agent run: the user query, the agent's response messages, and the tool definitions
tool_call_accuracy(
query="What timezone corresponds to 41.8781,-87.6298?",
139
+
response=[
140
+
{
141
+
"createdAt": "2025-04-25T23:55:52Z",
142
+
"run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
143
+
"role": "assistant",
144
+
"content": [
145
+
{
146
+
"type": "tool_call",
147
+
"tool_call_id": "call_qi2ug31JqzDuLy7zF5uiMbGU",
148
+
"name": "azure_maps_timezone",
149
+
"arguments": {
150
+
"lat": 41.878100000000003,
151
+
"lon": -87.629800000000003
152
+
}
153
+
}
154
+
]
155
+
},
156
+
{
157
+
"createdAt": "2025-04-25T23:55:54Z",
158
+
"run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
159
+
"tool_call_id": "call_qi2ug31JqzDuLy7zF5uiMbGU",
160
+
"role": "tool",
161
+
"content": [
162
+
{
163
+
"type": "tool_result",
164
+
"tool_result": {
165
+
"ianaId": "America/Chicago",
166
+
"utcOffset": None,
167
+
"abbreviation": None,
168
+
"isDaylightSavingTime": None
169
+
}
170
+
}
171
+
]
172
+
},
173
+
{
174
+
"createdAt": "2025-04-25T23:55:55Z",
175
+
"run_id": "run_DmnhUGqYd1vCBolcjjODVitB",
176
+
"role": "assistant",
177
+
"content": [
178
+
{
179
+
"type": "text",
180
+
"text": "The timezone for the coordinates 41.8781, -87.6298 is America/Chicago."
181
+
}
182
+
]
183
+
}
184
+
],
185
+
tool_definitions=[
186
+
{
187
+
"name": "azure_maps_timezone",
188
+
"description": "local time zone information for a given latitude and longitude.",
189
+
"parameters": {
190
+
"type": "object",
191
+
"properties": {
192
+
"lat": {
193
+
"type": "float",
194
+
"description": "The latitude of the location."
195
+
},
196
+
"lon": {
197
+
"type": "float",
198
+
"description": "The longitude of the location."
199
+
}
200
+
}
201
+
}
202
+
}
203
+
]
204
+
)
205
+
206
+
# alternatively, provide the tool calls directly without the full agent response
124
207
tool_call_accuracy(
    query="How is the weather in Seattle?",
    tool_calls=[{
        # the remainder of this example is an illustrative completion: a tool_call entry
        # typically carries a type, an ID, the tool name, and its arguments
        "type": "tool_call",
        "tool_call_id": "call_example_0001",
        "name": "fetch_weather",
        "arguments": {"location": "Seattle"}
    }],
    tool_definitions=[{
        # illustrative definition matching the tool call above
        "name": "fetch_weather",
        "description": "Fetches the weather information for the specified location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The location to fetch weather for."}
            }
        }
    }]
)
```
## Task adherence
In various task-oriented AI systems such as agentic systems, it's important to assess whether the agent has stayed on track to complete a given task instead of making inefficient or out-of-scope steps. `TaskAdherenceEvaluator` measures how well an agent's response adheres to its assigned task, according to the task instruction (extracted from the system message and user query) and available tools. A higher score means better adherence to the system instruction for resolving the given task.
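Here's a minimal sketch of using the evaluator on a single query and response pair; it assumes the same judge `model_config` setup shown elsewhere in this article, and the strings are illustrative:

```python
from azure.ai.evaluation import TaskAdherenceEvaluator

# assumes model_config is an AzureOpenAI/OpenAI judge configuration defined earlier
task_adherence = TaskAdherenceEvaluator(model_config=model_config)

# illustrative inputs: the task instruction plus user query, and the agent's response
result = task_adherence(
    query="You are a travel assistant. Only answer travel-related questions. User: Plan a two-day trip to Paris.",
    response="Day 1: Visit the Eiffel Tower and take a Seine river cruise. Day 2: Explore the Louvre and Montmartre.",
)
print(result)
```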
If you use [Foundry Agent Service](../../../ai-services/agents/overview.md), you can seamlessly evaluate your agents using our converter support for Azure AI agents and Semantic Kernel agents. The following evaluators are supported for evaluation data returned by the converter: `IntentResolution`, `ToolCallAccuracy`, `TaskAdherence`, `Relevance`, and `Groundedness`.
> [!NOTE]
> If you are building other agents that output a different schema, you can convert them into the general openai-style [agent message schema](#agent-message-schema) and use the above evaluators.
> More generally, if you can parse the agent messages into the [required data formats](./evaluate-sdk.md#data-requirements-for-built-in-evaluators), you can also use all of our evaluators.
Here's an example that shows you how to seamlessly build and evaluate an Azure AI agent. Separately from evaluation, Azure AI Foundry Agent Service requires `pip install azure-ai-projects azure-identity`, an Azure AI project connection string, and the supported models.
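As a condensed sketch of the conversion step, assuming you've already created an agent and completed a thread run with `azure-ai-projects` (the connection details are placeholders, and the exact client constructor varies by package version):

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.evaluation import AIAgentConverter

# placeholder connection setup; newer azure-ai-projects versions take a project endpoint instead
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# convert an existing agent thread and run into evaluator-ready inputs
converter = AIAgentConverter(project_client)
converted_data = converter.convert(thread_id="<your-thread-id>", run_id="<your-run-id>")
```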
And that's it! `converted_data` contains all inputs required for [these evaluators](#evaluate-azure-ai-agents). You don't need to read the input requirements for each evaluator and do any work to parse the inputs. All you need to do is select your evaluator and call the evaluator on this single run. We support AzureOpenAI or OpenAI [reasoning models](../../../ai-services/openai/how-to/reasoning.md) and non-reasoning models for the judge depending on the evaluators:
| Evaluators | Reasoning Models as Judge (example: o-series models from Azure OpenAI / OpenAI) | Non-reasoning models as Judge (example: gpt-4.1, gpt-4o, etc.) | To enable |
|--|--|--|--|
| All quality evaluators except for `GroundednessProEvaluator` | Supported | Supported | Set additional parameter `is_reasoning_model=True` when initializing evaluators |
| `GroundednessProEvaluator` | User doesn't need to provide a judge model | User doesn't need to provide a judge model | -- |
For complex tasks that require refined reasoning for the evaluation, we recommend a strong reasoning model like `o3-mini` or the o-series mini models released afterward, which balance reasoning performance and cost efficiency.
For example, you might configure the judge models and initialize the evaluators as follows (some setup in this snippet is elided):

```python
import os
import json
from azure.ai.evaluation import (
    IntentResolutionEvaluator,
    TaskAdherenceEvaluator,
    ToolCallAccuracyEvaluator,
    CoherenceEvaluator,
    FluencyEvaluator,
    RelevanceEvaluator,
)

# example config for a non-reasoning model judge
# (only api_version appeared in the original snippet; the other fields are assumed, mirroring the reasoning config below)
model_config = {
    "azure_deployment": os.getenv("AZURE_DEPLOYMENT"),
    "api_key": os.getenv("AZURE_API_KEY"),
    "azure_endpoint": os.getenv("AZURE_ENDPOINT"),
    "api_version": os.getenv("AZURE_API_VERSION"),
}

# example config for a reasoning model
reasoning_model_config = {
    "azure_deployment": "o3-mini",
    "api_key": os.getenv("AZURE_API_KEY"),
    "azure_endpoint": os.getenv("AZURE_ENDPOINT"),
    "api_version": os.getenv("AZURE_API_VERSION"),
}

# Evaluators you might want to use with reasoning models
quality_evaluators = {evaluator.__name__: evaluator(model_config=reasoning_model_config, is_reasoning_model=True) for evaluator in [IntentResolutionEvaluator, TaskAdherenceEvaluator, ToolCallAccuracyEvaluator]}

# Other evaluators you might NOT want to use with reasoning models
quality_evaluators.update({ evaluator.__name__: evaluator(model_config=model_config) for evaluator in [CoherenceEvaluator, FluencyEvaluator, RelevanceEvaluator]})

## Using Azure AI Foundry (non-Hub) project endpoint, example: AZURE_AI_PROJECT=https://your-account.services.ai.azure.com/api/projects/your-project
# ... (initialization of safety evaluators and the combined quality_and_safety_evaluators dict is elided here) ...

for name, evaluator in quality_and_safety_evaluators.items():
    result = evaluator(**converted_data)  # assumed call shape; the original loop body is partially elided
    print(name)
    print(json.dumps(result, indent=4))
```
#### Output format
AI-assisted quality evaluators provide a result for a query and response pair. The result includes:

- `{metric_name}`: Provides a numerical score, on a Likert scale (integer 1 to 5) or a float between 0 and 1.
- `{metric_name}_label`: Provides a binary label (if the metric naturally outputs a binary score).
- `{metric_name}_reason`: Explains why a certain score or label was given for each data point.
- `details`: Optional output containing debugging information about the quality of a single agent run.
To further improve intelligibility, all evaluators accept a binary threshold (unless their outputs are already binary) and output two new keys. For the binarization threshold, a default is set, which the user can override. The two new keys are:
- `{metric_name}_result`: A "pass" or "fail" string based on a binarization threshold.
- `{metric_name}_threshold`: A numerical binarization threshold set by default or by the user.
See the following example output for some evaluators:
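The values below are illustrative and follow the key pattern described above; actual scores, thresholds, and reasons vary by evaluator and data:

```python
{
    "intent_resolution": 5.0,
    "intent_resolution_result": "pass",
    "intent_resolution_threshold": 3,
    "intent_resolution_reason": "The response directly addresses the user's question and fully resolves the stated intent."
}
```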
If you're using agents outside Azure AI Foundry Agent Service, you can still evaluate them by preparing the right data for the evaluators of your choice.
Agents typically emit messages to interact with a user or other agents. Our built-in evaluators can accept simple data types such as strings in `query`, `response`, and `ground_truth` according to the [single-turn data input requirements](./evaluate-sdk.md#data-requirements-for-built-in-evaluators). However, it can be a challenge to extract these simple data types from agent messages, due to the complex interaction patterns of agents and framework differences. For example, a single user query can trigger a long list of agent messages, typically with multiple tool calls invoked.
As illustrated in the following example, we enable agent message support for the following built-in evaluators to evaluate these aspects of agentic workflow. These evaluators may take `tool_calls` or `tool_definitions` as agent-specific parameters when evaluating agents.
- `Message`: `dict` OpenAI-style message that describes agent interactions with a user, where the `query` must include a system message as the first message.
- `ToolCall`: `dict` that specifies tool calls invoked during agent interactions with a user.
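For instance, a `query` in this schema could look like the following sketch (contents illustrative), with the required system message first:

```python
query = [
    {
        "role": "system",
        "content": "You are a friendly assistant that answers weather questions."
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "How is the weather in Seattle?"}]
    }
]
```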