
Commit 55298c0

Merge pull request #6056 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-16 11:00 UTC
2 parents 9608ba4 + c18a0c0 commit 55298c0

6 files changed: +242, -170 lines changed


articles/ai-foundry/concepts/evaluation-evaluators/agent-evaluators.md

Lines changed: 22 additions & 9 deletions
@@ -6,7 +6,7 @@ author: lgayhardt
 ms.author: lagayhar
 manager: scottpolly
 ms.reviewer: changliu2
-ms.date: 05/19/2025
+ms.date: 07/15/2025
 ms.service: azure-ai-foundry
 ms.topic: reference
 ms.custom:

@@ -18,17 +18,22 @@ ms.custom:
 
 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
-Agents are powerful productivity assistants. They can plan, make decisions, and execute actions. Agents typically first [reason through user intents in conversations](#intent-resolution), [select the correct tools](#tool-call-accuracy) to call and satisfy the user requests, and [complete various tasks](#task-adherence) according to their instructions.
+Agents are powerful productivity assistants. They can plan, make decisions, and execute actions. Agents typically first [reason through user intents in conversations](#intent-resolution), [select the correct tools](#tool-call-accuracy) to call and satisfy the user requests, and [complete various tasks](#task-adherence) according to their instructions. We currently support these agent-specific evaluators for agentic workflows:
+
+- [Intent resolution](#intent-resolution)
+- [Tool call accuracy](#tool-call-accuracy)
+- [Task adherence](#task-adherence)
 
 ## Evaluating Azure AI agents
 
 Agents emit messages, and providing the above inputs typically require parsing messages and extracting the relevant information. If you're building agents using Azure AI Agent Service, we provide native integration for evaluation that directly takes their agent messages. To learn more, see an [end-to-end example of evaluating agents in Azure AI Agent Service](https://aka.ms/e2e-agent-eval-sample).
 
-Besides `IntentResolution`, `ToolCallAccuracy`, `TaskAdherence` specific to agentic workflows, you can also assess other quality as well as safety aspects of your agentic workflows, leveraging out comprehensive suite of built-in evaluators. We support this list of evaluators for Azure AI agent messages from our converter:
+Besides `IntentResolution`, `ToolCallAccuracy`, `TaskAdherence` specific to agentic workflows, you can also assess other quality and safety aspects of your agentic workflows, using our comprehensive suite of built-in evaluators. We support this list of evaluators for Azure AI agent messages from our converter:
+
 - **Quality**: `IntentResolution`, `ToolCallAccuracy`, `TaskAdherence`, `Relevance`, `Coherence`, `Fluency`
 - **Safety**: `CodeVulnerabilities`, `Violence`, `Self-harm`, `Sexual`, `HateUnfairness`, `IndirectAttack`, `ProtectedMaterials`.
 
-We will show examples of `IntentResolution`, `ToolCallAccuracy`, `TaskAdherence` here. See more examples in [evaluating Azure AI agents](../../how-to/develop/agent-evaluate-sdk.md#evaluate-azure-ai-agents) for other evaluators with Azure AI agent message support.
+In this article we show examples of `IntentResolution`, `ToolCallAccuracy`, and `TaskAdherence`. For examples of using other evaluators with Azure AI agent messages, see [evaluating Azure AI agents](../../how-to/develop/agent-evaluate-sdk.md#evaluate-azure-ai-agents).
 
 ## Model configuration for AI-assisted evaluators
 
@@ -48,8 +53,16 @@ model_config = AzureOpenAIModelConfiguration(
 )
 ```
 
-> [!TIP]
-> We recommend using `o3-mini` for a balance of reasoning capability and cost efficiency.
+### Evaluator model support
+
+We support AzureOpenAI or OpenAI [reasoning models](../../../ai-services/openai/how-to/reasoning.md) and non-reasoning models for the LLM-judge depending on the evaluators:
+
+| Evaluators | Reasoning Models as Judge (example: o-series models from Azure OpenAI / OpenAI) | Non-reasoning models as Judge (example: gpt-4.1, gpt-4o, etc.) | To enable |
+|--|--|--|--|
+| `Intent Resolution`, `Task Adherence`, `Tool Call Accuracy`, `Response Completeness` | Supported | Supported | Set additional parameter `is_reasoning_model=True` in initializing evaluators |
+| Other quality evaluators| Not Supported | Supported | -- |
+
+For complex evaluation that requires refined reasoning, we recommend a strong reasoning model like `o3-mini` and o-series mini models released afterwards with a balance of reasoning performance and cost efficiency.
 
 ## Intent resolution
 
@@ -70,7 +83,7 @@ intent_resolution(
 
 ### Intent resolution output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and additional fields can help you understand why the score is high or low.
+The numerical score on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason and additional fields can help you understand why the score is high or low.
 
 ```python
 {

@@ -98,7 +111,7 @@ If you're building agents outside of Azure AI Agent Serice, this evaluator accep
 `ToolCallAccuracyEvaluator` measures an agent's ability to select appropriate tools, extract, and process correct parameters from previous steps of the agentic workflow. It detects whether each tool call made is accurate (binary) and reports back the average scores, which can be interpreted as a passing rate across tool calls made.
 
 > [!NOTE]
-> `ToolCallAccuracyEvaluator` only supports Azure AI Agent's Function Tool evaluation, but does not support Built-in Tool evaluation. The agent messages must have at least one Function Tool actually called to be evaluated.
+> `ToolCallAccuracyEvaluator` only supports Azure AI Agent's Function Tool evaluation, but doesn't support Built-in Tool evaluation. The agent messages must have at least one Function Tool actually called to be evaluated.
 
 ### Tool call accuracy example
 
@@ -174,7 +187,7 @@ task_adherence(
 
 ### Task adherence output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

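The "Evaluator model support" table added above introduces an `is_reasoning_model=True` option for the agent evaluators. As an illustrative sketch only (not part of the diff above), here is roughly how that flag might be wired up with the `azure-ai-evaluation` SDK; the import path, the `IntentResolutionEvaluator` class name, and the sample query/response are assumptions based on the evaluator names in the diff:

```python
import os

# Assumed imports: azure-ai-evaluation exposes AzureOpenAIModelConfiguration and
# an IntentResolutionEvaluator class matching the `IntentResolution` evaluator above.
from azure.ai.evaluation import AzureOpenAIModelConfiguration, IntentResolutionEvaluator

model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    api_key=os.environ.get("AZURE_API_KEY"),
    azure_deployment=os.environ.get("AZURE_DEPLOYMENT_NAME"),  # e.g. an o-series reasoning deployment
    api_version=os.environ.get("AZURE_API_VERSION"),
)

# Per the table above, set is_reasoning_model=True when the judge model is a reasoning (o-series) model.
intent_resolution = IntentResolutionEvaluator(model_config=model_config, is_reasoning_model=True)

result = intent_resolution(
    query="What are the opening hours of the Eiffel Tower?",
    response="The Eiffel Tower is open from 9:00 AM to 11:00 PM daily.",
)

# The output description above says to expect an integer 1-5 Likert score, a pass/fail verdict
# based on score >= threshold (default 3), and a reason field.
print(result)
```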

articles/ai-foundry/concepts/evaluation-evaluators/azure-openai-graders.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ author: lgayhardt
 ms.author: lagayhar
 manager: scottpolly
 ms.reviewer: mithigpe
-ms.date: 05/19/2025
+ms.date: 07/16/2025
 ms.service: azure-ai-foundry
 ms.topic: reference
 ms.custom:

@@ -47,7 +47,7 @@ model_config = AzureOpenAIModelConfiguration(
 
 Here's an example `data.jsonl` that is used in the following code snippets:
 
-```json
+```jsonl
 [
 {
 "query": "What is the importance of choosing the right provider in getting the most value out of your health insurance plan?",

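The hunk above relabels the sample file's fence from `json` to `jsonl`. As a small illustrative sketch (not part of the diff above), here is how such a `data.jsonl` file could be read in Python, assuming the usual JSON Lines convention of one JSON object per line; only the file name is taken from the snippet:

```python
import json

# Read JSON Lines: parse each non-empty line as its own JSON object.
records = []
with open("data.jsonl", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            records.append(json.loads(line))

# Each record is expected to carry the fields used by the graders, such as "query".
for record in records:
    print(record.get("query"))
```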
articles/ai-foundry/concepts/evaluation-evaluators/general-purpose-evaluators.md

Lines changed: 19 additions & 8 deletions
@@ -6,7 +6,7 @@ author: lgayhardt
 ms.author: lagayhar
 manager: scottpolly
 ms.reviewer: changliu2
-ms.date: 05/19/2025
+ms.date: 07/16/2025
 ms.service: azure-ai-foundry
 ms.topic: reference
 ms.custom:

@@ -16,7 +16,10 @@ ms.custom:
 
 # General purpose evaluators
 
-AI systems might generate textual responses that are incoherent, or lack the general writing quality you might desire beyond minimum grammatical correctness. To address these issues, use [Coherence](#coherence) and [Fluency](#fluency).
+AI systems might generate textual responses that are incoherent, or lack the general writing quality you might desire beyond minimum grammatical correctness. To address these issues, we support evaluating:
+
+- [Coherence](#coherence)
+- [Fluency](#fluency)
 
 If you have a question-answering (QA) scenario with both `context` and `ground truth` data in addition to `query` and `response`, you can also use our [QAEvaluator](#question-answering-composite-evaluator) a composite evaluator that uses relevant evaluators for judgment.
 
@@ -32,14 +35,22 @@ load_dotenv()
 
 model_config = AzureOpenAIModelConfiguration(
 azure_endpoint=os.environ["AZURE_ENDPOINT"],
-api_key=os.environ.get["AZURE_API_KEY"],
+api_key=os.environ.get("AZURE_API_KEY"),
 azure_deployment=os.environ.get("AZURE_DEPLOYMENT_NAME"),
 api_version=os.environ.get("AZURE_API_VERSION"),
 )
 ```
 
-> [!TIP]
-> We recommend using `o3-mini` for a balance of reasoning capability and cost efficiency.
+### Evaluator model support
+
+We support AzureOpenAI or OpenAI [reasoning models](../../../ai-services/openai/how-to/reasoning.md) and non-reasoning models for the LLM-judge depending on the evaluators:
+
+| Evaluators | Reasoning Models as Judge (example: o-series models from Azure OpenAI / OpenAI) | Non-reasoning models as Judge (example: gpt-4.1, gpt-4o, etc.) | To enable |
+|------------|-----------------------------------------------------------------------------|-------------------------------------------------------------|-------|
+| `Intent Resolution`, `Task Adherence`, `Tool Call Accuracy`, `Response Completeness` | Supported | Supported | Set additional parameter `is_reasoning_model=True` in initializing evaluators |
+| Other quality evaluators| Not Supported | Supported | -- |
+
+For complex evaluation that requires refined reasoning, we recommend a strong reasoning model like `o3-mini` and o-series mini models released afterwards with a balance of reasoning performance and cost efficiency.
 
 ## Coherence
 
@@ -59,7 +70,7 @@ coherence(
 
 ### Coherence output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

@@ -88,7 +99,7 @@ fluency(
 
 ### Fluency output
 
-The numerical score on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The numerical score on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

@@ -127,7 +138,7 @@ qa_eval(
 
 ### QA output
 
-While F1 score outputs a numerical score on 0-1 float scale, the other evaluators output numerical scores on a likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+While F1 score outputs a numerical score on 0-1 float scale, the other evaluators output numerical scores on a Likert scale (integer 1 to 5) and a higher score is better. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
 
 ```python
 {

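The hunks above reference `coherence(...)`, `fluency(...)`, and `qa_eval(...)` calls together with the corrected `model_config`. As an illustrative sketch only (not part of the diff above), here is roughly how the coherence and fluency evaluators might be instantiated and called; the `CoherenceEvaluator` and `FluencyEvaluator` class names, the `threshold` keyword, and the sample inputs are assumptions:

```python
import os

from azure.ai.evaluation import (
    AzureOpenAIModelConfiguration,
    CoherenceEvaluator,
    FluencyEvaluator,
)

model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    api_key=os.environ.get("AZURE_API_KEY"),
    azure_deployment=os.environ.get("AZURE_DEPLOYMENT_NAME"),
    api_version=os.environ.get("AZURE_API_VERSION"),
)

# The output descriptions above state a default threshold of 3 on a 1-5 Likert scale.
coherence = CoherenceEvaluator(model_config=model_config, threshold=3)
fluency = FluencyEvaluator(model_config=model_config, threshold=3)

query = "What is the capital of France?"
response = "The capital of France is Paris."

coherence_result = coherence(query=query, response=response)
fluency_result = fluency(response=response)  # fluency judges the response text on its own

# Expect, per the text above: an integer 1-5 score, "pass" if score >= threshold else "fail",
# and a reason field explaining the judgment.
print(coherence_result)
print(fluency_result)
```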
