
Commit 7e38a87

Merge pull request #7708 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-16 17:06 UTC
2 parents 022882e + 3001cc8 commit 7e38a87

File tree: 9 files changed (+654, −173 lines)

articles/ai-foundry/concepts/evaluation-evaluators/custom-evaluators.md

Lines changed: 14 additions & 10 deletions
@@ -1,11 +1,11 @@
---
-title: Custom evaluators
+title: Custom Evaluators
titleSuffix: Azure AI Foundry
description: Learn how to create custom evaluators for your AI applications using code-based or prompt-based approaches.
author: lgayhardt
ms.author: lagayhar
ms.reviewer: mithigpe
-ms.date: 07/31/2025
+ms.date: 10/16/2025
ms.service: azure-ai-foundry
ms.topic: reference
ms.custom:

@@ -15,11 +15,11 @@ ms.custom:

# Custom evaluators

-Built-in evaluators are great out of the box to start evaluating your application's generations. However you might want to build your own code-based or prompt-based evaluator to cater to your specific evaluation needs.
+To start evaluating your application's generations, built-in evaluators are great out of the box. However, to cater to your specific evaluation needs, you can build your own code-based or prompt-based evaluator.

## Code-based evaluators

-Sometimes a large language model isn't needed for certain evaluation metrics. This is when code-based evaluators can give you the flexibility to define metrics based on functions or callable class. You can build your own code-based evaluator, for example, by creating a simple Python class that calculates the length of an answer in `answer_length.py` under directory `answer_len/`:
+You don't need a large language model for certain evaluation metrics. Code-based evaluators give you the flexibility to define metrics based on functions or callable classes. For example, you can build your own code-based evaluator by creating a simple Python class that calculates the length of an answer in `answer_length.py` under the directory `answer_len/`, as in the following example.

### Code-based evaluator example: Answer length
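
The surrounding hunks only show fragments of this example. As a reference point, a minimal self-contained sketch of the callable-class pattern the article describes might look like the following. Only the class name, the `answer` keyword, and the returned key come from the snippets shown in this diff; the rest is illustrative.

```python
# A minimal sketch of a code-based evaluator: a plain Python class whose __call__
# returns a dictionary of metric values. Illustrative only; in the article this class
# lives in answer_len/answer_length.py and is imported from there.
class AnswerLengthEvaluator:
    def __init__(self):
        pass

    def __call__(self, *, answer: str, **kwargs):
        # Return a dict so the metric can be aggregated like other evaluator outputs.
        return {"answer_length": len(answer)}


# Run the evaluator on a single row of data.
answer_length_evaluator = AnswerLengthEvaluator()
answer_length = answer_length_evaluator(answer="What is the speed of light?")
print(answer_length)  # {'answer_length': 27}
```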

@@ -32,7 +32,7 @@ class AnswerLengthEvaluator:
        return {"answer_length": len(answer)}
```

-Then run the evaluator on a row of data by importing a callable class:
+Run the evaluator on a row of data by importing a callable class:

```python
from answer_len.answer_length import AnswerLengthEvaluator
@@ -49,13 +49,17 @@ answer_length = answer_length_evaluator(answer="What is the speed of light?")

## Prompt-based evaluators

-To build your own prompt-based large language model evaluator or AI-assisted annotator, you can create a custom evaluator based on a **Prompty** file. Prompty is a file with `.prompty` extension for developing prompt template. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format that contains many metadata fields that define model configuration and expected inputs of the Prompty. Let's create a custom evaluator `FriendlinessEvaluator` to measure friendliness of a response.
+To build your own prompt-based large language model evaluator or AI-assisted annotator, you can create a custom evaluator based on a *Prompty* file.
+
+Prompty is a file with the `.prompty` extension for developing prompt templates. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format. It contains metadata fields that define the model configuration and expected inputs of the Prompty.
+
+To measure the friendliness of a response, you can create a custom evaluator `FriendlinessEvaluator`:

### Prompt-based evaluator example: Friendliness evaluator

-First, create a `friendliness.prompty` file that describes the definition of the friendliness metric and its grading rubric:
+First, create a `friendliness.prompty` file that defines the friendliness metric and its grading rubric:

-```markdown
+```md
---
name: Friendliness Evaluator
description: Friendliness Evaluator to measure warmth and approachability of answers.
@@ -108,7 +112,7 @@ generated_query: {{response}}
output:
```

-Then create a class `FriendlinessEvaluator` to load the Prompty file and process the outputs with json format:
+Then create a class `FriendlinessEvaluator` to load the Prompty file and process the outputs in JSON format:

```python
import os
@@ -132,7 +136,7 @@ class FriendlinessEvaluator:
        return response
```
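
The class body is truncated by the diff context above. As a rough sketch only, a Prompty-loading evaluator along the lines the article describes could look like the following. The `load_flow` loader and its `model` argument are assumptions about the promptflow SDK, not part of this commit; check the version you have installed.

```python
# A hedged sketch of a Prompty-based evaluator class. Assumptions: promptflow's
# load_flow() can load a .prompty file with a model configuration, and the prompt
# is written to return a JSON string.
import json
import os

from promptflow.client import load_flow


class FriendlinessEvaluator:
    def __init__(self, model_config):
        prompty_path = os.path.join(os.path.dirname(__file__), "friendliness.prompty")
        self._flow = load_flow(source=prompty_path, model={"configuration": model_config})

    def __call__(self, *, response: str, **kwargs):
        llm_response = self._flow(response=response)
        try:
            # The Prompty is expected to emit JSON, for example {"score": 4, "explanation": "..."}.
            response = json.loads(llm_response)
        except json.JSONDecodeError:
            # Fall back to the raw string if the model didn't return valid JSON.
            response = llm_response
        return response
```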

-Now, you can create your own Prompty-based evaluator and run it on a row of data:
+Now, create your own Prompty-based evaluator and run it on a row of data:

```python
from friendliness.friend import FriendlinessEvaluator
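
# A hedged usage sketch (assumed; the rest of this snippet is truncated by the diff).
# The class is assumed to take a model configuration and to be called with a response string.
friendliness_eval = FriendlinessEvaluator(model_config)
friendliness_score = friendliness_eval(response="I will not apologize for my behavior!")
print(friendliness_score)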

articles/ai-foundry/concepts/evaluation-evaluators/textual-similarity-evaluators.md

Lines changed: 21 additions & 16 deletions
@@ -1,11 +1,11 @@
---
-title: Textual similarity evaluators for generative AI
+title: Textual Similarity Evaluators for Generative AI
titleSuffix: Azure AI Foundry
description: Learn about textual similarity evaluators for generative AI, including semantic similarity, F1 score, BLEU, GLEU, ROUGE, and METEOR metrics.
author: lgayhardt
ms.author: lagayhar
ms.reviewer: changliu2
-ms.date: 07/31/2025
+ms.date: 10/16/2025
ms.service: azure-ai-foundry
ms.topic: reference
ms.custom:

@@ -15,7 +15,9 @@ ms.custom:

# Textual similarity evaluators

-It's important to compare how closely the textual response generated by your AI system matches the response you would expect, typically called the "ground truth". Use LLM-judge metric like [`SimilarityEvaluator`](#similarity) with a focus on the semantic similarity between the generated response and the ground truth, or use metrics from the field of natural language processing (NLP) including [F1 Score](#f1-score), [BLEU](#bleu-score), [GLEU](#gleu-score), [ROUGE](#rouge-score), and [METEOR](#meteor-score) with a focus on the overlaps of tokens or n-grams between the two.
+It's important to compare how closely the textual response generated by your AI system matches the response you would expect. The expected response is called the *ground truth*.
+
+Use an LLM-judge metric like [`SimilarityEvaluator`](#similarity) with a focus on the semantic similarity between the generated response and the ground truth. Or, use metrics from the field of natural language processing (NLP), including [F1 score](#f1-score), [BLEU](#bleu-score), [GLEU](#gleu-score), [ROUGE](#rouge-score), and [METEOR](#meteor-score), which focus on the overlaps of tokens or n-grams between the two.

## Model configuration for AI-assisted evaluators

@@ -36,11 +38,11 @@ model_config = AzureOpenAIModelConfiguration(
```

> [!TIP]
-> We recommend using `o3-mini` for a balance of reasoning capability and cost efficiency.
+> We recommend that you use `o3-mini` to balance reasoning capability and cost efficiency.

## Similarity

-`SimilarityEvaluator` measures the degrees of semantic similarity between the generated text and its ground truth with respect to a query. Compared to other text-similarity metrics that require ground truths, this metric focuses on semantics of a response (instead of simple overlap in tokens or n-grams) and also considers the broader context of a query.
+`SimilarityEvaluator` measures the degree of semantic similarity between the generated text and its ground truth with respect to a query. Compared to other text-similarity metrics that require ground truths, this metric focuses on the semantics of a response, instead of simple overlap in tokens or n-grams. It also considers the broader context of a query.

### Similarity example
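
The example call is truncated by the diff context below. A minimal sketch of the usual construct-and-call pattern, assuming the `azure.ai.evaluation` package and an Azure OpenAI model configuration like the one shown earlier (endpoint and deployment values are placeholders):

```python
# A minimal sketch; assumes the azure-ai-evaluation package. The query/response/
# ground_truth keyword arguments are inferred from the truncated example below.
from azure.ai.evaluation import AzureOpenAIModelConfiguration, SimilarityEvaluator

model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    azure_deployment="o3-mini",
)

similarity = SimilarityEvaluator(model_config=model_config)
result = similarity(
    query="Is Marie Curie known for her contributions to science?",
    response="Marie Curie was a physicist and chemist who conducted pioneering research on radioactivity.",
    ground_truth="Marie Curie is famous for her pioneering research on radioactivity.",
)
print(result)  # a dict with the similarity score, a pass/fail result, and a reason field
```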

@@ -57,7 +59,7 @@ similarity(

### Similarity output

-The numerical score on a likert scale (integer 1 to 5) and a higher score means a higher degree of similarity. Given a numerical threshold (default to 3), we also output "pass" if the score >= threshold, or "fail" otherwise. Using the reason field can help you understand why the score is high or low.
+The output is a numerical score on a Likert scale (integer 1 to 5). A higher score means a higher degree of similarity. Given a numerical threshold (default 3), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise. Use the reason field to understand why the score is high or low.

```python
{
@@ -70,7 +72,10 @@ The numerical score on a likert scale (integer 1 to 5) and a higher score means

## F1 score

-`F1ScoreEvaluator` measures the similarity by shared tokens between the generated text and the ground truth, focusing on both precision and recall. The F1-score computes the ratio of the number of shared words between the model generation and the ground truth. Ratio is computed over the individual words in the generated response against those in the ground truth answer. The number of shared words between the generation and the truth is the basis of the F1 score. Precision is the ratio of the number of shared words to the total number of words in the generation. Recall is the ratio of the number of shared words to the total number of words in the ground truth.
+`F1ScoreEvaluator` measures the similarity by shared tokens between the generated text and the ground truth. It focuses on both precision and recall. The F1 score is computed from the number of shared words between the model generation and the ground truth: the individual words in the generated response are compared against those in the ground truth answer, and the count of shared words is the basis of the score.
+
+- *Precision* is the ratio of the number of shared words to the total number of words in the generation.
+- *Recall* is the ratio of the number of shared words to the total number of words in the ground truth.
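
The F1 score itself is the harmonic mean of precision and recall. The following worked sketch (not from the article) makes the arithmetic concrete; the real `F1ScoreEvaluator` may tokenize and normalize text differently.

```python
# Worked example of the precision/recall/F1 arithmetic described above.
from collections import Counter

generation = "Paris is the capital city of France".lower().split()    # 7 words
ground_truth = "The capital of France is Paris".lower().split()       # 6 words

# Shared words, counted as a multiset intersection (6 here; "city" is not shared).
shared = sum((Counter(generation) & Counter(ground_truth)).values())

precision = shared / len(generation)   # 6/7: shared words over words in the generation
recall = shared / len(ground_truth)    # 6/6: shared words over words in the ground truth
f1 = 2 * precision * recall / (precision + recall) if shared else 0.0

print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.857 1.0 0.923
```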

### F1 score example

@@ -86,7 +91,7 @@ f1_score(

### F1 score output

-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float. A higher score is better. Given a numerical threshold (default 0.5), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise.

```python
{
@@ -98,7 +103,7 @@ The numerical score is a 0-1 float and a higher score is better. Given a numeric

## BLEU score

-`BleuScoreEvaluator` computes the BLEU (Bilingual Evaluation Understudy) score commonly used in natural language processing (NLP) and machine translation. It measures how closely the generated text matches the reference text.
+`BleuScoreEvaluator` computes the Bilingual Evaluation Understudy (BLEU) score commonly used in natural language processing and machine translation. It measures how closely the generated text matches the reference text.

### BLEU example
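
The example call is truncated by the diff context below. A minimal sketch of the usual pattern, assuming the `azure.ai.evaluation` package; unlike the LLM-judge metrics, BLEU needs no model configuration because it's a purely lexical, n-gram-based metric:

```python
# A minimal sketch; assumes the azure-ai-evaluation package. The response/ground_truth
# keyword arguments are inferred from the truncated example below.
from azure.ai.evaluation import BleuScoreEvaluator

bleu_score = BleuScoreEvaluator()
result = bleu_score(
    response="The capital of France is Paris.",
    ground_truth="Paris is the capital of France.",
)
# Per the output description, the 0-1 score is compared against a threshold
# (default 0.5) to also produce a pass/fail result.
print(result)
```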

@@ -114,7 +119,7 @@ bleu_score(

### BLEU output

-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float. A higher score is better. Given a numerical threshold (default 0.5), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise.

```python
{
@@ -126,7 +131,7 @@ The numerical score is a 0-1 float and a higher score is better. Given a numeric

## GLEU score

-`GleuScoreEvaluator` computes the GLEU (Google-BLEU) score. It measures the similarity by shared n-grams between the generated text and ground truth, similar to the BLEU score, focusing on both precision and recall. But it addresses the drawbacks of the BLEU score using a per-sentence reward objective.
+`GleuScoreEvaluator` computes the Google-BLEU (GLEU) score. It measures the similarity by shared n-grams between the generated text and the ground truth. Similar to the BLEU score, it focuses on both precision and recall, but it addresses the drawbacks of the BLEU score by using a per-sentence reward objective.

### GLEU score example

@@ -142,7 +147,7 @@ gleu_score(

### GLEU score output

-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float. A higher score is better. Given a numerical threshold (default 0.5), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise.

```python
{
@@ -154,7 +159,7 @@ The numerical score is a 0-1 float and a higher score is better. Given a numeric

## ROUGE score

-`RougeScoreEvaluator` computes the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores, a set of metrics used to evaluate automatic summarization and machine translation. It measures the overlap between generated text and reference summaries. ROUGE focuses on recall-oriented measures to assess how well the generated text covers the reference text. The ROUGE score is composed of precision, recall, and F1 score.
+`RougeScoreEvaluator` computes the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores, a set of metrics used to evaluate automatic summarization and machine translation. It measures the overlap between generated text and reference summaries. ROUGE focuses on recall-oriented measures to assess how well the generated text covers the reference text. The ROUGE score is composed of precision, recall, and F1 score.

### ROUGE score example
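
The example call is truncated by the diff context below. A minimal sketch of the usual pattern, assuming the `azure.ai.evaluation` package; the `rouge_type` argument and `RougeType` enum are assumptions recalled from the SDK and may differ by version:

```python
# A minimal sketch; assumes the azure-ai-evaluation package. rouge_type/RougeType
# are assumptions; check the SDK version you have installed.
from azure.ai.evaluation import RougeScoreEvaluator, RougeType

rouge = RougeScoreEvaluator(rouge_type=RougeType.ROUGE_L)
result = rouge(
    response="The capital of France is Paris.",
    ground_truth="Paris is the capital of France.",
)
# Per the article, the ROUGE output includes precision, recall, and an F1 score.
print(result)
```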

@@ -170,7 +175,7 @@ rouge(

### ROUGE score output

-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float. A higher score is better. Given a numerical threshold (default 0.5), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise.

```python
{
@@ -188,7 +193,7 @@ The numerical score is a 0-1 float and a higher score is better. Given a numeric

## METEOR score

-`MeteorScoreEvaluator` measures the similarity by shared n-grams between the generated text and the ground truth, similar to the BLEU score, focusing on precision and recall. But it addresses limitations of other metrics like the BLEU score by considering synonyms, stemming, and paraphrasing for content alignment.
+`MeteorScoreEvaluator` measures the similarity by shared n-grams between the generated text and the ground truth. Similar to the BLEU score, it focuses on precision and recall. It addresses limitations of other metrics like the BLEU score by considering synonyms, stemming, and paraphrasing for content alignment.

### METEOR score example

@@ -204,7 +209,7 @@ meteor_score(

### METEOR score output

-The numerical score is a 0-1 float and a higher score is better. Given a numerical threshold (default to 0.5), we also output "pass" if the score >= threshold, or "fail" otherwise.
+The numerical score is a 0-1 float. A higher score is better. Given a numerical threshold (default 0.5), the evaluator also outputs *pass* if the score >= threshold, or *fail* otherwise.

```python
{
