**docs/ai/conceptual/evaluation-libraries.md** (+22 −9)

---
title: The Microsoft.Extensions.AI.Evaluation libraries
description: Learn about the Microsoft.Extensions.AI.Evaluation libraries, which simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps.
ms.topic: concept-article
ms.date: 05/09/2025
---

# The Microsoft.Extensions.AI.Evaluation libraries (Preview)

The Microsoft.Extensions.AI.Evaluation libraries (currently in preview) simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps.

The evaluation libraries, which are built on top of the [Microsoft.Extensions.AI abstractions](../microsoft-extensions-ai.md), are composed of the following NuGet packages:

- [📦 Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) – Defines the core abstractions and types for supporting evaluation.
- [📦 Microsoft.Extensions.AI.Evaluation.Quality](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) – Contains evaluators that assess the quality of LLM responses in an app according to metrics such as relevance and completeness. These evaluators use the LLM directly to perform evaluations.
- [📦 Microsoft.Extensions.AI.Evaluation.Safety](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) – Contains evaluators, such as the `ProtectedMaterialEvaluator` and `ContentHarmEvaluator`, that use the [Azure AI Foundry](/azure/ai-foundry/) Evaluation service to perform evaluations.
- [📦 Microsoft.Extensions.AI.Evaluation.Reporting](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting) – Contains support for caching LLM responses, storing the results of evaluations, and generating reports from that data.
- [📦 Microsoft.Extensions.AI.Evaluation.Reporting.Azure](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting.Azure) – Supports the reporting library with an implementation for caching LLM responses and storing the evaluation results in an [Azure Storage](/azure/storage/common/storage-introduction) container.
- [📦 Microsoft.Extensions.AI.Evaluation.Console](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Console) – A command-line tool for generating reports and managing evaluation data.
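
These package names come straight from the list above. As a rough sketch, you could add the ones your app needs with the .NET CLI; the `--prerelease` flag is required while the libraries are in preview:

```dotnetcli
dotnet add package Microsoft.Extensions.AI.Evaluation --prerelease
dotnet add package Microsoft.Extensions.AI.Evaluation.Quality --prerelease
dotnet add package Microsoft.Extensions.AI.Evaluation.Safety --prerelease
dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting --prerelease
```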

The libraries are designed to integrate smoothly with existing .NET apps, allowing you to leverage existing testing infrastructures and familiar syntax to evaluate intelligent apps.

The evaluation libraries were built in collaboration with data science researchers from Microsoft and GitHub, and were tested on popular Microsoft Copilot experiences. The following table shows the built-in evaluators.

| Metric | Description | Evaluator type |
|--------|-------------|----------------|
| Relevance, truth, and completeness | How effectively a response addresses a query | <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator> |
| Relevance | Evaluates how relevant a response is to a query | `RelevanceEvaluator`<!-- <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator> --> |
| Completeness | Evaluates how comprehensive and accurate a response is | `CompletenessEvaluator`<!-- <xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator> --> |
| Retrieval | Evaluates performance in retrieving information for additional context | `RetrievalEvaluator`<!-- <xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator> --> |
| Coherence | Evaluates the logical and orderly presentation of ideas | <xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator> |
| Equivalence | Evaluates the similarity between the generated text and its ground truth with respect to a query | <xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator> |
| Groundedness | Evaluates how well a generated response aligns with the given context | <xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator><br />`GroundednessProEvaluator` |
| Protected material | Evaluates a response for the presence of protected material | `ProtectedMaterialEvaluator` |
| Ungrounded human attributes | Evaluates a response for the presence of content that indicates ungrounded inference of human attributes | `UngroundedAttributesEvaluator` |
| Hate content | Evaluates a response for the presence of content that's hateful or unfair | `HateAndUnfairnessEvaluator`† |
| Self-harm content | Evaluates a response for the presence of content that indicates self-harm | `SelfHarmEvaluator`† |
| Violent content | Evaluates a response for the presence of violent content | `ViolenceEvaluator`† |
| Sexual content | Evaluates a response for the presence of sexual content | `SexualEvaluator`† |
| Code vulnerability content | Evaluates a response for the presence of vulnerable code | `CodeVulnerabilityEvaluator` |
| Indirect attack content | Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering | `IndirectAttackEvaluator` |

† In addition, the `ContentHarmEvaluator` provides single-shot evaluation for the four metrics supported by `HateAndUnfairnessEvaluator`, `SelfHarmEvaluator`, `ViolenceEvaluator`, and `SexualEvaluator`.
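
To make the flow concrete, here's a minimal sketch of evaluating a response with one of the quality evaluators. It assumes you already have an `IChatClient` (named `chatClient` here) and that the preview API shape (for example, `ChatConfiguration` and `CoherenceEvaluator.CoherenceMetricName`) matches your package version:

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Assumes `chatClient` is an IChatClient connected to your model.
// LLM-based quality evaluators use this client to perform the evaluation.
ChatConfiguration chatConfiguration = new(chatClient);

var messages = new List<ChatMessage>
{
    new(ChatRole.User, "Describe dependency injection in one paragraph.")
};
ChatResponse response = await chatClient.GetResponseAsync(messages);

IEvaluator evaluator = new CoherenceEvaluator();
EvaluationResult result = await evaluator.EvaluateAsync(
    messages, response, chatConfiguration);

// Each evaluator reports one or more named metrics; coherence is numeric.
NumericMetric coherence =
    result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
Console.WriteLine($"Coherence: {coherence.Value}");
```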
You can also add your own custom evaluations by implementing the <xref:Microsoft.Extensions.AI.Evaluation.IEvaluator> interface or extending base classes such as <xref:Microsoft.Extensions.AI.Evaluation.Quality.ChatConversationEvaluator> and <xref:Microsoft.Extensions.AI.Evaluation.Quality.SingleNumericMetricEvaluator>, as in the sketch that follows.
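
Here's a minimal sketch of a custom evaluator, assuming the preview `IEvaluator` shape (two members: `EvaluationMetricNames` and `EvaluateAsync`). The hypothetical `ConcisenessEvaluator` scores responses with a simple word-count heuristic rather than an LLM:

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;

// Hypothetical example: a custom evaluator that rates conciseness on a
// 1-5 scale. The member signatures below reflect the preview IEvaluator
// shape and might change between preview releases.
public class ConcisenessEvaluator : IEvaluator
{
    public const string ConcisenessMetricName = "Conciseness";

    public IReadOnlyCollection<string> EvaluationMetricNames =>
        new[] { ConcisenessMetricName };

    public ValueTask<EvaluationResult> EvaluateAsync(
        IEnumerable<ChatMessage> messages,
        ChatResponse modelResponse,
        ChatConfiguration? chatConfiguration = null,
        IEnumerable<EvaluationContext>? additionalContext = null,
        CancellationToken cancellationToken = default)
    {
        // Simple heuristic: fewer words means a more concise response.
        int wordCount = (modelResponse.Text ?? string.Empty)
            .Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
        double rating = wordCount switch
        {
            <= 50 => 5,
            <= 100 => 4,
            <= 200 => 3,
            <= 400 => 2,
            _ => 1,
        };

        var metric = new NumericMetric(ConcisenessMetricName, rating);
        return new ValueTask<EvaluationResult>(new EvaluationResult(metric));
    }
}
```

A custom evaluator like this plugs into the same reporting pipeline as the built-in ones.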

**docs/ai/tutorials/evaluate-with-reporting.md** (+22 −22)

---
title: Tutorial - Evaluate a model's response
description: Create an MSTest app and add a custom evaluator to evaluate the AI chat response of a language model, and learn how to use the caching and reporting features of Microsoft.Extensions.AI.Evaluation.
ms.date: 05/09/2025
ms.topic: tutorial
ms.custom: devx-track-dotnet-ai
---

# Tutorial: Evaluate a model's response with response caching and reporting

In this tutorial, you create an MSTest app to evaluate the chat response of an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries to perform the evaluations, cache the model responses, and create reports. The tutorial uses both built-in and custom evaluators.
## Prerequisites

Complete the following steps to create an MSTest project that connects to the `gpt-4o` AI model.

1. In a terminal window, navigate to the directory where you want to create your app, and create a new MSTest app with the `dotnet new` command:

   ```dotnetcli
   dotnet new mstest -o TestAIWithReporting
   ```

1. Navigate to the `TestAIWithReporting` directory, and add the necessary packages to your app:
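
   The exact package list isn't shown in this excerpt. As a hypothetical sketch based on the libraries described earlier plus the Azure OpenAI client and authentication packages, the commands look something like this:

   ```dotnetcli
   dotnet add package Microsoft.Extensions.AI.Evaluation --prerelease
   dotnet add package Microsoft.Extensions.AI.Evaluation.Quality --prerelease
   dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting --prerelease
   dotnet add package Azure.AI.OpenAI
   dotnet add package Azure.Identity
   ```
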
1. Run the following commands to add [app secrets](/aspnet/core/security/app-secrets) for your Azure OpenAI endpoint, model name, and tenant ID:

   ```bash
   dotnet user-secrets init
   dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-azure-openai-endpoint>
   dotnet user-secrets set AZURE_OPENAI_GPT_NAME gpt-4o
   dotnet user-secrets set AZURE_TENANT_ID <your-tenant-id>
   ```

   (Depending on your environment, the tenant ID might not be needed. In that case, remove it from the code that instantiates the <xref:Azure.Identity.DefaultAzureCredential>.)
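
For reference, here's a rough sketch of how test setup code might read these secrets and authenticate. The `MyTests` type name is hypothetical, and the actual tutorial code may differ:

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;

// Read the user secrets set above. AddUserSecrets<T>() locates the secrets
// store through a type from the test assembly (MyTests is hypothetical).
IConfigurationRoot config = new ConfigurationBuilder()
    .AddUserSecrets<MyTests>()
    .Build();

string endpoint = config["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_ENDPOINT.");
string deployment = config["AZURE_OPENAI_GPT_NAME"] ?? "gpt-4o";

// If your environment doesn't need a tenant ID, use new DefaultAzureCredential().
var credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions { TenantId = config["AZURE_TENANT_ID"] });

// Wrap the deployment's chat client as an IChatClient for the evaluators.
// (AsIChatClient comes from Microsoft.Extensions.AI.OpenAI; some preview
// versions name this extension AsChatClient.)
IChatClient chatClient = new AzureOpenAIClient(new Uri(endpoint), credential)
    .GetChatClient(deployment)
    .AsIChatClient();
```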