docs/ai/conceptual/evaluation-libraries.md (23 additions & 23 deletions)
@@ -4,9 +4,9 @@ description: Learn about the Microsoft.Extensions.AI.Evaluation libraries, which
ms.topic: concept-article
ms.date: 05/13/2025
---
-# The Microsoft.Extensions.AI.Evaluation libraries (Preview)
+# The Microsoft.Extensions.AI.Evaluation libraries

-The Microsoft.Extensions.AI.Evaluation libraries (currently in preview) simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results.
+The Microsoft.Extensions.AI.Evaluation libraries simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results.

The evaluation libraries, which are built on top of the [Microsoft.Extensions.AI abstractions](../microsoft-extensions-ai.md), are composed of the following NuGet packages:
@@ -31,34 +31,34 @@ You can also customize to add your own evaluations by implementing the <xref:Mic

Quality evaluators measure response quality. They use an LLM to perform the evaluation.

-|`Relevance`| Evaluates how relevant a response is to a query|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator>|
-|`Completeness`| Evaluates how comprehensive and accurate a response is|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator>|
-|`Retrieval`| Evaluates performance in retrieving information for additional context|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator>|
-|`Coherence`| Evaluates the logical and orderly presentation of ideas|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator>|
-|`Equivalence`| Evaluates the similarity between the generated text and its ground truth with respect to a query|<xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator>|
-|`Groundedness`| Evaluates how well a generated response aligns with the given context|<xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator>|
-|`Relevance (RTC)`, `Truth (RTC)`, and `Completeness (RTC)`| Evaluates how relevant, truthful, and complete a response is|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator>†|
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator>|`Relevance`| Evaluates how relevant a response is to a query |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator>|`Completeness`| Evaluates how comprehensive and accurate a response is |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator>|`Retrieval`| Evaluates performance in retrieving information for additional context |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator>|`Coherence`| Evaluates the logical and orderly presentation of ideas |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator>|`Equivalence`| Evaluates the similarity between the generated text and its ground truth with respect to a query |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator>|`Groundedness`| Evaluates how well a generated response aligns with the given context |
+|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator>† |`Relevance (RTC)`, `Truth (RTC)`, and `Completeness (RTC)`| Evaluates how relevant, truthful, and complete a response is |

† This evaluator is marked [experimental](../../fundamentals/syslib-diagnostics/experimental-overview.md).
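To show how a quality evaluator is typically invoked, here's a minimal sketch (not part of the changed file) that scores one response for coherence. It assumes an existing `IChatClient`, and the `CoherenceEvaluator.CoherenceMetricName` constant is an assumption to verify against the installed package version.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// `chatClient` is an existing IChatClient; the same client both answers the question
// and (via ChatConfiguration) acts as the LLM that performs the evaluation.
static async Task<double?> ScoreCoherenceAsync(IChatClient chatClient)
{
    var chatConfiguration = new ChatConfiguration(chatClient);

    List<ChatMessage> messages =
    [
        new(ChatRole.User, "Explain how evaluations help when testing AI apps.")
    ];

    // The response that will be graded.
    ChatResponse response = await chatClient.GetResponseAsync(messages);

    IEvaluator evaluator = new CoherenceEvaluator();
    EvaluationResult result =
        await evaluator.EvaluateAsync(messages, response, chatConfiguration);

    // Metric-name constant assumed; verify the exact member name in the package you use.
    NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
    return coherence.Value;
}
```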
### Safety evaluators

Safety evaluators check for the presence of harmful, inappropriate, or unsafe content in a response. They rely on the Azure AI Foundry Evaluation service, which uses a model that's fine-tuned to perform evaluations.
-|`Groundedness Pro`| Uses a fine-tuned model hosted behind the Azure AI Foundry Evaluation service to evaluate how well a generated response aligns with the given context|<xref:Microsoft.Extensions.AI.Evaluation.Safety.GroundednessProEvaluator>|
-|`Protected Material`| Evaluates response for the presence of protected material|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ProtectedMaterialEvaluator>|
-|`Ungrounded Attributes`| Evaluates a response for the presence of content that indicates ungrounded inference of human attributes|<xref:Microsoft.Extensions.AI.Evaluation.Safety.UngroundedAttributesEvaluator>|
-|`Hate And Unfairness`| Evaluates a response for the presence of content that's hateful or unfair|<xref:Microsoft.Extensions.AI.Evaluation.Safety.HateAndUnfairnessEvaluator>†|
-|`Self Harm`| Evaluates a response for the presence of content that indicates self harm|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SelfHarmEvaluator>†|
-|`Violence`| Evaluates a response for the presence of violent content|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ViolenceEvaluator>†|
-|`Sexual`| Evaluates a response for the presence of sexual content|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SexualEvaluator>†|
-|`Code Vulnerability`| Evaluates a response for the presence of vulnerable code|<xref:Microsoft.Extensions.AI.Evaluation.Safety.CodeVulnerabilityEvaluator>|
-|`Indirect Attack`| Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering|<xref:Microsoft.Extensions.AI.Evaluation.Safety.IndirectAttackEvaluator>|
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.GroundednessProEvaluator>|`Groundedness Pro`| Uses a fine-tuned model hosted behind the Azure AI Foundry Evaluation service to evaluate how well a generated response aligns with the given context |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ProtectedMaterialEvaluator>|`Protected Material`| Evaluates a response for the presence of protected material |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.UngroundedAttributesEvaluator>|`Ungrounded Attributes`| Evaluates a response for the presence of content that indicates ungrounded inference of human attributes |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.HateAndUnfairnessEvaluator>† |`Hate And Unfairness`| Evaluates a response for the presence of content that's hateful or unfair |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SelfHarmEvaluator>† |`Self Harm`| Evaluates a response for the presence of content that indicates self-harm |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ViolenceEvaluator>† |`Violence`| Evaluates a response for the presence of violent content |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SexualEvaluator>† |`Sexual`| Evaluates a response for the presence of sexual content |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.CodeVulnerabilityEvaluator>|`Code Vulnerability`| Evaluates a response for the presence of vulnerable code |
+|<xref:Microsoft.Extensions.AI.Evaluation.Safety.IndirectAttackEvaluator>|`Indirect Attack`| Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering |

† In addition, the <xref:Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator> provides single-shot evaluation for the four metrics supported by `HateAndUnfairnessEvaluator`, `SelfHarmEvaluator`, `ViolenceEvaluator`, and `SexualEvaluator`.
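Safety evaluators follow the same `IEvaluator` pattern, but their `ChatConfiguration` must point at the Azure AI Foundry Evaluation service instead of an LLM. The following rough sketch illustrates the idea; `ContentSafetyServiceConfiguration`, its parameter names, and the `ToChatConfiguration()` helper are assumptions drawn from the preview packages and may differ in the version you install.

```csharp
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Assumed service configuration type: points the safety evaluators at an
// Azure AI Foundry project rather than at an LLM. Parameter names are illustrative.
static async Task<EvaluationResult> CheckForContentHarmAsync(
    IList<ChatMessage> messages, ChatResponse response)
{
    var serviceConfiguration = new ContentSafetyServiceConfiguration(
        credential: new DefaultAzureCredential(),
        subscriptionId: "<your-subscription-ID>",
        resourceGroupName: "<your-resource-group>",
        projectName: "<your-project>");

    // ContentHarmEvaluator covers hate/unfairness, self-harm, violence, and sexual content
    // in a single evaluation call.
    IEvaluator evaluator = new ContentHarmEvaluator();

    // ToChatConfiguration() is assumed to adapt the service configuration for EvaluateAsync.
    return await evaluator.EvaluateAsync(
        messages, response, serviceConfiguration.ToChatConfiguration());
}
```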
In this quickstart, you learn how to create a conversational .NET console chat app using an OpenAI or Azure OpenAI model. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.

-> [!NOTE]
-> The [`Microsoft.Extensions.AI`](https://www.nuget.org/packages/Microsoft.Extensions.AI/) library is currently in Preview.
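For orientation, the pattern this quickstart builds toward looks roughly like the sketch below. It shows only the `Microsoft.Extensions.AI` abstraction layer; constructing the `IChatClient` is provider-specific, and method names such as `GetResponseAsync` have shifted across preview releases, so check them against the package you install.

```csharp
using Microsoft.Extensions.AI;

// `chatClient` construction is provider-specific (OpenAI or Azure OpenAI) and omitted here;
// only the abstraction layer is shown.
static async Task RunChatLoopAsync(IChatClient chatClient)
{
    List<ChatMessage> history = [];

    while (true)
    {
        Console.Write("You: ");
        string? input = Console.ReadLine();
        if (string.IsNullOrWhiteSpace(input))
        {
            break;
        }

        history.Add(new ChatMessage(ChatRole.User, input));

        // The same call works against any provider that implements IChatClient.
        ChatResponse response = await chatClient.GetResponseAsync(history);
        Console.WriteLine($"Assistant: {response.Text}");

        // Keep the assistant's reply in the conversation history.
        history.Add(new ChatMessage(ChatRole.Assistant, response.Text));
    }
}
```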
-In this quickstart, you create an MSTest app to evaluate the chat response of an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries.
+In this quickstart, you create an MSTest app to evaluate the quality of a chat response from an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries.

> [!NOTE]
->
-> - The `Microsoft.Extensions.AI.Evaluation` library is currently in Preview.
-> - This quickstart demonstrates the simplest usage of the evaluation API. Notably, it doesn't demonstrate use of the [response caching](../conceptual/evaluation-libraries.md#cached-responses) and [reporting](../conceptual/evaluation-libraries.md#reporting) functionality, which are important if you're authoring unit tests that run as part of an "offline" evaluation pipeline. The scenario shown in this quickstart is suitable in use cases such as "online" evaluation of AI responses within production code and logging scores to telemetry, where caching and reporting aren't relevant. For a tutorial that demonstrates the caching and reporting functionality, see [Tutorial: Evaluate a model's response with response caching and reporting](../tutorials/evaluate-with-reporting.md)
+> This quickstart demonstrates the simplest usage of the evaluation API. Notably, it doesn't demonstrate use of the [response caching](../conceptual/evaluation-libraries.md#cached-responses) and [reporting](../conceptual/evaluation-libraries.md#reporting) functionality, which is important if you're authoring unit tests that run as part of an "offline" evaluation pipeline. The scenario shown in this quickstart is suitable for use cases such as "online" evaluation of AI responses within production code and logging scores to telemetry, where caching and reporting aren't relevant. For a tutorial that demonstrates the caching and reporting functionality, see [Tutorial: Evaluate a model's response with response caching and reporting](../tutorials/evaluate-with-reporting.md).
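To make "the simplest usage of the evaluation API" concrete, a bare-bones test might look like the following sketch. The `TestSetup.GetChatClient()` helper is hypothetical, and the metric-name constant and passing threshold are illustrative rather than prescribed by the library.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public sealed class ChatResponseQualityTests
{
    [TestMethod]
    public async Task ResponseIsCoherent()
    {
        // GetChatClient() is a hypothetical helper that builds an IChatClient
        // from configuration (for example, the user secrets set later in this quickstart).
        IChatClient chatClient = TestSetup.GetChatClient();
        var chatConfiguration = new ChatConfiguration(chatClient);

        List<ChatMessage> messages =
            [new ChatMessage(ChatRole.User, "Describe what an evaluation metric is.")];
        ChatResponse response = await chatClient.GetResponseAsync(messages);

        EvaluationResult result = await new CoherenceEvaluator()
            .EvaluateAsync(messages, response, chatConfiguration);

        // Metric-name constant and the passing threshold are illustrative.
        NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
        Assert.IsTrue(coherence.Value is >= 4, "Expected a coherence score of at least 4 out of 5.");
    }
}
```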
## Prerequisites

@@ -39,9 +37,9 @@ Complete the following steps to create an MSTest project that connects to the `g
@@ -51,9 +49,9 @@ Complete the following steps to create an MSTest project that connects to the `g
```bash
dotnet user-secrets init
-dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-azure-openai-endpoint>
+dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-Azure-OpenAI-endpoint>
dotnet user-secrets set AZURE_OPENAI_GPT_NAME gpt-4o
-dotnet user-secrets set AZURE_TENANT_ID <your-tenant-id>
+dotnet user-secrets set AZURE_TENANT_ID <your-tenant-ID>
```

(Depending on your environment, the tenant ID might not be needed. In that case, remove it from the code that instantiates the <xref:Azure.Identity.DefaultAzureCredential>.)
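Reading those secrets back and turning them into an `IChatClient` typically looks something like the following sketch. The adapter method that converts the Azure OpenAI `ChatClient` into an `IChatClient` has been renamed across previews, so treat that call as an assumption to verify.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder().AddUserSecrets<Program>().Build();

string endpoint = config["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_ENDPOINT with `dotnet user-secrets`.");
string deployment = config["AZURE_OPENAI_GPT_NAME"] ?? "gpt-4o";
string? tenantId = config["AZURE_TENANT_ID"];

// If your environment doesn't need an explicit tenant, use `new DefaultAzureCredential()` instead.
var credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions { TenantId = tenantId });

IChatClient chatClient =
    new AzureOpenAIClient(new Uri(endpoint), credential)
        .GetChatClient(deployment)
        .AsIChatClient(); // Adapter name has varied across previews (AsChatClient/AsIChatClient).
```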
In this quickstart, you learn how to create a .NET console chat app to connect to and prompt an OpenAI or Azure OpenAI model. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.

-> [!NOTE]
-> The <xref:Microsoft.Extensions.AI> library is currently in Preview.
In this quickstart, you create a chat app that requests a response with *structured output*. A structured output response is a chat response that's of a type you specify instead of just plain text. The chat app you create in this quickstart analyzes the sentiment of various product reviews, categorizing each review according to the values of a custom enumeration.

-> [!NOTE]
-> The <xref:Microsoft.Extensions.AI> library, which is used in this quickstart, is currently in Preview.
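As a rough illustration of requesting a typed response, the following sketch classifies a review into a custom enumeration. It assumes the generic `GetResponseAsync<T>` structured-output extension provided by the `Microsoft.Extensions.AI` package; the enum and prompt are illustrative.

```csharp
using Microsoft.Extensions.AI;

// `chatClient` is an existing IChatClient.
static async Task<Sentiment> ClassifyReviewAsync(IChatClient chatClient, string review)
{
    // The generic overload asks the model for JSON that matches the requested type
    // and deserializes it into that type.
    ChatResponse<Sentiment> response = await chatClient.GetResponseAsync<Sentiment>(
        $"What's the sentiment of the following product review? {review}");

    return response.Result;
}

// Custom enumeration whose values the model's answer is mapped onto.
enum Sentiment
{
    Positive,
    Negative,
    Neutral
}
```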
## Prerequisites

- [.NET 8 or a later version](https://dotnet.microsoft.com/download)

@@ -37,7 +34,7 @@ Complete the following steps to create a console app that connects to the `gpt-4
In this quickstart, you create a .NET console AI chat app to connect to an AI model with local function calling enabled. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.

-> [!NOTE]
-> The [`Microsoft.Extensions.AI`](https://www.nuget.org/packages/Microsoft.Extensions.AI/) library is currently in Preview.
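To sketch what local function calling looks like with these abstractions: you expose a .NET method as a tool and wrap the client so tool calls from the model are invoked automatically. The weather function below is illustrative, and the builder and extension names come from the `Microsoft.Extensions.AI` preview, so verify them against the version you install.

```csharp
using Microsoft.Extensions.AI;

// Illustrative local function the model can call.
[System.ComponentModel.Description("Gets the current weather for a city.")]
static string GetWeather(string city) =>
    city == "Redmond" ? "59 degrees and raining" : "72 degrees and sunny";

static async Task AskAboutWeatherAsync(IChatClient innerClient)
{
    // Wrap the provider's client so tool calls from the model are invoked automatically.
    IChatClient client = new ChatClientBuilder(innerClient)
        .UseFunctionInvocation()
        .Build();

    var options = new ChatOptions
    {
        Tools = [AIFunctionFactory.Create(GetWeather)]
    };

    ChatResponse response = await client.GetResponseAsync(
        "Should I bring an umbrella in Redmond today?", options);

    Console.WriteLine(response.Text);
}
```

Because the tool is described from the method's .NET signature and `Description` attribute, the model receives a schema for `GetWeather` without any manual JSON schema authoring.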