
Commit d409543

Merge pull request #46241 from dotnet/main
Merge main into live
2 parents: 34fc947 + ce59cc4

17 files changed: +474 -66 lines changed

docs/ai/conceptual/evaluation-libraries.md

Lines changed: 23 additions & 23 deletions
@@ -4,9 +4,9 @@ description: Learn about the Microsoft.Extensions.AI.Evaluation libraries, which
 ms.topic: concept-article
 ms.date: 05/13/2025
 ---
-# The Microsoft.Extensions.AI.Evaluation libraries (Preview)
+# The Microsoft.Extensions.AI.Evaluation libraries
 
-The Microsoft.Extensions.AI.Evaluation libraries (currently in preview) simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results.
+The Microsoft.Extensions.AI.Evaluation libraries simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results.
 
 The evaluation libraries, which are built on top of the [Microsoft.Extensions.AI abstractions](../microsoft-extensions-ai.md), are composed of the following NuGet packages:
 
@@ -31,34 +31,34 @@ You can also customize to add your own evaluations by implementing the <xref:Mic
 
 Quality evaluators measure response quality. They use an LLM to perform the evaluation.
 
-| Metric | Description | Evaluator type |
-|----------------|--------------------------------------------------------|----------------|
-| `Relevance` | Evaluates how relevant a response is to a query | <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator> |
-| `Completeness` | Evaluates how comprehensive and accurate a response is | <xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator> |
-| `Retrieval` | Evaluates performance in retrieving information for additional context | <xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator> |
-| `Fluency` | Evaluates grammatical accuracy, vocabulary range, sentence complexity, and overall readability | <xref:Microsoft.Extensions.AI.Evaluation.Quality.FluencyEvaluator> |
-| `Coherence` | Evaluates the logical and orderly presentation of ideas | <xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator> |
-| `Equivalence` | Evaluates the similarity between the generated text and its ground truth with respect to a query | <xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator> |
-| `Groundedness` | Evaluates how well a generated response aligns with the given context | <xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator> |
-| `Relevance (RTC)`, `Truth (RTC)`, and `Completeness (RTC)` | Evaluates how relevant, truthful, and complete a response is | <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator> |
+| Evaluator type | Metric | Description |
+|----------------------------------------------------------------------|-------------|-------------|
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator> | `Relevance` | Evaluates how relevant a response is to a query |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator> | `Completeness` | Evaluates how comprehensive and accurate a response is |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator> | `Retrieval` | Evaluates performance in retrieving information for additional context |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.FluencyEvaluator> | `Fluency` | Evaluates grammatical accuracy, vocabulary range, sentence complexity, and overall readability |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator> | `Coherence` | Evaluates the logical and orderly presentation of ideas |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator> | `Equivalence` | Evaluates the similarity between the generated text and its ground truth with respect to a query |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator> | `Groundedness` | Evaluates how well a generated response aligns with the given context |
+| <xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator> | `Relevance (RTC)`, `Truth (RTC)`, and `Completeness (RTC)` | Evaluates how relevant, truthful, and complete a response is |
 
 † This evaluator is marked [experimental](../../fundamentals/syslib-diagnostics/experimental-overview.md).
 
 ### Safety evaluators
 
 Safety evaluators check for presence of harmful, inappropriate, or unsafe content in a response. They rely on the Azure AI Foundry Evaluation service, which uses a model that's fine tuned to perform evaluations.
 
-| Metric | Description | Evaluator type |
-|--------------------|-----------------------------------------------------------------------|------------------------------|
-| `Groundedness Pro` | Uses a fine-tuned model hosted behind the Azure AI Foundry Evaluation service to evaluate how well a generated response aligns with the given context | <xref:Microsoft.Extensions.AI.Evaluation.Safety.GroundednessProEvaluator> |
-| `Protected Material` | Evaluates response for the presence of protected material | <xref:Microsoft.Extensions.AI.Evaluation.Safety.ProtectedMaterialEvaluator> |
-| `Ungrounded Attributes` | Evaluates a response for the presence of content that indicates ungrounded inference of human attributes | <xref:Microsoft.Extensions.AI.Evaluation.Safety.UngroundedAttributesEvaluator> |
-| `Hate And Unfairness` | Evaluates a response for the presence of content that's hateful or unfair | <xref:Microsoft.Extensions.AI.Evaluation.Safety.HateAndUnfairnessEvaluator> |
-| `Self Harm` | Evaluates a response for the presence of content that indicates self harm | <xref:Microsoft.Extensions.AI.Evaluation.Safety.SelfHarmEvaluator> |
-| `Violence` | Evaluates a response for the presence of violent content | <xref:Microsoft.Extensions.AI.Evaluation.Safety.ViolenceEvaluator> |
-| `Sexual` | Evaluates a response for the presence of sexual content | <xref:Microsoft.Extensions.AI.Evaluation.Safety.SexualEvaluator> |
-| `Code Vulnerability` | Evaluates a response for the presence of vulnerable code | <xref:Microsoft.Extensions.AI.Evaluation.Safety.CodeVulnerabilityEvaluator> |
-| `Indirect Attack` | Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering | <xref:Microsoft.Extensions.AI.Evaluation.Safety.IndirectAttackEvaluator> |
+| Evaluator type | Metric | Description |
+|---------------------------------------------------------------------------|--------------------|-------------|
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.GroundednessProEvaluator> | `Groundedness Pro` | Uses a fine-tuned model hosted behind the Azure AI Foundry Evaluation service to evaluate how well a generated response aligns with the given context |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.ProtectedMaterialEvaluator> | `Protected Material` | Evaluates a response for the presence of protected material |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.UngroundedAttributesEvaluator> | `Ungrounded Attributes` | Evaluates a response for the presence of content that indicates ungrounded inference of human attributes |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.HateAndUnfairnessEvaluator> | `Hate And Unfairness` | Evaluates a response for the presence of content that's hateful or unfair |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.SelfHarmEvaluator> | `Self Harm` | Evaluates a response for the presence of content that indicates self harm |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.ViolenceEvaluator> | `Violence` | Evaluates a response for the presence of violent content |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.SexualEvaluator> | `Sexual` | Evaluates a response for the presence of sexual content |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.CodeVulnerabilityEvaluator> | `Code Vulnerability` | Evaluates a response for the presence of vulnerable code |
+| <xref:Microsoft.Extensions.AI.Evaluation.Safety.IndirectAttackEvaluator> | `Indirect Attack` | Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering |
 
 † In addition, the <xref:Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator> provides single-shot evaluation for the four metrics supported by `HateAndUnfairnessEvaluator`, `SelfHarmEvaluator`, `ViolenceEvaluator`, and `SexualEvaluator`.
 
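
For context on how the evaluators in these tables are consumed, the following is a minimal sketch, not part of this commit. It assumes a configured <xref:Microsoft.Extensions.AI.IChatClient> named `chatClient`; the prompt and variable names are illustrative only.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Wrap the IChatClient so that LLM-based evaluators can call the model.
ChatConfiguration chatConfiguration = new(chatClient);

List<ChatMessage> messages =
    [new ChatMessage(ChatRole.User, "What's the tallest mountain on Earth?")];
ChatResponse response = await chatClient.GetResponseAsync(messages);

// Every evaluator in the tables above implements IEvaluator,
// so they're all invoked the same way.
IEvaluator evaluator = new CoherenceEvaluator();
EvaluationResult result =
    await evaluator.EvaluateAsync(messages, response, chatConfiguration);

// Quality metrics are numeric scores.
NumericMetric coherence =
    result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
Console.WriteLine($"Coherence: {coherence.Value}");
```

The safety evaluators follow the same `IEvaluator` call shape; the main difference is that their `ChatConfiguration` is typically obtained from a `ContentSafetyServiceConfiguration` pointing at the Azure AI Foundry Evaluation service rather than at an LLM.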

docs/ai/quickstarts/build-chat-app.md

Lines changed: 0 additions & 3 deletions
@@ -14,9 +14,6 @@ zone_pivot_groups: openai-library
 
 In this quickstart, you learn how to create a conversational .NET console chat app using an OpenAI or Azure OpenAI model. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.
 
-> [!NOTE]
-> The [`Microsoft.Extensions.AI`](https://www.nuget.org/packages/Microsoft.Extensions.AI/) library is currently in Preview.
-
 :::zone target="docs" pivot="openai"
 
 [!INCLUDE [openai-prereqs](includes/prerequisites-openai.md)]
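
The quickstart itself walks through the full setup; as a rough sketch of the conversational pattern it builds, assuming a configured `IChatClient` named `chatClient` (names are illustrative):

```csharp
using Microsoft.Extensions.AI;

// The model is stateless: the full history is replayed on each call.
List<ChatMessage> history = [];

while (true)
{
    Console.Write("You: ");
    string? input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input))
        break;

    history.Add(new ChatMessage(ChatRole.User, input));

    ChatResponse response = await chatClient.GetResponseAsync(history);
    Console.WriteLine($"AI: {response.Text}");

    // Keep the assistant's turn so the next prompt has full context.
    history.AddRange(response.Messages);
}
```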

docs/ai/quickstarts/evaluate-ai-response.md

Lines changed: 9 additions & 11 deletions
@@ -1,19 +1,17 @@
 ---
-title: Quickstart - Evaluate a model's response
+title: Quickstart - Evaluate the quality of a model's response
 description: Learn how to create an MSTest app to evaluate the AI chat response of a language model.
 ms.date: 03/18/2025
 ms.topic: quickstart
 ms.custom: devx-track-dotnet, devx-track-dotnet-ai
 ---
 
-# Evaluate a model's response
+# Evaluate the quality of a model's response
 
-In this quickstart, you create an MSTest app to evaluate the chat response of an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries.
+In this quickstart, you create an MSTest app to evaluate the quality of a chat response from an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries.
 
 > [!NOTE]
->
-> - The `Microsoft.Extensions.AI.Evaluation` library is currently in Preview.
-> - This quickstart demonstrates the simplest usage of the evaluation API. Notably, it doesn't demonstrate use of the [response caching](../conceptual/evaluation-libraries.md#cached-responses) and [reporting](../conceptual/evaluation-libraries.md#reporting) functionality, which are important if you're authoring unit tests that run as part of an "offline" evaluation pipeline. The scenario shown in this quickstart is suitable in use cases such as "online" evaluation of AI responses within production code and logging scores to telemetry, where caching and reporting aren't relevant. For a tutorial that demonstrates the caching and reporting functionality, see [Tutorial: Evaluate a model's response with response caching and reporting](../tutorials/evaluate-with-reporting.md)
+> This quickstart demonstrates the simplest usage of the evaluation API. Notably, it doesn't demonstrate use of the [response caching](../conceptual/evaluation-libraries.md#cached-responses) and [reporting](../conceptual/evaluation-libraries.md#reporting) functionality, which are important if you're authoring unit tests that run as part of an "offline" evaluation pipeline. The scenario shown in this quickstart is suitable in use cases such as "online" evaluation of AI responses within production code and logging scores to telemetry, where caching and reporting aren't relevant. For a tutorial that demonstrates the caching and reporting functionality, see [Tutorial: Evaluate a model's response with response caching and reporting](../tutorials/evaluate-with-reporting.md).
 
 ## Prerequisites
 
@@ -39,9 +37,9 @@ Complete the following steps to create an MSTest project that connects to the `g
 ```dotnetcli
 dotnet add package Azure.AI.OpenAI
 dotnet add package Azure.Identity
-dotnet add package Microsoft.Extensions.AI.Abstractions --prerelease
-dotnet add package Microsoft.Extensions.AI.Evaluation --prerelease
-dotnet add package Microsoft.Extensions.AI.Evaluation.Quality --prerelease
+dotnet add package Microsoft.Extensions.AI.Abstractions
+dotnet add package Microsoft.Extensions.AI.Evaluation
+dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
 dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
 dotnet add package Microsoft.Extensions.Configuration
 dotnet add package Microsoft.Extensions.Configuration.UserSecrets
@@ -51,9 +49,9 @@ Complete the following steps to create an MSTest project that connects to the `g
 
 ```bash
 dotnet user-secrets init
-dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-azure-openai-endpoint>
+dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-Azure-OpenAI-endpoint>
 dotnet user-secrets set AZURE_OPENAI_GPT_NAME gpt-4o
-dotnet user-secrets set AZURE_TENANT_ID <your-tenant-id>
+dotnet user-secrets set AZURE_TENANT_ID <your-tenant-ID>
 ```
 
 (Depending on your environment, the tenant ID might not be needed. In that case, remove it from the code that instantiates the <xref:Azure.Identity.DefaultAzureCredential>.)
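
As a sketch of how the test code might read these secrets back, assuming `Program` as the user-secrets anchor type (the key names match the commands above; everything else is illustrative):

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.Configuration;

// Read the values stored with `dotnet user-secrets set`.
IConfiguration config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

string endpoint = config["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_ENDPOINT first.");
string deployment = config["AZURE_OPENAI_GPT_NAME"] ?? "gpt-4o";

// Pass the tenant ID only if your environment needs it (see the note above).
var credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions { TenantId = config["AZURE_TENANT_ID"] });

var client = new AzureOpenAIClient(new Uri(endpoint), credential);
```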

docs/ai/quickstarts/prompt-model.md

Lines changed: 0 additions & 3 deletions
@@ -14,9 +14,6 @@ zone_pivot_groups: openai-library
 
 In this quickstart, you learn how to create a .NET console chat app to connect to and prompt an OpenAI or Azure OpenAI model. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.
 
-> [!NOTE]
-> The <xref:Microsoft.Extensions.AI> library is currently in Preview.
-
 :::zone target="docs" pivot="openai"
 
 [!INCLUDE [openai-prereqs](includes/prerequisites-openai.md)]
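
As a rough sketch of the prompt-and-response pattern this quickstart builds, assuming a configured `IChatClient` named `chatClient` (the prompts are illustrative):

```csharp
using Microsoft.Extensions.AI;

// One-shot prompt: the string overload wraps the text in a user message.
ChatResponse response =
    await chatClient.GetResponseAsync("What is AI? Answer in one sentence.");
Console.WriteLine(response.Text);

// Alternatively, stream the answer as it's generated.
await foreach (ChatResponseUpdate update in
    chatClient.GetStreamingResponseAsync("Explain .NET in two sentences."))
{
    Console.Write(update.Text);
}
```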

docs/ai/quickstarts/structured-output.md

Lines changed: 1 addition & 4 deletions
@@ -10,9 +10,6 @@ ms.custom: devx-track-dotnet, devx-track-dotnet-ai
 
 In this quickstart, you create a chat app that requests a response with *structured output*. A structured output response is a chat response that's of a type you specify instead of just plain text. The chat app you create in this quickstart analyzes sentiment of various product reviews, categorizing each review according to the values of a custom enumeration.
 
-> [!NOTE]
-> The <xref:Microsoft.Extensions.AI> library, which is used in this quickstart, is currently in Preview.
-
 ## Prerequisites
 
 - [.NET 8 or a later version](https://dotnet.microsoft.com/download)
@@ -37,7 +34,7 @@ Complete the following steps to create a console app that connects to the `gpt-4
 ```dotnetcli
 dotnet add package Azure.AI.OpenAI
 dotnet add package Azure.Identity
-dotnet add package Microsoft.Extensions.AI --prerelease
+dotnet add package Microsoft.Extensions.AI
 dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
 dotnet add package Microsoft.Extensions.Configuration
 dotnet add package Microsoft.Extensions.Configuration.UserSecrets
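
As a rough sketch of the structured-output pattern the quickstart builds, assuming a configured `IChatClient` named `chatClient` (the `Sentiment` enum and review text are illustrative):

```csharp
using Microsoft.Extensions.AI;

// The generic overload asks the model for JSON that matches the type
// and deserializes it, so the result is typed rather than plain text.
var response = await chatClient.GetResponseAsync<Sentiment>(
    """
    Categorize this review: "I'm thrilled with these headphones!"
    """);
Console.WriteLine(response.Result); // e.g., Positive

// The custom enumeration each review is categorized into.
enum Sentiment
{
    Positive,
    Negative,
    Neutral
}
```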

docs/ai/quickstarts/use-function-calling.md

Lines changed: 2 additions & 5 deletions
@@ -14,9 +14,6 @@ zone_pivot_groups: openai-library
 
 In this quickstart, you create a .NET console AI chat app to connect to an AI model with local function calling enabled. The app uses the <xref:Microsoft.Extensions.AI> library so you can write code using AI abstractions rather than a specific SDK. AI abstractions enable you to change the underlying AI model with minimal code changes.
 
-> [!NOTE]
-> The [`Microsoft.Extensions.AI`](https://www.nuget.org/packages/Microsoft.Extensions.AI/) library is currently in Preview.
-
 :::zone target="docs" pivot="openai"
 
 [!INCLUDE [openai-prereqs](includes/prerequisites-openai.md)]
@@ -54,7 +51,7 @@ Complete the following steps to create a .NET console app to connect to an AI mo
 ```bash
 dotnet add package Azure.Identity
 dotnet add package Azure.AI.OpenAI
-dotnet add package Microsoft.Extensions.AI --prerelease
+dotnet add package Microsoft.Extensions.AI
 dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
 dotnet add package Microsoft.Extensions.Configuration
 dotnet add package Microsoft.Extensions.Configuration.UserSecrets
@@ -65,7 +62,7 @@ Complete the following steps to create a .NET console app to connect to an AI mo
 :::zone target="docs" pivot="openai"
 
 ```bash
-dotnet add package Microsoft.Extensions.AI --prerelease
+dotnet add package Microsoft.Extensions.AI
 dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
 dotnet add package Microsoft.Extensions.Configuration
 dotnet add package Microsoft.Extensions.Configuration.UserSecrets
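
As a rough sketch of the function-calling pattern the quickstart builds, assuming a pre-configured inner `IChatClient` named `innerChatClient` (the weather function is illustrative):

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// A local function the model is allowed to invoke as a tool.
[Description("Gets the current weather for a city.")]
string GetWeather(string city) => $"It's sunny in {city}.";

// UseFunctionInvocation wraps the client so that tool calls requested
// by the model are executed locally and their results sent back.
IChatClient client = new ChatClientBuilder(innerChatClient)
    .UseFunctionInvocation()
    .Build();

ChatOptions options = new()
{
    Tools = [AIFunctionFactory.Create(GetWeather)]
};

ChatResponse response = await client.GetResponseAsync(
    "Should I bring an umbrella in Seattle today?", options);
Console.WriteLine(response.Text);
```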

docs/ai/toc.yml

Lines changed: 4 additions & 2 deletions
@@ -81,9 +81,11 @@ items:
   items:
   - name: The Microsoft.Extensions.AI.Evaluation libraries
     href: conceptual/evaluation-libraries.md
-  - name: "Quickstart: Evaluate a model's response"
+  - name: "Quickstart: Evaluate the quality of a response"
     href: quickstarts/evaluate-ai-response.md
-  - name: "Tutorial: Evaluate a response with response caching and reporting"
+  - name: "Tutorial: Evaluate the safety of a response"
+    href: tutorials/evaluate-safety.md
+  - name: "Tutorial: Evaluate a response with caching and reporting"
     href: tutorials/evaluate-with-reporting.md
 - name: Resources
   items:
