6 changes: 3 additions & 3 deletions docs/ai/tutorials/evaluate-with-reporting.md
@@ -77,7 +77,7 @@ Complete the following steps to create an MSTest project that connects to the `g

**Scenario name**

-The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:
+The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:

- When using disk-based storage, the scenario name is used as the name of the folder under which the corresponding evaluation results are stored. So it's a good idea to keep the name reasonably short and avoid any characters that aren't allowed in file and directory names.
- By default, the generated evaluation report splits scenario names on `.` so that the results can be displayed in a hierarchical view with appropriate grouping, nesting, and aggregation. This is especially useful in cases where the scenario name is set to the fully qualified name of the corresponding test method, since it allows the results to be grouped by namespaces and class names in the hierarchy. However, you can also take advantage of this feature by including periods (`.`) in your own custom scenario names to create a reporting hierarchy that works best for your scenarios.
@@ -94,7 +94,7 @@ Complete the following steps to create an MSTest project that connects to the `g

A <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration> identifies:

-- The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+- The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
- The LLM endpoint that the evaluators should use (see <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.ChatConfiguration?displayProperty=nameWithType>).
- How and where the results for the scenario runs should be stored.
- How LLM responses related to the scenario runs should be cached.
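
To make those pieces concrete, here's a minimal sketch of creating a disk-based reporting configuration and a scenario run. It's an illustration with placeholder values rather than the tutorial's exact code, and it assumes `chatClient` is an already-configured <xref:Microsoft.Extensions.AI.IChatClient>:

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

// A sketch of a disk-based reporting configuration. The storage path,
// evaluator list, and execution name are placeholder values; `chatClient`
// is assumed to be an IChatClient you've already configured.
ReportingConfiguration reportingConfiguration =
    DiskBasedReportingConfiguration.Create(
        storageRootPath: @"C:\TestReports",
        evaluators: [new RelevanceTruthAndCompletenessEvaluator()],
        chatConfiguration: new ChatConfiguration(chatClient),
        enableResponseCaching: true,
        executionName: $"Run_{DateTime.Now:yyyyMMddTHHmmss}");

// Scenario runs created from this configuration are evaluated by the
// evaluators above and stored under C:\TestReports. Periods in the scenario
// name produce the hierarchical grouping described earlier.
await using ScenarioRun scenarioRun =
    await reportingConfiguration.CreateScenarioRunAsync("MyTests.Chatbot.Distance");
```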
@@ -171,4 +171,4 @@ Run the test using your preferred test workflow, for example, by using the CLI c
- Navigate to the directory where the test results are stored (which is `C:\TestReports`, unless you modified the location when you created the <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration>). In the `results` subdirectory, notice that there's a folder for each test run named with a timestamp (`ExecutionName`). Inside each of those folders is a folder for each scenario name&mdash;in this case, just the single test method in the project. That folder contains a JSON file with all the data, including the messages, response, and evaluation result.
- Expand the evaluation. Here are a couple of ideas:
- Add an additional custom evaluator, such as [an evaluator that uses AI to determine the measurement system](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/evaluation/Evaluators/MeasurementSystemEvaluator.cs) that's used in the response.
-- Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+- Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
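
As a hedged sketch of that pattern (reusing the hypothetical `reportingConfiguration` and `chatClient` from the earlier example), sampling three responses might look like this:

```csharp
// Sample and evaluate three responses to the same question. Assumes the
// `reportingConfiguration` and `chatClient` from the earlier sketch.
List<ChatMessage> messages =
[
    new ChatMessage(ChatRole.System, "Answer in one concise paragraph."),
    new ChatMessage(ChatRole.User, "How far away is the Moon from Earth?"),
];

for (int i = 1; i <= 3; i++)
{
    // Same scenario name, different iteration name for each sample.
    await using ScenarioRun scenarioRun =
        await reportingConfiguration.CreateScenarioRunAsync(
            "MyTests.Chatbot.Distance", iterationName: i.ToString());

    ChatResponse response = await chatClient.GetResponseAsync(messages);
    EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response);
}
```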
2 changes: 0 additions & 2 deletions docs/architecture/cloud-native/resilient-communications.md
@@ -84,8 +84,6 @@ The Azure cloud embraces Istio and provides direct support for it within Azure K

- [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf)

-- [network latency](https://www.techopedia.com/definition/8553/network-latency)

- [Redundancy](/azure/architecture/guide/design-principles/redundancy)

- [geo-replication](/azure/sql-database/sql-database-active-geo-replication)
@@ -374,9 +374,9 @@ But the application is configured so it accesses all the microservices through t

### The Gateway aggregation pattern in eShopOnContainers

-As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. You could also implement request aggregation with the [Request Aggregation feature in Ocelot](https://ocelot.readthedocs.io/en/latest/features/requestaggregation.html#request-aggregation), but it might not be as flexible as you need. Therefore, the selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.
+As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. The selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.

-According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that are not shown in the simplified global architecture diagram shown previously.
+According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that aren't shown in the simplified global architecture diagram shown previously.

In the following diagram, you can also see how the aggregator services work with their related API Gateways.

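As a rough sketch of that approach (the routes, service addresses, and types here are hypothetical placeholders, not eShopOnContainers' actual aggregator code), an explicit aggregator is just a small ASP.NET Core Web API that fans out to the downstream microservices over HTTP and composes a single response:

```csharp
// Hypothetical minimal aggregator: fans out to two downstream microservices
// and composes one response for the client.
using System.Net.Http.Json;

var builder = WebApplication.CreateBuilder(args);

// Named HttpClients pointing at the internal service addresses (placeholders).
builder.Services.AddHttpClient("catalog", c => c.BaseAddress = new Uri("http://catalog-api"));
builder.Services.AddHttpClient("basket", c => c.BaseAddress = new Uri("http://basket-api"));

var app = builder.Build();

app.MapGet("/aggregate/basket-with-details/{buyerId}",
    async (string buyerId, IHttpClientFactory factory) =>
    {
        var basketClient = factory.CreateClient("basket");
        var catalogClient = factory.CreateClient("catalog");

        // Call the downstream services in parallel.
        var basketTask = basketClient.GetFromJsonAsync<Basket>($"/api/v1/basket/{buyerId}");
        var itemsTask = catalogClient.GetFromJsonAsync<List<CatalogItem>>("/api/v1/catalog/items");
        await Task.WhenAll(basketTask, itemsTask);

        // Compose a single response from both results.
        return Results.Ok(new { Basket = basketTask.Result, Catalog = itemsTask.Result });
    });

app.Run();

// Hypothetical DTO shapes, shown only to make the sketch self-contained.
public record Basket(string BuyerId, List<BasketItem> Items);
public record BasketItem(int ProductId, int Quantity);
public record CatalogItem(int Id, string Name, decimal Price);
```
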
@@ -572,7 +572,7 @@ There are other important features to research and use, when using an Ocelot API

- **Rate limiting** \
[https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html](https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html )

- **Swagger for Ocelot** \
[https://github.com/Burgyn/MMLib.SwaggerForOcelot](https://github.com/Burgyn/MMLib.SwaggerForOcelot)

6 changes: 3 additions & 3 deletions docs/azure/index.yml
@@ -262,11 +262,11 @@ additionalContent:
- title: Webcasts and shows
links:
- text: Azure Friday
-url: https://azure.microsoft.com/resources/videos/azure-friday/
+url: /shows/azure-friday/
- text: The Cloud Native Show
-url: /Shows/The-Cloud-Native-Show
+url: /shows/The-Cloud-Native-Show
- text: On .NET
-url: /Shows/On-NET
+url: /shows/On-NET
- text: .NET Community Standup
url: https://dotnet.microsoft.com/platform/community/standup
- text: On .NET Live
19 changes: 9 additions & 10 deletions docs/core/testing/unit-testing-with-copilot.md
@@ -3,11 +3,12 @@ title: Generate Unit Tests with Copilot
author: sigmade
description: How to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot
ms.date: 01/12/2025
+ms.collection: ce-skilling-ai-copilot
---

# Generate unit tests with GitHub Copilot

-In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot.
+In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot. Using Visual Studio in combination with GitHub Copilot significantly simplifies the process of generating and writing unit tests.

## Create a test project

@@ -53,21 +54,21 @@ In the **Create Unit Tests** dialog, select **xUnit** from the **Test Framework*

:::image type="content" source="media/create-unit-test-window.png" lightbox="media/create-unit-test-window.png" alt-text="Create Unit Tests window":::

-* If you don't have a test project yet, choose "New Test Project" or select an existing one.
-* If necessary, specify a template for the namespace, class, and method name, then click OK.
+- If you don't have a test project yet, choose **New Test Project** or select an existing one.
+- If necessary, specify a template for the namespace, class, and method name, then click **OK**.

-After a few seconds, Visual Studio will pull in the necessary packages, and we will get a generated xUnit project with the required packages, structure, a reference to the project being tested, and with the `ProductServiceTests` class and a stub method.
+After a few seconds, Visual Studio will pull in the necessary packages, and you'll get a generated xUnit project with the required packages and structure, a reference to the project being tested, and the `ProductServiceTests` class and a stub method.

:::image type="content" source="media/test-mehod-stub.png" lightbox="media/test-mehod-stub.png" alt-text="Generated stub method":::

## Generate the tests themselves

- Select the method being tested again.
-- Right-click - **Ask Copilot**.
+- Right-click and select **Ask Copilot**.
- Enter a simple prompt, such as:

"generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."
"Generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."

You need to select your test class when you type the `#` character.

> [!TIP]
@@ -77,8 +78,6 @@

Execute the prompt, click **Accept**, and Copilot generates the test code. After that, all that remains is to install the necessary packages.

-When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows very well how to work with NSubstitute, and all dependencies were defined through interfaces.
+When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows how to work with NSubstitute, and all dependencies were defined through interfaces.
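
For a sense of what that looks like, here's a sketch in the spirit of the generated test. The `IProductRepository`, `ProductService`, and `Product` shapes are hypothetical stand-ins, not the article's actual sample code:

```csharp
using System.Threading.Tasks;
using NSubstitute;
using Xunit;

public class ProductServiceTests
{
    [Fact]
    public async Task GetProductNameAsync_ReturnsName_WhenProductExists()
    {
        // Arrange: substitute the interface-based dependency.
        var repository = Substitute.For<IProductRepository>();
        repository.GetByIdAsync(1).Returns(new Product(1, "Keyboard"));
        var service = new ProductService(repository);

        // Act
        var name = await service.GetProductNameAsync(1);

        // Assert
        Assert.Equal("Keyboard", name);
    }
}

// Hypothetical shapes of the code under test, shown only to make the
// sketch self-contained.
public record Product(int Id, string Name);

public interface IProductRepository
{
    Task<Product?> GetByIdAsync(int id);
}

public class ProductService(IProductRepository repository)
{
    public async Task<string?> GetProductNameAsync(int id) =>
        (await repository.GetByIdAsync(id))?.Name;
}
```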

:::image type="content" source="media/test-copilot-result.png" lightbox="media/test-copilot-result.png" alt-text="Generated tests":::

-Thus, using **Visual Studio** in combination with **GitHub Copilot** significantly simplifies the process of generating and writing unit tests.
1 change: 0 additions & 1 deletion docs/csharp/whats-new/csharp-version-history.md
@@ -340,7 +340,6 @@ C# version 5.0, released with Visual Studio 2012, was a focused version of the l

- [Asynchronous members](../asynchronous-programming/index.md)
- [Caller info attributes](../language-reference/attributes/caller-information.md)
-- [Code Project: Caller Info Attributes in C# 5.0](https://www.codeproject.com/Tips/606379/Caller-Info-Attributes-in-Csharp)

The caller info attribute lets you easily retrieve information about the context in which you're running without resorting to a ton of boilerplate reflection code. It has many uses in diagnostics and logging tasks.
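
For example (an illustrative sketch, not code from the linked articles), a small logging helper might use the attributes like this:

```csharp
using System;
using System.Runtime.CompilerServices;

static class Log
{
    // The compiler supplies these optional arguments at each call site.
    public static void Trace(
        string message,
        [CallerMemberName] string member = "",
        [CallerFilePath] string file = "",
        [CallerLineNumber] int line = 0)
    {
        Console.WriteLine("{0}({1}) {2}: {3}", file, line, member, message);
    }
}

// Calling Log.Trace("starting") from MyMethod prints something like:
// C:\src\App\Program.cs(42) MyMethod: starting
```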
