Commit 2856752

Merge pull request #43784 from dotnet/main
Merge main into live
2 parents b90f476 + db88975 commit 2856752

359 files changed (+875, -1007 lines)

.github/policies/close-issues.yml

Lines changed: 9 additions & 0 deletions
@@ -19,3 +19,12 @@ configuration:
 - addReply:
     reply: This issue has been automatically closed due to no response from the original author. Feel free to reopen it if you have more information that can help us investigate the issue further.
 - closeIssue
+
+eventResponderTasks:
+- description: Close issues labeled 'code-of-conduct'
+  if:
+  - payloadType: Issues
+  - hasLabel:
+      label: code-of-conduct
+  then:
+  - closeIssue

docs/ai/conceptual/agents.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: "Agents and Copilots Bring Automation and Interactive Assistance to Your
 description: "Learn how agents and copilots intelligently extend the functionality of LLMs to automatically meet user goals in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/15/2024
+ms.date: 11/24/2024

 #customer intent: As a .NET developer, I want to understand how agents and copilots extend the functionality of LLMs, so that my apps can handle any type of content and automatically meet user goals.

docs/ai/conceptual/chain-of-thought-prompting.md

Lines changed: 5 additions & 6 deletions
@@ -3,19 +3,17 @@ title: "Chain of Thought Prompting - .NET"
 description: "Learn how chain of thought prompting can simplify prompt engineering."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/10/2024
+ms.date: 11/24/2024

 #customer intent: As a .NET developer, I want to understand what chain-of-thought prompting is and how it can help me save time and get better completions out of prompt engineering.

 ---

 # Chain of thought prompting

-This article explains the use of chain of thought prompting in .NET.
+GPT model performance and response quality benefits from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. You can help the model reason its way toward correct answers more reliably by asking for the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.

-GPT model performance benefits from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. You can help the model reason its way toward correct answers more reliably by asking for the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.
-
-*Chain of thought prompting* is the practice of prompting a GPT model to perform a task step-by-step and to present each step and its result in order in the output. This simplifies prompt engineering by offloading some execution planning to the model, and makes it easier to connect any problem to a specific step so you know where to focus further efforts.
+*Chain of thought prompting* is the practice of prompting a model to perform a task step-by-step and to present each step and its result in order in the output. This simplifies prompt engineering by offloading some execution planning to the model, and makes it easier to connect any problem to a specific step so you know where to focus further efforts.

 It's generally simpler to just instruct the model to include its chain of thought, but you can use examples to show the model how to break down tasks. The following sections show both ways.

@@ -24,7 +22,8 @@ It's generally simpler to just instruct the model to include its chain of though
 To use an instruction for chain of thought prompting, include a directive that tells the model to perform the task step-by-step and to output the result of each step.

 ```csharp
-prompt= "Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles. Break the task into steps, and output the result of each step as you perform it.";
+prompt= """Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles.
+Break the task into steps, and output the result of each step as you perform it.""";
 ```

 ## Use chain of thought prompting in examples

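For context on the instruction-style prompt above, here's a minimal sketch of one way to send it to a chat model with the Semantic Kernel SDK. The deployment name, endpoint, and API key are placeholder assumptions for illustration, not values from the article.

```csharp
using Microsoft.SemanticKernel;

// Placeholder Azure OpenAI settings; substitute your own deployment, endpoint, and key.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "<deployment-name>",
        endpoint: "https://<resource-name>.openai.azure.com",
        apiKey: "<api-key>")
    .Build();

// Instruction-style chain of thought prompt: ask the model to show each step and its result.
string prompt = """
    Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles.
    Break the task into steps, and output the result of each step as you perform it.
    """;

var result = await kernel.InvokePromptAsync(prompt);
Console.WriteLine(result);
```

The completion should then include the intermediate steps the model worked through, not just its final comparison.
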
docs/ai/conceptual/how-genai-and-llms-work.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: "How Generative AI and LLMs work"
 description: "Understand how Generative AI and large language models (LLMs) work and how they might be useful in your .NET projects."
 author: haywoodsloan
 ms.topic: concept-article
-ms.date: 04/04/2024
+ms.date: 11/24/2024

 #customer intent: As a .NET developer, I want to understand how Generative AI and large language models (LLMs) work and how they may be useful in my .NET projects.

docs/ai/conceptual/rag.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: "Integrate Your Data into AI Apps with Retrieval-Augmented Generation"
 description: "Learn how retrieval-augmented generation lets you use your data with LLMs to generate better completions in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/15/2024
+ms.date: 11/24/2024

 #customer intent: As a .NET developer, I want to understand how retrieval-augmented generation works in .NET so that LLMs can use my data sources to provide more valuable completions.

docs/ai/conceptual/vector-databases.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: "Using Vector Databases to Extend LLM Capabilities"
 description: "Learn how vector databases extend LLM capabilities by storing and processing embeddings in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 05/16/2024
+ms.date: 11/24/2024

 #customer intent: As a .NET developer, I want to learn how vector databases store and process embeddings in .NET so I can make more data available to LLMs in my apps.

docs/ai/conceptual/zero-shot-learning.md

Lines changed: 16 additions & 18 deletions
@@ -3,7 +3,7 @@ title: "Zero-shot and few-shot learning"
 description: "Learn the use cases for zero-shot and few-shot learning in prompt engineering."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/12/2024
+ms.date: 11/25/2024

 #customer intent: As a .NET developer, I want to understand how zero-shot and few-shot learning techniques can help me improve my prompt engineering.

@@ -13,11 +13,13 @@ ms.date: 04/12/2024

 This article explains zero-shot learning and few-shot learning for prompt engineering in .NET, including their primary use cases.

-GPT model performance benefits from *prompt engineering*, the practice of providing instructions and examples to a model to refine its output. Zero-shot learning and few-shot learning are techniques that you can use when providing examples.
+GPT model performance benefits from *prompt engineering*, the practice of providing instructions and examples to a model to refine its output. Zero-shot learning and few-shot learning are techniques you can use when providing examples.

-With zero-shot learning, you include prompts but not verbatim completions. You can include completions that only consist of cues. Zero-shot learning relies entirely on the model's existing knowledge to generate responses, which reduces the number of tokens created and can help you control costs. However, zero-shot learning doesn't add to the model's knowledge.
+## Zero-shot learning

-Here's an example zero-shot prompt that tells the model to evaluate user input to determine which of four possible intents the input represents, and then to preface its response with **"Intent: "**.
+Zero-shot learning is the practice of passing prompts that aren't paired with verbatim completions, although you can include completions that consist of cues. Zero-shot learning relies entirely on the model's existing knowledge to generate responses, which reduces the number of tokens created and can help you control costs. However, zero-shot learning doesn't add to the model's knowledge or context.
+
+Here's an example zero-shot prompt that tells the model to evaluate user input to determine which of four possible intents the input represents, and then to preface the response with **"Intent: "**.

 ```csharp
 prompt = $"""
@@ -29,7 +31,14 @@ Intent:
 """;
 ```

-With few-shot learning, you include prompts paired with verbatim completions. Compared to zero-shot learning, this means few-shot learning produces more tokens and causes the model to update its knowledge, which can make few-shot learning more resource-intensive. However, for the same reasons few-shot learning also helps the model produce more relevant responses.
+There are two primary use cases for zero-shot learning:
+
+- **Work with fined-tuned LLMs** - Because it relies on the model's existing knowledge, zero-shot learning is not as resource-intensive as few-shot learning, and it works well with LLMs that have already been fined-tuned on instruction datasets. You might be able to rely solely on zero-shot learning and keep costs relatively low.
+- **Establish performance baselines** - Zero-shot learning can help you simulate how your app would perform for actual users. This lets you evaluate various aspects of your model's current performance, such as accuracy or precision. In this case, you typically use zero-shot learning to establish a performance baseline and then experiment with few-shot learning to improve performance.
+
+## Few-shot learning
+
+Few-shot learning is the practice of passing prompts paired with verbatim completions (few-shot prompts) to show your model how to respond. Compared to zero-shot learning, this means few-shot learning produces more tokens and causes the model to update its knowledge, which can make few-shot learning more resource-intensive. However, few-shot learning also helps the model produce more relevant responses.

 ```csharp
 prompt = $"""
@@ -48,21 +57,10 @@ Intent:
 """;
 ```

-## Zero-shot learning use cases
-
-Zero-shot learning is the practice of passing prompts that aren't paired with verbatim completions, although they can be paired with a cue. There are two primary use cases for zero-shot learning:
-
-- **Working with fined-tuned LLMs** - Because it relies on the model's existing knowledge, zero-shot learning is not as resource-intensive as few-shot learning, and it works well with LLMs that have already been fined-tuned on instruction datasets. You might be able to rely solely on zero-shot learning and keep costs relatively low.
-- **Establish performance baselines** - Zero-shot learning can help you simulate how your app would perform for actual users. This lets you evaluate various aspects of your model's current performance, such as accuracy or precision. In this case, you typically use zero-shot learning to establish a performance baseline and then experiment with few-shot learning to improve performance.
-
-## Few-shot learning use cases
-
-Few-shot learning is the practice of passing prompts paired with verbatim completions (few-shot prompts) to show your model how to respond. Unlike zero-shot learning, few-shot learning can add to the model's knowledge. You can even use your own datasets to automatically generate few-shot prompts, by performing retrieval-augmented generation.
-
 Few-shot learning has two primary use cases:

-- **Tuning an LLM** - Because it can add to the model's knowledge, few-shot learning can improve a model's performance. It also causes the model to create more tokens than zero-shot learning does, which can eventually become prohibitively expensive or even infeasible. However, if your LLM isn't fined-tuned yet, you won't get good performance with zero-shot prompts, and few-shot learning is warranted.
-- **Fixing performance issues** - You can use few-shot learning as a follow-on to zero-shot learning. In this case, you use zero-shot learning to establish a performance baseline, and then experiment with few-shot learning based on the zero-shot prompts you used. This lets you add to the model's knowledge after seeing how it currently responds, so you can iterate and improve performance while minimizing the number of tokens you introduce.
+- **Tuning an LLM** - Because it can add to the model's knowledge, few-shot learning can improve a model's performance. It also causes the model to create more tokens than zero-shot learning does, which can eventually become prohibitively expensive or even infeasible. However, if your LLM isn't fined-tuned yet, you won't always get good performance with zero-shot prompts, and few-shot learning is warranted.
+- **Fixing performance issues** - You can use few-shot learning as a follow-up to zero-shot learning. In this case, you use zero-shot learning to establish a performance baseline, and then experiment with few-shot learning based on the zero-shot prompts you used. This lets you add to the model's knowledge after seeing how it currently responds, so you can iterate and improve performance while minimizing the number of tokens you introduce.

 ### Caveats

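To make the zero-shot versus few-shot distinction concrete, here's a minimal, hypothetical few-shot prompt in the same style as the article's snippets: each example message is paired with a verbatim completion, and the final cue is left for the model to fill in. The intent names and example messages are illustrative assumptions, not the article's actual prompt.

```csharp
// Hypothetical few-shot prompt: example inputs paired with verbatim completions,
// followed by the new input the model should classify.
string userInput = "Do you offer volume discounts?";

string prompt = $"""
    Instructions: Classify the user's message as one of four intents:
    SupportRequest, Feedback, SalesInquiry, or Other. Preface your response with "Intent: ".

    User: The app crashes whenever I open the settings page.
    Intent: SupportRequest

    User: I really like the new dashboard layout.
    Intent: Feedback

    User: {userInput}
    Intent:
    """;
```

Compared to the zero-shot version, the paired completions cost extra tokens but show the model exactly what shape of answer you expect.
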
docs/ai/dotnet-ai-ecosystem.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Overview of the .NET + AI ecosystem
 description: This article provides an overview of the ecosystem of SDKs and tools available to .NET developers integrating AI into their applications.
-ms.date: 04/04/2024
+ms.date: 11/24/2024
 ms.topic: overview
 ms.custom: devx-track-dotnet, devx-track-dotnet-ai
 ---

docs/ai/how-to/app-service-aoai-auth.md

Lines changed: 21 additions & 21 deletions
@@ -4,14 +4,14 @@ description: "Learn how to authenticate and authorize your app service applicati
 author: haywoodsloan
 ms.topic: how-to
 ms.custom: devx-track-azurecli
-ms.date: 04/19/2024
+ms.date: 11/24/2024
 zone_pivot_groups: azure-interface
 #customer intent: As a .NET developer, I want authenticate and authorize my App Service to Azure OpenAI by using Microsoft Entra so that I can securely use AI in my .NET application.
 ---

-# Authenticate and authorize App Service to Azure OpenAI using Microsoft Entra and the Semantic Kernel SDK
+# Authenticate an AI app hosted on Azure App Service to Azure OpenAI using Microsoft Entra ID

-This article demonstrates how to use [Microsoft Entra-managed identities](/azure/app-service/overview-managed-identity) to authenticate and authorize an App Service application to an Azure OpenAI resource.
+This article demonstrates how to use [Microsoft Entra ID managed identities](/azure/app-service/overview-managed-identity) to authenticate and authorize an App Service application to an Azure OpenAI resource.

 This article also demonstrates how to use the [Semantic Kernel SDK](/semantic-kernel/overview) to easily implement Microsoft Entra authentication in your .NET application.

@@ -33,32 +33,18 @@ Your application can be granted two types of identities:
 * A **system-assigned identity** is tied to your application and is deleted if your app is deleted. An app can have only one system-assigned identity.
 * A **user-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.

-### Add a system-assigned identity
-
 :::zone target="docs" pivot="azure-portal"

+# [System-assigned](#tab/system-assigned)
+
 1. Navigate to your app's page in the [Azure portal](https://aka.ms/azureportal), and then scroll down to the **Settings** group.
 1. Select **Identity**.
 1. On the **System assigned** tab, toggle *Status* to **On**, and then select **Save**.

-:::zone-end
-
-:::zone target="docs" pivot="azure-cli"
-
-Run the `az webapp identity assign` command to create a system-assigned identity:
-
-```azurecli
-az webapp identity assign --name <appName> --resource-group <groupName>
-```
-
-:::zone-end
-
-### Add a user-assigned identity
+## [User-assigned](#tab/user-assigned)

 To add a user-assigned identity to your app, create the identity, and then add its resource identifier to your app config.

-:::zone target="docs" pivot="azure-portal"
-
 1. Create a user-assigned managed identity resource by following [these instructions](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal#create-a-user-assigned-managed-identity).
 1. In the left navigation pane of your app's page, scroll down to the **Settings** group.
 1. Select **Identity**.
@@ -68,10 +54,22 @@ To add a user-assigned identity to your app, create the identity, and then add i
 > [!IMPORTANT]
 > After you select **Add**, the app restarts.

+---
+
 :::zone-end

 :::zone target="docs" pivot="azure-cli"

+## [System-assigned](#tab/system-assigned)
+
+Run the `az webapp identity assign` command to create a system-assigned identity:
+
+```azurecli
+az webapp identity assign --name <appName> --resource-group <groupName>
+```
+
+## [User-assigned](#tab/user-assigned)
+
 1. Create a user-assigned identity:

 ```azurecli
@@ -84,6 +82,8 @@ To add a user-assigned identity to your app, create the identity, and then add i
 az webapp identity assign --resource-group <groupName> --name <appName> --identities <identityId>
 ```

+---
+
 :::zone-end

 ## Add an Azure OpenAI user role to your managed identity
@@ -135,7 +135,7 @@ az role assignment create --assignee "<managedIdentityObjectID>" \

 :::zone-end

-## Implement token-based authentication by using Semantic Kernel SDK
+## Implement token-based authentication using Semantic Kernel SDK

 1. Initialize a `DefaultAzureCredential` object to assume your app's managed identity:

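Here's a hedged sketch of what that initialization can look like with the Semantic Kernel SDK, rather than the article's own listing; the deployment name and endpoint are placeholder assumptions, and the builder overload shown is the one that accepts a `TokenCredential` such as `DefaultAzureCredential`.

```csharp
using Azure.Identity;
using Microsoft.SemanticKernel;

// Assumes the app's system-assigned managed identity; for a user-assigned identity,
// set DefaultAzureCredentialOptions.ManagedIdentityClientId to that identity's client ID.
var credential = new DefaultAzureCredential();

// Placeholder deployment name and endpoint for the Azure OpenAI resource.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "<deployment-name>",
        "https://<resource-name>.openai.azure.com",
        credential)
    .Build();

// Requests made through the kernel now authenticate with a Microsoft Entra token
// obtained by the managed identity, instead of an API key.
var result = await kernel.InvokePromptAsync("Confirm that token-based authentication works.");
Console.WriteLine(result);
```

Because `DefaultAzureCredential` also falls back to developer credentials (for example, an Azure CLI sign-in) when running locally, the same code typically works both in App Service and on a development machine.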