docs/ai/conceptual/agents.md (+1 -1)

@@ -3,7 +3,7 @@ title: "Agents and Copilots Bring Automation and Interactive Assistance to Your
 description: "Learn how agents and copilots intelligently extend the functionality of LLMs to automatically meet user goals in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/15/2024
+ms.date: 11/24/2024
 
 #customer intent: As a .NET developer, I want to understand how agents and copilots extend the functionality of LLMs, so that my apps can handle any type of content and automatically meet user goals.
docs/ai/conceptual/chain-of-thought-prompting.md (+5 -6)

@@ -3,19 +3,17 @@ title: "Chain of Thought Prompting - .NET"
 description: "Learn how chain of thought prompting can simplify prompt engineering."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/10/2024
+ms.date: 11/24/2024
 
 #customer intent: As a .NET developer, I want to understand what chain-of-thought prompting is and how it can help me save time and get better completions out of prompt engineering.
 
 ---
 
 # Chain of thought prompting
 
-This article explains the use of chain of thought prompting in .NET.
+GPT model performance and response quality benefits from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. You can help the model reason its way toward correct answers more reliably by asking for the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.
 
-GPT model performance benefits from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. You can help the model reason its way toward correct answers more reliably by asking for the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.
-
-*Chain of thought prompting* is the practice of prompting a GPT model to perform a task step-by-step and to present each step and its result in order in the output. This simplifies prompt engineering by offloading some execution planning to the model, and makes it easier to connect any problem to a specific step so you know where to focus further efforts.
+*Chain of thought prompting* is the practice of prompting a model to perform a task step-by-step and to present each step and its result in order in the output. This simplifies prompt engineering by offloading some execution planning to the model, and makes it easier to connect any problem to a specific step so you know where to focus further efforts.
 
 It's generally simpler to just instruct the model to include its chain of thought, but you can use examples to show the model how to break down tasks. The following sections show both ways.

@@ -24,7 +22,8 @@ It's generally simpler to just instruct the model to include its chain of though
 To use an instruction for chain of thought prompting, include a directive that tells the model to perform the task step-by-step and to output the result of each step.
 
 ```csharp
-prompt="Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles. Break the task into steps, and output the result of each step as you perform it.";
+prompt="""Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles.
+Break the task into steps, and output the result of each step as you perform it.""";
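
As a companion to the revised snippet above, here's a minimal sketch of how a chain-of-thought prompt like this might be sent to a chat model with the Semantic Kernel SDK. It isn't part of the diff; the deployment name, endpoint, and API key are placeholders, and the article may wire the prompt up differently.

```csharp
// Minimal sketch (assumed setup, not part of the diff): send a chain-of-thought
// prompt to an Azure OpenAI chat deployment through the Semantic Kernel SDK.
// The deployment name, endpoint, and API key below are placeholders.
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "<your-deployment-name>",
        endpoint: "<your-endpoint>",
        apiKey: "<your-api-key>")
    .Build();

string prompt = """
    Instructions: Compare the pros and cons of EVs and petroleum-fueled vehicles.
    Break the task into steps, and output the result of each step as you perform it.
    """;

// The completion should list each step and its result, so a wrong final answer
// can usually be traced back to the specific step that went off track.
var result = await kernel.InvokePromptAsync(prompt);
Console.WriteLine(result.GetValue<string>());
```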
docs/ai/conceptual/how-genai-and-llms-work.md (+1 -1)

@@ -3,7 +3,7 @@ title: "How Generative AI and LLMs work"
 description: "Understand how Generative AI and large language models (LLMs) work and how they might be useful in your .NET projects."
 author: haywoodsloan
 ms.topic: concept-article
-ms.date: 04/04/2024
+ms.date: 11/24/2024
 
 #customer intent: As a .NET developer, I want to understand how Generative AI and large language models (LLMs) work and how they may be useful in my .NET projects.
docs/ai/conceptual/rag.md (+1 -1)

@@ -3,7 +3,7 @@ title: "Integrate Your Data into AI Apps with Retrieval-Augmented Generation"
 description: "Learn how retrieval-augmented generation lets you use your data with LLMs to generate better completions in .NET."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/15/2024
+ms.date: 11/24/2024
 
 #customer intent: As a .NET developer, I want to understand how retrieval-augmented generation works in .NET so that LLMs can use my data sources to provide more valuable completions.
description: "Learn how vector databases extend LLM capabilities by storing and processing embeddings in .NET."
4
4
author: catbutler
5
5
ms.topic: concept-article #Don't change.
6
-
ms.date: 05/16/2024
6
+
ms.date: 11/24/2024
7
7
8
8
#customer intent: As a .NET developer, I want to learn how vector databases store and process embeddings in .NET so I can make more data available to LLMs in my apps.
docs/ai/conceptual/zero-shot-learning.md (+16 -18)

@@ -3,7 +3,7 @@ title: "Zero-shot and few-shot learning"
 description: "Learn the use cases for zero-shot and few-shot learning in prompt engineering."
 author: catbutler
 ms.topic: concept-article #Don't change.
-ms.date: 04/12/2024
+ms.date: 11/25/2024
 
 #customer intent: As a .NET developer, I want to understand how zero-shot and few-shot learning techniques can help me improve my prompt engineering.

@@ -13,11 +13,13 @@ ms.date: 04/12/2024
 
 This article explains zero-shot learning and few-shot learning for prompt engineering in .NET, including their primary use cases.
 
-GPT model performance benefits from *prompt engineering*, the practice of providing instructions and examples to a model to refine its output. Zero-shot learning and few-shot learning are techniques that you can use when providing examples.
+GPT model performance benefits from *prompt engineering*, the practice of providing instructions and examples to a model to refine its output. Zero-shot learning and few-shot learning are techniques you can use when providing examples.
 
-With zero-shot learning, you include prompts but not verbatim completions. You can include completions that only consist of cues. Zero-shot learning relies entirely on the model's existing knowledge to generate responses, which reduces the number of tokens created and can help you control costs. However, zero-shot learning doesn't add to the model's knowledge.
+## Zero-shot learning
 
-Here's an example zero-shot prompt that tells the model to evaluate user input to determine which of four possible intents the input represents, and then to preface its response with **"Intent: "**.
+Zero-shot learning is the practice of passing prompts that aren't paired with verbatim completions, although you can include completions that consist of cues. Zero-shot learning relies entirely on the model's existing knowledge to generate responses, which reduces the number of tokens created and can help you control costs. However, zero-shot learning doesn't add to the model's knowledge or context.
+
+Here's an example zero-shot prompt that tells the model to evaluate user input to determine which of four possible intents the input represents, and then to preface the response with **"Intent: "**.
 
 ```csharp
 prompt=$"""

@@ -29,7 +31,14 @@ Intent:
 """;
 ```
 
-With few-shot learning, you include prompts paired with verbatim completions. Compared to zero-shot learning, this means few-shot learning produces more tokens and causes the model to update its knowledge, which can make few-shot learning more resource-intensive. However, for the same reasons few-shot learning also helps the model produce more relevant responses.
+There are two primary use cases for zero-shot learning:
+
+- **Work with fined-tuned LLMs** - Because it relies on the model's existing knowledge, zero-shot learning is not as resource-intensive as few-shot learning, and it works well with LLMs that have already been fined-tuned on instruction datasets. You might be able to rely solely on zero-shot learning and keep costs relatively low.
+- **Establish performance baselines** - Zero-shot learning can help you simulate how your app would perform for actual users. This lets you evaluate various aspects of your model's current performance, such as accuracy or precision. In this case, you typically use zero-shot learning to establish a performance baseline and then experiment with few-shot learning to improve performance.
+
+## Few-shot learning
+
+Few-shot learning is the practice of passing prompts paired with verbatim completions (few-shot prompts) to show your model how to respond. Compared to zero-shot learning, this means few-shot learning produces more tokens and causes the model to update its knowledge, which can make few-shot learning more resource-intensive. However, few-shot learning also helps the model produce more relevant responses.
 
 ```csharp
 prompt=$"""

@@ -48,21 +57,10 @@ Intent:
 """;
 ```
 
-## Zero-shot learning use cases
-
-Zero-shot learning is the practice of passing prompts that aren't paired with verbatim completions, although they can be paired with a cue. There are two primary use cases for zero-shot learning:
-
-- **Working with fined-tuned LLMs** - Because it relies on the model's existing knowledge, zero-shot learning is not as resource-intensive as few-shot learning, and it works well with LLMs that have already been fined-tuned on instruction datasets. You might be able to rely solely on zero-shot learning and keep costs relatively low.
-- **Establish performance baselines** - Zero-shot learning can help you simulate how your app would perform for actual users. This lets you evaluate various aspects of your model's current performance, such as accuracy or precision. In this case, you typically use zero-shot learning to establish a performance baseline and then experiment with few-shot learning to improve performance.
-
-## Few-shot learning use cases
-
-Few-shot learning is the practice of passing prompts paired with verbatim completions (few-shot prompts) to show your model how to respond. Unlike zero-shot learning, few-shot learning can add to the model's knowledge. You can even use your own datasets to automatically generate few-shot prompts, by performing retrieval-augmented generation.
-
 Few-shot learning has two primary use cases:
 
-- **Tuning an LLM** - Because it can add to the model's knowledge, few-shot learning can improve a model's performance. It also causes the model to create more tokens than zero-shot learning does, which can eventually become prohibitively expensive or even infeasible. However, if your LLM isn't fined-tuned yet, you won't get good performance with zero-shot prompts, and few-shot learning is warranted.
-- **Fixing performance issues** - You can use few-shot learning as a follow-on to zero-shot learning. In this case, you use zero-shot learning to establish a performance baseline, and then experiment with few-shot learning based on the zero-shot prompts you used. This lets you add to the model's knowledge after seeing how it currently responds, so you can iterate and improve performance while minimizing the number of tokens you introduce.
+- **Tuning an LLM** - Because it can add to the model's knowledge, few-shot learning can improve a model's performance. It also causes the model to create more tokens than zero-shot learning does, which can eventually become prohibitively expensive or even infeasible. However, if your LLM isn't fined-tuned yet, you won't always get good performance with zero-shot prompts, and few-shot learning is warranted.
+- **Fixing performance issues** - You can use few-shot learning as a follow-up to zero-shot learning. In this case, you use zero-shot learning to establish a performance baseline, and then experiment with few-shot learning based on the zero-shot prompts you used. This lets you add to the model's knowledge after seeing how it currently responds, so you can iterate and improve performance while minimizing the number of tokens you introduce.
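
The prompt bodies in the snippets above are cut off at the hunk boundaries, so as a purely hypothetical illustration (the intent names and example inputs below are invented, not taken from the docs), a zero-shot prompt and a few-shot variant of the same intent-classification task might look like this in C#:

```csharp
// Hypothetical illustration (not the prompts from the article): a zero-shot
// prompt and a few-shot variant of the same task. Intent names and example
// user inputs are made up.

string userInput = "Where is my package?";

// Zero-shot: no verbatim completions, only a trailing cue ("Intent:").
string zeroShotPrompt = $"""
    Instructions: Classify the user input as one of these intents:
    OrderStatus, ReturnItem, ProductQuestion, Other.

    User input: {userInput}
    Intent:
    """;

// Few-shot: the same task, with prompt/completion pairs that show the model
// exactly how to respond.
string fewShotPrompt = $"""
    Instructions: Classify the user input as one of these intents:
    OrderStatus, ReturnItem, ProductQuestion, Other.

    User input: Has my order shipped yet?
    Intent: OrderStatus

    User input: Does this blender come with a warranty?
    Intent: ProductQuestion

    User input: {userInput}
    Intent:
    """;
```

The few-shot version spends extra tokens on the worked examples, which is the cost-versus-relevance trade-off the diff above describes.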
docs/ai/how-to/app-service-aoai-auth.md (+21 -21)

@@ -4,14 +4,14 @@ description: "Learn how to authenticate and authorize your app service applicati
 author: haywoodsloan
 ms.topic: how-to
 ms.custom: devx-track-azurecli
-ms.date: 04/19/2024
+ms.date: 11/24/2024
 zone_pivot_groups: azure-interface
 #customer intent: As a .NET developer, I want authenticate and authorize my App Service to Azure OpenAI by using Microsoft Entra so that I can securely use AI in my .NET application.
 ---
 
-# Authenticate and authorize App Service to Azure OpenAI using Microsoft Entra and the Semantic Kernel SDK
+# Authenticate an AI app hosted on Azure App Service to Azure OpenAI using Microsoft Entra ID
 
-This article demonstrates how to use [Microsoft Entra-managed identities](/azure/app-service/overview-managed-identity) to authenticate and authorize an App Service application to an Azure OpenAI resource.
+This article demonstrates how to use [Microsoft Entra ID managed identities](/azure/app-service/overview-managed-identity) to authenticate and authorize an App Service application to an Azure OpenAI resource.
 
 This article also demonstrates how to use the [Semantic Kernel SDK](/semantic-kernel/overview) to easily implement Microsoft Entra authentication in your .NET application.

@@ -33,32 +33,18 @@ Your application can be granted two types of identities:
 * A **system-assigned identity** is tied to your application and is deleted if your app is deleted. An app can have only one system-assigned identity.
 * A **user-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
 
-### Add a system-assigned identity
-
 :::zone target="docs" pivot="azure-portal"
 
+# [System-assigned](#tab/system-assigned)
+
 1. Navigate to your app's page in the [Azure portal](https://aka.ms/azureportal), and then scroll down to the **Settings** group.
 1. Select **Identity**.
 1. On the **System assigned** tab, toggle *Status* to **On**, and then select **Save**.
 
-:::zone-end
-
-:::zone target="docs" pivot="azure-cli"
-
-Run the `az webapp identity assign` command to create a system-assigned identity:
-
-```azurecli
-az webapp identity assign --name <appName> --resource-group <groupName>
-```
-
-:::zone-end
-
-### Add a user-assigned identity
+## [User-assigned](#tab/user-assigned)
 
 To add a user-assigned identity to your app, create the identity, and then add its resource identifier to your app config.
 
-:::zone target="docs" pivot="azure-portal"
-
 1. Create a user-assigned managed identity resource by following [these instructions](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal#create-a-user-assigned-managed-identity).
 1. In the left navigation pane of your app's page, scroll down to the **Settings** group.
 1. Select **Identity**.

@@ -68,10 +54,22 @@ To add a user-assigned identity to your app, create the identity, and then add i
 > [!IMPORTANT]
 > After you select **Add**, the app restarts.
 
+---
+
 :::zone-end
 
 :::zone target="docs" pivot="azure-cli"
 
+## [System-assigned](#tab/system-assigned)
+
+Run the `az webapp identity assign` command to create a system-assigned identity:
+
+```azurecli
+az webapp identity assign --name <appName> --resource-group <groupName>
+```
+
+## [User-assigned](#tab/user-assigned)
+
 1. Create a user-assigned identity:
 
 ```azurecli

@@ -84,6 +82,8 @@ To add a user-assigned identity to your app, create the identity, and then add i
 az webapp identity assign --resource-group <groupName> --name <appName> --identities <identityId>
 ```
 
+---
+
 :::zone-end
 
 ## Add an Azure OpenAI user role to your managed identity

@@ -135,7 +135,7 @@ az role assignment create --assignee "<managedIdentityObjectID>" \
 
 :::zone-end
 
-## Implement token-based authentication by using Semantic Kernel SDK
+## Implement token-based authentication using Semantic Kernel SDK
 
 1. Initialize a `DefaultAzureCredential` object to assume your app's managed identity:
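
For reference, a minimal sketch of what that step might look like with Azure.Identity and the Semantic Kernel SDK; the deployment name and endpoint are placeholders, and the exact code in the article may differ.

```csharp
// Minimal sketch (assumed names, not verbatim from the article):
// DefaultAzureCredential resolves the App Service managed identity at runtime,
// so no API key is stored in app configuration.
using Azure.Identity;
using Microsoft.SemanticKernel;

// For a user-assigned identity, pass its client ID instead:
// new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<clientId>" })
var credential = new DefaultAzureCredential();

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "<your-deployment-name>",                      // placeholder deployment name
        "https://<your-resource>.openai.azure.com/",   // placeholder endpoint
        credential)                                    // token-based auth via Microsoft Entra ID
    .Build();

var reply = await kernel.InvokePromptAsync("Say hello from a managed identity.");
Console.WriteLine(reply.GetValue<string>());
```

Because the credential is obtained from the managed identity at runtime, no secret has to be kept in App Service settings, which is the point of the token-based approach this article describes.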