Prompts are conversational cues you give to large language models (LLMs), shaping responses based on your queries or instructions. For example, you can prompt LLMs to convert a sentence from English to French, or to generate a summary of a text.

In the previous unit, you created the prompt as the input string:

::: zone pivot="csharp"

```c#
string input = @"I'm a vegan in search of new recipes. I love spicy food!
Can you give me a list of breakfast recipes that are vegan friendly?";
```

::: zone-end

::: zone pivot="python"

```python
input = """I'm a vegan in search of new recipes. I love spicy food!
Can you give me a list of breakfast recipes that are vegan friendly?"""
```

::: zone-end
In this prompt, you provide content to the language model along with the instructions. The content helps the model generate results that are more relevant to the user.

Prompting involves crafting clear, context-rich instructions to guide the model to generate a desired response. To craft an effective prompt, precision and clarity are key. You might need to experiment and adjust your prompts for accurate results.

## Use examples to guide the model
With zero-shot learning, you include the instructions but exclude verbatim completions in your prompt.

Here's an example of a zero-shot prompt that tells the model to evaluate user input, determine the user's intent, and preface the output with "Intent: ".

::: zone pivot="csharp"

```c#
string prompt = $"""
Instructions: What is the intent of this request?
If you don't know the intent, don't guess; instead respond with "Unknown".

User Input: {request}
Intent:
""";
```

::: zone-end

::: zone pivot="python"

```python
prompt = """
Instructions: What is the intent of this request?
If you don't know the intent, don't guess; instead respond with "Unknown".

User Input: {request}
Intent:
"""
```

::: zone-end

With few-shot learning, you include verbatim completions in your prompt to help guide the model's response. Typically one to five examples are included. The examples demonstrate the structure, style, or type of response you want, and they give the model in-context examples to draw on, at the cost of more tokens. Few-shot prompting is especially valuable for reducing ambiguity and aligning results with the desired outcome.

Here's an example of a few-shot prompt that tells the model to evaluate user input, determine the user's intent, and preface the output with "Intent: ".

::: zone pivot="csharp"

```c#
string prompt = $"""
Instructions: What is the intent of this request?
If you don't know the intent, don't guess; instead respond with "Unknown".

User Input: Can you send a very quick approval to the marketing team?
Intent: SendMessage

User Input: Can you send the full update to the marketing team?
Intent: SendEmail

User Input: {request}
Intent:
""";
```

::: zone-end

::: zone pivot="python"

```python
prompt = """
Instructions: What is the intent of this request?
If you don't know the intent, don't guess; instead respond with "Unknown".

User Input: Can you send a very quick approval to the marketing team?
Intent: SendMessage

User Input: Can you send the full update to the marketing team?
Intent: SendEmail

User Input: {request}
Intent:
"""
```

::: zone-end
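The few-shot pattern above also lends itself to programmatic assembly. Here's a small sketch showing how example pairs can be folded into a prompt; the helper function and the final request are illustrative, not part of the module:

```python
# Assemble a few-shot intent-classification prompt from example pairs.
# The helper name and the final request are illustrative assumptions.
def build_few_shot_prompt(request, examples):
    lines = [
        "Instructions: What is the intent of this request?",
        'If you don\'t know the intent, don\'t guess; instead respond with "Unknown".',
        "",
    ]
    for user_input, intent in examples:
        # Each example pair becomes a verbatim completion in the prompt.
        lines += [f"User Input: {user_input}", f"Intent: {intent}", ""]
    # End with the real request and an open "Intent:" for the model to fill.
    lines += [f"User Input: {request}", "Intent:"]
    return "\n".join(lines)

examples = [
    ("Can you send a very quick approval to the marketing team?", "SendMessage"),
    ("Can you send the full update to the marketing team?", "SendEmail"),
]
prompt = build_few_shot_prompt("Please forward the meeting notes to finance.", examples)
print(prompt)
```

Keeping the examples in a list like this makes it easy to experiment with how many examples (typically one to five) give the best results.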
## Use personas in prompts

Assigning personas in prompts is a technique used to guide the model to adopt a specific point of view, tone, or expertise when generating responses. Personas allow you to tailor the output to better suit the context or audience of the task. A persona is useful when you need the response to simulate a profession or reflect a tone of voice. To assign a persona, clearly describe the role definition in your prompt.

Here's an example of a prompt that assigns a persona:

::: zone pivot="csharp"

```c#
string prompt = $"""
You are a highly experienced software engineer. Explain the concept of asynchronous programming to a beginner.
""";
```

::: zone-end

::: zone pivot="python"

```python
prompt = """
You are a highly experienced software engineer. Explain the concept of asynchronous programming to a beginner.
"""
```

::: zone-end
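One way to reuse a persona across many tasks is to parameterize the role description. This small helper is an illustration of the idea, not part of the module:

```python
# Build a persona prompt from a role description and a task.
# The helper and its inputs are illustrative assumptions.
def persona_prompt(role, task):
    return f"You are {role}. {task}"

prompt = persona_prompt(
    "a highly experienced software engineer",
    "Explain the concept of asynchronous programming to a beginner.",
)
print(prompt)
```

With the role factored out, you can swap in a different persona (a teacher, a support agent) without rewriting the task text.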
## Chain of thought prompting

With chain of thought prompting, you prompt the model to perform a task step-by-step and to present each step and its result in order in the output. Chain of thought prompting can simplify prompt engineering by offloading some execution planning to the model, and it makes it easier to isolate a problem to a specific step so you know where to focus further efforts. You can instruct the model to include its chain of thought, or you can use examples to show the model how to break down tasks.

Here's an example that instructs the model to describe the step-by-step reasoning:
::: zone pivot="csharp"

```c#
string prompt = $"""
A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
Instructions: Explain your reasoning step by step before providing the answer.
""";
```

::: zone-end

::: zone pivot="python"

```python
prompt = """
A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
Instructions: Explain your reasoning step by step before providing the answer.
"""
```

::: zone-end
Here's an example that describes to the model the steps to complete:
::: zone pivot="csharp"

```c#
prompt = $"""
Instructions: A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?

First, calculate how many full baskets the farmer can make by dividing the total apples by the apples per basket:
1.

Next, subtract the number of apples used in the baskets from the total number of apples to find the remainder:
1.

Finally, the farmer will eat the remaining apples:
1.
""";
```

::: zone-end

::: zone pivot="python"

```python
prompt = """
Instructions: A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?

First, calculate how many full baskets the farmer can make by dividing the total apples by the apples per basket:
1.

Next, subtract the number of apples used in the baskets from the total number of apples to find the remainder:
1.

Finally, the farmer will eat the remaining apples:
1.
"""
```

::: zone-end
The output of this prompt should resemble the following output:

```output
Divide 150 by 12 to find the number of full baskets the farmer can make: 150 / 12 = 12.5 full baskets
The farmer will eat 6 remaining apples.
```
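The arithmetic the model is expected to work through can be verified directly; this quick check is our own illustration, not module content:

```python
# Sanity-check the farmer problem: 150 apples, baskets of 12.
total_apples = 150
apples_per_basket = 12
# Only whole baskets count, so use integer (floor) division.
full_baskets = total_apples // apples_per_basket
remaining = total_apples - full_baskets * apples_per_basket
print(f"{full_baskets} full baskets, {remaining} apples eaten")
```

Note that 150 / 12 is 12.5, but only 12 baskets can actually be filled, leaving 6 apples for the farmer to eat.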
- **Specific Inputs Yield Specific Outputs**: LLMs respond based on the input they receive. Crafting clear and specific prompts is crucial to get the desired output.

- **Experimentation is Key**: You might need to iterate and experiment with different prompts to understand how the model interprets and generates responses. Small tweaks can lead to significant changes in outcomes.

- **Context Matters**: LLMs consider the context provided in the prompt. You should ensure that the context is well-defined and relevant to obtain accurate and coherent responses.

- **Handle Ambiguity**: Bear in mind that LLMs might struggle with ambiguous queries. Provide context or structure to avoid vague or unexpected results.

- **Length of Prompts**: While LLMs can process both short and long prompts, you should consider the trade-off between brevity and clarity. Experimenting with prompt length can help you find the optimal balance.

Crafting effective prompts requires clarity, precision, and thoughtful design. Techniques like zero-shot and few-shot learning, persona assignments, and chain-of-thought prompting can enhance the quality and relevance of the responses. By providing clear instructions, well-defined context, and examples when needed, you can guide the model to generate finely tuned, relevant responses. To achieve the best results, remember to experiment and refine your prompts.
learn-pr/wwl-azure/create-plugins-semantic-kernel/includes/3-use-semantic-kernel-prompt-templates.md
To call a function and use the results in your prompt, use the {{namespace.functionName}} syntax.

You can also pass parameters to the function, either using variables or hardcoded values. For example, if `weather.getForecast` takes a city name as input, you can use the following examples:

```console
The weather today in {{$city}} is {{weather.getForecast $city}}.
The weather today in Barcelona is {{weather.getForecast "Barcelona"}}.
```
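To make the `{{$variable}}` token concrete, here's a minimal sketch of how variable substitution works. This is only an illustration of the idea, not the Semantic Kernel template engine itself, and the `render` helper is our own assumption:

```python
import re

# Minimal illustration of {{$variable}} substitution.
# This is NOT the real Semantic Kernel engine, which also
# supports function calls like {{weather.getForecast $city}}.
def render(template, variables):
    return re.sub(
        r"\{\{\$(\w+)\}\}",                     # match tokens like {{$city}}
        lambda m: str(variables[m.group(1)]),   # replace with the variable's value
        template,
    )

print(render("The weather today in {{$city}} is sunny.", {"city": "Barcelona"}))
```

The real template engine layers function invocation and argument passing on top of this same substitution idea.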
To run your prompt, you first need to create a `KernelFunction` object from the prompt using `kernel.CreateFunctionFromPrompt`. Then you can create a `KernelArguments` object containing any variables, and invoke your function using `InvokeAsync`. You can either call `InvokeAsync` on the kernel itself or on the `KernelFunction` object. Here's an example:

::: zone pivot="python"

```python
# ... (function and argument setup collapsed in the diff view) ...

# Invoke on the function object
result = await activities_function.invoke_async(kernel, arguments)
print(result)

# Invoke on the kernel object
result = await kernel.invoke_async(activities_function, arguments)
print(result)
```

::: zone-end

The Semantic Kernel prompt template language makes it easy to add AI-driven features to your apps using natural language. With support for variables, function calls, and parameters, you can create reusable and dynamic templates without complicated code. It's a simple yet powerful way to build smarter, more adaptable applications.