
Commit c04f7d3

Author: Eric Camplin (committed)

create-plugins-semantic-kernel 2-,3-,4-, 5- python samples

1 parent 53e8a38 commit c04f7d3

9 files changed: +514 −152 lines

learn-pr/wwl-azure/combine-prompts-functions/4-exercise-apply-function-filters.yml

Lines changed: 0 additions & 1 deletion
@@ -10,7 +10,6 @@ metadata:
  ms.topic: unit
  ms.custom:
  - N/A
- zone_pivot_groups: dev-lang-csharp-python
  durationInMinutes: 8
  content: |
  [!include[](includes/4-exercise-apply-function-filters.md)]

learn-pr/wwl-azure/create-plugins-semantic-kernel/2-optimize-language-model-prompts.yml

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ metadata:
  ms.topic: unit
  ms.custom:
  - N/A
+ zone_pivot_groups: dev-lang-csharp-python
  durationInMinutes: 5
  content: |
  [!include[](includes/2-optimize-language-model-prompts.md)]

learn-pr/wwl-azure/create-plugins-semantic-kernel/3-use-semantic-kernel-prompt-templates.yml

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ metadata:
  ms.topic: unit
  ms.custom:
  - N/A
+ zone_pivot_groups: dev-lang-csharp-python
  durationInMinutes: 5
  content: |
  [!include[](includes/3-use-semantic-kernel-prompt-templates.md)]

learn-pr/wwl-azure/create-plugins-semantic-kernel/4-use-handlebars-prompt-templates.yml

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ metadata:
  ms.topic: unit
  ms.custom:
  - N/A
+ zone_pivot_groups: dev-lang-csharp-python
  durationInMinutes: 8
  content: |
  [!include[](includes/4-use-handlebars-prompt-templates.md)]

learn-pr/wwl-azure/create-plugins-semantic-kernel/5-store-chat-history.yml

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ metadata:
  ms.topic: unit
  ms.custom:
  - N/A
+ zone_pivot_groups: dev-lang-csharp-python
  durationInMinutes: 8
  content: |
  [!include[](includes/5-store-chat-history.md)]

Lines changed: 164 additions & 56 deletions
@@ -1,15 +1,28 @@
- Prompts are conversational cues you give to large language models (LLMs), shaping responses based on your queries or instructions. For example, you can prompt LLMs to convert a sentence from English to French, or to generate a summary of a text.
+ Prompts are conversational cues you give to large language models (LLMs), shaping responses based on your queries or instructions. For example, you can prompt LLMs to convert a sentence from English to French, or to generate a summary of a text.

  In the previous unit, you created the prompt as the input string:

- ```c#
+ ::: zone pivot="csharp"
+
+ ```c#
  string input = @"I'm a vegan in search of new recipes. I love spicy food!
  Can you give me a list of breakfast recipes that are vegan friendly?";
- ```
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ input = """I'm a vegan in search of new recipes. I love spicy food!
+ Can you give me a list of breakfast recipes that are vegan friendly?"""
+ ```
+
+ ::: zone-end

  In this prompt, you provide content to the language model along with the instructions. The content helps the model generate results that are more relevant to the user.

- Prompting involves crafting clear, context rich instructions to guide the model to generate a desired response. To craft an effective prompt, precision and clarity are key. You may need to experiment and adjust your prompts for accurate results.
+ Prompting involves crafting clear, context rich instructions to guide the model to generate a desired response. To craft an effective prompt, precision and clarity are key. You might need to experiment and adjust your prompts for accurate results.

  ## Use examples to guide the model

@@ -21,82 +34,177 @@ With zero-shot learning, you include the instructions but exclude verbatim compl

  Here's an example of a zero-shot prompt that tells the model to evaluate user input, determine the user's intent, and preface the output with "Intent: ".

- ```c#
- string prompt = $"""
- Instructions: What is the intent of this request?
- If you don't know the intent, don't guess; instead respond with "Unknown".
- Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
- User Input: {request}
- Intent:
- """;
- ```
+ ::: zone pivot="csharp"
+
+ ```c#
+ string prompt = $"""
+ Instructions: What is the intent of this request?
+ If you don't know the intent, don't guess; instead respond with "Unknown".
+ Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
+ User Input: {request}
+ Intent:
+ """;
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ prompt = f"""
+ Instructions: What is the intent of this request?
+ If you don't know the intent, don't guess; instead respond with "Unknown".
+ Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
+ User Input: {request}
+ Intent:
+ """
+ ```
+
+ ::: zone-end

  ### Few shot learning

  With few-shot learning, you include verbatim completions in your prompt to help guide the model's response. Typically one to five examples are included. The examples demonstrate the structure, style, or type of response you want. Few-shot learning produces more tokens and also causes the model to update its knowledge. Few-shot prompting is especially valuable for reducing ambiguity and aligning results with the desired outcome.

  Here's an example of a few-shot prompt that tells the model to evaluate user input, determine the user's intent, and preface the output with "Intent: ".

- ```c#
- string prompt = $"""
- Instructions: What is the intent of this request?
- If you don't know the intent, don't guess; instead respond with "Unknown".
- Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
+ ::: zone pivot="csharp"
+
+ ```c#
+ string prompt = $"""
+ Instructions: What is the intent of this request?
+ If you don't know the intent, don't guess; instead respond with "Unknown".
+ Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
+
+ User Input: Can you send a very quick approval to the marketing team?
+ Intent: SendMessage
+
+ User Input: Can you send the full update to the marketing team?
+ Intent: SendEmail
+
+ User Input: {request}
+ Intent:
+ """;
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ prompt = f"""
+ Instructions: What is the intent of this request?
+ If you don't know the intent, don't guess; instead respond with "Unknown".
+ Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
+
+ User Input: Can you send a very quick approval to the marketing team?
+ Intent: SendMessage
+
+ User Input: Can you send the full update to the marketing team?
+ Intent: SendEmail
+
+ User Input: {request}
+ Intent:
+ """
+ ```
+
+ ::: zone-end

- User Input: Can you send a very quick approval to the marketing team?
- Intent: SendMessage
+ ## Use personas in prompts

- User Input: Can you send the full update to the marketing team?
- Intent: SendEmail
+ Assigning personas in prompts is a technique used to guide the model to adopt a specific point of view, tone, or expertise when generating responses. Personas allow you to tailor the output to better suit the context or audience of the task. The persona is useful when you need the response to simulate a profession or reflect a tone of voice. To assign a persona, you should clearly describe the role definition in your prompt.

- User Input: {request}
- Intent:
- """;
- ```
+ Here's an example of a prompt that assigns a persona:

- ## Use personas in prompts
+ ::: zone pivot="csharp"

- Assigning personas in prompts is a technique used to guide the model to adopt a specific point of view, tone, or expertise when generating responses. Personas allow you to tailor the output to better suit the context or audience of the task. This is useful when you need the response to simulate a profession or reflect a tone of voice. To assign a persona, you should clearly describe the role definition in your prompt.
+ ```c#
+ string prompt = $"""
+ You are a highly experienced software engineer. Explain the concept of asynchronous programming to a beginner.
+ """;
+ ```

- Here's an example of a prompt that assigns a persona:
+ ::: zone-end

- ```c#
- string prompt = $"""
- You are a highly experienced software engineer. Explain the concept of asynchronous programming to a beginner.
- """;
- ```
+ ::: zone pivot="python"
+
+ ```python
+ prompt = """
+ You are a highly experienced software engineer. Explain the concept of asynchronous programming to a beginner.
+ """
+ ```
+
+ ::: zone-end

  ## Chain of thought prompting

- With chain of thought prompting, you prompt the model to perform a task step-by-step and to present each step and its result in order in the output. This can simplify prompt engineering by offloading some execution planning to the model, and makes it easier to isolate any problems to a specific step so you know where to focus further efforts. You can instruct the model to include its chain of thought, or you can use examples to show the model how to break down tasks.
+ With chain of thought prompting, you prompt the model to perform a task step-by-step and to present each step and its result in order in the output. Chain prompting can simplify prompt engineering by offloading some execution planning to the model. The chain prompts make it easier to isolate any problems to a specific step so you know where to focus further efforts. You can instruct the model to include its chain of thought, or you can use examples to show the model how to break down tasks.

  Here's an example that instructs the model to describe the step-by-step reasoning:

- ```c#
- string prompt = $"""
- A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
- Instructions: Explain your reasoning step by step before providing the answer.
- """;
- ```
+ ::: zone pivot="csharp"
+
+ ```c#
+ string prompt = $"""
+ A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
+ Instructions: Explain your reasoning step by step before providing the answer.
+ """;
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ prompt = """
+ A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
+ Instructions: Explain your reasoning step by step before providing the answer.
+ """
+ ```
+
+ ::: zone-end

  Here's an example that describes the steps to complete to the model:

- ```c#
- prompt = $"""
- Instructions: A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
+ ::: zone pivot="csharp"

- First, calculate how many full baskets the farmer can make by dividing the total apples by the apples per basket:
- 1.
+ ```c#
+ prompt = $"""
+ Instructions: A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?

- Next, subtract the number of apples used in the baskets from the total number of apples to find the remainder:
- 1.
+ First, calculate how many full baskets the farmer can make by dividing the total apples by the apples per basket:
+ 1.

- "Finally, the farmer will eat the remaining apples:
- 1.
- """;
- ```
+ Next, subtract the number of apples used in the baskets from the total number of apples to find the remainder:
+ 1.
+
+ "Finally, the farmer will eat the remaining apples:
+ 1.
+ """;
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ prompt = """
+ Instructions: A farmer has 150 apples and wants to sell them in baskets. Each basket can hold 12 apples. If any apples remain after filling as many baskets as possible, the farmer will eat them. How many apples will the farmer eat?
+
+ First, calculate how many full baskets the farmer can make by dividing the total apples by the apples per basket:
+ 1.
+
+ Next, subtract the number of apples used in the baskets from the total number of apples to find the remainder:
+ 1.
+
+ Finally, the farmer will eat the remaining apples:
+ 1.
+ """
+ ```
+
+ ::: zone-end

- The output of this prompt should resemble the following:
+ The output of this prompt should resemble the following output:

  ```output
  Divide 150 by 12 to find the number of full baskets the farmer can make: 150 / 12 = 12.5 full baskets
@@ -110,12 +218,12 @@ The farmer will eat 6 remaining apples.

  - **Specific Inputs Yield Specific Outputs**: LLMs respond based on the input they receive. Crafting clear and specific prompts is crucial to get the desired output.

- - **Experimentation is Key**: You may need to iterate and experiment with different prompts to understand how the model interprets and generates responses. Small tweaks can lead to significant changes in outcomes.
+ - **Experimentation is Key**: You might need to iterate and experiment with different prompts to understand how the model interprets and generates responses. Small tweaks can lead to significant changes in outcomes.

  - **Context Matters**: LLMs consider the context provided in the prompt. You should ensure that the context is well-defined and relevant to obtain accurate and coherent responses.

- - **Handle Ambiguity**: Bear in mind that LLMs may struggle with ambiguous queries. Provide context or structure to avoid vague or unexpected results.
+ - **Handle Ambiguity**: Bear in mind that LLMs might struggle with ambiguous queries. Provide context or structure to avoid vague or unexpected results.

  - **Length of Prompts**: While LLMs can process both short and long prompts, you should consider the trade-off between brevity and clarity. Experimenting with prompt length can help you find the optimal balance.

- Crafting effective prompts requires clarity, precision, and thoughtful design. Techniques like zero-shot and few-shot learning, persona assignments, and chain-of-thought prompting can enhance the quality and relevance of the responses. By providing clear instructions, well-defined context, and examples when needed, you can guide the model to generate finely tuned relevant responses. Remember to experiment and refine your prompts to achieve the best results.
+ Crafting effective prompts requires clarity, precision, and thoughtful design. Techniques like zero-shot and few-shot learning, persona assignments, and chain-of-thought prompting can enhance the quality and relevance of the responses. By providing clear instructions, well-defined context, and examples when needed, you can guide the model to generate finely tuned relevant responses. To achieve the best results, remember to experiment and refine your prompts.
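
The zero-shot and few-shot prompts above interpolate a `{request}` value but stop short of sending anything to a model. Here is a minimal sketch of how such a prompt could be executed with Semantic Kernel for .NET, assuming an Azure OpenAI chat deployment; the builder settings, the sample `request`, and the `InvokePromptAsync` call are illustrative assumptions rather than content from the files in this commit:

```c#
using Microsoft.SemanticKernel;

// Illustrative placeholders; substitute your own Azure OpenAI deployment details.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "my-deployment",
        endpoint: "https://my-resource.openai.azure.com/",
        apiKey: "my-api-key")
    .Build();

// A sample user request to classify.
string request = "Please draft a thank-you note to the marketing team.";

string prompt = $"""
    Instructions: What is the intent of this request?
    If you don't know the intent, don't guess; instead respond with "Unknown".
    Choices: SendEmail, SendMessage, CompleteTask, CreateDocument, Unknown.
    User Input: {request}
    Intent:
    """;

// Send the interpolated prompt to the chat model and print the classified intent.
var result = await kernel.InvokePromptAsync(prompt);
Console.WriteLine(result);
```

The same pattern would apply to the few-shot variant; only the prompt text changes.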

learn-pr/wwl-azure/create-plugins-semantic-kernel/includes/3-use-semantic-kernel-prompt-templates.md

Lines changed: 38 additions & 14 deletions
@@ -14,7 +14,7 @@ To call a function and use the results in your prompt, use the {{namespace.funct

  You can also pass parameters to the function, either using variables or hardcoded values. For example, if `weather.getForecast` takes a city name as input, you can use the following examples:

- ```txt
+ ```console
  The weather today in {{$city}} is {{weather.getForecast $city}}.
  The weather today in Barcelona is {{weather.getForecast "Barcelona"}}.
  ```
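
The template above assumes a plugin registered under the name `weather` that exposes a `getForecast` function. As a hedged sketch of what could back that call in Semantic Kernel for .NET (the plugin class, its stubbed forecast value, and the registration shown here are hypothetical, not part of this commit's files):

```c#
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Assumes a kernel configured elsewhere with a chat completion service;
// built bare here only so the sketch is self-contained.
var kernel = Kernel.CreateBuilder().Build();

// Register the plugin under the "weather" name so templates can call
// {{weather.getForecast $city}}.
kernel.Plugins.AddFromType<WeatherPlugin>("weather");

// Hypothetical native plugin backing the weather.getForecast template call.
public class WeatherPlugin
{
    [KernelFunction("getForecast")]
    [Description("Returns a short weather forecast for the given city.")]
    public string GetForecast(string city)
    {
        // A real plugin would query a weather service; this stub returns a fixed value.
        return $"sunny with a high of 24°C in {city}";
    }
}
```

With the plugin registered, rendering the template replaces `{{weather.getForecast $city}}` with the function's return value.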
@@ -23,20 +23,44 @@ The weather today in Barcelona is {{weather.getForecast "Barcelona"}}.

  To run your prompt, you first need to create a `KernelFunction` object from the prompt using `kernel.CreateFunctionFromPrompt`. Then you can create a `KernelArguments` object containing any variables, and invoke your function using `InvokeAsync`. You can either call `InvokeAsync` on the kernel itself or on the `KernelFunction` object. Here's an example:

- ```c#
- string city = "Rome";
- var prompt = "I'm visiting {{$city}}. What are some activities I should do today?";
+ ::: zone pivot="csharp"

- var activitiesFunction = kernel.CreateFunctionFromPrompt(prompt);
- var arguments = new KernelArguments { ["city"] = city };
+ ```c#
+ string city = "Rome";
+ var prompt = "I'm visiting {{$city}}. What are some activities I should do today?";

- // InvokeAsync on the KernelFunction object
- var result = await activitiesFunction.InvokeAsync(kernel, arguments);
- Console.WriteLine(result);
+ var activitiesFunction = kernel.CreateFunctionFromPrompt(prompt);
+ var arguments = new KernelArguments { ["city"] = city };

- // InvokeAsync on the kernel object
- result = await kernel.InvokeAsync(activitiesFunction, arguments);
- Console.WriteLine(result);
- ```
+ // InvokeAsync on the KernelFunction object
+ var result = await activitiesFunction.InvokeAsync(kernel, arguments);
+ Console.WriteLine(result);
+
+ // InvokeAsync on the kernel object
+ result = await kernel.InvokeAsync(activitiesFunction, arguments);
+ Console.WriteLine(result);
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="python"
+
+ ```python
+ city = "Rome"
+ prompt = "I'm visiting {{$city}}. What are some activities I should do today?"
+
+ activities_function = kernel.create_function_from_prompt(prompt)
+ arguments = {"city": city}
+
+ # Invoke on the KernelFunction object
+ result = await activities_function.invoke_async(kernel, arguments)
+ print(result)
+
+ # Invoke on the kernel object
+ result = await kernel.invoke_async(activities_function, arguments)
+ print(result)
+ ```
+
+ ::: zone-end

- The Semantic Kernel prompt template language makes it easy to add AI-driven features to your apps using natural language. With support for variables, function calls, and parameters, you can create reusable and dynamic templates without complicated code. It’s a simple yet powerful way to build smarter, more adaptable applications.
+ The Semantic Kernel prompt template language makes it easy to add AI-driven features to your apps using natural language. With support for variables, function calls, and parameters, you can create reusable and dynamic templates without complicated code. It’s a simple yet powerful way to build smarter, more adaptable applications.
