docs/README.md: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ Our overarching goals of this workshop is as follows:
* Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.

!!! tip
-    working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started — so you can collaborate, experiment, and learn together. Don’t hesitate to ask questions, raise your hand, and connect with other participants.
+    Working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started — so you can collaborate, experiment, and learn together. Don’t hesitate to ask questions, raise your hand, and connect with other participants.
docs/lab-1.5/README.md: 8 additions & 3 deletions
@@ -4,6 +4,8 @@ description: Set up Open-WebUI to start using an LLM locally
logo: images/ibm-blue-background.png
---

+## Setup
+
Let's start by configuring [Open-WebUI](../pre-work/README.md#installing-open-webui) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.

First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
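The download command itself sits outside the lines shown in this hunk. Assuming the same model tag the rest of the lab refers to (`granite3.1-dense`), the pull would presumably look like:

```
# model tag assumed from the rest of this lab; the command is not shown in the hunk itself
$ ollama pull granite3.1-dense
```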
@@ -25,11 +27,12 @@ Click *Getting Started*. Fill out the next screen and click the *Create Admin Ac
Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
@@ -38,4 +41,6 @@ The first response may take a minute to process. This is because `ollama` is spi
You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

-**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
+## Conclusion
+
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
docs/lab-1/README.md: 7 additions & 1 deletion
@@ -4,6 +4,8 @@ description: Set up AnythingLLM to start using an LLM locally
logo: images/ibm-blue-background.png
---

+## Setup
+
Let's start by configuring [AnythingLLM](../pre-work/README.md#anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.

First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
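As in Lab 1.5, the download command itself isn't part of this hunk. A quick way to confirm the model landed locally, using a standard `ollama` subcommand rather than anything from the lab text, is:

```
# granite3.1-dense should appear in the output once the pull completes
$ ollama list
```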
@@ -33,6 +35,8 @@ Give it a name (e.g. the event you're attending today):


+## Testing the Connection
+
Now, let's test our connection to AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
@@ -41,4 +45,6 @@ The first response may take a minute to process. This is because `ollama` is spi
You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

-**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
+## Conclusion
+
+**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
docs/lab-2/README.md: 8 additions & 7 deletions
@@ -4,7 +4,9 @@ description: Get acquainted with your local LLM
logo: images/ibm-blue-background.png
---

-It's time for the fun exploration part your Prompt Engineering (PE) journey.
+It's time for the fun, exploratory part of your Prompt Engineering (PE) journey. In this lab, you're encouraged to spend as much time as you can chatting with the model, especially if you have little experience doing so. Keep some questions in mind: can you make it speak in a different tone? Can it provide a recipe for a cake or a poem about technology? Is it self-aware?
+
+## Chatting with the Model

Open a brand _new_ Workspace in AnythingLLM (or Open-WebUI) called "Learning Prompt Engineering".
@@ -15,15 +17,14 @@ For some inspiration, I like to start with `Who is Batman?` then work from there
Batman's top 10 enemies are, or what was the most creative way Batman saved the day? Some example responses to those questions are below.

!!! note
-    If you treat the LLM like a knowledge repository, you can get a lot of useful information out of it. But remember not to
+    If you treat the LLM like a knowledge repository, you can get a lot of useful information out of it. But, remember not to
    blindly accept its output. You should always cross-reference important things. Treat it like a confident librarian! They've read
    a lot and they can be very fast at finding books, but they can mix things up too!

-## Example Output using the `ollama` CLI
+## Using the `ollama` CLI

This is an example of using the CLI with vanilla ollama:

-
```
$ ollama run granite3.1-dense
>>> Who is Batman?
@@ -99,8 +100,8 @@ good - all hallmarks of his character. The innovative approach to saving the day
in Batman's extensive history.
```

-## Try it Yourself
+## Conclusion

-Spend some time asking your LLM about anything about any topic and exploring how you can alter its output to provide you with more interesting or satisfying responses.
+Spend as much time as you want asking your LLM anything about any topic and exploring how you can alter its output to provide more interesting or satisfying responses.

-When you feel acquainted with your model, move on to [Lab 3](/docs/lab-3/README.md) to learn about Prompt Engineering.
+When you are acquainted with your model, move on to [Lab 3](https://ibm.github.io/opensource-ai-workshop/lab-3/) to learn about Prompt Engineering.
docs/lab-3/README.md

Prompt engineering is the practice of designing clear, intentional instructions to guide the behavior of an AI model.

-It involves crafting prompts—usually in natural language—that help a model identify what task to perform, how to perform it, and if there are considerations in style or format.
-This can include specifying tone, structure, context, or even assigning the AI a particular role.
-Prompt engineering is essential because the quality and precision of the prompt can significantly influence the quality, relevance, and creativity of the generated output.
-As generative models become more powerful, skillful prompting becomes a key tool for unlocking their full potential.
+It involves crafting prompts that help a model identify what task to perform, how to perform it, and whether there are any considerations of style or format. This can include specifying tone, structure, context, or even assigning the AI a particular role.
+
+Prompt engineering is essential because the quality and precision of the prompt can significantly influence the quality, relevance, and creativity of the generated output. As generative models become more powerful, skillful prompting becomes a key tool for unlocking their full potential.

### The Three Key Principles of PE
@@ -105,7 +104,6 @@ to be repaired and we should be able to reach out in a couple weeks.
So much better! By providing more context and more insight into what you are expecting in a response, we can greatly improve the quality of our responses. Also, by providing **multiple** examples, you're achieving *multi-shot prompting*!

-Let's move on to the next lab and apply what you've learned with some exercises.
+## Conclusion

-!!! tip
-    You could even use `ollama`'s CLI in a terminal to interact with your model by using `ollama run granite3.1-dense`
+Now that you know the basics of prompt engineering and simple techniques you can use to level-up your prompts, let's move on to [Lab 4](https://ibm.github.io/opensource-ai-workshop/lab-4/) and apply what you've learned with some exercises.
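The hunk above mentions *multi-shot prompting*, but the surrounding examples from the lab are not included in the diff. As a hypothetical illustration of the idea (not taken from the workshop text), a prompt that supplies several worked examples before the real question might look like:

```text
Classify the sentiment of each customer message as Positive, Negative, or Neutral.

Message: "The new dashboard is fantastic, thank you!"
Sentiment: Positive

Message: "I've been waiting three days for a reply."
Sentiment: Negative

Message: "My replacement laptop arrived today and it won't turn on."
Sentiment:
```

The earlier pairs establish the pattern and output format, so the model only has to fill in the final label.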
docs/lab-4/README.md: 6 additions & 2 deletions
@@ -4,11 +4,11 @@ description: Refine your prompting skills
logo: images/ibm-blue-background.png
---

-Complete the following exercises using your local LLM.
+Complete the following exercises using your local LLM. Try to come up with your own prompts from scratch! Take note of what works and what doesn't.

- **Be curious!** What if you ask the same question but in a different way? Does the response significantly change?
- **Be creative!** Do you want the response to be organized in a numbered or bulleted list instead of sentences?
-- **Be specific!** Aim for perfection. Use descriptive language, examples, and parameters to perfect your output.
+- **Be specific!** Aim for perfection. Use descriptive language and examples to perfect your output.

!!! note
    Discovered something cool or unexpected? Don’t keep it to yourself, raise your hand or let the TA know!
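To make the *Be specific!* bullet above concrete, here is one hypothetical before-and-after (an illustration, not one of the lab's actual exercises):

```text
Vague:    Tell me about Python.

Specific: Explain Python to a brand-new programmer in exactly three bullet points,
          each under 20 words, and end with one recommended beginner project.
```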
@@ -209,3 +209,7 @@ all designed to be completed in a single session of gameplay
The best part of this prompt is that you can take the output and extend or shorten the portions it starts with, and tailor the story to your adventurers' needs!
</details>
+
+## Conclusion
+
+Well done! By completing these exercises, you're well on your way to being a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll move towards code generation and learn how to use a local coding assistant.
docs/lab-5/README.md: 4 additions & 1 deletion
@@ -69,11 +69,14 @@ For inline code suggestions, it's generally recommended that you use smaller mod
Now that you have everything configured in VSCode, let's make sure that it works. Ensure that `ollama` is running in the background either as a status bar item or in the terminal using `ollama serve`.

-
Open the Continue extension and test your local assistant.

```text
What language is popular for backend development?
```

Additionally, if you open a file for editing you should see possible tab completions to the right of your cursor (it may take a few seconds to show up).
+
+## Conclusion
+
+With your AI coding assistant now set up, move on to [Lab 6](https://ibm.github.io/opensource-ai-workshop/lab-6/) and actually use it!
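Continue talks to the same local `ollama` server configured earlier in the lab. If the assistant doesn't answer, one way to confirm the server is reachable is to hit `ollama`'s standard REST API directly; the model tag here is assumed from the earlier labs, and 11434 is `ollama`'s default port:

```
# sends a single non-streaming generation request to the local ollama server
$ curl http://localhost:11434/api/generate -d '{"model": "granite3.1-dense", "prompt": "Who is Batman?", "stream": false}'
```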
docs/lab-6/README.md: 7 additions & 14 deletions
@@ -36,7 +36,7 @@ Write the code for conway's game of life using pygame
!!! note
    [What is Conway's Game of Life?](https://en.wikipedia.org/wiki/Conway's_Game_of_Life)

-After a few moments, the mode should start writing code in the file, it might look something like:
+After a few moments, the model should start writing code in the file. It might look something like:


## AI-Generated Code
@@ -56,7 +56,7 @@ At this point, you can practice debugging or refactoring code with the AI co-pil
In the example generated code, a "main" entry point to the script is missing. In this case, using `cmd+I` again and trying the prompt: "write a main function for my game that plays ten rounds of Conway's
game of life using the `board()` function." might help. What happens?

-It's hard to read the generated case in the example case, making it hard to read the logic. To clean it up, I'll define a `main` function so the entry point exists. There was also a `tkinter` section in the generated code, I decided to put the main game loop there:
+It's hard to read the generated code in the example case, making it difficult to understand the logic. To clean it up, I'll define a `main` function so the entry point exists. There's also a `tkinter` section in the generated code, so I decided to put the main game loop there:

```python
if __name__ == '__main__':
@@ -78,7 +78,7 @@ It looks like the code is improving:
## Explaining the Code

-To debug further, use Granite-Code to explain what the different functions do. Simply highlight one of them, and use `cmd+L` to add it to the context window of your assistant and write a prompt similar to:
+To debug further, use Granite-Code to explain what the different functions do. Simply highlight one of them, use `cmd+L` to add it to the context window of your assistant, and write a prompt similar to:

```text
what does this function do?
@@ -98,24 +98,17 @@ Assuming you still have a function you wanted explained above in the context-win
write a pytest test for this function
```

-Now I got a good framework for a test here:
+The model generated a great framework for a test here:



-Notice that my test only spans what is provided in the context, so the test isn't integrated into my project yet. But, the code provides a good start. I'll need to create a new test file and integrate `pytest` into my project.
+Notice that the test only spans what is provided in the context, so it isn't integrated into my project yet. But the code provides a good start: I'll need to create a new test file and integrate `pytest` into my project to use it.

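The generated test only appears as a screenshot in the docs, so it isn't captured in this diff. As a purely hypothetical sketch of what such a `pytest` file for a Game of Life helper could look like (the `count_live_neighbors` function here is an assumed stand-in, not the model's actual output):

```python
# test_game_of_life.py -- hypothetical example; count_live_neighbors is an assumed
# helper, not necessarily the function the model generated in the lab.

def count_live_neighbors(board, row, col):
    """Count live cells in the eight cells surrounding (row, col)."""
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < len(board) and 0 <= c < len(board[0]):
                total += board[r][c]
    return total


def test_count_live_neighbors_center():
    board = [
        [1, 0, 0],
        [0, 0, 1],
        [0, 1, 0],
    ]
    # The center cell touches all three live cells on this board.
    assert count_live_neighbors(board, 1, 1) == 3


def test_count_live_neighbors_corner_ignores_out_of_bounds():
    board = [
        [0, 1],
        [1, 1],
    ]
    # The top-left corner only has three in-bounds neighbors, all live here.
    assert count_live_neighbors(board, 0, 0) == 3
```

Running `pytest test_game_of_life.py` would exercise both cases; to match the lab, the assumed helper would be replaced by the function the model actually generated.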
## Adding Comments

-Continue also provides the ability to automatically add comments to code:
+Continue also provides the ability to automatically add comments to code. Try it out!



-
## Conclusion

-
-!!! success
-    Thank you SO MUCH for joining us on this workshop, if you have any thoughts or questions
-    the TAs would love answer them for you. If you found any issues or bugs, don't hesitate
-    to put a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
-    [Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) in and we'll get to it
-    ASAP.
+This lab was all about using our local, open-source AI co-pilot to write complex code in Python. By combining Continue and Granite-Code, we were able to generate code, explain functions, write tests, and add comments to our code!
docs/pre-work/README.md: 2 additions & 2 deletions
@@ -96,6 +96,6 @@ pip install open-webui
open-webui serve
```

-Now that you have all of the tools you need, let's start building our local AI co-pilot.
+## Conclusion

-**Head over to [Lab 1](/docs/lab-1/README.md) if you have AnythingLLM or [Lab 1.5](/docs/lab-1.5/README.md) for Open-WebUI.**
+Now that you have all of the tools you need, head over to [Lab 1](https://ibm.github.io/opensource-ai-workshop/lab-1/) if you have AnythingLLM or [Lab 1.5](https://ibm.github.io/opensource-ai-workshop/lab-1.5/) for Open-WebUI.