Commit 465c2c9

Merge pull request #24 from rafvasq/pass-through-update
Pass through update
2 parents de320cd + 7542b65 commit 465c2c9

File tree: 6 files changed, +131 −252 lines changed

docs/README.md
Lines changed: 18 additions & 24 deletions

@@ -6,17 +6,14 @@ logo: images/ibm-blue-background.png
 
 ## Open Source AI workshop
 
-Welcome to our workshop! Thank you for trusting us to help you learn about this
-new and exciting space. There is a lot going on here, and we want to give you
-enough to be able to feel confident in consuming LLM(s) and ideally find success
-quickly. In this workshop we'll be using a local AI Model for code completion,
-and learning best practices leveraging an Open Source LLM.
+Welcome to the Open Source AI workshop! Thank you for trusting us to help you learn about this
+new and exciting space. In this workshop, you'll gain the skills and confidence to effectively use LLMs locally through simple exercises and experimentation, and learn best practices for leveraging open source AI.
 
 Our overarching goals of this workshop is as follows:
 
-* Understand what Open Source AI is, and its general use cases
-* How to use an Open Source AI model that is built in a verifiable and legal way
-* Learn about Prompt Engineering, how to leverage a local LLM in starter daily tasks
+* Learn about Open Source AI and its general use cases.
+* Use an open source LLM that is built in a verifiable and legal way.
+* Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.
 
 !!! tip
     This workshop may seem short, but a lot of working with AI is exploration and engagement.

@@ -31,21 +28,19 @@ Our overarching goals of this workshop is as follows:
 
 | Lab | Description |
 | :--- | :--- |
-| [Lab 0: Pre-work](pre-work/README.md) | Pre-work and set up for the workshop |
-| [Lab 1: Building a local AI co-pilot](lab-1/README.md) | Let's get VSCode and our local AI working together |
-| [Lab 2: Using the local AI co-pilot](lab-2/README.md) | Let's learn about how to use a local AI co-pilot |
-| [Lab 3: Configuring AnythingLLM](lab-3/README.md) | Let's configure AnythingLLM or Open-WebUI |
-| [Lab 3.5: Configuring Open-WebUI](lab-3.5/README.md) | Let's configure Open-WebUI or AnythingLLM |
-| [Lab 4: Prompt engineering overview](lab-4/README.md) | Let's learn about leveraging and engaging with the `granite3.1-dense` model |
-| [Lab 5: Useful prompts and use cases](lab-5/README.md) | Let's get some good over arching prompts and uses cases with `granite3.1-dense` model |
-| [Lab 6: Using AnythingLLM for a local RAG](lab-6/README.md) | Let's build a local RAG and use `granite3.1-dense` to talk to it |
-
-!!! success
-    Thank you SO MUCH for joining us on this workshop, if you have any thoughts or questions
-    the TAs would love answer them for you. If you found any issues or bugs, don't hesitate
-    to put a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
-    [Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) in and we'll get to it
-    ASAP.
+| [Lab 0: Pre-work](pre-work/README.md) | Install prerequisites for the workshop |
+| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
+| [Lab 2: Using the local LLM](lab-2/README.md) | Test some general prompt templates |
+| [Lab 3: Engineering prompts](lab-3/README.md) | Learn and apply Prompt Engineering concepts |
+| [Lab 4: Using AnythingLLM for a local RAG](lab-4/README.md) | Build a simple local RAG |
+| [Lab 5: Building an AI co-pilot](lab-5/README.md) | Build a coding assistant |
+| [Lab 6: Using your coding co-pilot](lab-6/README.md) | Use your coding assistant for tasks |
+
+Thank you SO MUCH for joining us in this workshop! If you have any thoughts or questions at any point,
+the TAs would love to answer them for you. If you find any issues or bugs, don't hesitate
+to open a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
+[Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) and we'll get to it
+ASAP.
 
 ## Compatibility
 

@@ -60,4 +55,3 @@ This workshop has been tested on the following platforms:
 * [JJ Asghar](https://github.com/jjasghar)
 * [Gabe Goodhart](https://github.com/gabe-l-hart)
 * [Ming Zhao](https://github.com/mingxzhao)
-
(binary image file changed, 136 KB; preview not shown)

docs/lab-1.5/README.md
Lines changed: 12 additions & 46 deletions

@@ -5,50 +5,26 @@ logo: images/ibm-blue-background.png
 ---
 
 !!! warning
-    This should be noted that this is optional. You don't need Open-WebUI if you have AnythingLLM already running. This is **optional**.
+    This is **optional**. You don't need Open-WebUI if you have AnythingLLM already running.
 
-Now that you've gotten [Open-WebUI installed](../pre-work/README.md#open-webui) we need to configure it with `ollama` and Open-WebUI
-to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
+Now that you have [Open-WebUI installed](../pre-work/README.md#installing-open-webui), let's configure it and `ollama` to talk to one another. The following screenshots are from a Mac, but the gist of this should be the same on Windows and Linux.
+
+Open up Open-WebUI (assuming you've run `open-webui serve` and nothing else), and you should see something like the following:
 
-Open up Open-WebUI (assuming all you have done is `open-webui serve` and
-nothing else), and you should see something like the following:
 ![default screen](../images/openwebui_open_screen.png)
 
-If you see this that means Open-WebUI is installed correctly, and we can continue configuration, if not, please find a workshop TA or
-raise your hand we'll be there to help you ASAP.
+If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
 
-Before clicking the "Getting Started" button, make sure that `ollama` has
-`granite3.1-dense` pulled down.
+Before clicking the *Getting Started* button, make sure that `ollama` has `granite3.1-dense` downloaded:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
-Run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
-model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.
-
-If you didn't know, the supported languages with `granite3.1-dense` now include:
-
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
-
-And the Capabilities also include:
-
-- Summarization
-- Text classification
-- Text extraction
-- Question-answering
-- Retrieval Augmented Generation (RAG)
-- Code related tasks
-- Function-calling tasks
-- Multilingual dialog use cases
-- Long-context tasks including long document/meeting summarization, long document QA, etc.
-
 !!! note
-    We need to figure out a way to copy the models into ollama without downloading.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide which tasks you might want to use it for.
 
-Click the "Getting Started" button, and fill out the next screen, and click the
-"Create Admin Account". This will be your login for your local machine, remember this because
-it will also be the Open-WebUI configuration user if want to dig deeper into it after this workshop.
+Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will also be your Open-WebUI configuration login if you want to dig deeper after this workshop.
 
 ![user setup screen](../images/openwebui_user_setup_screen.png)
 

@@ -57,22 +33,12 @@ the center!
 
 ![main screen](../images/openwebui_main_screen.png)
 
-Ask it a question, see that it works as you expect...may I suggest:
+Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.
 
-```
-Who is Batman?
-```
+The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
 
 ![batman](../images/openwebui_who_is_batman.png)
 
-Now you may notice that the answer is slighty different then the screen shot above. That's expected and nothing to worry about. If
-you have more questions about it raise your hand and one of the helpers would love to talk you about it.
-
-Congratulations! You have Open-WebUI running now, configured to work with `granite3.1-dense` and `ollama`!
-
-!!! note
-    This was done on your local machine, take a moment and realize if you
-    needed to create a shared AI enviroment, this could be easily leveraged
-    here. This is very out of scope of this workshop, but the TAs can help if
-    you have some general questions around running this in this "space."
+You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!
 
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
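Editor's aside: the `ollama pull` step in this lab can be double-checked from a terminal before continuing. A minimal sketch, assuming `ollama` is installed and on your PATH; the model table below is illustrative sample output, not a real listing:

```bash
# Hypothetical check (requires ollama installed):
#   ollama list | grep granite3.1-dense
# 'ollama list' prints one row per downloaded model; grep finds the row.

# Demonstrated against illustrative sample output so the logic is visible:
sample_output='NAME                   ID              SIZE      MODIFIED
granite3.1-dense:8b    0a2b3c4d5e6f    4.9 GB    2 minutes ago'

if printf '%s\n' "$sample_output" | grep -q '^granite3.1-dense:8b'; then
  echo "model present"
else
  echo "model missing - run: ollama pull granite3.1-dense:8b"
fi
```

If the model row is missing, re-run the pull command from the lab and check again.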

docs/lab-1/README.md
Lines changed: 14 additions & 40 deletions

@@ -4,67 +4,41 @@ description: Steps to configure AnythingLLM for usage
 logo: images/ibm-blue-background.png
 ---
 
-Now that you've gotten [AnythingLLM installed](../pre-work/README.md#anythingllm) we need to configure it with `ollama` and AnythingLLM
-to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
+Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.
 
-Open up AnyThingLLM, and you should see something like the following:
-![default screen](../images/anythingllm_open_screen.png)
-
-If you see this that means AnythingLLM is installed correctly, and we can continue configuration, if not, please find a workshop TA or
-raise your hand we'll be there to help you ASAP.
-
-Next as a sanity check, run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
-model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.
+First, if you haven't already, download the Granite 3.1 model. Open up a terminal and run the following command:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
-If you didn't know, the supported languages with `granite3.1-dense` now include:
-
-- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
-
-And the Capabilities also include:
-
-- Summarization
-- Text classification
-- Text extraction
-- Question-answering
-- Retrieval Augmented Generation (RAG)
-- Code related tasks
-- Function-calling tasks
-- Multilingual dialog use cases
-- Long-context tasks including long document/meeting summarization, long document QA, etc.
-
 !!! note
-    We need to figure out a way to copy the models into ollama without downloading, conference wifi is never fast enough.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide which tasks you might want to use it for.
 
-Next click on the `wrench` icon, and open up the settings. For now we are going to configure the global settings for `ollama`
-but you may want to change it in the future.
+Either click the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change them in the future.
 
 ![wrench icon](../images/anythingllm_wrench_icon.png)
 
-Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3-dense:8b` model. (You should be able to
-see all the models you have access to through `ollama` there.)
+Click on the *LLM* section and select **Ollama** as the LLM Provider. Select the `granite3.1-dense:8b` model you downloaded. You should be able to see all the models you have access to through `ollama` here.
 
 ![llm configuration](../images/anythingllm_llm_config.png)
 
-Click the "Back to workspaces" button where the wrench was. And Click "New Workspace."
+Click the *Back to workspaces* button (where the 🔧 was) and head back to the homepage.
+
+Click *New Workspace*.
 
 ![new workspace](../images/anythingllm_new_workspace.png)
 
-Name it something like "learning llm" or the name of the event we are right now, something so you know it's somewhere you are learning
-how to use this LLM.
+Give it a name (e.g. the event you're attending today):
 
 ![naming new workspace](../images/anythingllm_naming_workspace.png)
 
-Now we can test our connections _through_ AnythingLLM! I like the "Who is Batman?" question, as a sanity check on connections and that
-it knows _something_.
+Now, let's test our connection through AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.
 
-![who is batman](../images/anythingllm_who_is_batman.png)
+The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
 
-Now you may notice that the answer is slighty different then the screen shot above. That's expected and nothing to worry about. If
-you have more questions about it raise your hand and one of the helpers would love to talk you about it.
+![who is batman](../images/anythingllm_who_is_batman.png)
 
-Congratulations! You have AnythingLLM running now, configured to work with `granite3.1-dense` and `ollama`!
+You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!
 
+**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
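Editor's aside: the same "Who is Batman?" sanity check can be run against `ollama`'s REST API directly, without any GUI. A minimal sketch, assuming `ollama` serves on its default port 11434; the JSON below is an abbreviated illustrative reply, not real model output:

```bash
# Hypothetical direct request (requires ollama running locally):
#   curl -s http://localhost:11434/api/generate \
#     -d '{"model": "granite3.1-dense:8b", "prompt": "Who is Batman?", "stream": false}'
#
# The reply is a JSON object; the answer lives in its "response" field.
# Illustrative (abbreviated) reply:
reply='{"model":"granite3.1-dense:8b","response":"Batman is a fictional superhero from DC Comics.","done":true}'

# Extract the "response" field with sed (fine for a quick sanity check;
# prefer jq for real JSON parsing):
printf '%s' "$reply" | sed -n 's/.*"response":"\([^"]*\)".*/\1/p'
```

This is handy when AnythingLLM cannot reach `ollama`: if the curl request answers, the problem is in the AnythingLLM LLM Provider settings rather than in `ollama` itself.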
