Welcome to the Open Source AI workshop! Thank you for trusting us to help you learn about this new and exciting space. In this workshop, you'll gain the skills and confidence to effectively use LLMs locally through simple exercises and experimentation, and learn best practices for leveraging open source AI.

The overarching goals of this workshop are as follows:

* Learn about Open Source AI and its general use cases.
* Use an open source LLM that is built in a verifiable and legal way.
* Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.

!!! tip
    This workshop may seem short, but a lot of working with AI is exploration and engagement.

| Lab | Description |
| :--- | :--- |
|[Lab 0: Pre-work](pre-work/README.md)| Install prerequisites for the workshop |
|[Lab 1: Configuring AnythingLLM](lab-1/README.md)| Set up AnythingLLM to start using an LLM locally |
|[Lab 2: Using the local LLM](lab-2/README.md)| Test some general prompt templates |

This is **optional**. You don't need Open-WebUI if you have AnythingLLM already running.

Now that you have [Open-WebUI installed](../pre-work/README.md#installing-open-webui), let's configure it so that `ollama` and Open-WebUI can talk to one another. The following screenshots are from a Mac, but the gist of this should be the same on Windows and Linux.
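
If you'd like to sanity-check things from a terminal first, here's a minimal sketch. It assumes the default ports (`ollama` listening on 11434 and Open-WebUI serving on 8080); adjust if you've changed either one.

```bash
# Start the ollama server in one terminal
# (skip this if ollama is already running as a background service)
ollama serve

# In a second terminal, start the Open-WebUI server
open-webui serve

# Confirm ollama is reachable and see which models it has available
curl http://localhost:11434/api/tags
```

If that `curl` call returns JSON, Open-WebUI will be able to find your models.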
Open up Open-WebUI (assuming you've run `open-webui serve` and nothing else), and you should see something like the following:
!!! note
    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
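
If you haven't pulled the model yet, the download is a single `ollama` command; a minimal sketch (the model name comes from the library page linked above):

```bash
# Download the Granite model used throughout this workshop
ollama pull granite3.1-dense
```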

Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine; remember it, because it's also the Open-WebUI configuration login if you want to dig deeper after this workshop.

Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.
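
If you ever want to rule out the UI, you can run the same sanity check directly against `ollama` from a terminal; a quick sketch, assuming the model pulled earlier:

```bash
# Chat with the model straight from the CLI
ollama run granite3.1-dense "Who is Batman?"

# See which models are currently loaded into memory
# (ollama ps is available in recent ollama releases)
ollama ps
```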

You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!

docs/lab-1/README.md

---
description: Steps to configure AnythingLLM for usage
logo: images/ibm-blue-background.png
---

Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.

!!! note
    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
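
Once the pull finishes, it's worth confirming the model is actually available locally before pointing AnythingLLM at it; a quick check from the terminal:

```bash
# List locally available models; granite3.1-dense should appear in the output
ollama list
```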

Either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change them in the future.
Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3-dense:8b` model. (You should be able to
48
-
see all the models you have access to through `ollama` there.)
22
+
Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.

Give it a name (e.g. the event you're attending today):

Now, let's test our connection _through_ AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.

You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!