Commit 155cb74 (parent: 302483f)

Some fixups

Signed-off-by: Rafael Vasquez <[email protected]>

3 files changed: 10 additions, 10 deletions


docs/lab-1/README.md (4 additions, 4 deletions)

````diff
@@ -4,22 +4,22 @@ description: Set up AnythingLLM to start using an LLM locally
 logo: images/ibm-blue-background.png
 ---
 
-Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.
+With [AnythingLLM installed](../pre-work/README.md#anythingllm), open the desktop application to configure it with `ollama`. The following screenshots are taken from a Mac, but this should be similar on Windows and Linux.
 
-First, if you haven't already, download the Granite 3.1 model. Open up a terminal and run the following command:
+First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
 !!! note
-    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
 
 Either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama` but you can always change it in the future.
 
 ![wrench icon](../images/anythingllm_wrench_icon.png)
 
-Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.
+Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3.1-dense:8b` model you downloaded. You'd be able to see all the models you have access to through `ollama` here.
 
 ![llm configuration](../images/anythingllm_llm_config.png)
 
````
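The revised lab-1 instructions above tell readers to have `ollama` serving before pulling the model. A minimal sketch of that flow, assuming `ollama` is on your PATH and serves its API on the default port 11434 (both assumptions; skip the `serve` step if your install already runs it as a service):

```shell
# Sketch: start the server if needed, then pull the lab model.
MODEL="granite3.1-dense:8b"   # tag used throughout the labs

if ! command -v ollama >/dev/null 2>&1; then
  # Nothing to do on machines without ollama installed.
  echo "ollama not found on PATH; see the pre-work install steps"
else
  # Only start a server if nothing already answers on the default port.
  if ! curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
    ollama serve &   # interactively, run this in its own terminal instead
    sleep 2          # give the server a moment to come up
  fi
  ollama pull "$MODEL"
  ollama list        # the pulled model should appear in this listing
fi
```

This mirrors the wording the commit adds ("you may have to run `ollama serve` in its own terminal"); the port probe is just one way to tell whether a server is already up.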
docs/lab-7/README.md (1 addition, 1 deletion)

````diff
@@ -40,7 +40,7 @@ but you may want to change it in the future.
 
 ![wrench icon](../images/anythingllm_wrench_icon.png)
 
-Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3-dense:8b` model. (You should be able to
+Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3.1-dense:8b` model. (You should be able to
 see all the models you have access to through `ollama` there.)
 
 ![llm configuration](../images/anythingllm_llm_config.png)
````

docs/pre-work/README.md (5 additions, 5 deletions)

````diff
@@ -57,7 +57,7 @@ brew install ollama
 ```
 
 !!! note
-    You can save time by starting the model download used for the lab in the background by running `ollama pull granite3.1-dense:8b` in its own terminal.
+    You can save time by starting the model download used for the lab in the background by running `ollama pull granite3.1-dense:8b` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
 
 ## Installing Visual Studio Code
 
@@ -71,6 +71,10 @@ You can download and install VSCode from their [website](https://code.visualstud
 Download and install the IDE of your choice [here](https://www.jetbrains.com/ides/#choose-your-ide).
 If you'll be using `python` (like this workshop does), pick [PyCharm](https://www.jetbrains.com/pycharm/).
 
+## Installing Continue
+
+Choose your IDE on their [website](https://www.continue.dev/) and install the extension.
+
 ## Installing AnythingLLM
 
 Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
@@ -95,7 +99,3 @@ open-webui serve
 Now that you have all of the tools you need, let's start building our local AI co-pilot.
 
 **Head over to [Lab 1](/docs/lab-1/README.md) if you have AnythingLLM or [Lab 1.5](/docs/lab-1.5/README.md) for Open-WebUI.**
-
-## Installing Continue
-
-Choose your IDE on their [website](https://www.continue.dev/) and install the extension.
````

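The pre-work note added here hinges on the `ollama` server actually running before the pull starts. A quick way to check, assuming the default API address `http://localhost:11434` (an assumption; adjust if you have set `OLLAMA_HOST`):

```shell
# Probe the ollama HTTP API; /api/tags lists locally available models.
OLLAMA_URL="http://localhost:11434"

if curl -sf "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  echo "ollama is serving at $OLLAMA_URL"
else
  echo "nothing answering at $OLLAMA_URL; try running 'ollama serve'"
fi
```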