docs/pre-work/README.md: 1 addition & 7 deletions
```diff
@@ -14,7 +14,7 @@ These are the required applications and general installation notes for this work
 
 - [Python](#installing-python)
 - [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
-- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
+- [Open WebUI](#installing-open-webui)
 
 ## Installing Python
 
@@ -60,12 +60,6 @@ brew install ollama
 !!! note
     You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
 
-## Installing AnythingLLM
-
-Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
-
-!!! note
-    You only need one of AnythingLLM or Open-WebUI for this lab.
```
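The time-saving tip kept in the unchanged context above can be sketched as a small shell snippet. The model name `granite4:micro` and the `ollama serve` / `ollama pull` commands come from the diff itself; the `command -v` guard and the `STATUS` variable are illustrative additions, since whether the server must be started manually depends on how Ollama was installed:

```shell
# Pre-download the lab model in the background (a sketch, assuming Ollama
# is installed per the "Installing Ollama" section, e.g. brew install ollama).
if command -v ollama >/dev/null 2>&1; then
  ollama serve >/dev/null 2>&1 &   # may be a no-op if a server is already running
  sleep 1                          # give the server a moment to come up
  ollama pull granite4:micro       # download the model used in the lab
  STATUS="pulled"
else
  STATUS="ollama-missing"          # hypothetical marker for environments without Ollama
  echo "Install Ollama first, e.g. 'brew install ollama' on macOS."
fi
```

Running this in its own terminal lets the (multi-gigabyte) model download finish while you work through the rest of the pre-work.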