Commit 1b1ddaa

Merge pull request #38 from IBM/jjasghar/remove_anythingllm
removing anything llm
2 parents: 842ce7e + 2c981d4

File tree

2 files changed: +3 −11 lines changed


docs/README.md

Lines changed: 2 additions & 4 deletions
```diff
@@ -24,13 +24,11 @@ Our overarching goals of this workshop is as follows:
 | Lab | Description |
 | :--- | :--- |
 | [Lab 0: Workshop Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
-| [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
-| [Lab 1.5: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
+| [Lab 1: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
 | [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
 | [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
 | [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
-| [Lab 5: Using AnythingLLM for a local RAG](lab-5/README.md) | Build a Granite coding assistant |
-| [Lab 6: Using Open-WebUI for a local RAG](lab-6/README.md) | Write code using Continue and Granite |
+| [Lab 5: Using Open-WebUI for a local RAG](lab-6/README.md) | Write code using Continue and Granite |
 | [Lab 7: Using Mellea to help with Generative Computing](lab-7/README.md) | Learn how to leverage Mellea for Advanced AI situations |
 
 Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
```

docs/pre-work/README.md

Lines changed: 1 addition & 7 deletions
```diff
@@ -14,7 +14,7 @@ These are the required applications and general installation notes for this work
 
 - [Python](#installing-python)
 - [Ollama](#installing-ollama) - Allows you to locally host an LLM model on your computer.
-- [AnythingLLM](#installing-anythingllm) **(Recommended)** or [Open WebUI](#installing-open-webui). AnythingLLM is a desktop app while Open WebUI is browser-based.
+- [Open WebUI](#installing-open-webui)
 
 ## Installing Python
 
@@ -60,12 +60,6 @@ brew install ollama
 !!! note
     You can save time by starting the model download used for the lab in the background by running `ollama pull granite4:micro` in its own terminal. You might have to run `ollama serve` first depending on how you installed it.
 
-## Installing AnythingLLM
-
-Download and install it from their [website](https://anythingllm.com/desktop) based on your operating system. We'll configure it later in the workshop.
-
-!!! note
-    You only need one of AnythingLLM or Open-WebUI for this lab.
 
 ## Installing Open-WebUI
 
```
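The retained pre-work steps above can be sketched as a short shell session. This is a hedged sketch, not part of the commit: it assumes macOS with Homebrew (the `brew install ollama` path the doc shows) and that no Ollama server is already running.

```shell
# Install Ollama via Homebrew (macOS path shown in the pre-work doc)
brew install ollama

# Start the Ollama server in the background if one isn't already running
# (the doc notes you "might have to run `ollama serve` first")
ollama serve &

# Pre-pull the Granite model used in the labs so the download
# happens in the background before the workshop starts
ollama pull granite4:micro
```

On Linux or Windows, the install step differs (see Ollama's own install instructions); the `serve` and `pull` commands are the same.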

0 commit comments
