
Commit de320cd

reorganized the workshop from feedback around the python vs just using the models
1 parent 0a7985f commit de320cd

8 files changed (+590, −589 lines changed)

docs/lab-3.5/README.md renamed to docs/lab-1.5/README.md

Lines changed: 3 additions & 0 deletions
@@ -4,6 +4,9 @@ description: Steps to configure Open-WebUI for usage
 logo: images/ibm-blue-background.png
 ---

+!!! warning
+    Note that this lab is **optional**: you don't need Open-WebUI if you already have AnythingLLM running.
+
 Now that you've gotten [Open-WebUI installed](../pre-work/README.md#open-webui) we need to configure it so that `ollama` and Open-WebUI
 can talk to one another. The following screenshots are from a Mac, but the gist should be the same on Windows and Linux.

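Before configuring Open-WebUI, it can help to confirm that `ollama` is actually serving its API. A minimal sanity check, assuming `ollama serve` is running on its default port of 11434:

```bash
# List the models the local ollama server knows about; a JSON response
# (even an empty list) means the server is up, while a connection error
# means ollama isn't running yet.
curl http://localhost:11434/api/tags
```

If this fails, start the server with `ollama serve` and try again.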
docs/lab-1/README.md

Lines changed: 42 additions & 74 deletions
@@ -1,102 +1,70 @@
 ---
-title: Building a local AI co-pilot
-description: Learn how to leverage Open Source AI
+title: Configuring AnythingLLM
+description: Steps to configure AnythingLLM for usage
 logo: images/ibm-blue-background.png
 ---

-## Overview
+Now that you've gotten [AnythingLLM installed](../pre-work/README.md#anythingllm) we need to configure it so that `ollama` and AnythingLLM
+can talk to one another. The following screenshots are from a Mac, but the gist should be the same on Windows and Linux.

-Success! We're ready to start with the first steps on your AI journey with us today.
-With this first lab, we'll be working through the steps in this [blogpost using Granite as a code assistant](https://developer.ibm.com/tutorials/awb-local-ai-copilot-ibm-granite-code-ollama-continue/).
+Open up AnythingLLM, and you should see something like the following:
+![default screen](../images/anythingllm_open_screen.png)

-In this tutorial, we will show how to use a collection of open-source components to run a feature-rich developer code assistant in Visual Studio Code while addressing data privacy, licensing, and cost challenges that are common to enterprise users. The setup is powered by local large language models (LLMs) with IBM's open-source LLM family, [Granite Code](https://github.com/ibm-granite/granite-code-models). All components run on a developer's workstation and have business-friendly licensing.
+If you see this, AnythingLLM is installed correctly and we can continue with configuration. If not, please find a workshop TA or
+raise your hand; we'll be there to help you ASAP.

-There are three main barriers to adopting these tools in an enterprise setting:
-
-- **Data Privacy:** Many corporations have privacy regulations that prohibit sending internal code or data to third party services.
-- **Generated Material Licensing:** Many models, even those with permissive usage licenses, do not disclose their training data and therefore may produce output that is derived from training material with licensing restrictions.
-- **Cost:** Many of these tools are paid solutions which require investment by the organization. For larger organizations, this would often include paid support and maintenance contracts which can be extremely costly and slow to negotiate.
-
-## Fetching the Granite Models
-
-Why did we select Granite as the LLM of choice for this exercise?
-
-Granite Code was produced by IBM Research, with the goal of building an LLM that had only seen code which used enterprise-friendly licenses. According to section 2 of the Granite Code paper ([Granite Code Models: A Family of Open Foundation Models for Code Intelligence][paper]), the IBM Granite Code models meticulously curated their training data for licenses, and to make sure that all text did not contain any hate, abuse, or profanity.
-
-Many open LLMs available today license the model itself for derivative work, but because they bring in large amounts of training data without discriminating by license, most companies can't use the output of those models since it potentially presents intellectual property concerns.
-
-Granite Code comes in a wide range of sizes to fit your workstation's available resources. Generally, the bigger the model, the better the results, with a tradeoff: model responses will be slower, and it will take up more resources on your machine. We chose the 20b option as our starting point for chat and the 8b option for code generation. Ollama offers a convenient pull feature to download models:
-
-Open up a second terminal, and run the following command:
-
-```bash
-ollama pull granite-code:8b
-```
-
-## Set up Continue
-
-Now we need to install [continue.dev](https://continue.dev) so VSCode can "talk" to the ollama instance, and work with the
-granite model(s). There are two different ways of getting `continue` installed. If you have your `terminal` already open
-you can run:
+Next, as a sanity check, run the following command to confirm you have the [granite3.1-dense](https://ollama.com/library/granite3.1-dense)
+model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly onto your laptop.

 ```bash
-code --install-extension continue.continue
+ollama pull granite3.1-dense:8b
 ```

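Once the pull completes, a quick way to confirm the model actually landed (assuming the `ollama` CLI is on your PATH):

```bash
# List locally available models; granite3.1-dense:8b should appear in the output.
ollama list
```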
-If not you can use these steps in VSCode:
+If you didn't know, the supported languages with `granite3.1-dense` now include:

-1. Open the Extensions tab.
-2. Search for "continue."
-3. Click the Install button.
+- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)

-Next you'll need to configure `continue` which will require you to take the following `json` and open the `config.json`
-file via the command palette.
+Its capabilities also include:

-1. Open the command palette (Press Cmd+Shift+P)
-2. Select Continue: Open `config.json`.
+- Summarization
+- Text classification
+- Text extraction
+- Question-answering
+- Retrieval Augmented Generation (RAG)
+- Code-related tasks
+- Function-calling tasks
+- Multilingual dialog use cases
+- Long-context tasks, including long document/meeting summarization, long document QA, etc.

-In `config.json`, add a section for each model you want to use. Here, we're registering the Granite Code 8b model we downloaded earlier. Replace the line that says `"models": []` with the following:
+!!! note
+    We need to figure out a way to copy the models into ollama without downloading; conference wifi is never fast enough.

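On that note: one possible workaround, sketched here as an assumption rather than a tested workshop step, is to copy the model files directly between laptops, since `ollama` stores its models under `~/.ollama/models` by default (the hostname below is hypothetical):

```bash
# Copy model blobs and manifests from a machine that already has the model.
# Assumes the default ~/.ollama/models location on both machines and SSH access.
rsync -av ~/.ollama/models/ user@other-laptop:~/.ollama/models/
```

After restarting `ollama`, the copied models should show up in `ollama list`.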
-```json
-"models": [
-  {
-    "title": "Granite Code 8b",
-    "provider": "ollama",
-    "model": "granite-code:8b"
-  }
-],
-```
-
-For inline code suggestions, we're going to use the smaller 8b model since tab completion runs constantly as you type. This will reduce load on the machine. In the section that starts with `"tabAutocompleteModel"`, replace the whole section with the following:
+Next, click on the `wrench` icon and open up the settings. For now we are going to configure the global settings for `ollama`,
+but you may want to change them in the future.

-```json
-"tabAutocompleteModel": {
-  "title": "Granite Code 8b",
-  "provider": "ollama",
-  "model": "granite-code:8b"
-},
-```
+![wrench icon](../images/anythingllm_wrench_icon.png)

-## Sanity Check
+Click on the "LLM" section, and select **Ollama** as the LLM Provider. Also select the `granite3.1-dense:8b` model. (You should be able to
+see all the models you have access to through `ollama` there.)

-Now that you have everything wired together in VSCode, let's make sure that everything works. Go ahead and open
-up `continue` on the extension bar:
+![llm configuration](../images/anythingllm_llm_config.png)

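For the curious: with **Ollama** selected as the provider, AnythingLLM should be talking to the same local HTTP API that `ollama` exposes, so you can exercise that connection directly. A sketch, assuming the default port of 11434:

```bash
# Send a one-off prompt to the model through ollama's generate endpoint
# (the same local server AnythingLLM is pointed at); "stream": false returns
# a single JSON object instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "granite3.1-dense:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```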
-![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/lKHl3FNCegebKYdHuXR-GA/continue-sidebar.png)
+Click the "Back to workspaces" button (where the wrench was), and click "New Workspace."

-And ask it something! Something fun I like is:
+![new workspace](../images/anythingllm_new_workspace.png)

-```text
-What language should I use for backend development?
-```
+Name it something like "learning llm" or the name of the event you're at right now, something so you know it's where you are learning
+how to use this LLM.

-If you open a file for editing, you should also see possible tab completions to the right of your cursor.
+![naming new workspace](../images/anythingllm_naming_workspace.png)

-It should give you a pretty generic answer, but as you can see, it works, and hopefully will help spur a thought
-or two.
+Now we can test our connection _through_ AnythingLLM! I like the "Who is Batman?" question as a sanity check that the connection works
+and that the model knows _something_.

-Now let's continue on to Lab 2, where we are going to actually try this process in-depth!
+![who is batman](../images/anythingllm_who_is_batman.png)

-[paper]: https://arxiv.org/pdf/2405.04324?utm_source=ibm_developer&utm_content=in_content_link&utm_id=tutorials_awb-local-ai-copilot-ibm-granite-code-ollama-continue
+Now, you may notice that your answer is slightly different than the screenshot above. That's expected and nothing to worry about. If
+you have more questions about it, raise your hand and one of the helpers would love to talk to you about it.

+Congratulations! You have AnythingLLM running now, configured to work with `granite3.1-dense` and `ollama`!

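If you'd like an equivalent check outside the GUI, the same question can be put to the model straight from the terminal with the `ollama` CLI; the wording of the answer will differ from run to run, which is expected:

```bash
# One-shot prompt against the same model AnythingLLM is using.
ollama run granite3.1-dense:8b "Who is Batman?"
```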