`docs/lab-1/README.md`

Success! We're ready to start with the first steps on your AI journey with us today.
With this first lab, we'll be working through the steps in this [blogpost using Granite as a code assistant](https://developer.ibm.com/tutorials/awb-local-ai-copilot-ibm-granite-code-ollama-continue/).
In this tutorial, we will show how to use a collection of open-source components to run a feature-rich developer code assistant in Visual Studio Code while addressing data privacy, licensing, and cost challenges that are common to enterprise users. The setup is powered by local large language models (LLMs) with IBM's open-source LLM family, [Granite Code](https://github.com/ibm-granite/granite-code-models). All components run on a developer's workstation and have business-friendly licensing.
There are three main barriers to adopting these tools in an enterprise setting:
Why did we select Granite as the LLM of choice for this exercise?
Granite Code was produced by IBM Research, with the goal of building an LLM that had only seen code which used enterprise-friendly licenses. According to section 2 of the Granite Code paper ([Granite Code Models: A Family of Open Foundation Models for Code Intelligence][paper]), the IBM Granite Code team meticulously curated the training data for licensing, and made sure that all text was free of hate, abuse, and profanity.
Many open LLMs available today license the model itself for derivative work, but because they bring in large amounts of training data without discriminating by license, most companies can't use the output of those models, since it potentially presents intellectual property concerns.
Granite Code comes in a wide range of sizes to fit your workstation's available resources. Generally, the bigger the model, the better the results, with a tradeoff: model responses will be slower, and it will take up more resources on your machine. We chose the 20b option as our starting point for chat and the 8b option for code generation. Ollama offers a convenient pull feature to download models:
Open up your terminal, and run the following commands:
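Based on the sizes chosen above, the pulls would look something like this (the exact model tags are an assumption; check Ollama's model library for the current names):

```shell
# Pull the 20b model for chat and the 8b model for code generation
ollama pull granite-code:20b
ollama pull granite-code:8b
```

Each pull downloads the quantized model weights to your local Ollama cache, so this may take a few minutes on the first run.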
Before we go any further, type in "Who is batman?" to verify that `ollama`, VSCode, and `continue` are all working correctly.
!!! troubleshooting
    If Continue is taking a long time to respond, restart Visual Studio Code. If that doesn't resolve your issue, restart Ollama.
If you would like to go deeper with `continue`, take a look at the [official Continue.dev how-to guide](https://docs.continue.dev/how-to-use-continue). It's worth taking a moment now if you want; otherwise, when you get home and try this on your own hardware, it's awesome to see what `continue` can do.
Now that we have our local AI co-pilot with us, let's start using it. These next examples are going to be focused on `python`, but there is nothing stopping you from using the language of your choice.
Now use `command-i` or `ctrl-i` to open up the `generate code` command palette, and type:

```
write me out conways game of life using pygame
```
!!! note
    If you don't know what Conway's Game of Life is, take a look [here](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) or raise your hand; I'm betting the TAs would love to talk to you about it. 😁
Now granite-code should start giving you a good suggestion here; it should look something like:
*(screenshot of the generated code)*
Don't believe me? Bring up the terminal and attempt to run this code after you accept it.

*(screenshot of the failed run)*
Well, that isn't good, is it? Yours may be different code, or maybe it does work, but at least in this example we need to get the code fixed.
## First pass at debugging
We'll run the following commands to build a virtual environment and install some modules; let's see how far we get.
!!! tip
    If these next commands are foreign to you, it's ok. These are `python` commands, and you can just copy and paste them in. If you'd like to know more about _why_, raise your hand; a TA should be able to explain it to you.
```bash
python3.11 -m venv venv
source venv/bin/activate
pip install pygame
```
Well, better, I think, but still nothing happens. Even noticing the `import pygame` tells me I need to dig into the code and break it up so it's more readable.
## Cleaning up the AI generated code
!!! note
    You can try using the built-in autocomplete and code assistant functions to generate any missing code. In our example, we're missing a "main" entry point to the script. Try hitting `cmd/ctrl + I` again, and typing in something like: "write a main function for my game that plays twenty rounds of Conway's game of life using the `board()` function." What happens?
Now everything is smashed together and it's hard to read the logic here, so first things first: we're going to break up the code and add a `def main` function so we know what the entry point is.
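As a rough sketch of what that restructuring could look like (the `board()` function and the twenty-round loop are assumptions carried over from the prompt above, not the actual generated code):

```python
def board():
    # Hypothetical stand-in for the generated board-update logic;
    # the real function would compute one Game of Life generation.
    pass


def main():
    # Explicit entry point: run twenty rounds of the game.
    for _ in range(20):
        board()


if __name__ == "__main__":
    main()
```

With a clear `main()` at the bottom, anyone reading the file knows where execution starts, and the game logic can be refactored independently.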
Go ahead and walk through your version of the function so you get a better understanding of what the program is doing. See the logic, and who knows, maybe it'll highlight why yours isn't working yet. If not, the next step will help you even more!
## Automagically creating tests
One of the most powerful ways to streamline your workflow as a developer is writing good tests around your code. They're a safety blanket that makes sure your custom code has a way to check for regressions.
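For a flavor of what such a test could look like, here's a sketch using a hypothetical `next_generation` helper, a minimal Game of Life step written for illustration rather than the code granite generated:

```python
from collections import Counter


def next_generation(live):
    """One Game of Life step over a set of live (row, col) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next round with exactly 3 neighbors,
    # or with 2 neighbors if it was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}


def test_blinker_oscillates():
    # The "blinker" flips between a horizontal and a vertical bar every step.
    blinker = {(1, 0), (1, 1), (1, 2)}
    assert next_generation(blinker) == {(0, 1), (1, 1), (2, 1)}
    assert next_generation(next_generation(blinker)) == blinker
```

A known oscillator like the blinker makes a great fixture: if a refactor breaks the update rule, the test fails immediately.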
`docs/lab-3/README.md`

logo: images/ilab_dog.png
# Getting Started with InstructLab
!!! tip
    We are jumping in the deep end here; don't hesitate to raise your hand and ask questions. We also assume you have a good _foundational_ (heh) knowledge of `python`. If not, ask one of the TAs to run through this on the overhead screen.
Now that we have a working VSCode instance using the granite model, let's talk about more things you can do with an Open Source AI system. This is where the [InstructLab](https://instructlab.ai/) project comes into play.
These steps will pull down a premade `qna.yaml` so you can do a local build. Skip the `wget`, `mv`, and `ilab taxonomy diff` if you don't want to do this.
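Sketched out, those optional steps would look something like the following (the URL and the taxonomy subdirectory are placeholders for illustration, not the real ones from the lab):

```shell
# Placeholder URL and path -- substitute the ones given in the lab
wget https://example.com/qna.yaml
mkdir -p ~/.local/share/instructlab/taxonomy/knowledge/example
mv qna.yaml ~/.local/share/instructlab/taxonomy/knowledge/example/
ilab taxonomy diff
```

The `ilab taxonomy diff` at the end validates your change and confirms InstructLab sees the new `qna.yaml` relative to the base taxonomy.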
```shell
(venv) $ ilab config init
Welcome to InstructLab CLI. This guide will help you to setup your environment.
Please provide the following values to initiate the environment [press Enter for defaults]:
Path to taxonomy repo [/Users/USERNAME/.local/share/instructlab/taxonomy]:
`/Users/USERNAME/.local/share/instructlab/taxonomy` seems to not exist or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [Y/n]: y
```
5) When prompted, please choose a train profile. Train profiles are GPU specific profiles that enable accelerated training behavior. **YOU ARE ON MacOS**, please choose `No Profile (CPU-Only)` by hitting Enter. There are various flags you can utilize with individual `ilab` commands that will allow you to utilize your GPU if applicable.
```shell
Welcome to InstructLab CLI. This guide will help you to setup your environment.
Please provide the following values to initiate the environment [press Enter for defaults]:
Path to taxonomy repo [~/Library/Application\ Support/instructlab/taxonomy]:
Path to your model [/Users/USERNAME/Library/Caches/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf]:
```
1) `/Users/USERNAME/.config/instructlab/models/`: Contains all downloaded large language models, including the saved output of ones you generate with ilab.
2) `~/.config/instructlab/datasets/`: Contains data output from the SDG phase, built on modifications to the taxonomy repository.
3) `~/.config/instructlab/taxonomy/`: Contains the skill and knowledge data.
4) `/Users/USERNAME/.config/instructlab/checkpoints/`: Contains the output of the training process.
### 📥 Download the model
```shell
(venv) $ ilab model download
Downloading model from Hugging Face: instructlab/merlinite-7b-lab-GGUF@main to /Users/USERNAME/.config/instructlab/models...
...
INFO 2024-08-01 15:05:48,464 huggingface_hub.file_download:1893: Download complete. Moving file to /Users/USERNAME/.config/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf
```