**`docs/lab-1.5/README.md`** (0 additions, 2 deletions)
First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running:

```bash
ollama pull granite4:micro
```

!!! note
    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
**`docs/lab-1/README.md`** (1 addition, 3 deletions)
First, if you haven't already, download the Granite 4 model. Make sure that `ollama` is running:

```bash
ollama pull granite4:micro
```

!!! note
    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite4). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.

Open the AnythingLLM desktop application and either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change it in the future.
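Before wiring up AnythingLLM, you can sanity-check that the pull succeeded. Ollama's local REST API lists downloaded models at `GET http://localhost:11434/api/tags`; the sketch below checks that response shape with a small helper (the live HTTP call is left in a comment, since it assumes a running `ollama` server):

```python
# Check whether a model appears in an Ollama /api/tags response.
# Against a running server you would fetch the real response with:
#   import json, urllib.request
#   tags = json.load(urllib.request.urlopen("http://localhost:11434/api/tags"))

def has_model(tags: dict, name: str) -> bool:
    """True if any downloaded model's name starts with `name`."""
    return any(m["name"].startswith(name) for m in tags.get("models", []))

# The same check against a sample response shape:
sample = {"models": [{"name": "granite4:micro"}]}
print(has_model(sample, "granite4:micro"))  # True
```

If the model is missing, re-run the `ollama pull` command above before continuing.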
**`docs/lab-6/README.md`** (6 additions, 10 deletions)
Open up [Open-WebUI](http://localhost:8080/), and you should see something like this:

If you see this, that means Open-WebUI is installed correctly and we can continue configuration. If not, please find a workshop TA or raise your hand; we'll be there to help you ASAP.

Next, as a sanity check, run the following command to confirm you have the [granite4:micro](https://ollama.com/library/granite4:micro) model downloaded in `ollama`. This may take a bit, but we should have a way to copy it directly on your laptop.

```bash
ollama pull granite4:micro
```

If you didn't know, the supported languages with `granite4:micro` now include:

- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12 languages.

And the capabilities also include:

- Summarization
- Text classification
- Text extraction
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Fill-In-the-Middle (FIM) code completions

Next, click on the down arrow at the top and select "granite4:micro" if it's not already selected.

List all the past and current CEOs of the IBM corporation in order of their terms.

For example:

At first glance, the list looks pretty good. But if you know your IBM CEOs, you'll notice that it misses a few of them, and sometimes adds names that were never IBM CEOs! Retrieval Augmented Generation (RAG) allows us to provide the small LLM with a document that supplements the model's missing information with a correct list, so it will generate a better answer.
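The retrieval step behind RAG can be illustrated with a tiny, self-contained sketch: pick the stored snippet that best matches the question and prepend it to the prompt. This toy word-overlap retriever and the sample documents are for illustration only; real RAG pipelines such as Open-WebUI's typically use embedding-based vector search.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, documents: list[str]) -> str:
    """Toy retriever: return the document sharing the most words
    with the question (a stand-in for embedding similarity)."""
    q = tokens(question)
    return max(documents, key=lambda d: len(q & tokens(d)))

def augment(question: str, documents: list[str]) -> str:
    """Build the augmented prompt that would be sent to the model."""
    return (
        "Use this context to answer the question.\n"
        f"Context: {retrieve(question, documents)}\n"
        f"Question: {question}"
    )

docs = [
    "IBM CEOs in order include Thomas J. Watson, Thomas Watson Jr., and Arvind Krishna.",
    "Open-WebUI is a self-hosted chat interface for local models.",
]
print(augment("List all the past and current CEOs of IBM.", docs))
```

The model then answers from the supplied context instead of relying on what it memorized during training.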
Click on the "New Chat" icon to clear the context. Then download a small text file with the correct list of IBM CEOs to your Downloads folder:
- The first requirement (r1) will be validated by LLM-as-a-judge on the output of the instruction. This is the default behavior.
- The second requirement (r2) uses a function that takes the output of a sampling step and returns a boolean value indicating successful or unsuccessful validation. While the `validation_fn` parameter requires running validation on the full session context, Mellea provides a wrapper for simpler validation functions (`simple_validate(fn: Callable[[str], bool])`) that take the output string and return a boolean, as seen in this case.
- The third requirement is a `check()`. Checks are only used for validation, not for generation. Checks aim to avoid the "do not think about B" effect that often primes models (and humans) to do the opposite and "think" about B.
- We also demonstrate in the `m = mellea.start_session()` call how you can specify a different Ollama model, in case you want to try something other than Mellea's `ibm/granite4:micro` default.
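To make the r2 mechanics concrete, here is a minimal sketch of what a wrapper like `simple_validate` does: it adapts a plain `str -> bool` predicate into a validator that receives the full session context. Note that `SessionContext` and its `last_output` field are hypothetical stand-ins for illustration, not Mellea's real classes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SessionContext:
    # Hypothetical stand-in for a full session context; a simple
    # string validator only cares about the latest model output.
    last_output: str

def simple_validate(fn: Callable[[str], bool]) -> Callable[[SessionContext], bool]:
    """Adapt a string predicate into a context-level validator."""
    def validator(ctx: SessionContext) -> bool:
        return fn(ctx.last_output)
    return validator

# A predicate over the output string becomes a context validator:
no_elephants = simple_validate(lambda text: "purple elephant" not in text.lower())
print(no_elephants(SessionContext("A quiet gray mouse.")))  # True
```

This is why the second requirement can stay a one-line lambda even though validation formally runs over the whole session.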
Run this in your local instance, and you'll see it working, and ideally no purple elephants! :)