
Commit 8de26e1

Merge pull request #33 from jbusche/jbusche-small-fixes
2 parents 661e00d + 0fd07e3

2 files changed: 5 additions & 10 deletions

docs/lab-5/README.md

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ Go ahead and save it to your local machine, and be ready to grab it.
 !!! note
 Granite 4 has newer data, so since this lab was created, it DOES have the 2024 data. If you find that's the case, you can try it with the question about 2025 using the 2025 full-year budget using the link below.

-![budget_fy2025.pdf](https://www.whitehouse.gov/wp-content/uploads/2024/03/budget_fy2025.pdf)
+[budget_fy2025.pdf](https://www.whitehouse.gov/wp-content/uploads/2024/03/budget_fy2025.pdf)

 Now spin up a **New Workspace**, (yes, please a new workspace, it seems that sometimes AnythingLLM has
 issues with adding things, so a clean environment is always easier to teach in) and call it

docs/lab-7/README.md

Lines changed: 4 additions & 9 deletions
@@ -75,14 +75,11 @@ python
 import mellea

 m = mellea.start_session()
-print(m.chat("What is the etymology of mellea?").content)
+print(m.chat("tell me some fun trivia about IBM and the early history of AI.").content)
 ```
 You can either add this to a file like `main.py` or run it in the python REPL, if you get output
 you are set up to dig deeper with Mellea.

-!!! note
-If you see an error message with: "ModuleNotFoundError: No module named 'PIL'" then you will need to install the python package pillow with "pip install pillow"
-
 ## Simple email examples

 !!! note
@@ -158,7 +155,7 @@ by changing from "only lower-case" to "only upper-case" and see that it will fol

 Pretty neat eh? Lets go even deeper.

-Let's create an email with some sampling and have Mellea, find the best option for what we are looking for:
+Let's create an email with some sampling and have Mellea find the best option for what we are looking for:
 We add two requirements to the instruction which will be added to the model request.
 But we don't check yet if these requirements are satisfied, we add a strategy for validating the requirements.

@@ -196,9 +193,7 @@ print(
 )
 )
 ```
-You might notice it fails with the above example, just remove the `"Use only lower-case letters",` line, and
-it should pass on the first re-run. This brings up some interesting opportunities, so make sure that the
-writing you expect is within the boundaries and it'll keep trying till it gets it right.
+You might notice it fails with the above example, because the name "Olivia" has an upper-case letter in it. Remove the `"Use only lower-case letters",` line, and it should pass on the first re-run. This brings up some interesting opportunities, so make sure that the writing you expect is within the boundaries and it'll keep trying till it gets it right.

 ## Instruct Validate Repair
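For readers skimming this diff without the lab open, the sampling example these two hunks touch is, roughly, an `instruct` call with two requirements and a retry strategy. The sketch below is illustrative only, not the lab's exact listing: it assumes `req`, `simple_validate`, and `RejectionSamplingStrategy` are importable from `mellea.stdlib` as the lab text implies, and the prompt wording and `loop_budget` value are made up for the example.

```python
# Minimal sketch (not the lab's exact code) of an instruction with two
# requirements plus a sampling strategy that re-generates until both validate.
# Assumption: import paths and RejectionSamplingStrategy/loop_budget follow
# the naming the lab text suggests; adjust to your installed Mellea version.
import mellea
from mellea.stdlib.requirement import req, simple_validate
from mellea.stdlib.sampling import RejectionSamplingStrategy

m = mellea.start_session()

email = m.instruct(
    "Write a short email inviting Olivia to the office party on Friday.",
    requirements=[
        # Validated by LLM-as-a-judge by default.
        req("The email should have a salutation"),
        # Programmatic validation via a plain str -> bool function; this is the
        # requirement that keeps failing while the output contains "Olivia".
        req("Use only lower-case letters",
            validation_fn=simple_validate(lambda s: s.lower() == s)),
    ],
    # Resample a bounded number of times until the requirements validate.
    strategy=RejectionSamplingStrategy(loop_budget=5),
)
print(str(email))
```

Dropping the lower-case requirement, as the hunk above suggests, is what lets the first re-run pass.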

@@ -241,7 +236,7 @@ We create 3 requirements:

 - First requirement (r1) will be validated by LLM-as-a-judge on the output of the instruction. This is the default behavior.
 - Second requirement (r2) uses a function that takes the output of a sampling step and returns a boolean value indicating successful or unsuccessful validation. While the validation_fn parameter requires to run validation on the full session context, Mellea provides a wrapper for simpler validation functions (simple_validate(fn: Callable[[str], bool])) that take the output string and return a boolean as seen in this case.
-- Third requirement is a check(). Checks are only used for validation, not for generation. Don't think mention purple elephants.
+- Third requirement is a check(). Checks are only used for validation, not for generation. Checks aim to avoid the "do not think about B" effect that often primes models (and humans) to do the opposite and "think" about B.

 Run this in your local instance, and you'll see it working, and ideally no purple elephants! :)
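As a companion to the bullet list in this hunk, here is a minimal sketch of what the three requirement objects could look like. The function names `req`, `check`, and `simple_validate` come from the lab text; the import path and the exact requirement strings are assumptions for illustration.

```python
# Sketch of the three requirements (r1/r2/r3) described above. Import path is
# an assumption; req, check and simple_validate are the names the lab text uses.
from mellea.stdlib.requirement import check, req, simple_validate

# r1: no validation_fn given, so it is validated by LLM-as-a-judge on the output.
r1 = req("The email should have a salutation")

# r2: simple_validate wraps a plain str -> bool function, so we don't need a
# validation_fn that operates on the full session context.
r2 = req(
    "Use only lower-case letters",
    validation_fn=simple_validate(lambda s: s.lower() == s),
)

# r3: a check() is applied only at validation time and never added to the
# generation prompt, so the model is not primed to "think about purple elephants".
r3 = check("Do not mention purple elephants.")
```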
