Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
8 changes: 4 additions & 4 deletions README.md
@@ -89,7 +89,7 @@ print(m.chat("What is the etymology of mellea?").content)

Then run it:
> [!NOTE]
- > Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a Macbook running IBM's Granite 3.3 8B model.
+ > Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a Macbook running IBM's Granite 4 Micro 3B model.
```shell
uv run --with mellea docs/examples/tutorial/example.py
```
@@ -128,13 +128,13 @@ uv venv .venv && source .venv/bin/activate
Use `uv pip` to install from source with the editable flag:

```bash
- uv pip install -e .[all]
+ uv pip install -e '.[all]'
```

If you are planning to contribute to the repo, it would be good to have all the development requirements installed:

```bash
- uv pip install .[all] --group dev --group notebook --group docs
+ uv pip install '.[all]' --group dev --group notebook --group docs
```

or
@@ -143,7 +143,7 @@ or
uv sync --all-extras --all-groups
```

- Ensure that you install the precommit hooks:
+ If you want to contribute, ensure that you install the precommit hooks:

```bash
pre-commit install
@@ -2,7 +2,7 @@
from mellea import start_session
from mellea.backends.types import ModelOption

- # create a session using Granite 3.3 8B on Ollama and a simple context [see below]
+ # create a session using Granite 4 Micro 3B on Ollama and a simple context [see below]
m = start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200})

# Write a more formal and a more funny email
@@ -2,7 +2,7 @@
from mellea import start_session
from mellea.backends.types import ModelOption

- # create a session using Granite 3.3 8B on Ollama and a simple context [see below]
+ # create a session using Granite 4 Micro 3B on Ollama and a simple context [see below]
m = start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200})

# write an email with automatic requirement checking.
@@ -4,7 +4,7 @@
from mellea.stdlib.requirement import Requirement, simple_validate
from mellea.stdlib.sampling import RejectionSamplingStrategy

- # create a session using Granite 3.3 8B on Ollama and a simple context [see below]
+ # create a session using Granite 4 Micro 3B on Ollama and a simple context [see below]
m = start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200})

# Define a requirement which checks that the output is less than 100 words
2 changes: 1 addition & 1 deletion docs/examples/tutorial/simple_email.py
@@ -1,6 +1,6 @@
import mellea

- # INFO: this line will download IBM's Granite 3.3 8B model.
+ # INFO: this line will download IBM's Granite 4 Micro 3B model.
m = mellea.start_session()

print("Basic email:")
4 changes: 2 additions & 2 deletions docs/tutorial.md
@@ -50,7 +50,7 @@ Although good generative programs can be written in any language and framework,

## Chapter 2: Getting Started with Generative Programming in Mellea

- Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a Macbook running IBM's Granite 3.3 8B model.
+ Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a Macbook running IBM's Granite 4 Micro 3B model.

We also recommend that you download and install [uv](https://docs.astral.sh/uv/#installation). You can run any of the examples in the tutorial with:
```bash
@@ -68,7 +68,7 @@ Once you have ollama installed and running, we can get started with our first ge
# file: https://github.com/generative-computing/mellea/blob/main/docs/examples/tutorial/simple_email.py#L1-L8
import mellea

- # INFO: this line will download IBM's Granite 3.3 8B model.
+ # INFO: this line will download IBM's Granite 4 Micro 3B model.
m = mellea.start_session()

email = m.instruct("Write an email inviting interns to an office party at 3:30pm.")