Commit a217f9a

Merge branch 'main' into hen/litellm_310
# Conflicts:
#   pyproject.toml
#   uv.lock

2 parents 5f14ff7 + dc610ae


61 files changed: +3034 / -1257 lines

.github/mergify.yml

Lines changed: 9 additions & 0 deletions

@@ -0,0 +1,9 @@
+merge_protections:
+  - name: Enforce conventional commit
+    description: Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
+    if:
+      - base = main
+    success_conditions:
+      - "title ~=
+        ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\\(.+\
+        \\))?:"
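For reference, a minimal sketch of how that success condition behaves, using Python's `re` module with the pattern unfolded from the YAML above (the example titles are hypothetical; Mergify evaluates this server-side):

```python
import re

# The regex from the success condition above, with YAML line folding undone:
# the double backslashes in the double-quoted YAML scalar become single
# backslashes in the actual pattern.
TITLE_PATTERN = re.compile(
    r"^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\(.+\))?:"
)

# Hypothetical PR titles, for illustration only.
for title in [
    "feat(backends): bump litellm",    # passes: type with scope
    "fix: resolve uv.lock conflicts",  # passes: type without scope
    "Update README",                   # fails: no conventional-commit type
]:
    print(f"{title!r} -> {'pass' if TITLE_PATTERN.match(title) else 'fail'}")
```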

.github/workflows/quality.yml

Lines changed: 55 additions & 0 deletions

@@ -0,0 +1,55 @@
+name: Verify Code Quality
+
+on:
+  push:
+    branches: [ main ]
+  pull_request:
+    branches: [ main ]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.event.pull_request.number || github.ref_name }}
+  cancel-in-progress: true
+
+jobs:
+  quality:
+    runs-on: ubuntu-latest
+    timeout-minutes: 30
+    strategy:
+      matrix:
+        python-version: ['3.10', '3.11', '3.12'] # Need to add 3.13 once we resolve outlines issues.
+    env:
+      CICD: 1
+      OLLAMA_HOST: "127.0.0.1:5000"
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install uv and set the python version
+        uses: astral-sh/setup-uv@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+          enable-cache: true
+      - name: pre-commit cache key
+        run: echo "PY=$(python -VV | sha256sum | cut -d' ' -f1)" >> "$GITHUB_ENV"
+      - uses: actions/cache@v4
+        with:
+          path: ~/.cache/pre-commit
+          key: pre-commit|${{ env.PY }}|${{ hashFiles('.pre-commit-config.yaml') }}
+      - name: Install dependencies
+        run: uv sync --frozen --all-extras --group dev
+      - name: Check style and run tests
+        run: pre-commit run --all-files
+      - name: Send failure message pre-commit
+        if: failure() # This step will only run if a previous step failed
+        run: echo "The quality verification failed. Please run pre-commit locally."
+      - name: Install Ollama
+        run: curl -fsSL https://ollama.com/install.sh | sh
+      - name: Start serving ollama
+        run: nohup ollama serve &
+      - name: Pull Llama 3.2:1b model
+        run: ollama pull llama3.2:1b
+
+      - name: Run Tests
+        run: uv run -m pytest -v test
+      - name: Send failure message tests
+        if: failure() # This step will only run if a previous step failed
+        run: echo "Tests failed. Please verify that tests are working locally."

README.md

Lines changed: 32 additions & 2 deletions

@@ -47,12 +47,30 @@ You can get started with a local install, or by using Colab notebooks.
 
 <img src="https://github.com/generative-computing/mellea/raw/main/docs/GetStarted_py.png" style="max-width:800px">
 
-Install with pip:
+Install with [uv](https://docs.astral.sh/uv/getting-started/installation/):
 
 ```bash
 uv pip install mellea
 ```
 
+Install with pip:
+
+```bash
+pip install mellea
+```
+
+> [!NOTE]
+> `mellea` comes with some additional packages as defined in our `pyproject.toml`. If you would like to install all the extra optional dependencies, please run the following commands:
+>
+> ```bash
+> uv pip install mellea[hf] # for Huggingface extras and Alora capabilities.
+> uv pip install mellea[watsonx] # for watsonx backend
+> uv pip install mellea[docling] # for docling
+> uv pip install mellea[all] # for all the optional dependencies
+> ```
+>
+> You can also install all the optional dependencies with `uv sync --all-extras`
+
 > [!NOTE]
 > If running on an Intel mac, you may get errors related to torch/torchvision versions. Conda maintains updated versions of these packages. You will need to create a conda environment and run `conda install 'torchvision>=0.22.0'` (this should also install pytorch and torchvision-extra). Then, you should be able to run `uv pip install mellea`. To run the examples, you will need to use `python <filename>` inside the conda environment instead of `uv run --with mellea <filename>`.

@@ -110,7 +128,19 @@ uv venv .venv && source .venv/bin/activate
 Use `uv pip` to install from source with the editable flag:
 
 ```bash
-uv pip install -e .
+uv pip install -e .[all]
+```
+
+If you are planning to contribute to the repo, it would be good to have all the development requirements installed:
+
+```bash
+uv pip install .[all] --group dev --group notebook --group docs
+```
+
+or
+
+```bash
+uv sync --all-extras --all-groups
 ```
 
 Ensure that you install the precommit hooks:

docs/dev/constrained_decoding.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ The `m` framework currently uses the `format` argument to pydantic schemas, **ou
 
 > If a keyword had meaning across multiple types of backends, and if it means the same thing in all of those backends but has different names, then we use the `@@@`-style args so that the user can pass these args across all backends in the same way. Otherwise, the arguments in model_args are passed along verbatim.
 
-This argues for `@@@format@@@` as opposed to a dedicated `format` option in the method signature. Or, in the alternative, for an entir re-think of ModelArgs.
+This argues for `@@@format@@@` as opposed to a dedicated `format` option in the method signature. Or, in the alternative, for an entire re-think of ModelArgs.
 
 ## Integration with grammar-targeted LLMs

docs/examples/agents/react.py

Lines changed: 1 addition & 1 deletion

@@ -103,7 +103,7 @@ def react(
     react_toolbox: ReactToolbox,
 ):
     assert m.ctx.is_chat_context, "ReACT requires a chat context."
-    test_ctx_lin = m.ctx.linearize()
+    test_ctx_lin = m.ctx.render_for_generation()
     assert test_ctx_lin is not None and len(test_ctx_lin) == 0, (
         "ReACT expects a fresh context."
     )

docs/examples/agents/react_instruct.py

Lines changed: 1 addition & 1 deletion

@@ -101,7 +101,7 @@ def react(
     react_toolbox: ReactToolbox,
 ):
     assert m.ctx.is_chat_context, "ReACT requires a chat context."
-    test_ctx_lin = m.ctx.linearize()
+    test_ctx_lin = m.ctx.render_for_generation()
     assert test_ctx_lin is not None and len(test_ctx_lin) == 0, (
         "ReACT expects a fresh context."
    )

docs/examples/generative_slots/generative_slots.py

Lines changed: 16 additions & 17 deletions

@@ -16,20 +16,19 @@ def generate_summary(text: str) -> str:
 
 
 if __name__ == "__main__":
-    m = start_session()
-    sentiment_component = classify_sentiment(m, text="I love this!")
-    print("Output sentiment is : ", sentiment_component)
-
-    summary = generate_summary(
-        m,
-        text="""
-        The eagle rays are a group of cartilaginous fishes in the family Myliobatidae,
-        consisting mostly of large species living in the open ocean rather than on the sea bottom.
-        Eagle rays feed on mollusks, and crustaceans, crushing their shells with their flattened teeth.
-        They are excellent swimmers and are able to breach the water up to several meters above the
-        surface. Compared with other rays, they have long tails, and well-defined, rhomboidal bodies.
-        They are ovoviviparous, giving birth to up to six young at a time. They range from 0.48 to
-        5.1 m (1.6 to 16.7 ft) in length and 7 m (23 ft) in wingspan.
-        """,
-    )
-    print("Generated summary is :", summary)
+    with start_session():
+        sentiment_component = classify_sentiment(text="I love this!")
+        print("Output sentiment is : ", sentiment_component)
+
+        summary = generate_summary(
+            text="""
+            The eagle rays are a group of cartilaginous fishes in the family Myliobatidae,
+            consisting mostly of large species living in the open ocean rather than on the sea bottom.
+            Eagle rays feed on mollusks, and crustaceans, crushing their shells with their flattened teeth.
+            They are excellent swimmers and are able to breach the water up to several meters above the
+            surface. Compared with other rays, they have long tails, and well-defined, rhomboidal bodies.
+            They are ovoviviparous, giving birth to up to six young at a time. They range from 0.48 to
+            5.1 m (1.6 to 16.7 ft) in length and 7 m (23 ft) in wingspan.
+            """
+        )
+        print("Generated summary is :", summary)
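The change above moves the example from passing a session object explicitly to relying on an ambient session. A minimal sketch of the resulting pattern, assuming `generative` and `start_session` behave as shown in this diff (the body-less declaration is how generative slots are defined in these examples):

```python
from mellea import generative, start_session


@generative
def classify_sentiment(text: str) -> str:
    """Classify the sentiment of the given text as positive or negative."""


if __name__ == "__main__":
    # Inside the `with` block, the generative slot resolves against the
    # active session, so no explicit session argument is needed.
    with start_session():
        print(classify_sentiment(text="I love this!"))
```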

docs/examples/generative_slots/inter_module_composition/summarizers.py

Lines changed: 1 addition & 1 deletion

@@ -13,4 +13,4 @@ def summarize_contract(contract_text: str) -> str:
 
 @generative
 def summarize_short_story(story: str) -> str:
-    """Summarize a short story, with one paragraph on plot and one paragraph on braod themes."""
+    """Summarize a short story, with one paragraph on plot and one paragraph on broad themes."""

docs/examples/instruct_validate_repair/101_email.py

Lines changed: 7 additions & 4 deletions

@@ -1,14 +1,17 @@
 # This is the 101 example for using `session` and `instruct`.
 # helper function to wrap text
 from docs.examples.helper import w
-from mellea import start_session
+from mellea import instruct, start_session
 from mellea.backends.types import ModelOption
 
 # create a session using Granite 3.3 8B on Ollama and a simple context [see below]
-m = start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200})
+with start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200}):
+    # write an email
+    email_v1 = instruct("Write an email to invite all interns to the office party.")
 
-# write an email
-email_v1 = m.instruct("Write an email to invite all interns to the office party.")
+with start_session(model_options={ModelOption.MAX_NEW_TOKENS: 200}) as m:
+    # write an email
+    email_v1 = m.instruct("Write an email to invite all interns to the office party.")
 
 # print result
 print(f"***** email ****\n{w(email_v1)}\n*******")

docs/examples/notebooks/compositionality_with_generative_slots.ipynb

Lines changed: 18 additions & 5 deletions

@@ -33,8 +33,13 @@
     "!nohup ollama serve >/dev/null 2>&1 &\n",
     "\n",
     "from IPython.display import HTML, display\n",
-    "def set_css(): display(HTML('\\n<style>\\n pre{\\n white-space: pre-wrap;\\n}\\n</style>\\n'))\n",
-    "get_ipython().events.register('pre_run_cell',set_css)"
+    "\n",
+    "\n",
+    "def set_css():\n",
+    "    display(HTML(\"\\n<style>\\n pre{\\n white-space: pre-wrap;\\n}\\n</style>\\n\"))\n",
+    "\n",
+    "\n",
+    "get_ipython().events.register(\"pre_run_cell\", set_css)"
    ]
   },
   {

@@ -76,18 +81,21 @@
    "source": [
     "from mellea import generative\n",
     "\n",
+    "\n",
     "# The Summarizer Library\n",
     "@generative\n",
     "def summarize_meeting(transcript: str) -> str:\n",
     "    \"\"\"Summarize the meeting transcript into a concise paragraph of main points.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def summarize_contract(contract_text: str) -> str:\n",
     "    \"\"\"Produce a natural language summary of contract obligations and risks.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def summarize_short_story(story: str) -> str:\n",
-    "    \"\"\"Summarize a short story, with one paragraph on plot and one paragraph on braod themes.\"\"\""
+    "    \"\"\"Summarize a short story, with one paragraph on plot and one paragraph on broad themes.\"\"\""
    ]
   },
   {

@@ -109,10 +117,12 @@
     "def propose_business_decision(summary: str) -> str:\n",
     "    \"\"\"Given a structured summary with clear recommendations, propose a business decision.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def generate_risk_mitigation(summary: str) -> str:\n",
     "    \"\"\"If the summary contains risk elements, propose mitigation strategies.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def generate_novel_recommendations(summary: str) -> str:\n",
     "    \"\"\"Provide a list of novel recommendations that are similar in plot or theme to the short story summary.\"\"\""

@@ -135,16 +145,19 @@
    "outputs": [],
    "source": [
     "# Compose the libraries.\n",
-    "from typing import Literal  # noqa: E402\n",
+    "from typing import Literal\n",
+    "\n",
     "\n",
     "@generative\n",
     "def has_structured_conclusion(summary: str) -> Literal[\"yes\", \"no\"]:\n",
     "    \"\"\"Determine whether the summary contains a clearly marked conclusion or recommendation.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def contains_actionable_risks(summary: str) -> Literal[\"yes\", \"no\"]:\n",
     "    \"\"\"Check whether the summary contains references to business risks or exposure.\"\"\"\n",
     "\n",
+    "\n",
     "@generative\n",
     "def has_theme_and_plot(summary: str) -> Literal[\"yes\", \"no\"]:\n",
     "    \"\"\"Check whether the summary contains both a plot and thematic elements.\"\"\""

@@ -166,7 +179,7 @@
   },
   "outputs": [],
   "source": [
-    "from mellea import start_session  # noqa: E402\n",
+    "from mellea import start_session\n",
    "\n",
    "m = start_session()"
   ]
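Since the gating slots return `Literal["yes", "no"]`, composing the two libraries reduces to ordinary conditionals. A minimal sketch of that composition, reusing declarations from the cells above with the implicit-session style adopted in `generative_slots.py` (the control flow is illustrative, not taken verbatim from the notebook):

```python
from typing import Literal

from mellea import generative, start_session


@generative
def summarize_meeting(transcript: str) -> str:
    """Summarize the meeting transcript into a concise paragraph of main points."""


@generative
def has_structured_conclusion(summary: str) -> Literal["yes", "no"]:
    """Determine whether the summary contains a clearly marked conclusion or recommendation."""


@generative
def propose_business_decision(summary: str) -> str:
    """Given a structured summary with clear recommendations, propose a business decision."""


with start_session():
    summary = summarize_meeting(transcript="...")  # transcript elided
    # Only propose a decision when the summary has a structured conclusion.
    if has_structured_conclusion(summary=summary) == "yes":
        print(propose_business_decision(summary=summary))
```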
