Commit c9ec166

Merge branch 'main' into multi_chat_context

2 parents: 601a439 + 1194059

File tree

16 files changed: +967 −75 lines


CHANGELOG.md

Lines changed: 19 additions & 0 deletions
@@ -1,3 +1,22 @@
## [v0.2.1](https://github.com/generative-computing/mellea/releases/tag/v0.2.1) - 2025-12-10

### Feature

* Test-based Evaluation with LLM-as-a-judge ([#225](https://github.com/generative-computing/mellea/issues/225)) ([`0f1f0f8`](https://github.com/generative-computing/mellea/commit/0f1f0f8eb12e60f7940e3ad5b40ce91ded73fc88))
* Add a `code_interpreter` tool ([#232](https://github.com/generative-computing/mellea/issues/232)) ([`b03c964`](https://github.com/generative-computing/mellea/commit/b03c96439501146965cd123ce5046f2c5907acfb))

### Fix

* Add simple lock to hf generation to prevent using incorrect weights ([#237](https://github.com/generative-computing/mellea/issues/237)) ([`6b2a527`](https://github.com/generative-computing/mellea/commit/6b2a5276a426be87ce02fa89f49818535f211fa6))
* Collection of small fixes ([#238](https://github.com/generative-computing/mellea/issues/238)) ([`2120112`](https://github.com/generative-computing/mellea/commit/2120112ee807da4e980eabc0df54b3aae12d2cd2))
* Fix unused litellm import ([#246](https://github.com/generative-computing/mellea/issues/246)) ([`633bfd7`](https://github.com/generative-computing/mellea/commit/633bfd7198eac45f8c163fefcc910d9bf8a76151))
* Minor updates to answer relevance ([#245](https://github.com/generative-computing/mellea/issues/245)) ([`bde9b4d`](https://github.com/generative-computing/mellea/commit/bde9b4dd91ab92af0f7a661e321bb9d701da0589))
* Pre-commit file selection ([#243](https://github.com/generative-computing/mellea/issues/243)) ([`e70d307`](https://github.com/generative-computing/mellea/commit/e70d3075f92152f8bb1a519687575fc7f30ffe1b))

### Documentation

* Fixed copyright in LICENSE ([#210](https://github.com/generative-computing/mellea/issues/210)) ([`3087051`](https://github.com/generative-computing/mellea/commit/3087051f5fc102bb8e0319af6baf9d7a0222e6ef))

## [v0.2.0](https://github.com/generative-computing/mellea/releases/tag/v0.2.0) - 2025-11-19

### Feature

cli/eval/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
"""CLI for test-based evaluation."""

cli/eval/commands.py

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
"""Use the eval command for LLM-as-a-judge evaluation, given one or more test files
consisting of prompts, instructions, and, optionally, targets.

The command instantiates a generator model to produce candidate responses and a judge
model to determine whether the instructions have been followed."""

import typer

eval_app = typer.Typer(name="eval")


def eval_run(
    test_files: list[str] = typer.Argument(
        ..., help="List of paths to json/jsonl files containing test cases"
    ),
    backend: str = typer.Option("ollama", "--backend", "-b", help="Generation backend"),
    model: str = typer.Option(None, "--model", help="Generation model name"),
    max_gen_tokens: int = typer.Option(
        256, "--max-gen-tokens", help="Max tokens to generate for responses"
    ),
    judge_backend: str = typer.Option(
        None, "--judge-backend", "-jb", help="Judge backend"
    ),
    judge_model: str = typer.Option(None, "--judge-model", help="Judge model name"),
    max_judge_tokens: int = typer.Option(
        256, "--max-judge-tokens", help="Max tokens for the judge model's judgement"
    ),
    output_path: str = typer.Option(
        "eval_results", "--output-path", "-o", help="Output path for results"
    ),
    output_format: str = typer.Option(
        "json", "--output-format", help="Either json or jsonl format for results"
    ),
    continue_on_error: bool = typer.Option(True, "--continue-on-error"),
):
    # Imported lazily so that loading the CLI does not pull in the
    # evaluation stack until the command actually runs.
    from cli.eval.runner import run_evaluations

    run_evaluations(
        test_files=test_files,
        backend=backend,
        model=model,
        max_gen_tokens=max_gen_tokens,
        judge_backend=judge_backend,
        judge_model=judge_model,
        max_judge_tokens=max_judge_tokens,
        output_path=output_path,
        output_format=output_format,
        continue_on_error=continue_on_error,
    )


# Register eval_run as the `run` subcommand, i.e. `eval run ...`.
eval_app.command("run")(eval_run)
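
For orientation, here is a minimal sketch of exercising the new subcommand with Typer's test runner. The test-case fields (`prompt`, `instructions`) are hypothetical: the schema consumed by `run_evaluations` lives in cli/eval/runner.py, which is not part of this diff.

# Sketch only: drives `eval run` end to end via typer.testing.CliRunner.
# The test-case schema below is an assumption, not taken from this diff.
import json

from typer.testing import CliRunner

from cli.eval.commands import eval_app

runner = CliRunner()

# Write a single hypothetical test case in jsonl form.
case = {"prompt": "Summarize the release notes.", "instructions": "Answer in one sentence."}
with open("cases.jsonl", "w") as f:
    f.write(json.dumps(case) + "\n")

# Equivalent to the shell invocation: eval run cases.jsonl --backend ollama
result = runner.invoke(eval_app, ["run", "cases.jsonl", "--backend", "ollama"])
print(result.exit_code, result.output)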
