
Commit e24bfa5

Integration of EleutherAI's lm-evaluation-harness
1 parent 1e89614 commit e24bfa5

16 files changed: +432 -3 lines

.gitignore

Lines changed: 5 additions & 1 deletion
```diff
@@ -1,3 +1,7 @@
+results/
+scripts/lm_eval/prompts/system_message.txt
+scripts/lm_eval/prompts/evaluator_system_message.txt
+
 # Python
 __pycache__/
 *.py[cod]
@@ -48,4 +52,4 @@ htmlcov/
 
 # For SR
 secrets.yaml
-problems
+problems
```

openevolve/config.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -232,7 +232,9 @@ def from_dict(cls, config_dict: Dict[str, Any]) -> "Config":
         if "models" in llm_dict:
             llm_dict["models"] = [LLMModelConfig(**m) for m in llm_dict["models"]]
         if "evaluator_models" in llm_dict:
-            llm_dict["evaluator_models"] = [LLMModelConfig(**m) for m in llm_dict["evaluator_models"]]
+            llm_dict["evaluator_models"] = [
+                LLMModelConfig(**m) for m in llm_dict["evaluator_models"]
+            ]
         config.llm = LLMConfig(**llm_dict)
         if "prompt" in config_dict:
             config.prompt = PromptConfig(**config_dict["prompt"])
```
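For context, the touched branch parses an optional `evaluator_models` list from the `llm` section into `LLMModelConfig` objects. A hypothetical sketch of the dictionary shape it consumes (the per-model keys are assumptions about `LLMModelConfig`, not taken from this commit):

```python
# Hypothetical shape of the parsed llm config section that this branch consumes.
# The per-model keys ("name", "weight") are assumptions and may differ in the
# actual LLMModelConfig definition.
llm_dict = {
    "evaluator_models": [
        {"name": "gemma3:12b-it-qat", "weight": 1.0},
    ],
}
# Each entry is splatted into LLMModelConfig(**m), exactly as in the diff above.
```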

scripts/README.md

Whitespace-only changes.

scripts/lm_eval/README.md

Lines changed: 72 additions & 0 deletions
# lm-eval.py

`lm-eval.py` provides a basic benchmark capability for LLM-feedback-based evolutionary task solving. The benchmark framework is [EleutherAI's lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

*Limitation:* Only generation-only tasks such as gsm8k are supported, because tasks that require loglikelihood probabilities do not map well onto agents.
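A rough way to check in advance whether a task is generation-only is to inspect its output type in the harness; a small sketch (attribute and value names follow lm-evaluation-harness v0.4.x and may differ between versions):

```python
# Sketch: print the output type of each requested task. "generate_until" tasks
# work with this adapter; "loglikelihood" or "multiple_choice" tasks do not.
import lm_eval

task_dict = lm_eval.tasks.get_task_dict(["gsm8k"])
for name, task in task_dict.items():
    print(name, getattr(task, "OUTPUT_TYPE", "unknown"))
```
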
## Usage

```bash
$ python3 scripts/lm_eval/lm-eval.py -h
usage: lm-eval.py [-h] [--config CONFIG] [--init_file INIT_FILE] [--evaluator_file EVALUATOR_FILE] [--iterations ITERATIONS] [--limit LIMIT] [--tasks TASKS]
                  [--output_path OUTPUT_PATH]

OpenEvolve <-> lm-evaluation-harness adapter.

options:
  -h, --help            show this help message and exit
  --config CONFIG       config file
  --init_file INIT_FILE
                        initial content file
  --evaluator_file EVALUATOR_FILE
                        evaluator file
  --iterations ITERATIONS
                        number of iterations
  --limit LIMIT         limit the number of examples per task that are executed
  --tasks TASKS         list of tasks to evaluate
  --output_path OUTPUT_PATH
                        output path for results
```

Early examples that **were meant to** show that more evolution iterations improve task performance -- the trend is not there yet, and I suspect the prompting is not ideal:

```
$ python3 scripts/lm_eval/lm-eval.py --tasks gsm8k --limit 10 --iterations 1
[..]
Headline metrics:
  gsm8k           exact_match,strict-match 80.000%
[..]

$ python3 scripts/lm_eval/lm-eval.py --tasks gsm8k --limit 10 --iterations 3
[..]
Headline metrics:
  gsm8k           exact_match,strict-match 90.000%
[..]

$ python3 scripts/lm_eval/lm-eval.py --tasks gsm8k --limit 10 --iterations 10
[..]
Headline metrics:
  gsm8k           exact_match,strict-match 80.000%
[..]
```

## Warning

- Be aware that this is an early implementation; no extensive benchmarks have been run so far. With a limit of 10 examples per task and 10 iterations, the benchmark is not meaningful as is.
- Use the `--limit` parameter only for tests, not for generating reportable metrics.
- Do not blindly cite the metrics produced by this script without reviewing the generated solutions first.

## References

```bibtex
@misc{eval-harness,
  author    = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title     = {The Language Model Evaluation Harness},
  month     = 07,
  year      = 2024,
  publisher = {Zenodo},
  version   = {v0.4.3},
  doi       = {10.5281/zenodo.12608602},
  url       = {https://zenodo.org/records/12608602}
}
```

scripts/lm_eval/config.yml

Lines changed: 48 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,48 @@
1+
max_iterations: 1
2+
checkpoint_interval: 10
3+
log_level: "INFO"
4+
5+
# LLM configuration
6+
llm:
7+
primary_model: "gemma3:12b-it-qat"
8+
#primary_model: "gpt-4o"
9+
primary_model_weight: 0.8
10+
secondary_model: "gemma3:12b-it-qat"
11+
#secondary_model: "gpt-4.1"
12+
secondary_model_weight: 0.2
13+
# api_base: "https://generativelanguage.googleapis.com/v1beta/openai/"
14+
# api_base: "https://api.openai.com/v1/"
15+
api_base: "http://localhost:11434/v1/"
16+
api_key: "ollama"
17+
temperature: 0.7
18+
top_p: 0.95
19+
max_tokens: 4096
20+
21+
# Prompt configuration
22+
prompt:
23+
num_top_programs: 3
24+
use_template_stochasticity: true
25+
# System prompt is created dynamically during the benchmark in file system_message.txt!
26+
template_dir: "scripts/lm_eval/prompts"
27+
28+
# Database configuration
29+
database:
30+
population_size: 50
31+
archive_size: 20
32+
num_islands: 3
33+
elite_selection_ratio: 0.2
34+
exploitation_ratio: 0.7
35+
36+
# Evaluator configuration
37+
evaluator:
38+
timeout: 60
39+
cascade_evaluation: false
40+
cascade_thresholds: [0.5, 0.75]
41+
parallel_evaluations: 4
42+
use_llm_feedback: true
43+
llm_feedback_weight: 1.0
44+
45+
46+
# Evolution settings
47+
diff_based_evolution: false
48+
allow_full_rewrites: true
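The default `api_base` points at a local Ollama server. For reference, a short smoke test against that setup might look like the following (the `ollama pull` step is illustrative and not part of this commit):

```bash
# Illustrative smoke test: fetch the model named in config.yml and run a tiny
# gsm8k benchmark against the local Ollama endpoint (http://localhost:11434/v1/).
ollama pull gemma3:12b-it-qat
python3 scripts/lm_eval/lm-eval.py --tasks gsm8k --limit 2 --iterations 1
```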

scripts/lm_eval/evaluator_stub.py

Lines changed: 5 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,5 @@
1+
def evaluate_stage1(file_path):
2+
return {"not_implemented": 0.0}
3+
4+
def evaluate(file_path):
5+
return evaluate_stage1(file_path)
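The stub intentionally returns only a placeholder score; with `use_llm_feedback: true` and `llm_feedback_weight: 1.0` in `config.yml`, scoring is driven by LLM feedback instead. A task-specific evaluator could return real metrics; a hypothetical sketch (the metric name and reference answer are illustrative only, not part of this commit):

```python
# Hypothetical task-specific evaluator: compare the evolved answer file against
# a known reference string and return a real metric instead of a placeholder.
def evaluate_stage1(file_path):
    with open(file_path) as f:
        answer = f.read().strip()
    reference = "42"  # placeholder ground truth for illustration
    return {"exact_match": 1.0 if answer == reference else 0.0}


def evaluate(file_path):
    return evaluate_stage1(file_path)
```
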
scripts/lm_eval/initial_content_stub.txt

Lines changed: 1 addition & 0 deletions

```
insert the answer to the task here!
```

scripts/lm_eval/lm-eval.py

Lines changed: 200 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,200 @@
1+
"""
2+
OpenEvolve <-> lm-evaluation-harness adapter
3+
4+
Implements generation only, no loglikelihood. Tasks such as GSM8K / BoolQ / MMLU-Math /
5+
AQUA-RAT and most code suites should work fine because they grade on the generated
6+
answer string.
7+
"""
8+
9+
from __future__ import annotations
10+
import subprocess, tempfile, json, os, argparse, math, pathlib
11+
from pathlib import Path
12+
from typing import List, Dict, Tuple, Any, Iterable
13+
14+
import lm_eval
15+
from lm_eval.tasks import TaskManager
16+
from lm_eval.evaluator import evaluate
17+
from lm_eval.api.model import LM
18+
from lm_eval.api.registry import register_model
19+
from datetime import datetime
20+
21+
# cd to the parent parent directory of this file
22+
os.chdir(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
23+
24+
PIPELINE_CMD = ["python3", "openevolve-run.py"]
25+
26+
@register_model("openevolve")
27+
class OpenEvolve(LM):
28+
def __init__(
29+
self,
30+
init_file: str = "initial_content_stub.txt",
31+
evaluator_file: str = "evaluator_stub.py",
32+
config_file: str = "config.yml",
33+
iterations: int = 5,
34+
extra_param: List[str] = [],
35+
**kwargs,
36+
):
37+
super().__init__()
38+
self.init_file = init_file
39+
self.evaluator_file = evaluator_file
40+
self.iterations = iterations
41+
self.extra_param = extra_param
42+
self.config_file = config_file
43+
44+
# folder must match prompt:template_dir in config.yml!
45+
self.prompt_path = "scripts/lm_eval/prompts/system_message.txt"
46+
self.evaluator_prompt_path = "scripts/lm_eval/prompts/evaluator_system_message.txt"
47+
self.best_path = "scripts/lm_eval/openevolve_output/best/best_program.txt"
48+
self.base_system_message = "You are an expert task solver, with a lot of commonsense, math, language and coding knowledge.\n\nConsider this task:\n```{prompt}´´´"
49+
50+
def generate(self, prompts: List[str], max_gen_toks: int = None, stop=None, **kwargs):
51+
outs = []
52+
for prompt in prompts:
53+
# Task prompt becomes the system message. User prompt is the evolutionary logic.
54+
# We create temporary prompt files with the system message
55+
with Path(self.prompt_path).open("w") as f:
56+
f.write(self.base_system_message.format(prompt=prompt))
57+
58+
with Path(self.evaluator_prompt_path).open("w") as f:
59+
f.write(self.base_system_message.format(prompt=prompt))
60+
61+
cmd = (
62+
PIPELINE_CMD
63+
+ ["--config", self.config_file]
64+
+ ["--iterations", str(self.iterations)]
65+
+ self.extra_param
66+
+ [self.init_file, self.evaluator_file]
67+
)
68+
print(f"Running command: {' '.join(cmd)}")
69+
try:
70+
res = subprocess.run(cmd, capture_output=True, text=True, check=True)
71+
text = res.stdout.strip()
72+
print(f"Process output: {text}")
73+
except subprocess.CalledProcessError as e:
74+
print(f"Command failed with return code {e.returncode}")
75+
print(f"stderr: {e.stderr}")
76+
text = ""
77+
78+
print(f"# Prompt: {prompt}")
79+
with Path(self.best_path).open("r") as f:
80+
best = f.read().strip()
81+
print(f"# Answer: {best}")
82+
83+
# honour stop tokens
84+
if stop:
85+
for s in stop:
86+
idx = best.find(s)
87+
if idx != -1:
88+
best = best[:idx]
89+
break
90+
outs.append(best)
91+
return outs
92+
93+
# for tasks that ask for log likelihood, indicate that it is unsupported
94+
def loglikelihood(self, requests: Iterable[Tuple[str, str]], **kw):
95+
# return [(-math.inf, False) for _ in requests]
96+
raise NotImplementedError
97+
98+
def loglikelihood_rolling(self, requests: Iterable[str], **kw):
99+
# return [(-math.inf, False) for _ in requests]
100+
raise NotImplementedError
101+
102+
def generate_until(self, requests: Iterable[Any], **kw) -> List[str]:
103+
ctxs, stops = [], []
104+
105+
for req in requests:
106+
# ---------------- old: plain tuple ----------------
107+
if isinstance(req, tuple):
108+
ctx, until = req
109+
110+
# -------------- new: Instance object --------------
111+
else:
112+
ctx = req.args[0] # first positional arg
113+
until = []
114+
# if a second positional arg exists and is list-like,
115+
# treat it as the stop sequence
116+
if len(req.args) > 1 and isinstance(req.args[1], (list, tuple)):
117+
until = list(req.args[1])
118+
119+
ctxs.append(ctx)
120+
stops.append(until)
121+
122+
# 2) run your real generator once per context
123+
gens = self.generate(ctxs, stop=None)
124+
125+
# 3) post-trim at the first stop sequence
126+
cleaned = []
127+
for g, until in zip(gens, stops):
128+
for s in until:
129+
idx = g.find(s)
130+
if idx != -1:
131+
g = g[:idx]
132+
break
133+
cleaned.append(g)
134+
return cleaned
135+
136+
if __name__ == "__main__":
137+
# cli arguments for primary model, secondary model, iterations, config and tasks
138+
p = argparse.ArgumentParser(
139+
description="OpenEvolve <-> lm-evaluation-harness adapter.",
140+
)
141+
p.add_argument("--config", default="scripts/lm_eval/config.yml", help="config file")
142+
p.add_argument(
143+
"--init_file",
144+
default="scripts/lm_eval/initial_content_stub.txt",
145+
help="initial content file",
146+
)
147+
p.add_argument(
148+
"--evaluator_file", default="scripts/lm_eval/evaluator_stub.py", help="evaluator file"
149+
)
150+
p.add_argument("--iterations", default=5, type=int, help="number of iterations")
151+
p.add_argument("--limit", default=None, type=int, help="limit the number of examples per task that are executed")
152+
# p.add_argument("--tasks", default="boolq,gsm8k,mmlu", help="comma-list of tasks to evaluate")
153+
p.add_argument("--tasks", default="gsm8k", help="list of tasks to evaluate")
154+
p.add_argument("--output_path", default="results", help="output path for results")
155+
args = p.parse_args()
156+
157+
lm_obj = OpenEvolve(
158+
init_file=args.init_file,
159+
evaluator_file=args.evaluator_file,
160+
iterations=args.iterations,
161+
config_file=args.config,
162+
)
163+
164+
task_dict = lm_eval.tasks.get_task_dict(args.tasks.split(","))
165+
166+
results = evaluate(
167+
lm=lm_obj,
168+
task_dict=task_dict,
169+
limit=args.limit,
170+
)
171+
172+
# write out the results
173+
pathlib.Path(
174+
args.output_path,
175+
).mkdir(exist_ok=True)
176+
177+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
178+
results_path = pathlib.Path(os.path.join(
179+
args.output_path,
180+
f"{timestamp}_iter{args.iterations}.json",
181+
))
182+
183+
with results_path.open("w") as f:
184+
json.dump(results, f, indent=2)
185+
186+
# print result summary
187+
short = {}
188+
for task, metrics in results["results"].items():
189+
# pick the first value that is a real number
190+
for key, val in metrics.items():
191+
if isinstance(val, (int, float)):
192+
short[task] = (key, val) # store *both* name & value
193+
break
194+
195+
print(f"Full results written to {results_path}\n")
196+
print("Headline metrics:")
197+
for task, (name, value) in short.items():
198+
print(f" {task:<15} {name:<12} {value:.3%}")
199+
200+
print("\nNote: Never cite the overall average when some components were skipped!")
Lines changed: 34 additions & 0 deletions
# Current Solution Information
- Current performance metrics: {metrics}
- Areas identified for improvement: {improvement_areas}

# Evolution History
{evolution_history}

# Current Solution
```
{current_program}
```

# Task
Suggest improvements to the answer that will lead to better performance on the specified metrics.

You MUST use the exact SEARCH/REPLACE diff format shown below to indicate changes:

<<<<<<< SEARCH
# Original text to find and replace (must match exactly)
=======
# New replacement text
>>>>>>> REPLACE

Example of valid diff format:
<<<<<<< SEARCH
poem stub
=======
Tyger Tyger, burning bright, In the forests of the night; What immortal hand or eye
>>>>>>> REPLACE

You can suggest multiple changes. Each SEARCH section must exactly match text in the current solution.
Be thoughtful about your changes and explain your reasoning thoroughly.

IMPORTANT: Do not necessarily rewrite the entire solution - focus on targeted improvements.
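As a concrete illustration of the format this template demands, applied to the initial content stub shown earlier, a model reply might contain a block like this (the replacement text is invented):

```
<<<<<<< SEARCH
insert the answer to the task here!
=======
The answer is 42.
>>>>>>> REPLACE
```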
