Conversation

@jvm123 (Contributor) commented Jun 3, 2025

Adapter for EleutherAI's lm-evaluation-harness so that its benchmarks can be executed, in particular non-coding tasks with LLM feedback.

This is an early implementation; there are likely still some issues with the prompts (see README.md).
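
For context, here is a minimal sketch of how lm-evaluation-harness benchmarks can be driven from Python via its `simple_evaluate` entry point. This is not the adapter code from this PR, and the model and task names are placeholders:

```python
# Hypothetical sketch, not the adapter from this PR: it only illustrates how
# lm-evaluation-harness can be invoked programmatically.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder model
    tasks=["hellaswag"],                             # placeholder benchmark task
    num_fewshot=0,
)
print(results["results"])  # per-task metric dictionary
```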

@CLAassistant commented Jun 3, 2025

CLA assistant check
All committers have signed the CLA.

@jvm123 (Contributor, Author) commented Jun 3, 2025

This PR depends on/references PR #47

@jvm123 changed the title from "Benchmarks with EleutherAI lm-evaluation-harness" to "Feature: Benchmarks with EleutherAI lm-evaluation-harness" Jun 3, 2025
@codelion (Member) commented Jun 3, 2025

Would it make sense to move it into the examples folder as an example, instead of keeping it under scripts? I think this is similar to the symbolic regression benchmark example already there - https://github.com/codelion/openevolve/tree/main/examples/symbolic_regression

@jvm123 (Contributor, Author) commented Jun 3, 2025

Done!

@jvm123 (Contributor, Author) commented Jun 3, 2025

Note: I would like to contribute more benchmark results, but I don't really have the compute or a big budget. I'd be happy to hear hints if anyone has any.

@codelion merged commit 4b099e3 into algorithmicsuperintelligence:main Jun 4, 2025
3 checks passed
@jvm123 deleted the feat-lm-eval branch June 5, 2025 02:34