
Commit db228c2

add cpu support for MMLU bench
simple_evaluate has a device option that seems to default to cuda. People with 128 GB, or even ~90 GB+, of RAM should be able to run eval on CPU. Add auto-detection of whether CUDA is available via torch.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
1 parent 8494a51 commit db228c2

File tree

1 file changed: +2 −0 lines


src/instructlab/eval/mmlu.py

Lines changed: 2 additions & 0 deletions
@@ -6,6 +6,7 @@
 # Third Party
 from lm_eval.evaluator import simple_evaluate  # type: ignore
 from lm_eval.tasks import TaskManager  # type: ignore
+import torch

 # First Party
 from instructlab.eval.evaluator import Evaluator
@@ -58,6 +59,7 @@ def run(self) -> tuple:
             tasks=self.tasks,
             num_fewshot=self.few_shots,
             batch_size=self.batch_size,
+            device=("cuda" if torch.cuda.is_available() else "cpu"),
         )

         results = mmlu_output["results"]
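The change boils down to the standard torch device-selection pattern, sketched below. Note the try/except fallback is an illustration only (the actual commit imports torch unconditionally, since lm_eval already depends on it); `torch.cuda.is_available()` returns False both when no GPU is present and when a CPU-only build of torch is installed.

```python
# Minimal sketch of the device auto-detection added in this commit.
try:
    import torch

    # Prefer the GPU when a usable CUDA device exists, else run on CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # Hypothetical fallback for machines without torch installed;
    # not part of the actual patch.
    device = "cpu"

print(device)
```

The resulting string is passed straight to simple_evaluate's `device` argument, so machines without a GPU no longer fail on the cuda default.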
