Commit 99162f1

Fix task config metric typing to accept Metric enums (#1018)
Signed-off-by: Emmanuel Ferdman <[email protected]>
Co-authored-by: Nathan Habib <[email protected]>
1 parent 5425c33 commit 99162f1

File tree: 1 file changed, +2 −2 lines

src/lighteval/tasks/lighteval_task.py
Lines changed: 2 additions & 2 deletions

@@ -60,7 +60,7 @@ class LightevalTaskConfig:
             name as input.
         hf_repo (str): HuggingFace Hub repository path containing the evaluation dataset.
         hf_subset (str): Dataset subset/configuration name to use for this task.
-        metrics (ListLike[Metric]): List of metrics to compute for this task.
+        metrics (ListLike[Metric | Metrics]): List of metrics or metric enums to compute for this task.

     Dataset Configuration:
         hf_revision (str | None, optional): Specific dataset revision to use.

@@ -112,7 +112,7 @@ class LightevalTaskConfig:
     ]  # The prompt function should be used to map a line in the dataset to a Sample
     hf_repo: str
     hf_subset: str
-    metrics: ListLike[Metric]  # List of metric , should be configurable
+    metrics: ListLike[Metric | Metrics]  # Accept both Metric objects and Metrics enums

     # Inspect AI compatible parameters
     solver: None = None
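The pattern behind this change can be illustrated in isolation: widen a config field's type so it accepts either plain metric objects or members of a metrics enum, then normalize at use time. The sketch below is not lighteval's actual code; the `Metric`, `Metrics`, and `TaskConfig` names are minimal stand-ins for the real classes, and `resolved_metrics` is a hypothetical helper showing one way a consumer might unwrap the enum members.

```python
# Illustrative sketch (not lighteval's implementation): a config field typed
# to accept both Metric objects and Metrics enum members, mirroring the
# `ListLike[Metric | Metrics]` widening in this commit.
from dataclasses import dataclass, field
from enum import Enum
from typing import Union


@dataclass
class Metric:
    """Minimal stand-in for a metric object."""
    name: str


class Metrics(Enum):
    """Stand-in enum whose members wrap Metric objects."""
    exact_match = Metric("exact_match")
    f1_score = Metric("f1_score")


@dataclass
class TaskConfig:
    name: str
    # The widened annotation: callers may pass either form.
    metrics: list[Union[Metric, Metrics]] = field(default_factory=list)

    def resolved_metrics(self) -> list[Metric]:
        """Normalize enum members to their underlying Metric objects."""
        return [m.value if isinstance(m, Metrics) else m for m in self.metrics]


# Both an enum member and a plain Metric are now valid entries.
cfg = TaskConfig("demo", metrics=[Metrics.exact_match, Metric("custom")])
print([m.name for m in cfg.resolved_metrics()])  # → ['exact_match', 'custom']
```

Normalizing once at the boundary (rather than branching on the type throughout the codebase) keeps downstream code working with a single `Metric` type.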

0 commit comments