
Commit 387fe50 (parent: d74507a)

minor fix to readme

1 file changed: 2 additions, 2 deletions

tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md

@@ -23,7 +23,7 @@ Given those differences, our reproduced number can not be compared to the number
 
 ## Environment setups
 
-Please install our lm-evaluation-harness and llama-recipe repo by following:
+Please install lm-evaluation-harness and our llama-recipe repo by following:
 
 ```
 pip install lm-eval[math,ifeval,sentencepiece,vllm]==0.4.3
@@ -83,7 +83,7 @@ data_parallel_size: 4 # The VLLM argument that speicify the data parallel size f
 python prepare_meta_eval.py --config_path ./eval_config.yaml
 ```
 
-By default,this will load the default [eval_config.yaml](./eval_config.yaml) config and print out a CLI command to run `meta_instruct` group tasks, which includes `meta_ifeval`, `meta_math_hard`, `meta_gpqa` and `meta_mmlu_pro_instruct`, for `meta-llama/Meta-Llama-3.1-8B-Instruct` model using `meta-llama/Meta-Llama-3.1-8B-Instruct-evals` dataset and `lm_eval`.
+This script will load the default [eval_config.yaml](./eval_config.yaml) config and print out a `lm_eval` command to run `meta_instruct` group tasks, which includes `meta_ifeval`, `meta_math_hard`, `meta_gpqa` and `meta_mmlu_pro_instruct`, for `meta-llama/Meta-Llama-3.1-8B-Instruct` model using `meta-llama/Meta-Llama-3.1-8B-Instruct-evals` dataset.
 
 An example output from [prepare_meta_eval.py](./prepare_meta_eval.py) looks like this:
 
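For reference, the command that `prepare_meta_eval.py` prints follows the standard lm-evaluation-harness CLI. Below is a minimal sketch of what such an invocation could look like, assuming the vllm backend and a hypothetical `./work_dir` holding the generated task configs; the real model arguments and paths are produced from `eval_config.yaml`, not from this example.

```
# Hypothetical sketch only: these are standard lm-eval 0.4.x flags, but the
# actual command, model args, and paths are generated from eval_config.yaml.
lm_eval --model vllm \
  --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,tensor_parallel_size=1,data_parallel_size=4 \
  --tasks meta_instruct \
  --batch_size auto \
  --include_path ./work_dir \
  --output_path eval_results \
  --seed 42 \
  --log_samples
```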
