
Commit 9dcbca5

minor typo fix

1 parent d095cd4 commit 9dcbca5

File tree

1 file changed: +2 −2 lines changed
  • tools/benchmarks/llm_eval_harness/meta_eval_reproduce


tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ As Meta Llama models gain popularity, evaluating these models has become increas
 ## Important Notes
 
 1. **This tutorial is not the official implementation** of Meta Llama evaluation. It is based on public third-party libraries, and the implementation may differ slightly from our internal evaluation, leading to minor differences in the reproduced numbers.
-2. **Model Compatibility**: This tutorial is specifically for Llama 3 based models, as our prompts include Meta Llama 3 special tokens (e.g., `<|start_header_id|>user<|end_header_id|`). It will not work with models that are not based on Llama 3.
+2. **Model Compatibility**: This tutorial is specifically for Llama 3 based models, as our prompts include Meta Llama 3 special tokens, e.g. `<|start_header_id|>user<|end_header_id|`. It will not work with models that are not based on Llama 3.
 
 
 ### Huggingface setups
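The special tokens discussed in this hunk come from the Llama 3 chat template. As a rough illustration of why such prompts are Llama-3-specific, a minimal sketch of the prompt layout follows; `build_llama3_prompt` is a hypothetical helper for this example only, and real use should go through the model tokenizer's chat-template machinery rather than hand-built strings:

```python
# Illustrative sketch of the Llama 3 chat prompt layout.
# This is NOT the official template; it only shows how the special
# tokens referenced in the README's prompts fit together.
def build_llama3_prompt(user_message: str) -> str:
    # A single-turn user prompt, ending where the assistant's reply begins.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("What is 2 + 2?")
```

Models not trained on these tokens will treat them as ordinary text, which is why the tutorial restricts itself to Llama 3 based models.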
@@ -65,7 +65,7 @@ test_split: latest
 
 **Note**: Remember to change the eval dataset name according to the model type and DO NOT use pretrained evals dataset on instruct models or vice versa.
 
-**2.Configure for preprocessing, prompts and ground truth**
+**2.Configure preprocessing, prompts and ground truth**
 
 Here is the example yaml snippet in the MMLU-Pro that handles dataset preprocess, prompts and ground truth.
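The yaml snippet the README refers to is not included in this diff. For orientation, an lm-evaluation-harness task config of this general shape typically wires preprocessing, prompts, and ground truth through `doc_to_text` and `doc_to_target`; every value below is an assumption for illustration, not the file's actual contents:

```yaml
# Hypothetical lm-evaluation-harness-style task config (values assumed);
# the real snippet lives under tools/benchmarks/llm_eval_harness/meta_eval_reproduce.
task: meta_mmlu_pro_instruct          # assumed task name
test_split: latest                    # matches the hunk's context line
doc_to_text: !function utils.doc_to_text    # preprocessing / prompt construction hook
doc_to_target: gold                   # field holding the ground-truth answer
output_type: generate_until
```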

0 commit comments