Commit 7295c78

fix doc in readme (#303)
Co-authored-by: Clémentine Fourrier <[email protected]>
Parent: 6b943ec

File tree: 1 file changed (+3, −3 lines)

README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -207,7 +207,7 @@ lighteval accelerate \
 An alternative to launching the evaluation locally is to serve the model on a TGI-compatible server/container and then run the evaluation by sending requests to the server. The command is the same as before, except you specify a path to a yaml config file (detailed below):
 
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_config_path="/path/to/config/file"\
     --tasks <task parameters> \
     --output_dir output_dir
@@ -262,7 +262,7 @@ lighteval accelerate \
 ### Using the dummy model
 To debug or obtain random baseline scores for a given set of tasks, you can use the `dummy` model:
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_args "dummy"\
     --tasks <task parameters> \
     --output_dir output_dir
@@ -279,7 +279,7 @@ However, we are very grateful to the Harness and HELM teams for their continued
 
 ## How to navigate this project
 `lighteval` is supposed to be used as a standalone evaluation library.
-- To run the evaluations, you can use `run_evals_accelerate.py` or `run_evals_nanotron.py`.
+- To run the evaluations, you can use `lighteval accelerate` or `lighteval nanotron`.
 - [src/lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the lib itself
 - [lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the library, divided in the following section
 - [main_accelerate.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_accelerate.py) and [main_nanotron.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_nanotron.py) are our entry points to run evaluation
````
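For context, the yaml file passed via `--model_config_path` in the first hunk describes the TGI-compatible endpoint to send requests to. A minimal sketch is shown below; the exact field names are an assumption about lighteval's endpoint config format, and the address and auth values are placeholders, not taken from this commit:

```yaml
model:
  type: "tgi"  # evaluate against a TGI-compatible server rather than a local model
  instance:
    inference_server_address: "http://localhost:8080"  # placeholder endpoint URL
    inference_server_auth: null  # or an auth token for protected servers
```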

0 commit comments