README.md (3 additions, 3 deletions)
@@ -207,7 +207,7 @@ lighteval accelerate \
 An alternative to launching the evaluation locally is to serve the model on a TGI-compatible server/container and then run the evaluation by sending requests to the server. The command is the same as before, except you specify a path to a yaml config file (detailed below):
 
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_config_path="/path/to/config/file"\
     --tasks <task parameters> \
     --output_dir output_dir
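The diff does not show the yaml file itself, so here is a minimal end-to-end sketch of how the pieces could fit together, assuming a TGI container already listening locally. The config keys (`model`, `type`, `instance`, `inference_server_address`, `inference_server_auth`) and the example task string are illustrative assumptions, not the library's confirmed schema:

```shell
# Hypothetical sketch: the yaml keys below are assumptions, not
# lighteval's confirmed schema -- check the repository docs.
cat > tgi_config.yaml <<'EOF'
model:
  type: "tgi"                                   # evaluate against a served model
  instance:
    inference_server_address: "http://localhost:8080"  # address of the TGI container
    inference_server_auth: null                 # optional auth token for the server
EOF

lighteval accelerate \
    --model_config_path="tgi_config.yaml" \
    --tasks "leaderboard|truthfulqa:mc|0|0" \
    --output_dir ./evals/
```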
@@ -262,7 +262,7 @@ lighteval accelerate \
 ### Using the dummy model
 To debug or obtain random baseline scores for a given set of tasks, you can use the `dummy` model:
 ```shell
-python run_evals_accelerate.py \
+lighteval accelerate \
     --model_args "dummy"\
     --tasks <task parameters> \
     --output_dir output_dir
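As a concrete illustration of the new entry point, a dummy-model baseline run might look like the following; the task string (in the `suite|task|num_few_shot|auto-reduce` pattern used elsewhere in the README) and the output path are placeholder examples:

```shell
# Illustrative invocation -- task name and output directory are examples only.
lighteval accelerate \
    --model_args "dummy" \
    --tasks "leaderboard|truthfulqa:mc|0|0" \
    --output_dir ./dummy_baseline/
```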
@@ -279,7 +279,7 @@ However, we are very grateful to the Harness and HELM teams for their continued
 
 ## How to navigate this project
 `lighteval` is supposed to be used as a standalone evaluation library.
-- To run the evaluations, you can use `run_evals_accelerate.py` or `run_evals_nanotron.py`.
+- To run the evaluations, you can use `lighteval accelerate` or `lighteval nanotron`.
 - [src/lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the lib itself
   - [lighteval](https://github.com/huggingface/lighteval/tree/main/src/lighteval) contains the core of the library, divided in the following section
     - [main_accelerate.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_accelerate.py) and [main_nanotron.py](https://github.com/huggingface/lighteval/blob/main/src/lighteval/main_nanotron.py) are our entry points to run evaluation