
Commit a7b4492
fix 403 dead link
1 parent ebb7b4e commit a7b4492

File tree: 1 file changed (+1, -1 lines)

1 file changed

+1
-1
lines changed

tools/benchmarks/llm_eval_harness/README.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Llama-Recipe make use of `lm-evaluation-harness` for evaluating our fine-tuned M
 - Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
 - Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
 - Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).
-- Support for commercial APIs including [OpenAI](https://openai.com), and [TextSynth](https://textsynth.com/).
+- Support for commercial APIs including OpenAI and TextSynth.
 - Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
 - Support for local models and benchmarks.
 - Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
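For context, the harness whose feature list is edited above is driven from the command line. A minimal sketch of such an invocation follows; the model name, task, and batch size are illustrative placeholders, not taken from this commit:

```shell
# Install lm-evaluation-harness (package name on PyPI is lm-eval)
pip install lm-eval

# Evaluate a HuggingFace-hosted model on a single benchmark task.
# pretrained= and --tasks are example values; substitute your own model and task.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks hellaswag \
    --batch_size 8
```

This is a configuration sketch only: running it requires the model weights to be downloadable and sufficient GPU memory for inference.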

0 commit comments