# BABILong

### Paper

Title: BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
Abstract: https://arxiv.org/abs/2406.10149

In recent years, the input context sizes of large language models (LLMs) have increased dramatically. However, existing evaluation methods have not kept pace, failing to comprehensively assess the efficiency of models in handling long contexts. To bridge this gap, we introduce the BABILong benchmark, designed to test language models' ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. These tasks are challenging on their own, and even more demanding when the required facts are scattered across long natural text. Our evaluations show that popular LLMs effectively utilize only 10-20% of the context and their performance declines sharply with increased reasoning complexity. Among alternatives to in-context reasoning, Retrieval-Augmented Generation methods achieve a modest 60% accuracy on single-fact question answering, independent of context length. Among context extension methods, the highest performance is demonstrated by recurrent memory transformers after fine-tuning, enabling the processing of lengths up to 50 million tokens. The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 10 million token lengths.

Homepage: https://github.com/booydar/babilong

### Citation

```
@article{kuratov2024babilong,
  title={BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack},
  author={Kuratov, Yuri and Bulatov, Aydar and Anokhin, Petr and Rodkin, Ivan and Sorokin, Dmitry and Burtsev, Mikhail},
  journal={arXiv preprint arXiv:2406.10149},
  year={2024}
}
```

### Groups and Tasks

#### Groups

* `babilong`: All BABILong tasks at 0k context length
* `babilong_qa1-qa5` (`babilong_longctx`): BABILong tasks qa1-qa5 at context lengths up to 128k (see the example invocation below)
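
A minimal sketch of running one of these groups programmatically, assuming the standard `lm_eval` Python API; the model and `model_args` values are placeholders:

```python
# Sketch: evaluate the 0k `babilong` group with lm-evaluation-harness.
# The pretrained model name below is only an example.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",
    tasks=["babilong"],  # or ["babilong_longctx"] (with a max_seq_lengths setting, see the note below)
)
print(results["results"])
```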


#### Tasks

The benchmark includes 20 reasoning tasks, with 1000 samples per context length:

**QA Tasks (qa1-qa20):**
* `babilong_qa1`: Single supporting fact QA
* `babilong_qa2`: Two supporting facts QA
* `babilong_qa3`: Three supporting facts QA
* `babilong_qa4`: Two argument relations
* `babilong_qa5`: Three argument relations
* `babilong_qa6`: Yes/No questions
* `babilong_qa7`: Counting
* `babilong_qa8`: Lists and sets
* `babilong_qa9`: Simple negation
* `babilong_qa10`: Indefinite knowledge
* `babilong_qa11`: Track person through temporal references
* `babilong_qa12`: Conjunction
* `babilong_qa13`: Compound coreference
* `babilong_qa14`: Time reasoning
* `babilong_qa15`: Basic deduction
* `babilong_qa16`: Basic induction
* `babilong_qa17`: Positional reasoning
* `babilong_qa18`: Size reasoning
* `babilong_qa19`: Path finding
* `babilong_qa20`: Motivation deduction

> [!NOTE]
> When using the BABILong tasks, please note:
> 1. This implementation uses the dataset with 1000 samples per length. To use the dataset with 100 samples per length, which supports context lengths up to 10M tokens, change the dataset path to `RMT-team/babilong` in `common_utils.py`.
> 2. Supported lengths are 0k, 1k, 2k, 4k, 8k, 16k, 32k, 64k, and 128k tokens for tasks qa1-qa5. Tasks qa6-qa20 are only available at 0k.
> 3. The default maximum sequence length is 0k. To compute metrics at other maximum sequence lengths, specify them via the metadata parameter:
> `--metadata '{"max_seq_lengths":"0k,1k,2k,4k,8k,16k,32k,128k"}'`. The config currently applies only one context length at a time. The metadata parameter can also be passed to the `TaskManager` (`metadata: dict`), as shown in the sketch below.
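
A minimal sketch of the `TaskManager` route, assuming a recent lm-evaluation-harness where `TaskManager` accepts a `metadata` dict (per the note above); the model name and the chosen length are placeholders:

```python
# Sketch: select a BABILong context length by passing task metadata
# through the TaskManager instead of the --metadata CLI flag.
import lm_eval
from lm_eval.tasks import TaskManager

# The config applies one context length at a time; "4k" here is illustrative.
task_manager = TaskManager(metadata={"max_seq_lengths": "4k"})

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",
    tasks=["babilong_qa1"],
    task_manager=task_manager,
)
print(results["results"])
```
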
### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?