Commit ccfa4ad: Add BabiLong (#3287)

* create babilong tasks
* lint
* add clarification
* fix typo
* add babilong description

1 parent fec9dde, commit ccfa4ad

26 files changed: +578, -1 lines changed
lm_eval/tasks/README.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -22,14 +22,15 @@ provided to the individual README.md files for each subfolder.
 | [arithmetic](arithmetic/README.md) | Tasks involving numerical computations and arithmetic reasoning. | English |
 | [asdiv](asdiv/README.md) | Tasks involving arithmetic and mathematical reasoning challenges. | English |
 | [babi](babi/README.md) | Tasks designed as question and answering challenges based on simulated stories. | English |
+| [babilong](babilong/README.md) | Tasks designed to test whether models can find and reason over facts in long contexts. | English |
 | [basque_bench](basque_bench/README.md) | Collection of tasks in Basque encompassing various evaluation areas. | Basque |
 | [basqueglue](basqueglue/README.md) | Tasks designed to evaluate language understanding in Basque language. | Basque |
 | [bbh](bbh/README.md) | Tasks focused on deep semantic understanding through hypothesization and reasoning. | English, German |
 | [bbq](bbq/README.md) | A question-answering benchmark designed to measure social biases in language models across various demographic categories and contexts. | English |
 | [belebele](belebele/README.md) | Language understanding tasks in a variety of languages and scripts. | Multiple (122 languages) |
 | benchmarks | General benchmarking tasks that test a wide range of language understanding capabilities. | |
 | [bertaqa](bertaqa/README.md) | Local Basque cultural trivia QA tests in English and Basque languages. | English, Basque, Basque (MT) |
-| [bhs](bhs/README.md) | Grammatical knowledge evaluation for low-resource langauges. | Basque, Hindi, Swahili |
+| [bhs](bhs/README.md) | Grammatical knowledge evaluation for low-resource languages. | Basque, Hindi, Swahili |
 | [bigbench](bigbench/README.md) | Broad tasks from the BIG-bench benchmark designed to push the boundaries of large models. | Multiple |
 | [blimp](blimp/README.md) | Tasks testing grammatical phenomena to evaluate language model's linguistic capabilities. | English |
 | [blimp_nl](blimp_nl/README.md) | A benchmark evaluating language models' grammatical capabilities in Dutch based on comparing the probabilities of minimal pairs of grammatical and ungrammatical sentences. | Dutch |
```

lm_eval/tasks/babilong/README.md

Lines changed: 76 additions & 0 deletions

# BABILong

### Paper

Title: BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack

Abstract: https://arxiv.org/abs/2406.10149

In recent years, the input context sizes of large language models (LLMs) have increased dramatically. However, existing evaluation methods have not kept pace, failing to comprehensively assess the efficiency of models in handling long contexts. To bridge this gap, we introduce the BABILong benchmark, designed to test language models' ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. These tasks are challenging on their own, and even more demanding when the required facts are scattered across long natural text. Our evaluations show that popular LLMs effectively utilize only 10-20% of the context and their performance declines sharply with increased reasoning complexity. Among alternatives to in-context reasoning, Retrieval-Augmented Generation methods achieve a modest 60% accuracy on single-fact question answering, independent of context length. Among context extension methods, the highest performance is demonstrated by recurrent memory transformers after fine-tuning, enabling the processing of lengths up to 50 million tokens. The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 10 million token lengths.

Homepage: https://github.com/booydar/babilong

### Citation

```
@article{kuratov2024babilong,
    title={BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack},
    author={Kuratov, Yuri and Bulatov, Aydar and Anokhin, Petr and Rodkin, Ivan and Sorokin, Dmitry and Burtsev, Mikhail},
    journal={arXiv preprint arXiv:2406.10149},
    year={2024}
}
```

### Groups and Tasks

#### Groups

* `babilong`: all BABILong tasks at 0k context length
* `babilong_longctx`: BABILong tasks qa1 through qa5 at context lengths up to 128k

#### Tasks

The benchmark includes 1000 samples per context length for each of 20 reasoning tasks:

**QA Tasks (qa1-qa20):**
* `babilong_qa1`: Single supporting fact QA
* `babilong_qa2`: Two supporting facts QA
* `babilong_qa3`: Three supporting facts QA
* `babilong_qa4`: Two argument relations
* `babilong_qa5`: Three argument relations
* `babilong_qa6`: Yes/no questions
* `babilong_qa7`: Counting
* `babilong_qa8`: Lists and sets
* `babilong_qa9`: Simple negation
* `babilong_qa10`: Indefinite knowledge
* `babilong_qa11`: Tracking a person through temporal references
* `babilong_qa12`: Conjunction
* `babilong_qa13`: Compound coreference
* `babilong_qa14`: Time reasoning
* `babilong_qa15`: Basic deduction
* `babilong_qa16`: Basic induction
* `babilong_qa17`: Positional reasoning
* `babilong_qa18`: Size reasoning
* `babilong_qa19`: Path finding
* `babilong_qa20`: Motivation deduction

> [!NOTE]
> When using BABILong tasks, please note:
> 1. This implementation uses the dataset with 1000 samples per length. To use the 100-samples-per-length dataset instead, which supports context lengths up to 10M tokens, change the dataset path to `RMT-team/babilong` in `common_utils.py`.
> 2. Supported lengths for tasks qa1-qa5 are 0k, 1k, 2k, 4k, 8k, 16k, 32k, 64k, and 128k tokens. Tasks qa6-qa20 only have a length of 0k.
> 3. The default maximum sequence length is 0k. To compute metrics at other maximum sequence lengths, specify them via the metadata parameter: `--metadata '{"max_seq_lengths":"0k,1k,2k,4k,8k,16k,32k,128k"}'`. The config only evaluates one context length at a time. The metadata parameter can also be passed to the `TaskManager` (`metadata: dict`).

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
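
The `--metadata` value in the note above is a plain JSON string. A minimal sketch of how such a value could be split into individual context lengths (illustrative only; this is not the harness's actual parsing code):

```python
import json

# The --metadata flag carries a JSON object; "max_seq_lengths" is a
# comma-separated string of context lengths (illustrative parsing only).
raw = '{"max_seq_lengths":"0k,1k,2k,4k,8k,16k,32k,128k"}'
meta = json.loads(raw)
lengths = meta["max_seq_lengths"].split(",")
print(lengths[0], lengths[-1])  # prints: 0k 128k
```

Note the shell single quotes around the JSON in the CLI example: they keep the inner double quotes intact so the string reaches the parser unmodified.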
Lines changed: 17 additions & 0 deletions

```yaml
dataset_path: RMT-team/babilong-1k-samples
output_type: generate_until
doc_to_target: "{{target}}"
target_delimiter: " "
num_fewshot: 2
process_results: !function common_utils.process_results
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
generation_kwargs:
  do_sample: false
  temperature: 0.0
  max_gen_toks: 16
  until: []
metadata:
  version: 0.0
```
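
The `common_utils.py` referenced by `process_results` is not part of this excerpt. In lm-evaluation-harness, a `process_results` hook receives a document and the model's generations and returns a dict of metric values. A hypothetical sketch of a substring-match accuracy scorer (the matching rule here is an assumption, not the actual `common_utils` code):

```python
def process_results(doc: dict, results: list) -> dict:
    """Hypothetical scorer: count the answer correct when the target
    string appears anywhere in the model's generation (case-insensitive)."""
    prediction = results[0].lower()
    target = doc["target"].lower()
    return {"acc": 1.0 if target in prediction else 0.0}

# Example: a qa11-style doc where the target is a single location word.
print(process_results({"target": "garden"}, ["Daniel is in the garden."]))
# prints: {'acc': 1.0}
```

Returning a dict keyed by metric name matches the `metric_list` entry above, where `acc` is aggregated by `mean`.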
Lines changed: 27 additions & 0 deletions

```yaml
group: babilong
task:
  - babilong_qa1
  - babilong_qa2
  - babilong_qa3
  - babilong_qa4
  - babilong_qa5
  - babilong_qa6
  - babilong_qa7
  - babilong_qa8
  - babilong_qa9
  - babilong_qa10
  - babilong_qa11
  - babilong_qa12
  - babilong_qa13
  - babilong_qa14
  - babilong_qa15
  - babilong_qa16
  - babilong_qa17
  - babilong_qa18
  - babilong_qa19
  - babilong_qa20
aggregate_metric_list:
  - metric: acc
    weight_by_size: True
metadata:
  version: 0.0
```
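
`weight_by_size: True` makes the group score a size-weighted average of the subtask accuracies rather than a plain mean, which is equivalent to pooling all samples. A quick sketch of the arithmetic (illustrative; not the harness's aggregation code):

```python
def weighted_acc(subtasks):
    """Size-weighted mean: each (accuracy, n_samples) pair contributes
    in proportion to its sample count."""
    total = sum(n for _, n in subtasks)
    return sum(acc * n for acc, n in subtasks) / total

# Two hypothetical subtasks: 80% on 1000 samples, 50% on 3000 samples.
print(weighted_acc([(0.8, 1000), (0.5, 3000)]))  # ~0.575, not the plain mean 0.65
```

With equal subtask sizes, as here (1000 samples each), the weighted and unweighted means coincide; the flag matters when subtask sizes differ.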
Lines changed: 12 additions & 0 deletions

```yaml
group: babilong_longctx
task:
  - babilong_qa1
  - babilong_qa2
  - babilong_qa3
  - babilong_qa4
  - babilong_qa5
aggregate_metric_list:
  - metric: acc
    weight_by_size: True
metadata:
  version: 0.0
```
Lines changed: 18 additions & 0 deletions

```yaml
include: _babilong_common_yaml
task: babilong_qa1
test_split: qa1
custom_dataset: !function common_utils.load_dataset
dataset_kwargs:
  qa_split: qa1
description: "I will give you context with the facts about positions of different persons hidden in some random text and a question. You need to answer the question based only on the information from the facts. If a person was in different locations, use the latest location to answer the question.\nAlways return your answer in the following format:\nThe most recent location of 'person' is 'location'. Do not write anything else after that.\n\n"
doc_to_text: "{{input.strip()}}\n{{question.strip()}}"

fewshot_config:
  sampler: first_n
  samples:
    - input: "Charlie went to the hallway. Judith come back to the kitchen. Charlie travelled to balcony."
      question: "Where is Charlie?"
      target: "The most recent location of Charlie is balcony."
    - input: "Alan moved to the garage. Charlie went to the beach. Alan went to the shop. Rouse travelled to balcony."
      question: "Where is Alan?"
      target: "The most recent location of Alan is shop."
```
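
With `num_fewshot: 2` and `target_delimiter: " "` from the shared config, the prompt the model sees is roughly the task description, then each few-shot sample with its target, then the test document. A rough pure-Python illustration of that assembly (the harness actually renders `doc_to_text` with Jinja and differs in details; the sample below is shortened):

```python
def build_prompt(description, fewshot, doc, target_delimiter=" "):
    """Rough sketch of few-shot prompt assembly: description first,
    then completed examples, then the test doc awaiting an answer."""
    parts = [description]
    for ex in fewshot:
        parts.append(f"{ex['input'].strip()}\n{ex['question'].strip()}"
                     f"{target_delimiter}{ex['target']}\n\n")
    parts.append(f"{doc['input'].strip()}\n{doc['question'].strip()}")
    return "".join(parts)

fewshot = [{"input": "Charlie went to the hallway.",
            "question": "Where is Charlie?",
            "target": "The most recent location of Charlie is hallway."}]
doc = {"input": "Alan moved to the garage.", "question": "Where is Alan?"}
print(build_prompt("Answer from the facts only.\n\n", fewshot, doc))
```

The generation then stops after `max_gen_toks: 16` new tokens, which is enough for the one-sentence answer format the description demands.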
Lines changed: 21 additions & 0 deletions

```yaml
include: _babilong_common_yaml
task: babilong_qa10
test_split: qa10
custom_dataset: !function common_utils.load_dataset
dataset_kwargs:
  qa_split: qa10
description: "I will give you context with the facts about people and their locations hidden in some random text and a question. You need to answer the question based only on the information from the facts.\nIf a person was in different locations, use the latest location the person was in to answer the question.\nYour answer should contain only one word - $yes$ or $no$ or $maybe$. Do not write anything else. Do not explain your answer.\n\n"
doc_to_text: "{{input.strip()}}\n{{question.strip()}}"

fewshot_config:
  sampler: first_n
  samples:
    - input: "Bill is in the kitchen. Julie is either in the school or the cinema."
      question: "Is Bill in the bedroom?"
      target: "no"
    - input: "Fred is in the bedroom. Mary is either in the school or the cinema."
      question: "Is Mary in the school?"
      target: "maybe"
    - input: "Fred is either in the kitchen or the park. Bill moved to the cinema."
      question: "Is Bill in the cinema?"
      target: "yes"
```
Lines changed: 19 additions & 0 deletions

```yaml
include: _babilong_common_yaml
task: babilong_qa11
test_split: qa11
dataset_name: 0k
description: "I will give you context with the facts about people and their locations hidden in some random text and a question. You need to answer the question based only on the information from the facts.\nIf a person was in different locations, use the latest location the person was in to answer the question.\nYour answer should contain only one word - location. Do not write anything else after that. Do not explain your answer.\n\n"
doc_to_text: "{{input.strip()}}\n{{question.strip()}}"

fewshot_config:
  sampler: first_n
  samples:
    - input: "Daniel journeyed to the hallway. After that he journeyed to the garden."
      question: "Where is Daniel?"
      target: "garden"
    - input: "Mary moved to the office. Afterwards she journeyed to the kitchen. Daniel went to the hallway. Then he journeyed to the garden."
      question: "Where is Mary?"
      target: "kitchen"
    - input: "Sandra moved to the kitchen. After that she went back to the hallway. Sandra moved to the bedroom. Then she went to the hallway. Mary moved to the bedroom. Afterwards she travelled to the bathroom."
      question: "Where is Sandra?"
      target: "hallway"
```
Lines changed: 19 additions & 0 deletions

```yaml
include: _babilong_common_yaml
task: babilong_qa12
test_split: qa12
dataset_name: 0k
description: "I will give you context with the facts about people and their locations hidden in some random text and a question. You need to answer the question based only on the information from the facts.\nIf a person was in different locations, use the latest location the person was in to answer the question.\nYour answer should contain only one word - location. Do not write anything else after that. Do not explain your answer.\n\n"
doc_to_text: "{{input.strip()}}\n{{question.strip()}}"

fewshot_config:
  sampler: first_n
  samples:
    - input: "Mary and Daniel travelled to the bathroom. John and Daniel travelled to the office."
      question: "Where is Daniel?"
      target: "office"
    - input: "Sandra and Mary went back to the office. Daniel and Sandra went to the bedroom. Sandra and Mary travelled to the hallway. John and Mary went to the kitchen."
      question: "Where is Mary?"
      target: "kitchen"
    - input: "Daniel and Sandra went back to the hallway. Daniel and John moved to the office. Daniel and John moved to the garden. Daniel and Mary went back to the bathroom. Daniel and John went back to the kitchen. Daniel and Sandra went to the bathroom."
      question: "Where is John?"
      target: "kitchen"
```
Lines changed: 19 additions & 0 deletions

```yaml
include: _babilong_common_yaml
task: babilong_qa13
test_split: qa13
dataset_name: 0k
description: "I will give you context with the facts about people and their locations hidden in some random text and a question. You need to answer the question based only on the information from the facts.\nIf a person was in different locations, use the latest location the person was in to answer the question.\nYour answer should contain only one word - location. Do not write anything else after that. Do not explain your answer.\n\n"
doc_to_text: "{{input.strip()}}\n{{question.strip()}}"

fewshot_config:
  sampler: first_n
  samples:
    - input: "Mary and Daniel travelled to the bathroom. Then they journeyed to the hallway."
      question: "Where is Daniel?"
      target: "hallway"
    - input: "Daniel and Sandra travelled to the kitchen. After that they journeyed to the hallway. Mary and Daniel travelled to the bedroom. After that they travelled to the hallway."
      question: "Where is Sandra?"
      target: "hallway"
    - input: "John and Mary moved to the bathroom. Then they travelled to the office. John and Mary went to the kitchen. Afterwards they went to the bedroom. John and Sandra moved to the bathroom. Following that they went back to the kitchen."
      question: "Where is Mary?"
      target: "bedroom"
```
