2 changes: 2 additions & 0 deletions lm_eval/tasks/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -148,6 +148,8 @@ provided to the individual README.md files for each subfolder.
| [pile](pile/README.md) | Open source language modelling data set that consists of 22 smaller, high-quality datasets. | English |
| [pile_10k](pile_10k/README.md) | The first 10K elements of The Pile, useful for debugging models trained on it. | English |
| [piqa](piqa/README.md) | Physical Interaction Question Answering tasks to test physical commonsense reasoning. | English |
| [pisa](pisa/README.md) | Multilingual, multimodal tasks involving reading comprehension and math challenges. | English, German, French, Spanish, Italian, Chinese |
| [polemo2](polemo2/README.md) | Sentiment analysis and emotion detection tasks based on Polish language data. | Polish |
| [portuguese_bench](portuguese_bench/README.md) | Collection of tasks in European Portuguese encompassing various evaluation areas. | Portuguese |
| [prost](prost/README.md) | Tasks requiring understanding of professional standards and ethics in various domains. | English |
Expand Down
62 changes: 62 additions & 0 deletions lm_eval/tasks/pisa/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,62 @@
# PisaBench

### Paper

Title: PISA-Bench: The PISA Index as a Multilingual and Multimodal Metric for the Evaluation of Vision-Language Models

Abstract: https://arxiv.org/abs/2510.24792

Vision-language models (VLMs) have demonstrated remarkable progress in multimodal reasoning. However, existing benchmarks remain limited in terms of high-quality, human-verified examples. Many current datasets rely on synthetically generated content by large language models (LLMs). Furthermore, most datasets are limited to English, as manual quality assurance of translated samples is time-consuming and costly. To fill this gap, we introduce PISA-Bench, a multilingual benchmark derived from English examples of the expert-created PISA tests, a unified framework for the assessment of student competencies in over eighty countries. Each example consists of human-extracted instructions, questions, answer options, and images, enriched with question type categories, and has been translated from English into five additional languages (Spanish, German, Chinese, French, and Italian), resulting in a fully parallel corpus covering six languages. We evaluate state-of-the-art vision-language models on PISA-Bench and find that especially small models (<20B parameters) fail to achieve high test scores. We further find substantial performance degradation on non-English splits as well as high error-rates when models are tasked with spatial and geometric reasoning. By releasing the dataset and evaluation framework, we provide a resource for advancing research on multilingual multimodal reasoning.

HuggingFace Dataset: https://huggingface.co/datasets/PisaBench/pisa-bench

### Citation

```
@misc{haller2025pisabenchpisaindexmultilingual,
      title={PISA-Bench: The PISA Index as a Multilingual and Multimodal Metric for the Evaluation of Vision-Language Models},
      author={Patrick Haller and Fabio Barth and Jonas Golde and Georg Rehm and Alan Akbik},
      year={2025},
      eprint={2510.24792},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.24792},
}
```

### Groups, Tags, and Tasks

#### Groups

* `pisa`: Evaluates all language splits, using substring matching for answer evaluation.
* `pisa_llm_judged`: Evaluates all language splits, using LLM-based answer evaluation (requires an OpenAI API key).
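For the `pisa` group, substring matching presumably means checking whether the reference answer appears in the generated text. A minimal illustrative sketch of such a scorer (this is not the harness's actual `utils.pisa_process_results`, just the idea behind it; note that naive single-letter matching can over-trigger on words containing that letter):

```python
def substring_match_acc(prediction: str, target: str) -> float:
    """Score 1.0 if the reference answer occurs in the model output, case-insensitively."""
    return 1.0 if target.strip().lower() in prediction.lower() else 0.0

# A generation that contains the gold option letter counts as correct.
print(substring_match_acc("The correct answer is B.", "B"))  # 1.0
print(substring_match_acc("I think it is C", "B"))  # 0.0
```

The LLM-judged variant replaces this lexical check with a judge-model call, which is why those tasks override `process_results` and need an API key.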

#### Tags

None.

#### Tasks

* `pisa_en`
* `pisa_de`
* `pisa_es`
* `pisa_fr`
* `pisa_it`
* `pisa_ch`
* `pisa_en_llm_judged`
* `pisa_de_llm_judged`
* `pisa_es_llm_judged`
* `pisa_fr_llm_judged`
* `pisa_it_llm_judged`
* `pisa_ch_llm_judged`

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
12 changes: 12 additions & 0 deletions lm_eval/tasks/pisa/_pisa.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
group: pisa
task:
- pisa_de
- pisa_fr
- pisa_it
- pisa_en
- pisa_es
- pisa_ch

aggregate_metric_list:
- metric: acc # or acc_norm, ppl, etc.
weight_by_size: false
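With `weight_by_size: false`, the group score is an unweighted mean of per-task accuracies, so each language split counts equally regardless of how many examples it contains. A small sketch with made-up numbers:

```python
def aggregate_unweighted(per_task_acc):
    """Unweighted mean over task accuracies (weight_by_size: false)."""
    return sum(per_task_acc.values()) / len(per_task_acc)

# Illustrative scores only, not real results.
scores = {"pisa_en": 0.70, "pisa_de": 0.60, "pisa_fr": 0.65,
          "pisa_es": 0.62, "pisa_it": 0.58, "pisa_ch": 0.55}
print(round(aggregate_unweighted(scores), 4))  # 0.6167
```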
12 changes: 12 additions & 0 deletions lm_eval/tasks/pisa/_pisa_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
group: pisa_llm_judged
task:
- pisa_de_llm_judged
- pisa_fr_llm_judged
- pisa_it_llm_judged
- pisa_en_llm_judged
- pisa_es_llm_judged
- pisa_ch_llm_judged

aggregate_metric_list:
- metric: acc # or acc_norm, ppl, etc.
weight_by_size: false
16 changes: 16 additions & 0 deletions lm_eval/tasks/pisa/_template_yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
dataset_path: PisaBench/pisa-bench
output_type: generate_until
doc_to_text: !function utils.pisa_doc_to_text
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
process_results: !function utils.pisa_process_results
doc_to_image: !function utils.pisa_doc_to_visual

generation_kwargs:
until:
- "<|endoftext|>"

metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
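The `utils.pisa_doc_to_text` helper is not part of this diff. A plausible sketch of how such a prompt formatter could work, assuming each doc carries `instruction`, `question`, and `options` fields (these field names are guesses, not the actual dataset schema):

```python
def pisa_doc_to_text(doc):
    """Format a doc as instruction, question, and lettered options ending in an answer cue."""
    letters = ["A", "B", "C", "D"]
    options = "\n".join(f"{letter}. {opt}" for letter, opt in zip(letters, doc["options"]))
    return f"{doc['instruction']}\n{doc['question']}\n{options}\nAnswer:"

doc = {"instruction": "Read the passage and answer.",
       "question": "Which statement is supported by the text?",
       "options": ["First", "Second", "Third", "Fourth"]}
print(pisa_doc_to_text(doc))
```

The images themselves are attached separately via `doc_to_image`, so the text prompt only needs to carry the verbal part of each task.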
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_ch.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_ch
include: _template_yaml
task_alias: pisa_ch
test_split: ch
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_ch_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_ch_llm_judged
include: _template_yaml
task_alias: pisa_ch
test_split: ch
process_results: !function utils.pisa_process_results_llm_judged
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_de.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_de
include: _template_yaml
task_alias: pisa_de
test_split: de
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_de_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_de_llm_judged
include: _template_yaml
task_alias: pisa_de
test_split: de
process_results: !function utils.pisa_process_results_llm_judged
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_en.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_en
include: _template_yaml
task_alias: pisa_en
test_split: en
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_en_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_en_llm_judged
include: _template_yaml
task_alias: pisa_en
test_split: en
process_results: !function utils.pisa_process_results_llm_judged
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_es.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_es
include: _template_yaml
task_alias: pisa_es
test_split: es
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_es_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_es_llm_judged
include: _template_yaml
task_alias: pisa_es
test_split: es
process_results: !function utils.pisa_process_results_llm_judged
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_fr.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_fr
include: _template_yaml
task_alias: pisa_fr
test_split: fr
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_fr_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_fr_llm_judged
include: _template_yaml
task_alias: pisa_fr
test_split: fr
process_results: !function utils.pisa_process_results_llm_judged
4 changes: 4 additions & 0 deletions lm_eval/tasks/pisa/pisa_it.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
task: pisa_it
include: _template_yaml
task_alias: pisa_it
test_split: it
5 changes: 5 additions & 0 deletions lm_eval/tasks/pisa/pisa_it_llm_judged.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
task: pisa_it_llm_judged
include: _template_yaml
task_alias: pisa_it
test_split: it
process_results: !function utils.pisa_process_results_llm_judged