
FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs [CVPR 2025]

Xiaoqin Wang, Xusen Ma, Xianxu Hou, Meidan Ding, Yudong Li, Junliang Chen, Wenting Chen, Xiaoyang Peng, Linlin Shen*

[ArXiv] [Webpage] [Dataset] [Models]

Overview

In this work, we introduce FaceBench, a dataset featuring hierarchical multi-view and multi-level attributes, specifically designed to assess the comprehensive face perception abilities of MLLMs. We construct a hierarchical facial attribute structure that encompasses five views with up to three levels of attributes, totaling over 210 attributes and 700 attribute values. Based on this structure, FaceBench consists of 49,919 visual question-answering (VQA) pairs for evaluation and 23,841 pairs for fine-tuning. We further develop a robust face perception MLLM baseline, Face-LLaVA, by training on the proposed face VQA data.

Figure: Distribution of the visual question-answer pairs.

Figure: Sample question-answer pairs from our dataset.
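For concreteness, each line of the JSONL files used in the Evaluation section below is one VQA record. The sketch here is illustrative only: the field names and option format are assumptions, not the released schema.

# Peek at one record of the example evaluation split.
head -n 1 ./datasets/example/test.jsonl
# Hypothetical output (field names are assumed, not the released schema):
# {"image": "images/000001.jpg", "question_type": "SCQ",
#  "question": "What is the shape of this person's face?",
#  "options": ["A. Round", "B. Oval", "C. Square", "D. Heart-shaped"],
#  "answer": "B"}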

News

  • [2025-08-20] The Face-LLaVA model is released on HuggingFace🤗.
  • [2025-03-27] The paper is released on ArXiv🔥.

TODO

  • Release the Face-LLaVA model.
  • Release the evaluation code.
  • Release the dataset.

Evaluation

Model inference

OMP_NUM_THREADS=8 CUDA_VISIBLE_DEVICES=0 python evaluation/inference.py \
    --data-dir ./datasets/example/test.jsonl \
    --images-dir ./datasets/example/images/ \
    --model-name face_llava_1_5_13b \
    --question-type "TFQ, SCQ, MCQ, OEQ" \
    --save-dir "./responses-and-results/"
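If you want a separate response file per question type, one option is to sweep the four types in a loop. This is only a sketch reusing the flags above; it assumes --question-type also accepts a single type and that distinct save directories keep the runs apart.

for qt in TFQ SCQ MCQ OEQ; do
    OMP_NUM_THREADS=8 CUDA_VISIBLE_DEVICES=0 python evaluation/inference.py \
        --data-dir ./datasets/example/test.jsonl \
        --images-dir ./datasets/example/images/ \
        --model-name face_llava_1_5_13b \
        --question-type "$qt" \
        --save-dir "./responses-and-results/$qt/"
done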

Calculate metrics

OMP_NUM_THREADS=8 CUDA_VISIBLE_DEVICES=5 python evaluation/evaluation.py \
    --data-path ./responses-and-results/face_llava_1_5_13b_test_responses.jsonl
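Before scoring, a quick sanity check on the response file can catch truncated or malformed runs. jq is an assumption here; any JSON-aware tool works.

# Count responses and pretty-print the first record (jq assumed installed).
wc -l ./responses-and-results/face_llava_1_5_13b_test_responses.jsonl
head -n 1 ./responses-and-results/face_llava_1_5_13b_test_responses.jsonl | jq .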

Results

Table: Experimental results of various MLLMs and our Face-LLaVA across the five facial attribute views.

Table: Experimental results of various MLLMs and our Face-LLaVA across Level 1 facial attributes.

Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{wang2025facebench,
  title={FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs},
  author={Wang, Xiaoqin and Ma, Xusen and Hou, Xianxu and Ding, Meidan and Li, Yudong and Chen, Junliang and Chen, Wenting and Peng, Xiaoyang and Shen, Linlin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}

@article{wang2025facebench_arxiv,
  title={FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs},
  author={Wang, Xiaoqin and Ma, Xusen and Hou, Xianxu and Ding, Meidan and Li, Yudong and Chen, Junliang and Chen, Wenting and Peng, Xiaoyang and Shen, Linlin},
  journal={arXiv preprint arXiv:2503.21457},
  year={2025}
}

If you have any questions, please open an issue or contact me by email at [email protected].

Acknowledgments

This work is heavily based on LLaVA. Thanks to the authors for their great work.
