Xiaona Zhou, Yingyan Zeng, Ran Jin, and Ismini Lourentzou
The success of modern machine learning hinges on access to high-quality training data. In many real-world scenarios, such as acquiring data from public repositories or sharing across institutions, data is naturally organized into discrete datasets that vary in relevance, quality, and utility. Selecting which repositories or institutions to search for useful datasets, and which datasets to incorporate into model training are therefore critical decisions, yet most existing methods select individual samples and treat all data as equally relevant, ignoring differences between datasets and their sources. In this work, we formalize the task of dataset selection: selecting entire datasets from a large, heterogeneous pool to improve downstream performance under resource constraints. We propose Dataset Selection via Hierarchies (DaSH), a dataset selection method that models utility at both dataset and group (e.g., collections, institutions) levels, enabling efficient generalization from limited observations. Across two public benchmarks (Digit-Five and DomainNet), DaSH outperforms state-of-the-art data selection baselines by up to 26.2% in accuracy, while requiring significantly fewer exploration steps. Ablations show DaSH is robust to low-resource settings and lack of relevant datasets, making it suitable for scalable and adaptive dataset selection in practical multi-source learning workflows.
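The hierarchical idea above can be illustrated with a small toy sketch. This is not the paper's actual algorithm; it is an illustrative UCB-style bandit where utility estimates are maintained at both the group level and the dataset level, so group-level evidence guides which datasets get explored. All names here (`ucb`, `select_dataset`, `update`) are hypothetical.

```python
import math

# Toy sketch of hierarchical dataset selection (illustrative only, not DaSH):
# each round picks the most promising group first, then the most promising
# dataset within it, using an upper-confidence-bound exploration bonus.

def ucb(mean, n, t, c=1.0):
    """Upper confidence bound; unexplored arms get infinite priority."""
    if n == 0:
        return float("inf")
    return mean + c * math.sqrt(math.log(t) / n)

def select_dataset(stats, t):
    """stats: {group: {dataset: (mean_utility, n_pulls)}} -> (group, dataset)."""
    def group_score(g):
        members = stats[g].values()
        n = sum(n_i for _, n_i in members)
        mean = (sum(m_i * n_i for m_i, n_i in members) / n) if n else 0.0
        return ucb(mean, n, t)
    g = max(stats, key=group_score)                          # pick a group first
    d = max(stats[g], key=lambda d: ucb(*stats[g][d], t))    # then a dataset in it
    return g, d

def update(stats, g, d, reward):
    """Incrementally update the running mean utility of the chosen dataset."""
    mean, n = stats[g][d]
    stats[g][d] = ((mean * n + reward) / (n + 1), n + 1)
```

Over repeated rounds, selection concentrates on datasets (and groups) with high observed utility while still occasionally exploring under-sampled ones, which is the intuition behind generalizing from limited observations.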
Step 1: Clone this repository and change into its root directory:

```shell
git clone https://github.com/PLAN-Lab/DaSH.git
cd DaSH
```

Step 2: Create and activate a conda environment named DaSH:

```shell
conda create -n DaSH python=3.8  # Python 3.8 was used; other versions should also work.
conda activate DaSH
```

Step 3: Install the dependencies from requirements.txt:

```shell
pip install -r requirements.txt
```

Data used in the experiments is saved under the Data/ directory, with subfolders digitfive/ and domainNet/. Both Digit-Five and DomainNet are widely used benchmarks for domain adaptation and are obtained from the M3SDA benchmark repository.
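As a quick sanity check that the benchmark data is in place, the following illustrative snippet (not part of the repository) verifies that the expected subfolders exist under `Data/`:

```python
import os

def check_data(root="Data", subdirs=("digitfive", "domainNet")):
    """Return the expected benchmark subfolders that are missing under root."""
    return [
        os.path.join(root, d)
        for d in subdirs
        if not os.path.isdir(os.path.join(root, d))
    ]

if __name__ == "__main__":
    missing = check_data()
    if missing:
        print("Missing data folders:", ", ".join(missing))
    else:
        print("All benchmark data folders found.")
```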
Run `bash run_DaSH.sh` to reproduce all results.
To ensure fair and reproducible comparisons, we rely on the official implementations released by the original authors for all baseline methods:
- Core-sets: Official implementation
- FreeSel: Official implementation
- ActiveFT: Official implementation
- BiLAF: Official implementation
If you have any questions or suggestions, feel free to contact:
- Xiaona Zhou ([email protected])
Alternatively, open an issue in the repository's Issues tab.
If you use this code for your research, please cite our paper:
@inproceedings{zhou2025hierarchical,
title={Hierarchical Dataset Selection for High-Quality Data Sharing},
author={Zhou, Xiaona and Zeng, Yingyan and Jin, Ran and Lourentzou, Ismini},
booktitle={AAAI Conference on Artificial Intelligence},
year={2026}
}

