
ChenHongruixuan/AnyDisasterMapping


🌍Any Disaster Mapping

Earth Observation for Disaster Mapping: Benchmarks, Methods, Challenges and Future Perspectives

Hongruixuan Chen1,†, Jian Song1,†, Weihao Xuan2,1,†, Junjue Wang2,†, Heli Qi1, Zeqi Zhou3, Pengyu Dai1,2
Olivier Dietrich4, Erika Gutierrez5, Lars Bromly5, Edoardo Nemni6, Yafei Ou1, Jie Zhao7, Zhuo Zheng8, Yonghao Xu9
Ronny Hänsch10, Wenzhe Jiao11, Marco Chini12, Claudio Persello13, Junshi Xia1, Shijian Lu14, Lixin Wang15, Zhe Zhu16
Evan Shelhamer17, Jocelyn Chanussot18, Konrad Schindler4, Naoto Yokoya2,1

† Equal contribution
1 RIKEN AIP, 2 The University of Tokyo, 3 Brown University, 4 ETH Zurich, 5 United Nations Satellite Centre
6 Barcelona School of Economics, 7 Technical University of Munich, 8 Stanford University, 9 Linköping University
10 German Aerospace Center (DLR), 11 Texas A&M University, 12 Luxembourg Institute of Science and Technology
13 University of Twente, 14 Nanyang Technological University, 15 Indiana University Indianapolis
16 The University of British Columbia, 17 University of Connecticut, 18 Université Grenoble Alpes

Paper | Installation | Dataset Preparation | Pretrained Weights | Quick Start | Repo Layout

🔭Overview

Any Disaster Mapping is the official repository for our review paper on Earth observation (EO)-based disaster mapping.

One of our key motivations is that current disaster mapping research is highly fragmented: benchmarks, tasks, and model implementations are often inconsistent across papers, making fair evaluation, reproduction, and further development unnecessarily difficult.

This repo unifies widely used disaster mapping benchmarks and representative deep learning models across major research directions, and provides a consistent training and evaluation pipeline for:

  • Infrastructure damage assessment
  • Flood mapping
  • Landslide segmentation
  • Wildfire analysis

It is designed to help researchers:

  • Reproduce the results reported in our paper
  • Evaluate models under a unified protocol
  • Use strong baselines out of the box
  • Build and test their own improvements with minimal engineering overhead

🛠️Installation

Base environment and optional model-specific extras.

# NOTE: --index-url should match your local CUDA toolkit version, which is also needed to compile the ChangeMamba kernels (cu126 is just an example)
pip install torch torchvision xformers --index-url https://download.pytorch.org/whl/cu126
pip install -e .
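If you are unsure which `cuXY` suffix matches your machine, the following sketch derives it from `nvcc` (the parsing and the `cpu` fallback are our convenience here, not part of this repo; double-check the result against the official PyTorch install matrix):

```shell
# Derive the cuXY wheel-index suffix (e.g. cu126) from the local CUDA toolkit,
# falling back to "cpu" when nvcc is not on PATH.
CUDA_VER=$(nvcc --version 2>/dev/null | sed -n 's/.*release \([0-9][0-9]*\)\.\([0-9][0-9]*\).*/cu\1\2/p')
echo "Using wheel index suffix: ${CUDA_VER:-cpu}"
# Then install against the matching index:
# pip install torch torchvision xformers --index-url "https://download.pytorch.org/whl/${CUDA_VER:-cpu}"
```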

Some models require optional extras:

  • ChangeMamba selective scan kernel:
    # run `conda install -c conda-forge gcc=13 gxx=13 -y` if you run into GCC build errors
    cd src/models/ChangeMamba/kernels/selective_scan
    pip install . --no-build-isolation
  • Local pretrained checkpoints under pretrained_weight/ for model families such as SegFormer, HRNet, SAM/SAM2, DINOv3, HyperSigma, SkySense, SpectralGPT, and ChangeMamba. See Pretrained Weights below.

🧪Dataset Preparation

Dataset preparation guides are organized by disaster domain under scripts/data_prep/.

📦Pretrained Weights

Recommended local checkpoint layout for the supported model zoo.

Create the local checkpoint directory first:

mkdir -p pretrained_weight

Direct Downloads

# pretrain-vit-base-e199.pth
wget -O pretrained_weight/pretrain-vit-base-e199.pth \
  https://zenodo.org/records/7338613/files/pretrain-vit-base-e199.pth

# SpectralGPT+.pth
wget -O "pretrained_weight/SpectralGPT+.pth" \
  "https://zenodo.org/records/8412455/files/SpectralGPT+.pth?download=1"

# spec-vit-base-ultra-checkpoint-1599.pth
wget -O pretrained_weight/spec-vit-base-ultra-checkpoint-1599.pth \
  https://huggingface.co/WHU-Sigma/HyperSIGMA/resolve/main/spec-vit-base-ultra-checkpoint-1599.pth

# vssm_tiny_0230_ckpt_epoch_262.pth
huggingface-cli download UTokyo-Yokoya-Lab/AnyDisaster-Pretrained_Weight \
  vssm_tiny_0230_ckpt_epoch_262.pth --local-dir pretrained_weight --local-dir-use-symlinks False

Additional Upstream Checkpoints

After completing all downloads, pretrained_weight/ should contain:

pretrained_weight/
├── dinov3_vitb16_pretrain_lvd1689m-73cec8be.pth
├── dinov3_vitl16_pretrain_lvd1689m-8aa4cbdd.pth
├── dinov3_vitl16_pretrain_sat493m-eadcf0ff.pth
├── HSI_spatial_checkpoint-1600.pth
├── mit_b0.pth
├── mit_b1.pth
├── mit_b2.pth
├── mit_b3.pth
├── mit_b4.pth
├── mit_b5.pth
├── pretrain-vit-base-e199.pth
├── sam2.1_hiera_base_plus.pt
├── sam2.1_hiera_small.pt
├── sam_vit_b_01ec64.pth
├── sam_vit_l_0b3195.pth
├── skysense_model_backbone_hr.pth
├── spec-vit-base-ultra-checkpoint-1599.pth
├── SpectralGPT+.pth
└── vssm_tiny_0230_ckpt_epoch_262.pth
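A quick way to confirm the layout above is complete (this check script and its `MISSING` counter are our convenience, not part of the repo):

```shell
# Verify that every checkpoint listed above is present under pretrained_weight/.
WEIGHT_DIR="pretrained_weight"
MISSING=0
for f in \
  dinov3_vitb16_pretrain_lvd1689m-73cec8be.pth \
  dinov3_vitl16_pretrain_lvd1689m-8aa4cbdd.pth \
  dinov3_vitl16_pretrain_sat493m-eadcf0ff.pth \
  HSI_spatial_checkpoint-1600.pth \
  mit_b0.pth mit_b1.pth mit_b2.pth mit_b3.pth mit_b4.pth mit_b5.pth \
  pretrain-vit-base-e199.pth \
  sam2.1_hiera_base_plus.pt sam2.1_hiera_small.pt \
  sam_vit_b_01ec64.pth sam_vit_l_0b3195.pth \
  skysense_model_backbone_hr.pth \
  spec-vit-base-ultra-checkpoint-1599.pth \
  SpectralGPT+.pth \
  vssm_tiny_0230_ckpt_epoch_262.pth
do
  [ -f "$WEIGHT_DIR/$f" ] || { echo "missing: $f"; MISSING=$((MISSING + 1)); }
done
echo "$MISSING checkpoint(s) missing"
```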

🚀 Quick Start

Minimal training and evaluation commands.

Train

Train with a YAML config:

python train.py --config configs/infra/xbd/unet.yaml

Evaluate

Evaluate an experiment directory:

python test.py --exp_path results/xbd/unet
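The `--exp_path` above mirrors the training config (`configs/infra/xbd/unet.yaml` → `results/xbd/unet`). Assuming that `results/<dataset>/<model>` convention holds (it is inferred from the example, not documented by the repo), the train/evaluate pairing can be scripted:

```shell
# Derive the experiment directory from the config path, assuming the
# results/<dataset>/<model> convention inferred from the example above.
CONFIG=configs/infra/xbd/unet.yaml
DATASET=$(basename "$(dirname "$CONFIG")")   # xbd
MODEL=$(basename "$CONFIG" .yaml)            # unet
EXP_PATH="results/$DATASET/$MODEL"
echo "python train.py --config $CONFIG"
echo "python test.py --exp_path $EXP_PATH"
```

Pipe the output to `sh` to run both steps, or adapt the loop body if your configs write results elsewhere.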

🗂️Repository Layout

Main directories and what they are responsible for:

  • src/core/: trainer, config loader, registry, augmentation, metrics
  • src/tasks/: task handlers for segmentation, change detection, and semantic change detection
  • src/datasets/: dataset adapters and runtime data contracts
  • src/models/: model wrappers and vendored third-party implementations
  • configs/: experiment configs grouped by domain and dataset
  • scripts/data_prep/: dataset preparation guides and helper scripts
  • docs/: architecture notes and extension guidance

🏗️Architecture And Extension

Internal runtime design and extension entry points.

📜Reference

If this repo contributes to your research, please consider citing our paper and giving it a ⭐️ :)


🙋Q & A

For any questions, please feel free to open an issue or send an email to qschrx@gmail.com.
