mmrotate-SSOOD is a simplified, modular, and flexible framework designed specifically for Semi-Supervised Oriented Object Detection (SSOOD) tasks.
We now support [Denser Teacher (TCSVT 2025)](https://ieeexplore.ieee.org/document/10802941)!
Our framework offers the following advantages:
- Simplified Implementation: Implementing custom semi-supervised detection methods is straightforward. You only need to modify two key functions, allowing faster experimentation and development.
- Flexible Data Augmentation: Built upon MMCV, our framework supports seamless integration of custom data augmentation techniques. We also provide ready-to-use augmentation configs for fair comparisons with prior works.
- Dataset Splitting Tools: Easily split your dataset into labeled and unlabeled subsets using our user-friendly tools, saving time on data preparation for semi-supervised learning.
- Extensible Method Support: Currently, we support [Denser Teacher (TCSVT 2025)](https://ieeexplore.ieee.org/document/10802941), [SOOD (CVPR 2023)](https://arxiv.org/abs/2304.04515), and [Dense Teacher (ECCV 2022)](https://arxiv.org/abs/2207.02541).
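To illustrate the "two key functions" extension surface mentioned above, the toy sketch below shows the idea in plain Python: a base detector whose training step only calls a pseudo-label hook and an unsupervised-loss hook, so a new method overrides just those two. The class and method names here (`SemiBaseDetector`, `get_pseudo_instances`, `loss_by_pseudo_instances`) are illustrative assumptions, not the repo's actual API:

```python
# Toy sketch of a two-hook semi-supervised detector.
# All names are hypothetical; see the repo's base class for the real API.

class SemiBaseDetector:
    """Base class: the unsupervised training step only calls two hooks."""

    def train_step(self, unlabeled_preds):
        pseudo = self.get_pseudo_instances(unlabeled_preds)
        return self.loss_by_pseudo_instances(unlabeled_preds, pseudo)

    def get_pseudo_instances(self, preds):
        raise NotImplementedError

    def loss_by_pseudo_instances(self, preds, pseudo):
        raise NotImplementedError


class ThresholdMethod(SemiBaseDetector):
    """A custom method = just these two functions."""

    def __init__(self, score_thr=0.5):
        self.score_thr = score_thr

    def get_pseudo_instances(self, preds):
        # Keep teacher predictions above a confidence threshold.
        return [p for p in preds if p["score"] >= self.score_thr]

    def loss_by_pseudo_instances(self, preds, pseudo):
        # Dummy loss: average (1 - score) over retained pseudo-labels.
        if not pseudo:
            return 0.0
        return sum(1.0 - p["score"] for p in pseudo) / len(pseudo)


preds = [{"score": 0.9}, {"score": 0.3}, {"score": 0.7}]
loss = ThresholdMethod(score_thr=0.5).train_step(preds)
```

In the real framework the hooks operate on detector feature maps and rotated boxes rather than score dicts, but the division of labor is the same.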
We plan to continually update this framework to include more state-of-the-art semi-supervised learning methods. Here are some of the methods we aim to support in future updates:
- [Soft Teacher (ICCV 2021)](https://arxiv.org/abs/2106.09018) (End-to-End Semi-Supervised Object Detection with Soft Teacher): a well-known semi-supervised object detection method that uses a teacher-student architecture with pseudo-label refinement for better performance.
- [ARSL (CVPR 2023)](https://arxiv.org/abs/2303.14960) (Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection): focuses on resolving ambiguities in semi-supervised dense object detection.
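The supported and planned methods all share the same teacher-student backbone, in which the teacher's weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update, over flat lists of floats rather than real model state dicts, just to show the arithmetic:

```python
def ema_update(teacher_weights, student_weights, momentum=0.999):
    """Blend student weights into the teacher: t = m * t + (1 - m) * s."""
    return [t * momentum + s * (1.0 - momentum)
            for t, s in zip(teacher_weights, student_weights)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, momentum=0.9)
# teacher is now approximately [0.1, 1.0]
```

With the usual momentum near 0.999, the teacher changes slowly, which is what makes its pseudo-labels stable enough to supervise the student.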
To ensure compatibility, please install the following dependencies:
- PyTorch: 1.13.x. We recommend PyTorch 1.13.x, as all modules have been tested with this version. Installation guide: pytorch.org.
- MMDetection: 3.0.0. MMDetection serves as the base object detection framework. Refer to the MMDetection documentation for installation instructions.
- MMPretrain: 1.1.0. MMPretrain is used for pretraining models. Please follow the MMPretrain installation guide.
- CUDA Compatibility: Make sure all dependencies match your system's CUDA version for proper GPU acceleration. Check the PyTorch documentation for compatibility.
- Virtual Environment: For a cleaner setup, we highly recommend using a virtual environment such as `conda` or `venv`.
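Version mismatches among these pins are the most common setup failure. As a small sanity-check aid, the stdlib-only helper below (a sketch, not part of the repo) tests whether an installed version string satisfies a pinned prefix such as `1.13`:

```python
def matches_pin(installed: str, pin: str) -> bool:
    """True if `installed` equals `pin` or sits under it (e.g. 1.13.1 under 1.13)."""
    inst = installed.split(".")
    want = pin.split(".")
    return inst[:len(want)] == want

# Pins from the compatibility list above.
pins = {"torch": "1.13", "mmcv": "2.0.0", "mmdet": "3.0.0", "mmpretrain": "1.1.0"}

print(matches_pin("1.13.1", pins["torch"]))  # True: any 1.13.x is fine
print(matches_pin("2.0.1", pins["torch"]))   # False: PyTorch 2.x is untested here
```

You can feed it the strings from `torch.__version__`, `mmdet.__version__`, etc. in your environment.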
Here’s a quick guide to set up the environment:

```shell
# Create a virtual environment
conda create -n ssood python=3.10
conda activate ssood

# Install PyTorch (pick the command matching your CUDA version from pytorch.org)

# Install mmdet
pip install -U openmim
mim install mmengine
mim install "mmcv==2.0.0"
mim install mmdet==3.0.0

# Install mmpretrain
mim install mmpretrain==1.1.0

pip install future tensorboard
pip install -v -e .
```

Please refer to `data_preparation.md` to prepare the original data. After that, the data folder should be organized as follows:
```
├── data
│   ├── split_ss_dota1_5
│   │   ├── train
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── val
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── test
│   │   │   ├── images
│   │   │   ├── annfiles
```
For the partially labeled setting, we split DOTA-v1.5's train set using the authors' released split data lists and split tool:

```shell
python tools/SSOD/split_dota1.5_via_lists.py
```
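Conceptually, the split tool partitions the train set by filename according to the released lists. A minimal sketch of that logic in plain Python (`labeled_list` stands in for one of the authors' released data lists; the real script also copies the paired annotation files under `annfiles`):

```python
def split_by_list(all_images, labeled_list):
    """Partition image names into (labeled, unlabeled) given a released list."""
    labeled_set = set(labeled_list)
    labeled = [name for name in all_images if name in labeled_set]
    unlabeled = [name for name in all_images if name not in labeled_set]
    return labeled, unlabeled

images = ["P0001.png", "P0002.png", "P0003.png", "P0004.png"]
labeled, unlabeled = split_by_list(images, ["P0002.png", "P0004.png"])
# labeled -> ['P0002.png', 'P0004.png'], unlabeled -> ['P0001.png', 'P0003.png']
```

Using the released lists rather than a random split is what makes results directly comparable with prior work.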
For the fully labeled setting, we use the DOTA-v1.5 train set as the labeled set and the DOTA-v1.5 test set as the unlabeled set.
After that, the data folder should be organized as follows:
```
├── data
│   ├── split_ss_dota1_5
│   │   ├── train
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_10_labeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_10_unlabeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_20_labeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_20_unlabeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_30_labeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── train_30_unlabeled
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── val
│   │   │   ├── images
│   │   │   ├── annfiles
│   │   ├── test
│   │   │   ├── images
│   │   │   ├── annfiles
```
For DOTA-v1.0, the preparation is the same as for DOTA-v1.5.
For Denser Teacher
- To train Denser Teacher with 10% labeled data, run:

```shell
CUDA_VISIBLE_DEVICES=0,1 PORT=29501 bash ./tools/dist_train.sh configs/rotated_denser_teacher/rotated-denser-teacher_2xb3-180000k_semi-0.1-dotav1.5.py 2
```
DOTA-v1.5
SOOD
| Backbone | Setting | mAP50 | mAP50 in Paper | Mem (GB) | Config |
|---|---|---|---|---|---|
| ResNet50 (1024,1024,200) | 10% | 47.93 | 48.63 | 8.45 | config |
| ResNet50 (1024,1024,200) | 20% | 55.58 | - | - | config |
| ResNet50 (1024,1024,200) | 30% | 59.23 | - | - | config |
Dense Teacher
| Backbone | Setting | mAP50 | mAP50 in Paper | Mem (GB) | Config |
|---|---|---|---|---|---|
| ResNet50 (1024,1024,200) | 10% | 47.10 | - | - | config |
| ResNet50 (1024,1024,200) | 20% | - | - | - | config |
| ResNet50 (1024,1024,200) | 30% | - | - | - | config |
Denser Teacher
| Backbone | Setting | mAP50 | Mem (GB) | Config |
|---|---|---|---|---|
| ResNet50 (1024,1024,200) | 1% | 20.98 | - | config |
| ResNet50 (1024,1024,200) | 5% | 43.40 | - | config |
| ResNet50 (1024,1024,200) | 10% | 52.05 | - | config |
| ResNet50 (1024,1024,200) | 20% | 57.49 | - | config |
| ResNet50 (1024,1024,200) | 30% | 60.40 | - | config |
DOTA-v1.0
Denser Teacher
| Backbone | Setting | mAP50 | Mem (GB) | Config |
|---|---|---|---|---|
| ResNet50 (1024,1024,200) | 1% | 19.45 | - | config |
| ResNet50 (1024,1024,200) | 5% | 45.84 | - | config |
| ResNet50 (1024,1024,200) | 10% | 52.62 | - | config |
| ResNet50 (1024,1024,200) | 20% | 59.20 | - | config |
| ResNet50 (1024,1024,200) | 30% | 62.82 | - | config |
This repo is built upon mmrotate. The implementation of SOOD is based on the official SOOD codebase, and the implementation of Dense Teacher is based on the official Dense Teacher codebase. Thanks for their open-source code.