SleepyMorpheus/CIL-2025
CapNet: Composite U-Net with Learnable Sampling for Depth Estimation

Note: the trained models provided as weights were trained on one specific train/test split of the clusters. Validating them against a different split will produce significantly distorted results, because the original training clusters will likely end up in the new validation set. Refer to the results in the report, or retrain the models accordingly.

Precise depth estimation is essential for understanding 3D scenes. In this work, we present CapNet, a compositional architecture for monocular depth estimation. By combining a pretrained model with advanced sampling strategies, we increase accuracy and outperform both standard scaling methods and U-Net architectures. This yields an approach for situations where a pretrained model with mismatched dimensions is available but simple scaling does not achieve sufficient accuracy.

Code

It's easy to try CapNet yourself. Follow these steps:

Dataset Preparation

Download the dataset from Kaggle. Our program expects the following folder structure:

data/
├── test/
│   ├── test_000000_rgb.png
│   ├── test_000001_rgb.png
│   └── ...
├── train/
│   ├── sample_000000_rgb.png
│   ├── sample_000000_depth.npy
│   ├── sample_000001_rgb.png
│   ├── sample_000001_depth.npy
│   └── ...
├── test_list.txt
└── train_list.txt
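To catch a misplaced download early, a small sanity check can verify this layout before training. The sketch below is illustrative and not part of the repository; `check_data_layout` and the `data` root path are assumptions:

```python
from pathlib import Path

def check_data_layout(root: str) -> list[str]:
    """Return the expected entries that are missing under the dataset root."""
    base = Path(root)
    expected = [
        base / "train",
        base / "test",
        base / "train_list.txt",
        base / "test_list.txt",
    ]
    return [str(p) for p in expected if not p.exists()]

if __name__ == "__main__":
    missing = check_data_layout("data")
    if missing:
        print("Missing entries:", *missing, sep="\n  ")
    else:
        print("Dataset layout looks good.")
```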

Set Up the Environment

  1. Install Python 3.12
  2. (optional) use venv
  3. Install dependencies: pip install -r requirements.txt
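The steps above can be sketched as shell commands (assuming `python3.12` is on your PATH and a POSIX shell; adjust for your platform):

```shell
# 1. verify the interpreter version
python3.12 --version

# 2. (optional) create and activate a virtual environment
python3.12 -m venv .venv
source .venv/bin/activate

# 3. install the pinned dependencies
pip install -r requirements.txt
```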

Cluster Data

Since the dataset contains many very similar images, we first cluster them in order to create a clean train/validation split. To do this, run:

python create_cluster.py --data-dir your/path/data/train --output-dir your/path/to/output

Note: this code is best run on a GPU. It checks whether an accelerator is available and automatically runs on it.
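The accelerator check described above typically boils down to a few lines of PyTorch. A minimal sketch, assuming the standard device-query APIs; `pick_device` is an illustrative name, not a function from this repository:

```python
import torch

def pick_device() -> torch.device:
    """Return the best available compute device: CUDA, then Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
print(f"Running on {device}")
```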

Train Models

To train the models use the following command:

python main.py [ARGS]

There are many arguments that can be passed to the code:

Required Arguments

  • data-dir: Path to the data directory. In the example above, the path must end with data
  • model: Specify which model to use. Available models are: HRNetPixelShuffle, OmniNaiveScaling, ResNetNaiveUpsample, CapNetLite, CapNet, CapNetMax
  • cluster-file: The path to the cluster file generated in the previous step.

Optional Arguments

  • load-model: Path to a pre-trained model to load. The model must match the specified --model architecture.
  • output-dir: Path to the output directory where results and predictions will be saved.

Hyperparameters:

  • seed: Random seed for reproducibility, used for both torch and random
  • batch-size: Batch size for training and inference
  • learning-rate: Learning rate for the optimizer
  • weight-decay: Weight decay for the optimizer
  • num-epochs: Number of epochs for training
  • num-workers: Number of workers for data loading
  • training-size: Fraction of training data to use for training (0.0 to 1.0)
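Putting the arguments together, a training run might look like the following. This is a hedged example: the `--` flag spelling follows the `--model` form mentioned above, and the cluster-file and output paths are placeholders for whatever `create_cluster.py` wrote on your machine, not files shipped with the repository:

```shell
python main.py \
  --data-dir data \
  --model CapNet \
  --cluster-file path/to/cluster_output \
  --output-dir runs/capnet \
  --seed 42 \
  --batch-size 8 \
  --num-epochs 30
```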
