
Opacity-Gradient Driven Density Control for Compact and Efficient Few-Shot 3D Gaussian Splatting


Abstract

3D Gaussian Splatting (3DGS) struggles in few-shot scenarios, where its standard adaptive density control (ADC) can lead to overfitting and bloated reconstructions. While state-of-the-art methods like FSGS improve quality, they often do so by significantly increasing the primitive count. This paper presents a framework that revises the core 3DGS optimization to prioritize efficiency. We replace the standard positional gradient heuristic with a novel densification trigger that uses the opacity gradient as a lightweight proxy for rendering error. We find this aggressive densification is only effective when paired with a more conservative pruning schedule, which prevents destructive optimization cycles. Combined with a standard depth-correlation loss for geometric guidance, our framework demonstrates a fundamental improvement in efficiency. On the 3-view LLFF dataset, our model is over 40% more compact (32k vs. 57k primitives) than FSGS, and on the Mip-NeRF 360 dataset, it achieves a reduction of approximately 70%. This dramatic gain in compactness is achieved with a modest trade-off in reconstruction metrics, establishing a new state-of-the-art on the quality-vs-efficiency Pareto frontier for few-shot view synthesis.

Key Features

  • Error-Driven Densification: Uses opacity gradients as a direct proxy for rendering error (see the sketch after this list)
  • Conservative Pruning: Multi-stage pruning schedule that prevents destructive optimization cycles
  • State-of-the-Art Efficiency: 40-70% fewer primitives than FSGS with a minimal quality trade-off
  • Real-Time Rendering: Maintains real-time performance with significantly fewer primitives
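
Below is a minimal sketch of the opacity-gradient trigger in PyTorch. The names (opacity, select_for_densification, densify) are illustrative, not the repository's API; the actual implementation lives in scene/gaussian_model.py.

import torch

def select_for_densification(opacity: torch.Tensor, threshold: float = 1e-4) -> torch.Tensor:
    """Flag Gaussians whose opacity-gradient magnitude exceeds the threshold.

    opacity: (N, 1) leaf tensor with requires_grad=True; call after loss.backward().
    """
    # |dL/d(opacity)| serves as a cheap per-primitive proxy for rendering error:
    # primitives whose opacity the optimizer pushes hardest are the ones the
    # current reconstruction explains worst.
    grad_mag = opacity.grad.detach().abs().squeeze(-1)  # (N,)
    return grad_mag > threshold

# Hypothetical use in a training loop, after loss.backward():
#   mask = select_for_densification(gaussians.opacity, args.error_densify_threshold)
#   gaussians.densify(mask)  # clone/split flagged primitives, as in standard 3DGS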

Installation

# Clone the repository
git clone https://github.com/a-elrawy/opacity-gradient-splatting.git
cd opacity-gradient-splatting

# Create and activate a virtual environment
python -m venv opacity-gradient-venv
source opacity-gradient-venv/bin/activate  # Linux/Mac
# OR
.\opacity-gradient-venv\Scripts\activate  # Windows

# Install PyTorch with CUDA 12.1 support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Install core dependencies
pip install plyfile tqdm matplotlib torchmetrics timm opencv-python imageio open3d

# Install custom submodules
pip install -q submodules/diff-gaussian-rasterization-confidence/
pip install -q submodules/simple-knn/

Requirements: Python 3.8+, CUDA 11.6+. The PyTorch command above installs CUDA 12.1 wheels; if your driver is older, pick the matching index URL from pytorch.org.

Data Preparation

LLFF Dataset

# Download LLFF dataset (requires gdown: pip install gdown)
mkdir -p dataset
cd dataset
gdown 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g

# Run COLMAP preprocessing
python tools/colmap_llff.py

MipNeRF-360 Dataset

# Download MipNeRF-360 dataset
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip -d mipnerf360 360_v2.zip

# Run COLMAP preprocessing
python tools/colmap_360.py

Docker Alternative (if COLMAP installation fails)

docker run --gpus all -it --name fsgs_colmap --shm-size=32g -v /home:/home colmap/colmap:latest /bin/bash
apt-get update && apt-get install -y python3-pip
pip3 install numpy
python3 tools/colmap_llff.py

Note: Preprocessed point clouds are available here for convenience.

Quick Start

Training

# LLFF dataset (3 views) - our main configuration
python train.py --source_path dataset/nerf_llff_data/horns --model_path output/horns \
    --eval --n_views 3 --use_error_densification --error_densify_threshold 0.0001 \
    --prune_from_iter 2000 --prune_threshold 0.001

# MipNeRF-360 dataset (24 views)
python train.py --source_path dataset/mipnerf360/garden --model_path output/garden \
    --eval --n_views 24 --use_error_densification --error_densify_threshold 0.0001 \
    --depth_pseudo_weight 0.03
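
The depth term weighted by --depth_pseudo_weight is the standard depth-correlation loss mentioned in the abstract. For reference, a minimal sketch of the Pearson-correlation form used by FSGS-style methods, assuming a rendered depth map and a monocular depth estimate at the same resolution (the repository's actual loss lives in utils/depth_utils.py):

import torch

def pearson_depth_loss(rendered_depth: torch.Tensor, mono_depth: torch.Tensor) -> torch.Tensor:
    """1 - Pearson correlation between rendered and monocularly estimated depth.

    Correlation is invariant to scale and shift, so the monocular estimate
    only needs to be correct up to an affine transform.
    """
    x = rendered_depth.reshape(-1).float()
    y = mono_depth.reshape(-1).float()
    x = x - x.mean()
    y = y - y.mean()
    corr = (x * y).sum() / (x.norm() * y.norm() + 1e-8)
    return 1.0 - corr

# Hypothetical combination with the photometric loss:
#   loss = rgb_loss + args.depth_pseudo_weight * pearson_depth_loss(depth, mono_depth)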

Rendering

# Render images
python render.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000

# Render video
python render.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000 --video --fps 30

Evaluation

# Compute metrics (PSNR, SSIM, LPIPS)
python metrics.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000
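
As an independent sanity check, the same metrics can be recomputed with torchmetrics (installed above; LPIPS may need the torchmetrics image extras). This sketch assumes (N, 3, H, W) image tensors in [0, 1] and is not the repository's own evaluation code:

import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

def evaluate(pred: torch.Tensor, gt: torch.Tensor, device: str = "cuda") -> dict:
    # pred, gt: (N, 3, H, W) float tensors with values in [0, 1]
    psnr = PeakSignalNoiseRatio(data_range=1.0).to(device)
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0).to(device)
    # normalize=True tells LPIPS the inputs are in [0, 1] rather than [-1, 1]
    lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg", normalize=True).to(device)
    pred, gt = pred.to(device), gt.to(device)
    return {"PSNR": psnr(pred, gt).item(),
            "SSIM": ssim(pred, gt).item(),
            "LPIPS": lpips(pred, gt).item()}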

Code Structure

opacity-gradient-splatting/
├── scene/                   # Scene representation and Gaussian model
│   ├── gaussian_model.py    # Core Gaussian model with error-driven ADC
│   └── ...
├── gaussian_renderer/       # Rendering pipeline
├── utils/                   # Utility functions
│   ├── depth_utils.py       # Depth estimation and correlation loss
│   └── loss_utils.py        # Loss functions
├── arguments/               # Configuration parameters
├── train.py                 # Main training script
├── render.py                # Rendering script
├── metrics.py               # Evaluation metrics
└── tools/                   # Data preparation tools

Key Parameters

Error-Driven Densification

  • --use_error_densification: Enable opacity gradient-based densification
  • --error_densify_threshold: Threshold for error-driven densification (default: 0.0001)

Conservative Pruning

  • --prune_from_iter: Start pruning at iteration (default: 2000, vs. 500 in standard 3DGS)
  • --prune_threshold: Opacity threshold for pruning (default: 0.001, vs. 0.005 in standard 3DGS)
  • --max_gaussians: Maximum number of primitives (default: 1,000,000)
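
A minimal sketch of the conservative schedule these flags describe; the function and tensor names are illustrative, not the repository's API:

import torch

def conservative_prune_mask(opacity: torch.Tensor, iteration: int,
                            prune_from_iter: int = 2000,
                            prune_threshold: float = 0.001) -> torch.Tensor:
    """Boolean mask of primitives to remove under the conservative schedule."""
    n = opacity.shape[0]
    # Delay pruning (iteration 2000 vs. 500 in standard 3DGS) so freshly
    # densified Gaussians are not culled before they have a chance to converge.
    if iteration < prune_from_iter:
        return torch.zeros(n, dtype=torch.bool, device=opacity.device)
    # Prune only near-transparent primitives (0.001 vs. 0.005 in standard 3DGS).
    return opacity.detach().squeeze(-1) < prune_threshold

# --max_gaussians acts as a separate hard cap on the total primitive count.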

Acknowledgments

This work builds upon 3D Gaussian Splatting and FSGS.

License

This project is built upon the Gaussian Splatting framework and follows the same licensing terms. See LICENSE.md for details.
