3D Gaussian Splatting (3DGS) struggles in few-shot scenarios, where its standard adaptive density control (ADC) can lead to overfitting and bloated reconstructions. While state-of-the-art methods like FSGS improve quality, they often do so by significantly increasing the primitive count. This paper presents a framework that revises the core 3DGS optimization to prioritize efficiency. We replace the standard positional gradient heuristic with a novel densification trigger that uses the opacity gradient as a lightweight proxy for rendering error. We find this aggressive densification is only effective when paired with a more conservative pruning schedule, which prevents destructive optimization cycles. Combined with a standard depth-correlation loss for geometric guidance, our framework demonstrates a fundamental improvement in efficiency. On the 3-view LLFF dataset, our model is over 40% more compact (32k vs. 57k primitives) than FSGS, and on the Mip-NeRF 360 dataset, it achieves a reduction of approximately 70%. This dramatic gain in compactness is achieved with a modest trade-off in reconstruction metrics, establishing a new state-of-the-art on the quality-vs-efficiency Pareto frontier for few-shot view synthesis.
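As a brief intuition for why the opacity gradient tracks rendering error (a sketch in our own notation, not an equation from the paper): under standard alpha compositing the pixel color is

$$C(p) = \sum_i c_i \, \alpha_i(p) \prod_{j<i} \bigl(1 - \alpha_j(p)\bigr),$$

so by the chain rule the gradient of the photometric loss with respect to a primitive's opacity,

$$\frac{\partial \mathcal{L}}{\partial o_i} = \sum_{p} \frac{\partial \mathcal{L}}{\partial C(p)} \cdot \frac{\partial C(p)}{\partial \alpha_i(p)} \cdot \frac{\partial \alpha_i(p)}{\partial o_i},$$

accumulates the per-pixel residual over every pixel the primitive touches, weighted by that primitive's contribution. Its magnitude is therefore large precisely for primitives that render into high-error regions, which is the quantity the densification trigger thresholds.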
- Error-Driven Densification: Uses opacity gradients as a direct proxy for rendering error (see the sketch after this list)
- Conservative Pruning: Multi-stage pruning strategy preventing destructive optimization cycles
- State-of-the-Art Efficiency: 40-70% reduction in primitive count with minimal quality trade-off
- Real-time Rendering: Maintains real-time performance with significantly fewer primitives
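A minimal sketch of what an opacity-gradient densification trigger can look like. The helper names, buffers, and attributes below are illustrative assumptions rather than the repository's actual API; the real logic lives in `scene/gaussian_model.py` and `train.py`.

```python
import torch

def accumulate_opacity_error_stats(gaussians, visibility_mask):
    """Accumulate |dL/d(opacity)| per Gaussian after loss.backward().

    Assumes `gaussians.opacity` is the learnable (N, 1) opacity tensor and
    `gaussians.error_accum` / `gaussians.denom` are running buffers that are
    reset after every densification step (hypothetical names).
    """
    grad = gaussians.opacity.grad  # filled in by the rendering loss backward pass
    if grad is None:
        return
    with torch.no_grad():
        gaussians.error_accum[visibility_mask] += grad[visibility_mask].abs()
        gaussians.denom[visibility_mask] += 1

def select_for_densification(gaussians, error_densify_threshold=1e-4):
    """Boolean mask of Gaussians whose averaged opacity-gradient magnitude,
    used here as a proxy for rendering error, exceeds the threshold."""
    avg_error = gaussians.error_accum / gaussians.denom.clamp(min=1)
    return (avg_error > error_densify_threshold).squeeze(-1)
```

Gaussians selected this way would then be cloned or split exactly as in standard 3DGS densification; only the trigger statistic changes.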
# Clone the repository
git clone https://github.com/a-elrawy/opacity-gradient-splatting.git
cd opacity-gradient-splatting
# Create and activate virtual environment
python -m venv opacity-gradient-venv
source opacity-gradient-venv/bin/activate # Linux/Mac
# OR
.\opacity-gradient-venv\Scripts\activate # Windows
# Install PyTorch with CUDA 12.1 support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Install core dependencies
pip install plyfile tqdm matplotlib torchmetrics timm opencv-python imageio open3d
# Install custom submodules
pip install -q submodules/diff-gaussian-rasterization-confidence/
pip install -q submodules/simple-knn/

Requirements: CUDA 11.6+, Python 3.8+
# Download LLFF dataset
mkdir -p dataset
cd dataset
gdown 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g
# Run COLMAP preprocessing
python tools/colmap_llff.py

# Download MipNeRF-360 dataset
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip -d mipnerf360 360_v2.zip
# Run COLMAP preprocessing
python tools/colmap_360.py

# Alternatively, run COLMAP preprocessing inside a Docker container
docker run --gpus all -it --name fsgs_colmap --shm-size=32g -v /home:/home colmap/colmap:latest /bin/bash
apt-get install pip
pip install numpy
python3 tools/colmap_llff.py

Note: Preprocessed point clouds are available here for convenience.
# LLFF dataset (3 views) - Our main contribution
python train.py --source_path dataset/nerf_llff_data/horns --model_path output/horns --eval --n_views 3 --use_error_densification --error_densify_threshold 0.0001 --prune_from_iter 2000 --prune_threshold 0.001
# MipNeRF-360 dataset (24 views)
python train.py --source_path dataset/mipnerf360/garden --model_path output/garden --eval --n_views 24 --use_error_densification --error_densify_threshold 0.0001 --depth_pseudo_weight 0.03

# Render images
python render.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000
# Render video
python render.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000 --video --fps 30

# Compute metrics (PSNR, SSIM, LPIPS)
python metrics.py --source_path dataset/nerf_llff_data/horns/ --model_path output/horns --iteration 10000

opacity-gradient-splatting/
├── scene/ # Scene representation and Gaussian model
│ ├── gaussian_model.py # Core Gaussian model with error-driven ADC
│ └── ...
├── gaussian_renderer/ # Rendering pipeline
├── utils/ # Utility functions
│ ├── depth_utils.py # Depth estimation and correlation loss
│ └── loss_utils.py # Loss functions
├── arguments/ # Configuration parameters
├── train.py # Main training script
├── render.py # Rendering script
├── metrics.py # Evaluation metrics
└── tools/ # Data preparation tools
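The geometric guidance mentioned in the abstract is implemented in utils/depth_utils.py. As a rough illustration, a depth-correlation loss of the kind commonly used in few-shot splatting (e.g., an FSGS-style Pearson correlation against a monocular depth prior) can be written as below; this is a sketch under that assumption, not the repository's actual implementation.

```python
import torch

def depth_correlation_loss(rendered_depth, mono_depth, eps=1e-8):
    """Negative Pearson correlation between rendered depth and a monocular
    depth prior. Invariant to a positive scale and a shift, so the prior only
    needs to be consistent up to an affine transform. Sketch only; the actual
    loss in utils/depth_utils.py may differ (e.g., patch-wise terms, masking).
    """
    d1 = rendered_depth.reshape(-1).float()
    d2 = mono_depth.reshape(-1).float()
    d1 = d1 - d1.mean()
    d2 = d2 - d2.mean()
    corr = (d1 * d2).sum() / (d1.norm() * d2.norm() + eps)
    return 1.0 - corr  # 0 when the two depth maps are perfectly correlated
```

During training this term would be weighted by --depth_pseudo_weight (0.03 in the MipNeRF-360 command above).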
- --use_error_densification: Enable opacity-gradient-based densification
- --error_densify_threshold: Threshold for error-driven densification (default: 0.0001)
- --prune_from_iter: Iteration at which pruning starts (default: 2000, vs. 500 in standard 3DGS)
- --prune_threshold: Opacity threshold for pruning (default: 0.001, vs. 0.005 in standard 3DGS)
- --max_gaussians: Maximum number of primitives (default: 1,000,000)
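A minimal sketch of how these flags could fit together in the densify-and-prune step. It is illustrative only: the `opt` attribute names mirror the flags above, the Gaussian-model methods are hypothetical, and `select_for_densification` is the helper from the earlier sketch.

```python
def densify_and_prune(gaussians, iteration, opt):
    """Error-driven densification paired with a conservative pruning schedule."""
    # Densify wherever the accumulated opacity gradient signals rendering error,
    # but never grow past the primitive budget.
    if gaussians.num_points < opt.max_gaussians:          # default: 1,000,000
        mask = select_for_densification(gaussians, opt.error_densify_threshold)
        gaussians.clone_and_split(mask)                   # hypothetical densification step

    # Prune later (iteration 2000 vs. 500) and more leniently (0.001 vs. 0.005)
    # than standard 3DGS, so freshly spawned Gaussians are not removed before
    # the optimizer has had a chance to make use of them.
    if iteration >= opt.prune_from_iter:
        # gaussians.get_opacity is assumed to be the activated opacity in [0, 1]
        prune_mask = (gaussians.get_opacity < opt.prune_threshold).squeeze(-1)
        gaussians.prune_points(prune_mask)
```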
This work builds upon:
- FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting
- 3D Gaussian Splatting
- DreamGaussian
- SparseNeRF
- MipNeRF-360
This project is built upon the Gaussian Splatting framework and follows the same licensing terms. See LICENSE.md for details.