Commit 182afc9
New release: issues fixed and integrated training speed acceleration.
1 parent a2a91d9 commit 182afc9

30 files changed: +296 −57 lines

.gitmodules

Lines changed: 3 additions & 0 deletions
```diff
@@ -8,3 +8,6 @@
 [submodule "SIBR_viewers"]
 	path = SIBR_viewers
 	url = https://gitlab.inria.fr/sibr/sibr_core.git
+[submodule "submodules/fused-ssim"]
+	path = submodules/fused-ssim
+	url = https://github.com/rahul-goel/fused-ssim.git
```
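The added entry registers fused-ssim as a git submodule. As a minimal illustration of the INI-like format these entries use (a sketch only; in practice `git submodule` should read this file, not hand-rolled code):

```python
# Illustrative parser for git's .gitmodules format (not a substitute for git).
def parse_gitmodules(text: str) -> dict:
    modules, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[submodule"):
            # Section header like: [submodule "submodules/fused-ssim"]
            current = line.split('"')[1]
            modules[current] = {}
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            modules[current][key.strip()] = value.strip()
    return modules

sample = '''[submodule "submodules/fused-ssim"]
\tpath = submodules/fused-ssim
\turl = https://github.com/rahul-goel/fused-ssim.git'''
mods = parse_gitmodules(sample)
```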

README.md

Lines changed: 38 additions & 8 deletions
```diff
@@ -37,12 +37,15 @@ This research was funded by the ERC Advanced grant FUNGRAPH No 788065. The autho
 
 ## NEW FEATURES !
 
-We have limited resources for maintaining and updating the code. However, we have added a few new features since the original release that are inspired by some of the excellent work many other researchers have been doing on 3DGS. We will be adding other features within the ability of our resources.
+We have limited resources for maintaining and updating the code. However, we have added a few new features since the original release that are inspired by some of the excellent work many other researchers have been doing on 3DGS. We will be adding other features within the ability of our resources.
 
-Update of August 2024:
-We have added/corrected the following features: [Depth regularization](#depth-regularization) for training, [anti-aliasing](#anti-aliasing) and [exposure compensation](#exposure-compensation). We have enhanced the SIBR real time viewer by correcting bugs and adding features in the [Top View](#sibr-top-view) that allows visualization of input and user cameras. Please note that it is currently not possible to use depth regularization with the training speed acceleration since they use different rasterizer versions.
+**Update of October 2024**: We integrated [training speed acceleration](#training-speed-acceleration) and made it compatible with [depth regularization](#depth-regularization), [anti-aliasing](#anti-aliasing) and [exposure compensation](#exposure-compensation).
 
-Update of Spring 2024:
+
+**Update of August 2024**:
+We have added/corrected the following features: [depth regularization](#depth-regularization) for training, [anti-aliasing](#anti-aliasing) and [exposure compensation](#exposure-compensation). We have enhanced the SIBR real time viewer by correcting bugs and adding features in the [Top View](#sibr-top-view) that allows visualization of input and user cameras. Please note that it is currently not possible to use depth regularization with the training speed acceleration since they use different rasterizer versions.
+
+**Update of Spring 2024**:
 Orange Labs has kindly added [OpenXR support](#openxr-support) for VR viewing.
 
 ## Step-by-step Tutorial
```
````diff
@@ -492,11 +495,34 @@ python convert.py -s <location> --skip_matching [--resize] #If not resizing, Ima
 </details>
 <br>
 
-### Depth regularization
+### Training speed acceleration
+
+We integrated the drop-in replacements from [Taming-3dgs](https://humansensinglab.github.io/taming-3dgs/)<sup>1</sup> with [fused ssim](https://github.com/rahul-goel/fused-ssim/tree/main) into the original codebase to speed up training times. Once installed, the accelerated rasterizer delivers a **$\times$ 1.6 training time speedup** using `--optimizer_type default` and a **$\times$ 2.7 training time speedup** using `--optimizer_type sparse_adam`.
+
+To get faster training times you must first install the accelerated rasterizer to your environment:
+
+```bash
+pip uninstall diff-gaussian-rasterization -y
+cd submodules/diff-gaussian-rasterization
+rm -r build
+git checkout 3dgs_accel
+pip install .
+```
+
+Then you can add the following parameter to use the sparse adam optimizer when running `train.py`:
+
+```bash
+--optimizer_type sparse_adam
+```
+
+*Note that this custom rasterizer has a different behaviour than the original version, for more details on training times please see [stats for training times](results.md/#training-times-comparisons)*.
+
+*1. Mallick and Goel, et al. 'Taming 3DGS: High-Quality Radiance Fields with Limited Resources'. SIGGRAPH Asia 2024 Conference Papers, 2024, https://doi.org/10.1145/3680528.3687694, [github](https://github.com/humansensinglab/taming-3dgs)*
 
 
-Two preprocessing steps are required to enable depth regularization when training a scene:
-To have better reconstructed scenes we use depth maps as priors during optimization with each input images. It works best on untextured parts ex: roads and can remove floaters. Several papers have used similar ideas to improve various aspects of 3DGS; (e.g. [DepthRegularizedGS](https://robot0321.github.io/DepthRegGS/index.html), [SparseGS](https://formycat.github.io/SparseGS-Real-Time-360-Sparse-View-Synthesis-using-Gaussian-Splatting/), [DNGaussian](https://fictionarry.github.io/DNGaussian/)). The depth regularization we integrated is that used in our [Hierarchical 3DGS](https://repo-sam.inria.fr/fungraph/hierarchical-3d-gaussians/) paper, but applied to the original 3DGS; for some scenes (e.g., the DeepBlending scenes) it improves quality significantly; for others it either makes a small difference or can even be worse. For details statistics please see here: [Stats for depth regularization](results.md).
+### Depth regularization
+
+To have better reconstructed scenes we use depth maps as priors during optimization with each input images. It works best on untextured parts ex: roads and can remove floaters. Several papers have used similar ideas to improve various aspects of 3DGS; (e.g. [DepthRegularizedGS](https://robot0321.github.io/DepthRegGS/index.html), [SparseGS](https://formycat.github.io/SparseGS-Real-Time-360-Sparse-View-Synthesis-using-Gaussian-Splatting/), [DNGaussian](https://fictionarry.github.io/DNGaussian/)). The depth regularization we integrated is that used in our [Hierarchical 3DGS](https://repo-sam.inria.fr/fungraph/hierarchical-3d-gaussians/) paper, but applied to the original 3DGS; for some scenes (e.g., the DeepBlending scenes) it improves quality significantly; for others it either makes a small difference or can even be worse. For example results showing the potential benefit and statistics on quality please see here: [Stats for depth regularization](results.md).
 
 When training on a synthetic dataset, depth maps can be produced and they do not require further processing to be used in our method. For real world datasets please do the following:
 1. Get depth maps for each input images, to this effect we suggest using [Depth anything v2](https://github.com/DepthAnything/Depth-Anything-V2?tab=readme-ov-file#usage).
````
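The depth prior enters training as a weighted L1 term; the `depth_l1_weight_init` (1.0) and `depth_l1_weight_final` (0.01) parameters in `arguments/__init__.py` suggest a weight that decays over training. A hedged sketch of such a term (the exact schedule and loss in the codebase may differ):

```python
import math

# Illustrative only, not the repository's exact code: a depth L1 loss whose
# weight decays log-linearly from depth_l1_weight_init (1.0) to
# depth_l1_weight_final (0.01) over the course of training.
def depth_l1_weight(step: int, max_steps: int,
                    w_init: float = 1.0, w_final: float = 0.01) -> float:
    t = min(max(step / max_steps, 0.0), 1.0)
    return math.exp((1.0 - t) * math.log(w_init) + t * math.log(w_final))

def depth_l1_loss(rendered_depth, prior_depth, step, max_steps):
    # Mean absolute difference between rendered depth and the depth-map prior.
    l1 = sum(abs(r - p) for r, p in zip(rendered_depth, prior_depth)) / len(rendered_depth)
    return depth_l1_weight(step, max_steps) * l1
```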
````diff
@@ -508,7 +534,11 @@ When training on a synthetic dataset, depth maps can be produced and they do not
 A new parameter should be set when training if you want to use depth regularization `-d <path to depth maps>`.
 
 ### Exposure compensation
-To compensate for exposure changes in the different input images we optimize an affine transformation for each image just as in [Hierarchical 3dgs](https://repo-sam.inria.fr/fungraph/hierarchical-3d-gaussians/). Add the following parameters to enable it:
+To compensate for exposure changes in the different input images we optimize an affine transformation for each image just as in [Hierarchical 3dgs](https://repo-sam.inria.fr/fungraph/hierarchical-3d-gaussians/).
+
+This can greatly improve reconstruction results for "in the wild" captures, e.g., with a smartphone when the exposure setting of the camera is not fixed. For example results showing the potential benefit and statistics on quality please see here: [Stats for exposure compensation](results.md).
+
+Add the following parameters to enable it:
 ```
 --exposure_lr_init 0.001 --exposure_lr_final 0.0001 --exposure_lr_delay_steps 5000 --exposure_lr_delay_mult 0.001 --train_test_exp
 ```
````
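As a rough illustration of what "an affine transformation for each image" can mean here (a sketch under the assumption of a learned per-image 3×4 color matrix applied to the rendered RGB; not the repository's exact code):

```python
import numpy as np

def apply_exposure(rgb: np.ndarray, exposure: np.ndarray) -> np.ndarray:
    """Apply a per-image affine color correction.

    rgb:      (3, H, W) rendered image.
    exposure: (3, 4) learned affine transform; out = A @ rgb + b.
    """
    A, b = exposure[:, :3], exposure[:, 3:]       # (3, 3) matrix and (3, 1) offset
    h, w = rgb.shape[1], rgb.shape[2]
    out = A @ rgb.reshape(3, -1) + b              # affine map applied per pixel
    return out.reshape(3, h, w)

# The identity transform leaves the rendering unchanged.
identity = np.hstack([np.eye(3), np.zeros((3, 1))])
img = np.random.rand(3, 4, 4)
corrected = apply_exposure(img, identity)
```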

arguments/__init__.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -96,6 +96,7 @@ def __init__(self, parser):
         self.depth_l1_weight_init = 1.0
         self.depth_l1_weight_final = 0.01
         self.random_background = False
+        self.optimizer_type = "default"
         super().__init__(parser, "Optimization Parameters")
 
 def get_combined_args(parser : ArgumentParser):
```
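The new attribute is picked up by the repository's parameter-group machinery and surfaces as a `--optimizer_type` CLI flag. A simplified sketch of that pattern (the hypothetical `Params` class stands in for the real `ParamGroup`, which differs in detail):

```python
from argparse import ArgumentParser

# Illustrative only: attributes set in __init__ are auto-exposed as CLI flags,
# so optimizer_type = "default" becomes --optimizer_type with that default.
class Params:
    def __init__(self):
        self.optimizer_type = "default"

    def add_to_parser(self, parser: ArgumentParser):
        for name, default in vars(self).items():
            parser.add_argument(f"--{name}", type=type(default), default=default)

parser = ArgumentParser()
Params().add_to_parser(parser)
args = parser.parse_args(["--optimizer_type", "sparse_adam"])
```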

assets/Exposure_comparison.png (+471 KB)
assets/all_results_LPIPS.png (−85.5 KB) — binary file not shown
assets/all_results_PSNR.png (−87 KB) — binary file not shown
assets/all_results_SSIM.png (−80.7 KB) — binary file not shown
assets/charts/accel_default_LPIPS.png (+55.3 KB)
assets/charts/accel_default_PSNR.png (+59 KB)
assets/charts/accel_default_SSIM.png (+53.7 KB)
