Commit e783741

Add hyperlinks.

1 parent 85fc609

File tree: 1 file changed, +5 −2 lines


configs/train/README.md

Lines changed: 5 additions & 2 deletions
```diff
@@ -6,8 +6,13 @@ The core PyTorch-based training script can be found [here](https://github.com/LC
 
 The following datasets are supported (hyperlinks take you to the relevant configuration description).
 By default, the model architecture uses five unrolled iterations of ADMM for camera inversion, and UNetRes models for the pre-processor, post-processor, and PSF correction.
+With DiffuserCam, we show how to set different camera inversion methods and neural network architectures for the processors, which can also be used with other datasets.
 
 - [DiffuserCam](#diffusercam)
+  - [Unrolled ADMM](#unrolled-admm)
+  - [Compensation branch](#compensation-branch)
+  - [Trainable inversion](#trainable-inversion)
+  - [Multi Wiener deconvolution network](#multi-wiener-deconvolution-network)
 - [Transformer architecture for pre- and post-processors](#transformer-architecture-for-pre--and-post-processors)
 - [Multi PSF camera inversion (PhoCoLens)](#multi-psf-camera-inversion)
 - [TapeCam](#tapecam)
@@ -22,8 +27,6 @@ The configuration files are based on [Hydra](https://hydra.cc/docs/intro/), whic
 
 The output of training can be visualized on WandB (if you have connected with it when launching the script) and will be saved in the `outputs` directory with the appropriate timestamp.
 
-With DiffuserCam, we show how to set different camera inversion methods and neural network architectures for the processors, which can also be used with other datasets.
-
 
 ## DiffuserCam
 
```
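The README this diff touches says the configuration files are Hydra-based, so settings such as the camera inversion method are typically overridden on the command line with dotlist-style `key=value` arguments. As a rough illustration of how such overrides behave, here is a minimal plain-Python sketch; the config keys (`reconstruction.method`, `n_iter`, etc.) are hypothetical and not taken from the repository:

```python
def apply_overrides(cfg: dict, overrides: list[str]) -> dict:
    """Apply Hydra-style 'a.b.c=value' dotlist overrides to a nested dict."""
    for item in overrides:
        key, _, value = item.partition("=")
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})
        node[leaf] = value  # note: values stay strings in this sketch
    return cfg


# Illustrative defaults mirroring the README's description:
# five unrolled ADMM iterations, UNetRes pre-/post-processors.
base = {
    "reconstruction": {"method": "unrolled_admm", "n_iter": 5},
    "pre_process": {"network": "UNetRes"},
    "post_process": {"network": "UNetRes"},
}

cfg = apply_overrides(base, ["reconstruction.method=trainable_inv"])
print(cfg["reconstruction"]["method"])  # trainable_inv
```

Real Hydra additionally handles config composition and type conversion; this sketch only shows the override semantics.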
