Commit 85fc609

Merge pull request #164 from LCAV/small_tweaks: Improve docs

2 parents d0261b4 + 69d0141
File tree: 11 files changed, +268 −46 lines

CITATION.cff

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
+cff-version: 1.2.0
+message: "If you use this software, please cite it as below."
+authors:
+- family-names: "Bezzam"
+  given-names: "Eric"
+  orcid: "https://orcid.org/0000-0003-4837-5031"
+- family-names: "Kashani"
+  given-names: "Sepand"
+  orcid: "https://orcid.org/0000-0002-0735-371X"
+- family-names: "Vetterli"
+  given-names: "Martin"
+  orcid: "https://orcid.org/0000-0002-6122-1216"
+- family-names: "Simeoni"
+  given-names: "Matthieu"
+  orcid: "https://orcid.org/0000-0002-4927-3697"
+title: "LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi"
+doi: 10.5281/zenodo.8036869
+date-released: 2023-06-14
+url: "https://github.com/LCAV/LenslessPiCam"
+preferred-citation:
+  type: article
+  authors:
+  - family-names: "Bezzam"
+    given-names: "Eric"
+    orcid: "https://orcid.org/0000-0003-4837-5031"
+  - family-names: "Kashani"
+    given-names: "Sepand"
+    orcid: "https://orcid.org/0000-0002-0735-371X"
+  - family-names: "Vetterli"
+    given-names: "Martin"
+    orcid: "https://orcid.org/0000-0002-6122-1216"
+  - family-names: "Simeoni"
+    given-names: "Matthieu"
+    orcid: "https://orcid.org/0000-0002-4927-3697"
+  doi: "10.21105/joss.04747"
+  journal: "Journal of Open Source Software"
+  pages: 4747
+  number: 86
+  title: "LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi"
+  volume: 8
+  year: 2023
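The `preferred-citation` block above carries everything needed for a BibTeX entry. Below is a minimal sketch of that mapping in plain Python, with the fields hardcoded from the file above; a real workflow would parse the YAML (e.g. with the `cffconvert` tool) rather than inline a dict.

```python
# Build a BibTeX entry from the preferred-citation fields of CITATION.cff.
# Fields are hardcoded from the file above; a real tool would parse the YAML.
pref = {
    "type": "article",
    "authors": ["Bezzam, Eric", "Kashani, Sepand", "Vetterli, Martin", "Simeoni, Matthieu"],
    "doi": "10.21105/joss.04747",
    "journal": "Journal of Open Source Software",
    "pages": 4747,
    "number": 86,
    "title": "LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi",
    "volume": 8,
    "year": 2023,
}

def to_bibtex(entry, key="Bezzam2023"):
    # The citation key is illustrative, not prescribed by the CFF file.
    lines = [f"@{entry['type']}{{{key},"]
    lines.append("  author = {" + " and ".join(entry["authors"]) + "},")
    for field in ("title", "journal", "volume", "number", "pages", "year", "doi"):
        lines.append(f"  {field} = {{{entry[field]}}},")
    lines.append("}")
    return "\n".join(lines)

print(to_bibtex(pref))
```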

CONTRIBUTING.rst

Lines changed: 4 additions & 4 deletions
@@ -148,15 +148,15 @@ Building documentation
    conda activate lensless_docs39

    # install dependencies
-   (lensless_docs) pip install -r docs/requirements.txt
+   (lensless_docs39) pip install -r docs/requirements.txt

    # build documentation
-   (lensless_docs) python setup.py build_sphinx
+   (lensless_docs39) python setup.py build_sphinx
    # or
-   (lensless_docs) (cd docs && make html)
+   (lensless_docs39) (cd docs && make html)

 To rebuild the documentation from scratch:

 .. code:: bash

-   (lensless_docs) python setup.py build_sphinx -E -a
+   (lensless_docs39) python setup.py build_sphinx -E -a

README.rst

Lines changed: 102 additions & 8 deletions
@@ -65,6 +65,39 @@ We've also written a few Medium articles to guide users through the process
 of building the camera, measuring data with it, and reconstruction.
 They are all laid out in `this post <https://medium.com/@bezzam/a-complete-lensless-imaging-tutorial-hardware-software-and-algorithms-8873fa81a660>`__.

+Collection of lensless imaging research
+---------------------------------------
+
+The following works have been implemented in the toolkit:
+
+Reconstruction algorithms:
+
+* ADMM with total variation regularization and 3D support (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/admm.py#L24>`__, `usage <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/admm.py>`__). [1]_
+* Unrolled ADMM (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/unrolled_admm.py#L20>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#unrolled-admm>`__). [2]_
+* Unrolled ADMM with compensation branch (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/utils.py#L84>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#compensation-branch>`__). [3]_
+* Trainable inversion from FlatNet (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_inversion.py#L11>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#trainable-inversion>`__). [4]_
+* Multi-Wiener deconvolution network (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/multi_wiener.py#L87>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multi-wiener-deconvolution-network>`__). [5]_
+* SVDeconvNet (for learning multi-PSF deconvolution) from PhoCoLens (`source code <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/sv_deconvnet.py#L42>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multi-psf-camera-inversion>`__). [6]_
+* Incorporating a pre-processor (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_recon.py#L52>`__). [7]_
+* Accounting for external illumination (`source code 1 <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_recon.py#L64>`__, `source code 2 <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/scripts/recon/train_learning_based.py#L458>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multilens-under-external-illumination>`__). [8]_
+
+Camera / mask design:
+
+* Fresnel zone aperture mask pattern (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L823>`__). [9]_
+* Coded aperture mask pattern (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L288>`__). [10]_
+* Near-field phase retrieval for designing a high-contrast phase mask (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L706>`__). [11]_
+* LCD-based camera, i.e. DigiCam (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/trainable_mask.py#L117>`__). [7]_
+
+Datasets (hosted on Hugging Face and downloaded via their API):
+
+* DiffuserCam Lensless MIR Flickr dataset (copy on `Hugging Face <https://huggingface.co/datasets/bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM>`__). [2]_
+* TapeCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/TapeCam-Mirflickr-25K>`__). [7]_
+* DigiCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-SingleMask-25K>`__). [7]_
+* DigiCam MIR Flickr with multiple mask patterns (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-MultiMask-25K>`__). [7]_
+* DigiCam CelebA (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-CelebA-26K>`__). [7]_
+* MultiFocal mask MIR Flickr under external illumination (`Hugging Face <https://huggingface.co/datasets/Lensless/MultiLens-Mirflickr-Ambient>`__). [8]_ Mask fabricated by [12]_.
+
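The first entry in the list above, ADMM with total variation (TV) regularization, can be illustrated on a toy 1-D signal. This sketch is not the toolkit's implementation (which operates on 2-D/3-D images with FFT-based updates); it is a minimal stdlib-only ADMM for the TV denoising problem min_x ½‖x − y‖² + λ‖Dx‖₁, with the x-update solved via the Thomas algorithm.

```python
# Toy ADMM for 1-D total variation denoising:
#   min_x  0.5 * ||x - y||^2 + lam * ||D x||_1,   (D x)_i = x[i+1] - x[i]
# Splitting z = D x gives the three standard ADMM updates per iteration.
def soft(a, k):
    """Soft-thresholding: proximal operator of k * |.|."""
    return (a - k) if a > k else (a + k) if a < -k else 0.0

def solve_tridiag(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system, O(n)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def admm_tv(y, lam=0.5, rho=1.0, n_iter=100):
    n = len(y)
    z = [0.0] * (n - 1)  # splitting variable, z ~ D x
    u = [0.0] * (n - 1)  # scaled dual variable
    # I + rho * D^T D is tridiagonal: 1 + rho at the ends, 1 + 2*rho inside.
    diag = [1.0 + rho * (1.0 if i in (0, n - 1) else 2.0) for i in range(n)]
    lower = [-rho] * n
    upper = [-rho] * n
    x = list(y)
    for _ in range(n_iter):
        # x-update: solve (I + rho D^T D) x = y + rho D^T (z - u)
        v = [z[i] - u[i] for i in range(n - 1)]
        rhs = list(y)
        for i in range(n - 1):
            rhs[i] -= rho * v[i]       # D^T row i: -1 at i, +1 at i+1
            rhs[i + 1] += rho * v[i]
        x = solve_tridiag(lower, diag, upper, rhs)
        # z-update: soft-threshold the shifted finite differences
        dx = [x[i + 1] - x[i] for i in range(n - 1)]
        z = [soft(dx[i] + u[i], lam / rho) for i in range(n - 1)]
        # u-update: accumulate the running constraint violation
        u = [u[i] + dx[i] - z[i] for i in range(n - 1)]
    return x

# Noisy step signal: TV denoising should flatten each plateau while
# preserving the large jump in the middle.
y = [0.1, -0.05, 0.08, 5.1, 4.9, 5.05]
x = admm_tv(y)
```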
 Setup
 -----

@@ -86,16 +119,16 @@ the HQ camera sensor (or V2 sensor). Instructions on building the camera
 can be found `here <https://lensless.readthedocs.io/en/latest/building.html>`__.

 The software from this repository has to be installed on **both** your
-local machine and the Raspberry Pi. Note that we highly recommend using
-Python 3.9, as some Python library versions may not be available with
+local machine and the Raspberry Pi. Note that we recommend using
+Python 3.11, as some Python library versions may not be available with
 earlier versions of Python. Moreover, its `end-of-life <https://endoflife.date/python>`__
-is Oct 2025.
+is Oct 2027.

 *Local machine setup*
 =====================

-Below are commands that worked for our configuration (Ubuntu
-21.04), but there are certainly other ways to download a repository and
+Below are commands that worked for our configuration (Ubuntu 22.04.5 LTS),
+but there are certainly other ways to download a repository and
 install the library locally.

 Note that ``(lensless)`` is a convention to indicate that the virtual
@@ -196,7 +229,7 @@ to them for the idea and making tools/code/data available! Below is some of
 the work that has inspired this toolkit:

 * `Build your own DiffuserCam tutorial <https://waller-lab.github.io/DiffuserCam/tutorial>`__.
-* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [1]_.
+* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [2]_.

 A few students at EPFL have also contributed to this project:

@@ -206,10 +239,12 @@ A few students at EPFL have also contributed to this project:
 * Rein Bentdal and David Karoubi: mask fabrication with 3D printing.
 * Stefan Peters: imaging under external illumination.

+We also thank the Swiss National Science Foundation for funding this project through the `Open Research Data (ORD) program <https://ethrat.ch/en/eth-domain/open-research-data/>`__.
+
 Citing this work
 ----------------

-If you use these tools in your own research, please cite the following:
+If you use this toolkit in your own research, please cite the following:

 ::

@@ -226,7 +261,66 @@ If you use these tools in your own research, please cite the following:
      journal = {Journal of Open Source Software}
    }

+
+The following papers have contributed new approaches to the field of lensless imaging:
+
+* Introducing a pre-processor component as part of modular reconstruction (`IEEE Transactions on Computational Imaging <https://arxiv.org/abs/2502.01102>`__ and `IEEE International Conference on Image Processing (ICIP) 2024 <https://arxiv.org/abs/2403.00537>`__):
+
+::
+
+   @ARTICLE{10908470,
+     author={Bezzam, Eric and Perron, Yohann and Vetterli, Martin},
+     journal={IEEE Transactions on Computational Imaging},
+     title={Towards Robust and Generalizable Lensless Imaging With Modular Learned Reconstruction},
+     year={2025},
+     volume={11},
+     number={},
+     pages={213-227},
+     keywords={Training;Wiener filters;Computational modeling;Transfer learning;Computer architecture;Cameras;Transformers;Software;Software measurement;Image reconstruction;Lensless imaging;modularity;robustness;generalizability;programmable mask;transfer learning},
+     doi={10.1109/TCI.2025.3539448}
+   }
+
+   @INPROCEEDINGS{10647433,
+     author={Perron, Yohann and Bezzam, Eric and Vetterli, Martin},
+     booktitle={2024 IEEE International Conference on Image Processing (ICIP)},
+     title={A Modular and Robust Physics-Based Approach for Lensless Image Reconstruction},
+     year={2024},
+     volume={},
+     number={},
+     pages={3979-3985},
+     keywords={Training;Multiplexing;Pipelines;Noise;Cameras;Robustness;Reproducibility of results;Lensless imaging;modular reconstruction;end-to-end optimization},
+     doi={10.1109/ICIP51287.2024.10647433}
+   }
+
+* Lensless imaging under external illumination (`IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025 <https://arxiv.org/abs/2502.01102>`__):
+
+::
+
+   @INPROCEEDINGS{10888030,
+     author={Bezzam, Eric and Peters, Stefan and Vetterli, Martin},
+     booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+     title={Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning},
+     year={2025},
+     volume={},
+     number={},
+     pages={1-5},
+     keywords={Source separation;Noise;Lighting;Interference;Reconstruction algorithms;Cameras;Optics;Speech processing;Image reconstruction;Standards;lensless imaging;ambient lighting;external illumination;background subtraction;learned reconstruction},
+     doi={10.1109/ICASSP49660.2025.10888030}
+   }
+
 References
 ----------

-.. [1] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics Express, 27(20), 28075-28090.
+.. [1] Antipa, N., Kuo, G., Heckel, R., Mildenhall, B., Bostan, E., Ng, R., & Waller, L. (2017). DiffuserCam: lensless single-exposure 3D imaging. Optica, 5(1), 1-9.
+.. [2] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics Express, 27(20), 28075-28090.
+.. [3] Zeng, T., & Lam, E. Y. (2021). Robust reconstruction with deep learning to handle model mismatch in lensless imaging. IEEE Transactions on Computational Imaging, 7, 1080-1092.
+.. [4] Khan, S. S., Sundar, V., Boominathan, V., Veeraraghavan, A., & Mitra, K. (2020). FlatNet: Towards photorealistic scene reconstruction from lensless measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4), 1934-1948.
+.. [5] Li, Y., Li, Z., Chen, K., Guo, Y., & Rao, C. (2023). MWDNs: reconstruction in multi-scale feature spaces for lensless imaging. Optics Express, 31(23), 39088-39101.
+.. [6] Cai, X., You, Z., Zhang, H., Gu, J., Liu, W., & Xue, T. (2024). PhoCoLens: Photorealistic and consistent reconstruction in lensless imaging. Advances in Neural Information Processing Systems, 37, 12219-12242.
+.. [7] Bezzam, E., Perron, Y., & Vetterli, M. (2025). Towards Robust and Generalizable Lensless Imaging with Modular Learned Reconstruction. IEEE Transactions on Computational Imaging, 11, 213-227.
+.. [8] Bezzam, E., Peters, S., & Vetterli, M. (2025). Let There Be Light: Robust Lensless Imaging Under External Illumination with Deep Learning. IEEE International Conference on Acoustics, Speech and Signal Processing.
+.. [9] Wu, J., Zhang, H., Zhang, W., Jin, G., Cao, L., & Barbastathis, G. (2020). Single-shot lensless imaging with Fresnel zone aperture and incoherent illumination. Light: Science & Applications, 9(1), 53.
+.. [10] Asif, M. S., Ayremlou, A., Sankaranarayanan, A., Veeraraghavan, A., & Baraniuk, R. G. (2016). FlatCam: Thin, lensless cameras using coded aperture and computation. IEEE Transactions on Computational Imaging, 3(3), 384-397.
+.. [11] Boominathan, V., Adams, J. K., Robinson, J. T., & Veeraraghavan, A. (2020). PhlatCam: Designed phase-mask based thin lensless camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(7), 1618-1629.
+.. [12] Lee, K. C., Bae, J., Baek, N., Jung, J., Park, W., & Lee, S. A. (2023). Design and single-shot fabrication of lensless cameras with arbitrary point spread functions. Optica, 10(1), 72-80.

configs/benchmark/diffusercam.yaml

Lines changed: 7 additions & 0 deletions
@@ -35,6 +35,13 @@ algorithms: [
   # "hf:diffusercam:mirflickr:Unet4M+U10+Unet4M",
   "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_psfNN",

+  # ## comparing UNetRes and Transformer, ADAMW optimizer
+  # "hf:diffusercam:mirflickr:Transformer4M+U5+Transformer4M",
+  # "hf:diffusercam:mirflickr:Transformer4M+U5+Transformer4M_psfNN",
+  # "hf:diffusercam:mirflickr:U5+Transformer8M",
+  # "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_adamw",
+  # "hf:diffusercam:mirflickr:Unet4M+U5+Unet4M_psfNN_adamw",
+
   # # -- benchmark PSF error
   # "hf:diffusercam:mirflickr:U5+Unet8M_psf0dB",
   # "hf:diffusercam:mirflickr:U5+Unet8M_psf-5dB",
# "hf:diffusercam:mirflickr:U5+Unet8M_psf-5dB",
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
+# python scripts/eval/benchmark_recon.py -cn diffusercam_fullres
+defaults:
+  - defaults
+  - _self_
+
+dataset: HFDataset
+batchsize: 4
+device: "cuda:0"
+
+huggingface:
+  repo: "bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM"
+  psf: psf.tiff
+  image_res: null
+  rotate: False # if measurement is upside-down
+  alignment: null
+  downsample: 1
+  downsample_lensed: 1
+  flipud: True
+  flip_lensed: True
+  single_channel_psf: True
+
+algorithms: [
+  # "ADMM",
+
+  # ## comparing LeADMM5 and SVDeconvNet, ADAMW optimizer
+  "hf:diffusercam:mirflickr:Unet6M+U5+Unet6M_fullres",
+  "hf:diffusercam:mirflickr:Unet6M+U5+Unet6M_psfNN_fullres",
+  "hf:diffusercam:mirflickr:SVDecon+UNet8M",
+  "hf:diffusercam:mirflickr:Unet4M+SVDecon+Unet4M",
+]
+
+save_idx: [0, 1, 3, 4, 8]
+n_iter_range: [100] # for ADMM

configs/benchmark/multilens_ambient.yaml

Lines changed: 1 addition & 0 deletions
@@ -28,6 +28,7 @@ algorithms: [
   # "hf:multilens:mirflickr_ambient:Unet4M+U5+Unet4M_direct_sub",
   # "hf:multilens:mirflickr_ambient:Unet4M+U5+Unet4M_learned_sub",
   "hf:multilens:mirflickr_ambient:Unet4M+U5+Unet4M_concat",
+  "hf:multilens:mirflickr_ambient:Unet4M+U5+Unet4M_concat_psfNN",
   # "hf:multilens:mirflickr_ambient:TrainInv+Unet8M",
   # "hf:multilens:mirflickr_ambient:TrainInv+Unet8M_learned_sub",
   # "hf:multilens:mirflickr_ambient:Unet4M+TrainInv+Unet4M",
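The entries in these `algorithms` lists appear to follow a four-part `hf:<camera>:<dataset>:<model>` naming scheme. This scheme is inferred from the examples above, and the benchmark script's actual parsing may differ; below is a minimal sketch of splitting such an identifier.

```python
# Split a benchmark algorithm identifier of the assumed form
# "hf:<camera>:<dataset>:<model>". Plain entries like "ADMM" are rejected.
def parse_algorithm(name: str) -> dict:
    parts = name.split(":")
    if len(parts) != 4 or parts[0] != "hf":
        raise ValueError(f"not a Hugging Face model identifier: {name!r}")
    source, camera, dataset, model = parts
    return {"source": source, "camera": camera, "dataset": dataset, "model": model}

info = parse_algorithm("hf:multilens:mirflickr_ambient:Unet4M+U5+Unet4M_concat_psfNN")
print(info["model"])  # Unet4M+U5+Unet4M_concat_psfNN
```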

configs/train/README.md

Lines changed: 26 additions & 12 deletions
@@ -1,9 +1,12 @@
 # Training physics-informed reconstruction models

+The core PyTorch-based training script can be found [here](https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_learning_based.py); it is used to train physics-informed reconstruction models on various datasets. The script supports different camera inversion methods, pre- and post-processors, and PSF correction. Below is a visualization of the modular framework that can be trained (all components are optional).
+
+![Modular framework](modular_framework.png)
+
 The following datasets are supported (hyperlinks take you to the relevant configuration description).
 By default, the model architecture uses five unrolled iterations of ADMM for camera inversion, and UNetRes models for the pre-processor, post-processor, and PSF correction.

-
 - [DiffuserCam](#diffusercam)
 - [Transformer architecture for pre- and post-processors](#transformer-architecture-for-pre--and-post-processors)
 - [Multi PSF camera inversion (PhoCoLens)](#multi-psf-camera-inversion)
@@ -30,28 +33,39 @@ With DiffuserCam, we show how to set different camera inversion methods and neur

 The commands below show how to train different camera inversion methods on the DiffuserCam dataset (downsampled by a factor of 2 along each dimension). For a fair comparison, all models use around 8.1M parameters.

+### Unrolled ADMM
+With UNetRes models for the pre- and post-processors, and PSF correction.
 ```bash
 # unrolled ADMM
 python scripts/recon/train_learning_based.py -cn diffusercam
+```

-# Trainable inversion (FlatNet but without adversarial loss)
-# -- need to set PSF as trainable
-python scripts/recon/train_learning_based.py -cn diffusercam \
-    reconstruction.method=trainable_inv \
-    reconstruction.psf_network=False \
-    trainable_mask.mask_type=TrainablePSF \
-    trainable_mask.L1_strength=False
-
-# Unrolled ADMM with compensation branch
+### Compensation branch
+Adding a compensation branch to unrolled ADMM to address model mismatch.
+```bash
 # - adjust shapes of pre and post processors
 python scripts/recon/train_learning_based.py -cn diffusercam \
     reconstruction.psf_network=False \
     reconstruction.pre_process.nc=[16,32,64,128] \
     reconstruction.post_process.nc=[16,32,64,128] \
     reconstruction.compensation=[24,64,128,256,400]
+```

-# Multi wiener deconvolution network (MWDN)
-# with PSF correction built into the network
+### Trainable inversion
+FlatNet but without adversarial loss.
+```bash
+# -- need to set PSF as trainable
+python scripts/recon/train_learning_based.py -cn diffusercam \
+    reconstruction.method=trainable_inv \
+    reconstruction.psf_network=False \
+    trainable_mask.mask_type=TrainablePSF \
+    trainable_mask.L1_strength=False
+```
+
+### Multi wiener deconvolution network
+Multi-Wiener deconvolution network (MWDN) with PSF correction. No pre- and post-processors, as the network has layers before and after camera inversion.
+```bash
 python scripts/recon/train_learning_based.py -cn diffusercam \
     reconstruction.method=multi_wiener \
     reconstruction.multi_wiener.nc=[32,64,128,256,436] \

docs/source/conf.py

Lines changed: 2 additions & 0 deletions
@@ -44,6 +44,8 @@
     "datasets",
     "huggingface_hub",
     "cadquery",
+    "wandb",
+    "einops",
 ]
 for mod_name in MOCK_MODULES:
     sys.modules[mod_name] = mock.Mock()
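The `MOCK_MODULES` loop above keeps Sphinx from needing heavy optional dependencies at doc-build time: each name is bound to a `mock.Mock()` in `sys.modules` before autodoc imports the package, so `import wandb` resolves to the stand-in. A self-contained sketch of the same trick (the mocked module names here mirror the two added in this commit):

```python
import sys
from unittest import mock

# Register stand-ins so that "import wandb" / "import einops" succeed even
# when the real packages are not installed (same technique as conf.py above).
MOCK_MODULES = ["wandb", "einops"]
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = mock.Mock()

import wandb  # resolves to the Mock, not the real library

# Any attribute access just yields more mocks, so module-level code that
# touches these libraries won't crash during a docs build.
run = wandb.init(project="docs-build")
print(type(sys.modules["wandb"]).__name__)  # Mock
```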
