README.rst

We've also written a few Medium articles to guide users through the process
of building the camera, measuring data with it, and reconstruction.
They are all laid out in `this post <https://medium.com/@bezzam/a-complete-lensless-imaging-tutorial-hardware-software-and-algorithms-8873fa81a660>`__.

Collection of lensless imaging research
---------------------------------------

The following works have been implemented in the toolkit:

Reconstruction algorithms:

* ADMM with total variation regularization and 3D support (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/admm.py#L24>`__, `usage <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/admm.py>`__). [1]_
* Near-field Phase Retrieval for designing a high-contrast phase mask (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L706>`__). [11]_
* LCD-based camera, i.e. DigiCam (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/trainable_mask.py#L117>`__). [7]_
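As background for the ADMM with total variation (TV) entry above: the TV prior enters ADMM through a soft-thresholding (proximal) step applied to image gradients. Below is a generic NumPy sketch of that building block, not the toolkit's implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink each value toward zero.
    # ADMM applies this to image gradients to enforce the TV prior.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def tv_gradients(img):
    # Forward finite differences along each axis (the quantities that
    # total variation penalizes); edges are replicated so shapes match.
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return dx, dy

# One ADMM sub-step on a toy image: shrink the gradients with tau = 0.1.
img = np.random.rand(8, 8)
dx, dy = tv_gradients(img)
dx_shrunk, dy_shrunk = soft_threshold(dx, 0.1), soft_threshold(dy, 0.1)
```

In the full solver this step alternates with a data-fidelity update (a deconvolution against the PSF) and a dual-variable update.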

Datasets (hosted on Hugging Face and downloaded via their API):

* DiffuserCam Lensless MIR Flickr dataset (copy on `Hugging Face <https://huggingface.co/datasets/bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM>`__). [2]_
* TapeCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/TapeCam-Mirflickr-25K>`__). [7]_
* DigiCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-SingleMask-25K>`__). [7]_
* DigiCam MIR Flickr with multiple mask patterns (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-MultiMask-25K>`__). [7]_
* DigiCam CelebA (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-CelebA-26K>`__). [7]_
* MultiFocal mask MIR Flickr under external illumination (`Hugging Face <https://huggingface.co/datasets/Lensless/MultiLens-Mirflickr-Ambient>`__). [8]_ Mask fabricated by [12]_.
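The datasets above can be pulled with Hugging Face's ``datasets`` library. A minimal sketch: the repository IDs come from the links above, while the split name below is an assumption, so check each dataset card.

```python
def hf_repo_id(owner, name):
    # Build the "owner/name" repository ID that the Hugging Face API expects.
    return f"{owner}/{name}"

def load_lensless_dataset(owner, name, split="train"):
    # Requires `pip install datasets`; downloads on first call, then cached.
    from datasets import load_dataset
    return load_dataset(hf_repo_id(owner, name), split=split)

# e.g. ds = load_lensless_dataset("bezzam", "TapeCam-Mirflickr-25K")
```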

Setup
-----

the HQ camera sensor (or V2 sensor). Instructions on building the camera
can be found `here <https://lensless.readthedocs.io/en/latest/building.html>`__.

The software from this repository has to be installed on **both** your
local machine and the Raspberry Pi. Note that we recommend using
Python 3.11, as some Python library versions may not be available with
earlier versions of Python. Moreover, its `end-of-life <https://endoflife.date/python>`__
is Oct 2027.
*Local machine setup*
=====================
Below are commands that worked for our configuration (Ubuntu 22.04.5 LTS),
but there are certainly other ways to download a repository and
install the library locally.
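One common way to do this (a sketch; the repository's documentation may list additional dependencies or steps):

```shell
# Download the repository and install the library in editable mode.
git clone https://github.com/LCAV/LenslessPiCam.git
cd LenslessPiCam
pip install -e .
```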
Note that ``(lensless)`` is a convention to indicate that the virtual environment is activated.
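For instance, with Python's built-in ``venv`` (a sketch; the environment name is up to you):

```shell
# Create and activate a virtual environment (Python 3.11 recommended).
python3 -m venv lensless
source lensless/bin/activate
# The shell prompt now shows the environment name, e.g.:
# (lensless) user@host:~$
```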

to them for the idea and making tools/code/data available! Below is some of
the work that has inspired this toolkit:

* `Build your own DiffuserCam tutorial <https://waller-lab.github.io/DiffuserCam/tutorial>`__.
* `DiffuserCam Lensless MIR Flickr dataset <https://waller-lab.github.io/LenslessLearning/dataset.html>`__ [2]_.
A few students at EPFL have also contributed to this project:
* Rein Bentdal and David Karoubi: mask fabrication with 3D printing.
* Stefan Peters: imaging under external illumination.
We also thank the Swiss National Science Foundation for funding this project through the `Open Research Data (ORD) program <https://ethrat.ch/en/eth-domain/open-research-data/>`__.
Citing this work
----------------
If you use this toolkit in your own research, please cite the following:
::

      journal = {Journal of Open Source Software}
   }

The following papers have contributed new approaches to the field of lensless imaging:
* Introducing pre-processor component as part of modular reconstruction (`IEEE Transactions on Computational Imaging <https://arxiv.org/abs/2502.01102>`__ and `IEEE International Conference on Image Processing (ICIP) 2024 <https://arxiv.org/abs/2403.00537>`__):
268
+
269
+
::

   @ARTICLE{10908470,
      author={Bezzam, Eric and Perron, Yohann and Vetterli, Martin},
      journal={IEEE Transactions on Computational Imaging},
      title={Towards Robust and Generalizable Lensless Imaging With Modular Learned Reconstruction},
      year={2025}
   }

   @INPROCEEDINGS{10647433,
      author={Perron, Yohann and Bezzam, Eric and Vetterli, Martin},
      booktitle={2024 IEEE International Conference on Image Processing (ICIP)},
      title={A Modular and Robust Physics-Based Approach for Lensless Image Reconstruction},
      year={2024},
      volume={},
      number={},
      pages={3979-3985},
      keywords={Training;Multiplexing;Pipelines;Noise;Cameras;Robustness;Reproducibility of results;Lensless imaging;modular reconstruction;end-to-end optimization},
      doi={10.1109/ICIP51287.2024.10647433}
   }

* Lensless imaging under external illumination (`IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025 <https://arxiv.org/abs/2502.01102>`__):
::

   @INPROCEEDINGS{10888030,
      author={Bezzam, Eric and Peters, Stefan and Vetterli, Martin},
      booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
      title={Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning},
      year={2025}
   }

.. [1] Antipa, N., Kuo, G., Heckel, R., Mildenhall, B., Bostan, E., Ng, R., & Waller, L. (2017). DiffuserCam: lensless single-exposure 3D imaging. Optica, 5(1), 1-9.
.. [2] Monakhova, K., Yurtsever, J., Kuo, G., Antipa, N., Yanny, K., & Waller, L. (2019). Learned reconstructions for practical mask-based lensless imaging. Optics Express, 27(20), 28075-28090.
.. [3] Zeng, T., & Lam, E. Y. (2021). Robust reconstruction with deep learning to handle model mismatch in lensless imaging. IEEE Transactions on Computational Imaging, 7, 1080-1092.
.. [4] Khan, S. S., Sundar, V., Boominathan, V., Veeraraghavan, A., & Mitra, K. (2020). FlatNet: Towards photorealistic scene reconstruction from lensless measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4), 1934-1948.
.. [5] Li, Y., Li, Z., Chen, K., Guo, Y., & Rao, C. (2023). MWDNs: reconstruction in multi-scale feature spaces for lensless imaging. Optics Express, 31(23), 39088-39101.
.. [6] Cai, X., You, Z., Zhang, H., Gu, J., Liu, W., & Xue, T. (2024). PhoCoLens: Photorealistic and consistent reconstruction in lensless imaging. Advances in Neural Information Processing Systems, 37, 12219-12242.
.. [7] Bezzam, E., Perron, Y., & Vetterli, M. (2025). Towards Robust and Generalizable Lensless Imaging with Modular Learned Reconstruction. IEEE Transactions on Computational Imaging.
.. [8] Bezzam, E., Peters, S., & Vetterli, M. (2025). Let there be light: Robust lensless imaging under external illumination with deep learning. IEEE International Conference on Acoustics, Speech and Signal Processing.
.. [9] Wu, J., Zhang, H., Zhang, W., Jin, G., Cao, L., & Barbastathis, G. (2020). Single-shot lensless imaging with Fresnel zone aperture and incoherent illumination. Light: Science & Applications, 9(1), 53.
.. [10] Asif, M. S., Ayremlou, A., Sankaranarayanan, A., Veeraraghavan, A., & Baraniuk, R. G. (2016). FlatCam: Thin, lensless cameras using coded aperture and computation. IEEE Transactions on Computational Imaging, 3(3), 384-397.
.. [11] Boominathan, V., Adams, J. K., Robinson, J. T., & Veeraraghavan, A. (2020). PhlatCam: Designed phase-mask based thin lensless camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(7), 1618-1629.
.. [12] Lee, K. C., Bae, J., Baek, N., Jung, J., Park, W., & Lee, S. A. (2023). Design and single-shot fabrication of lensless cameras with arbitrary point spread functions. Optica, 10(1), 72-80.

configs/train/README.md

# Training physics-informed reconstruction models
The core PyTorch-based training script can be found [here](https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_learning_based.py), which is used to train physics-informed reconstruction models on various datasets. The script supports different camera inversion methods, pre- and post-processors, and PSF correction. Below is a visualization of the modular framework that can be trained (all components are optional).

The following datasets are supported (each hyperlink takes you to the relevant configuration description).
By default, the model architecture uses five unrolled iterations of ADMM for camera inversion, and UNetRes models for the pre-processor, post-processor, and PSF correction.
- [DiffuserCam](#diffusercam)
- [Transformer architecture for pre- and post-processors](#transformer-architecture-for-pre--and-post-processors)
- [Multi PSF camera inversion (PhoCoLens)](#multi-psf-camera-inversion)

With DiffuserCam, we show how to set different camera inversion methods and neural network components.

The commands below show how to train different camera inversion methods on the DiffuserCam dataset (downsampled by a factor of 2 along each dimension). For a fair comparison, all models use around 8.1M parameters.
### Unrolled ADMM
With UNetRes models for the pre- and post-processors, and PSF correction.
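
As an illustration of how these components fit together, here is a hypothetical configuration sketch. The key names are assumptions for illustration; the actual schema is defined by the YAML files in this directory.

```yaml
# Illustrative only -- not the actual config schema.
reconstruction:
  method: unrolled_admm
  unrolled_admm:
    n_iter: 5          # five unrolled ADMM iterations (the default)
  pre_process:
    network: UNetRes   # optional pre-processor
  post_process:
    network: UNetRes   # optional post-processor
  psf_correction:
    network: UNetRes   # optional PSF correction
```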