Commit baa4ee3

DLMBL 2024 notebook (#114)
* Pruning the notebook from the demo for release
* Initial commit adding predictions using the pretrained model:
  - adding evaluation of the pretrained vs. course-trained model
  - saving of the predictions
  - saving of the pixel-based metrics and segmentation metrics
* Renaming the previous virtual staining demo
* Formatting and updating the demos README
* Restructuring the folder tree
* Updating the README after the folder reorganization
* Adding predictions with the pretrained model, pixel and segmentation evaluation metrics, and saving predictions for further evaluation
* Bumping cellpose to 3.0.10
1 parent a1df436 commit baa4ee3

File tree

18 files changed: +1555 −68 lines

README.md

Lines changed: 24 additions & 11 deletions
@@ -11,12 +11,30 @@ The following methods are being developed:
  - Image representation learning
    - Self-supervised learning of the cell state and organelle phenotypes

- VisCy is currently considered alpha software and is under active development.
- Frequent breaking changes are expected.
+ <div style="border: 2px solid orange; padding: 10px; border-radius: 5px; background-color: #fff8e1;">
+ <strong>Note:</strong><br>
+ VisCy is currently considered alpha software and is under active development. Frequent breaking changes are expected.
+ </div>

  ## Virtual staining
-
+ ### Pipeline
  A full illustration of the virtual staining pipeline can be found [here](docs/virtual_staining.md).
+
+ ### Library of virtual staining (VS) models
+ The robust virtual staining models (i.e. *VSCyto2D*, *VSCyto3D*, *VSNeuromast*) and fine-tuned models can be found [here](https://github.com/mehta-lab/VisCy/wiki/Library-of-virtual-staining-(VS)-Models).
+
+ ### Demos
+ #### Image-to-Image translation using VisCy
+ - [Guide for Virtual Staining Models](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
+ Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D*, and *VSNeuromast*).
+
+ - [Image translation exercise](./dlmbl_exercise/solution.py):
+ Example showing how to use VisCy to train, predict, and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course.
+
+ - [Virtual staining exercise](./img2img_translation/solution.py): exploring the label-free to fluorescence virtual staining and fluorescence to label-free image translation tasks using VisCy UNeXt2.
+ More usage examples and demos can be found [here](https://github.com/mehta-lab/VisCy/blob/b7af9687c6409c738731ea47f66b74db2434443c/examples/virtual_staining/README.md).
+
+ ### Gallery
  Below are some examples of virtually stained images (click to play videos).
  See the full gallery [here](https://github.com/mehta-lab/VisCy/wiki/Gallery).


@@ -100,19 +118,14 @@ publisher = {eLife Sciences Publications, Ltd},
  viscy --help
  ```

+ ## Contributing
  For development installation, see [the contributing guide](CONTRIBUTING.md).

+ ## Additional Notes
  The pipeline is built using the [PyTorch Lightning](https://www.pytorchlightning.ai/index.html) framework.
  The [iohub](https://github.com/czbiohub-sf/iohub) library is used
  for reading and writing data in [OME-Zarr](https://www.nature.com/articles/s41592-021-01326-w) format.

  The full functionality is tested on Linux `x86_64` with NVIDIA Ampere GPUs (CUDA 12.4).
  Some features (e.g. mixed precision and distributed training) may not be available with other setups,
- see [PyTorch documentation](https://pytorch.org) for details.
-
- ### Demos
- Check out our demos for:
- - [Virtual staining](https://github.com/mehta-lab/VisCy/tree/main/examples/demos) - training, inference and evaluation
-
- ### Library of virtual staining (VS) models
- The robust virtual staining models (i.e *VSCyto2D*, *VSCyto3D*, *VSNeuromast*), and fine-tuned models can be found [here](https://github.com/mehta-lab/VisCy/wiki/Library-of-virtual-staining-(VS)-Models)
+ see [PyTorch documentation](https://pytorch.org) for details.

examples/demos/README.md

Lines changed: 0 additions & 29 deletions
This file was deleted.
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
# VisCy usage examples

Example scripts showcasing the usage of VisCy for different computer vision tasks.

## Virtual staining
### Image-to-Image translation using VisCy
- [Guide for Virtual Staining Models](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D*, and *VSNeuromast*).

- [Image translation exercise](./dlmbl_exercise/solution.py):
Example showing how to use VisCy to train, predict, and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course.

- [Virtual staining exercise](./img2img_translation/solution.py): exploring the label-free to fluorescence virtual staining and fluorescence to label-free image translation tasks using VisCy UNeXt2.

## Notes
To run the examples, make sure to activate the `viscy` environment. Follow the instructions for each demo.

These scripts can also be run interactively in many IDEs as notebooks, for example in VS Code, PyCharm, and Spyder.

examples/demos/demo_vscyto2d.py renamed to examples/virtual_staining/VS_model_inference/demo_vscyto2d.py

Lines changed: 2 additions & 7 deletions
@@ -11,7 +11,6 @@

  from iohub import open_ome_zarr
  from plot import plot_vs_n_fluor
-
  # Viscy classes for the trainer and model
  from viscy.data.hcs import HCSDataModule
  from viscy.light.engine import FcmaeUNet
@@ -31,13 +30,9 @@
  root_dir = Path("")
  # Download from
  # https://public.czbiohub.org/comp.micro/viscy/VSCyto2D/test/a549_hoechst_cellmask_test.zarr/
- input_data_path = (
-     root_dir / "VSCyto2D/test/a549_hoechst_cellmask_test.zarr"
- )
+ input_data_path = root_dir / "VSCyto2D/test/a549_hoechst_cellmask_test.zarr"
  # Download from GitHub release page of v0.1.0
- model_ckpt_path = (
-     root_dir / "VisCy-0.1.0-VS-models/VSCyto2D/epoch=399-step=23200.ckpt"
- )
+ model_ckpt_path = root_dir / "VisCy-0.1.0-VS-models/VSCyto2D/epoch=399-step=23200.ckpt"
  # Zarr store to save the predictions
  output_path = root_dir / "./a549_prediction.zarr"
  # FOV of interest
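The demo above points `input_data_path` at the downloaded OME-Zarr test set and writes predictions to `output_path`. As a quick sanity check before running inference, the store can be inspected with iohub's HCS reader. This is a minimal sketch rather than part of the committed demo, and the printed shapes assume the usual (T, C, Z, Y, X) layout:

```python
from pathlib import Path

from iohub import open_ome_zarr

root_dir = Path("")  # assumed download location, as in the demo above
input_data_path = root_dir / "VSCyto2D/test/a549_hoechst_cellmask_test.zarr"

# Open the HCS (plate-layout) OME-Zarr store read-only and list the first FOV.
with open_ome_zarr(input_data_path, layout="hcs", mode="r") as dataset:
    print("Channels:", dataset.channel_names)
    for fov_name, fov in dataset.positions():
        print(fov_name, fov["0"].shape)  # image array indexed as (T, C, Z, Y, X)
        break  # inspect only the first FOV
```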

examples/demos/demo_vscyto3d.py renamed to examples/virtual_staining/VS_model_inference/demo_vscyto3d.py

Lines changed: 3 additions & 3 deletions
@@ -11,9 +11,7 @@

  from iohub import open_ome_zarr
  from plot import plot_vs_n_fluor
-
  from viscy.data.hcs import HCSDataModule
-
  # Viscy classes for the trainer and model
  from viscy.light.engine import VSUNet
  from viscy.light.predict_writer import HCSPredictionWriter
@@ -30,7 +28,9 @@
  # %%
  # Download from
  # https://public.czbiohub.org/comp.micro/viscy/VSCyto3D/test/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr/
- input_data_path = "VSCyto3D/test/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr"
+ input_data_path = (
+     "VSCyto3D/test/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr"
+ )
  # Download from GitHub release page of v0.1.0
  model_ckpt_path = "VisCy-0.1.0-VS-models/VSCyto3D/epoch=48-step=18130.ckpt"
  # Zarr store to save the predictions

examples/demos/demo_vsneuromast.py renamed to examples/virtual_staining/VS_model_inference/demo_vsneuromast.py

Lines changed: 3 additions & 3 deletions
@@ -11,9 +11,7 @@

  from iohub import open_ome_zarr
  from plot import plot_vs_n_fluor
-
  from viscy.data.hcs import HCSDataModule
-
  # Viscy classes for the trainer and model
  from viscy.light.engine import VSUNet
  from viscy.light.predict_writer import HCSPredictionWriter
@@ -30,7 +28,9 @@
  # %%
  # Download from
  # https://public.czbiohub.org/comp.micro/viscy/VSNeuromast/test/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr/
- input_data_path = "VSNeuromast/test/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr"
+ input_data_path = (
+     "VSNeuromast/test/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr"
+ )
  # Download from GitHub release page of v0.1.0
  model_ckpt_path = "VisCy-0.1.0-VS-models/VSNeuromast/timelapse_finetine_1hr_dT_downsample_lr1e-4_45epoch_clahe_v5/epoch=44-step=1215.ckpt"
  # Zarr store to save the predictions
File renamed without changes.
Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
# Exercise 6: Image translation - Part 1

This demo script was developed for the DL@MBL 2024 course by Eduardo Hirata-Miyasaki, Ziwen Liu, and Shalin Mehta, with many inputs and bugfixes by [Morgan Schwartz](https://github.com/msschwartz21), [Caroline Malin-Mayor](https://github.com/cmalinmayor), and [Peter Park](https://github.com/peterhpark).

# Image translation (Virtual Staining)

Written by Eduardo Hirata-Miyasaki, Ziwen Liu, and Shalin Mehta, CZ Biohub San Francisco.

## Overview

In this exercise, we will predict fluorescence images of nuclei and plasma membrane markers from quantitative phase images of cells, i.e., we will _virtually stain_ the nuclei and plasma membrane visible in the phase image.
This is an example of an image translation task. We will apply spatial and intensity augmentations to train robust models and evaluate their performance. Finally, we will explore the opposite process of predicting a phase image from a fluorescence membrane label.

[![HEK293T](https://raw.githubusercontent.com/mehta-lab/VisCy/main/docs/figures/svideo_1.png)](https://github.com/mehta-lab/VisCy/assets/67518483/d53a81eb-eb37-44f3-b522-8bd7bddc7755)
(Click on image to play video)

## Goals

### Part 1: Learn to use iohub (I/O library), VisCy dataloaders, and TensorBoard.

- Use an OME-Zarr dataset of 34 FOVs of adenocarcinomic human alveolar basal epithelial cells (A549);
each FOV has 3 channels (phase, nuclei, and cell membrane).
The nuclei were stained with DAPI and the cell membrane with CellMask.
- Explore OME-Zarr using [iohub](https://czbiohub-sf.github.io/iohub/main/index.html)
and the high-content screening (HCS) format.
- Use [MONAI](https://monai.io/) to implement data augmentations (see the sketch after this list).

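For orientation, dictionary-based MONAI transforms along these lines could provide the spatial and intensity augmentations mentioned above. This is a minimal sketch, not the course notebook's actual configuration; the channel keys and parameter values are assumptions:

```python
from monai.transforms import (
    Compose,
    RandAffined,
    RandGaussianSmoothd,
    RandScaleIntensityd,
)

# Hypothetical channel keys; the dataset's real channel names may differ.
keys = ["Phase", "Nuclei", "Membrane"]

augmentations = Compose(
    [
        # Spatial augmentation applied identically to source and target channels,
        # assuming 3 spatial dims (Z, Y, X) per sample.
        RandAffined(
            keys=keys,
            prob=0.5,
            rotate_range=(3.14, 0, 0),
            scale_range=(0.2, 0.2, 0.2),
            padding_mode="zeros",
        ),
        # Intensity augmentations applied to the phase (source) channel only.
        RandScaleIntensityd(keys=["Phase"], factors=0.3, prob=0.5),
        RandGaussianSmoothd(keys=["Phase"], sigma_x=(0.25, 0.75), prob=0.3),
    ]
)
```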
### Part 2: Train and evaluate the model to translate phase into fluorescence, and vice versa.
- Train a 2D UNeXt2 model to predict nuclei and membrane from phase images.
- Compare the performance of the trained model and a pre-trained model.
- Evaluate the model using pixel-level and instance-level metrics (see the sketch after this list).

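As a rough sketch of the two kinds of evaluation mentioned above, pixel-level metrics can be computed with torchmetrics, and instance-level comparisons can start from Cellpose nuclei masks. The arrays below are placeholders, and the exact metrics used in the course notebook may differ:

```python
import torch
from cellpose import models
from torchmetrics.functional import (
    pearson_corrcoef,
    peak_signal_noise_ratio,
    structural_similarity_index_measure,
)

# Placeholder 2D images: virtually stained nuclei vs. fluorescence ground truth.
pred = torch.rand(1, 1, 256, 256)
target = torch.rand(1, 1, 256, 256)

# Pixel-level metrics.
psnr = peak_signal_noise_ratio(pred, target)
ssim = structural_similarity_index_measure(pred, target)
pcc = pearson_corrcoef(pred.flatten(), target.flatten())
print(f"PSNR={psnr:.2f}, SSIM={ssim:.3f}, Pearson r={pcc:.3f}")

# Instance-level evaluation: segment nuclei in both images with Cellpose,
# then compare the label masks (e.g. with IoU-based instance matching).
cellpose_model = models.Cellpose(model_type="nuclei")
pred_masks, *_ = cellpose_model.eval(pred.numpy().squeeze(), channels=[0, 0])
target_masks, *_ = cellpose_model.eval(target.numpy().squeeze(), channels=[0, 0])
print("Predicted instances:", int(pred_masks.max()), "Target instances:", int(target_masks.max()))
```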
Check out [VisCy](https://github.com/mehta-lab/VisCy/tree/main/examples/demos),
our deep learning pipeline for training and deploying computer vision models
for image-based phenotyping, including the robust virtual staining of landmark organelles.
VisCy exploits recent advances in data and metadata formats
([OME-zarr](https://www.nature.com/articles/s41592-021-01326-w)) and DL frameworks,
[PyTorch Lightning](https://lightning.ai/) and [MONAI](https://monai.io/).

## Setup

Make sure that you are inside the `image_translation` folder, using the `cd` command to change directories if needed.

Make sure that you can use conda to switch environments.

```bash
conda init
```

**Close your shell, and log in again.**

Run the setup script to create the environment for this exercise and download the dataset.
```bash
sh setup.sh
```
Activate your environment:
```bash
conda activate 06_image_translation
```

## Use VS Code

Install VS Code, install the Jupyter extension inside VS Code, and set up [cell mode](https://code.visualstudio.com/docs/python/jupyter-support-py). Open [solution.py](solution.py) and run the script interactively.

## Use Jupyter Notebook

The matching exercise and solution notebooks can be found [here](https://github.com/dlmbl/image_translation/tree/28e0e515b4a8ad3f392a69c8341e105f730d204f) in the course repository.

Launch a Jupyter environment:

```
jupyter notebook
```

...and continue with the instructions in the notebook.

If `06_image_translation` is not available as a kernel in Jupyter, run:

```
python -m ipykernel install --user --name=06_image_translation
```

### References

- [Liu, Z. and Hirata-Miyasaki, E. et al. (2024) Robust Virtual Staining of Cellular Landmarks](https://www.biorxiv.org/content/10.1101/2024.05.31.596901v2.full.pdf)
- [Guo et al. (2020) Revealing architectural order with quantitative label-free imaging and deep learning. eLife](https://elifesciences.org/articles/55502)

examples/demo_dlmbl/convert-solution.py renamed to examples/virtual_staining/dlmbl_exercise/convert-solution.py

Lines changed: 2 additions & 1 deletion
@@ -1,7 +1,8 @@
  import argparse

  from nbconvert.exporters import NotebookExporter
- from nbconvert.preprocessors import ClearOutputPreprocessor, TagRemovePreprocessor
+ from nbconvert.preprocessors import (ClearOutputPreprocessor,
+                                      TagRemovePreprocessor)
  from traitlets.config import Config

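For context on what these imports are for: convert-solution.py uses nbconvert preprocessors to strip outputs and tagged cells when generating the exercise notebook from the solution script. A minimal sketch of that pattern follows; the cell tag and file names are placeholders, not the script's actual configuration:

```python
from nbconvert.exporters import NotebookExporter
from nbconvert.preprocessors import ClearOutputPreprocessor, TagRemovePreprocessor
from traitlets.config import Config

config = Config()
# Drop cells tagged "solution" and clear all outputs; the tag name is a placeholder.
config.TagRemovePreprocessor.remove_cell_tags = ("solution",)
config.TagRemovePreprocessor.enabled = True
config.ClearOutputPreprocessor.enabled = True

exporter = NotebookExporter(config=config)
exporter.register_preprocessor(TagRemovePreprocessor(config=config), enabled=True)
exporter.register_preprocessor(ClearOutputPreprocessor(), enabled=True)

# Export a cleaned notebook; input/output paths are hypothetical.
notebook_text, _ = exporter.from_filename("solution.ipynb")
with open("exercise.ipynb", "w") as f:
    f.write(notebook_text)
```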

File renamed without changes.
