
Commit f22c379

Implement structure for napari plugin (#77)
* Implement structure for napari plugin
* Update segmentation widget
* Add sample data
* Implement detection widget
* Fix tests
* Update documentation WIP
* Add low-res sample data
* Add script to build the doc
* Fix tests
1 parent 98ebea9 commit f22c379

16 files changed: +1137 −9 lines

.github/workflows/build_docs.yaml

Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
name: Build and Deploy Docs

on:
  push:
    paths:
      - "doc/*.md"  # Trigger on changes to any markdown file
      - "flamingo_tools/**/*.py"  # Optionally include changes in Python files
    branches:
      - main  # Run the workflow only on pushes to the main branch
  workflow_dispatch:

# security: restrict permissions for CI jobs.
permissions:
  contents: read

jobs:
  build:
    name: Build Documentation
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Set up Micromamba
        uses: mamba-org/setup-micromamba@v2
        with:
          micromamba-version: "latest"
          environment-file: environment.yaml
          init-shell: bash
          cache-environment: true
          post-cleanup: 'all'

      - name: Install package
        shell: bash -l {0}
        run: pip install -e .

      - name: Install pdoc
        shell: bash -l {0}
        run: pip install pdoc

      - name: Generate Documentation
        shell: bash -l {0}
        run: pdoc flamingo_tools/ -d google -o doc/

      - name: Verify Documentation Output
        run: ls -la doc/

      - name: Upload Documentation Artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: doc/

  deploy:
    name: Deploy Documentation
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - name: Deploy to GitHub Pages
        uses: actions/deploy-pages@v4
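The documentation build boils down to the single `pdoc` call above. To reproduce it locally (a minimal sketch, assuming the package and `pdoc` are installed in the active environment, mirroring the CI steps):

```python
# Run the same pdoc command as the CI job, from the repository root.
import subprocess

subprocess.run(
    ["pdoc", "flamingo_tools/", "-d", "google", "-o", "doc/"],
    check=True,  # Raise if the documentation build fails.
)
```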

README.md

Lines changed: 10 additions & 2 deletions
@@ -1,7 +1,14 @@
-# Flamingo Tools
+# CochleaNet
 
-Data processing for light-sheet microscopy, specifically for data from [Flamingo microscopes](https://huiskenlab.com/flamingo/).
+CochleaNet is a software tool for the analysis of cochleae imaged in light-sheet microscopy. It is based on deep neural networks for the segmentation of spiral ganglion neurons, inner hair cells, and the detection of ribbon synapses.
+It was developed for imaging data from (clear-tissue) [flamingo microscopes](https://huiskenlab.com/flamingo/) and is also applicable to data from commercial microscopes.
 
+In addition to the analysis functionality, CochleaNet implements data pre-processing to convert data from flamingo microscopes into a format compatible with [BigStitcher](https://imagej.net/plugins/bigstitcher/) and to export image data and segmentation results to [ome.zarr](https://www.nature.com/articles/s41592-021-01326-w) and [MoBIE](https://mobie.github.io/).
+This functionality is applicable to any imaging data from flamingo microscopes, not only clear-tissue data or cochleae. We aim to also extend the segmentation and analysis functionality to other kinds of samples imaged in the flamingo in the future.
+
+For installation and usage instructions, check out [the documentation](TODO). For more details on the underlying methodology, check out [our preprint](TODO).
+
+<!---
 The `flamingo_tools` library implements functionality for:
 - converting the lightsheet data into a format compatible with [BigDataViewer](https://imagej.net/plugins/bdv/) and [BigStitcher](https://imagej.net/plugins/bigstitcher/).
 - Cell / nucleus segmentation via a 3D U-net.
@@ -48,3 +55,4 @@ You can also check out the following example scripts:
 - `load_data.py`: Example script for how to load sub-regions from the converted data into python.
 
 For advanced examples to segment data with a U-Net, check out the `scripts` folder.
+--->

doc/documentation.md

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@

# CochleaNet

CochleaNet is a software tool for the analysis of cochleae imaged in light-sheet microscopy.
Its main components are:
- A deep neural network for segmenting spiral ganglion neurons (SGNs) from parvalbumin (PV) staining.
- A deep neural network for segmenting inner hair cells (IHCs) from VGlut3 staining.
- A deep neural network for detecting ribbon synapses from CtBP2 staining.

In addition, it contains functionality for different kinds of measurements based on network predictions, including:
- Analyzing the tonotopic mapping of SGNs and IHCs in the cochlea.
- Validating gene therapies and optogenetic therapies (based on additional fluorescent stainings).
- Analyzing SGN subtypes (based on additional fluorescent staining).
- Visualizing segmentation results and derived analyses in [MoBIE](https://mobie.github.io/).

The networks and analysis methods were primarily developed for high-resolution isotropic data from a [custom light-sheet microscope](https://www.biorxiv.org/content/10.1101/2025.02.21.639411v2.abstract).
The networks will work best on the respective fluorescent stains they were trained on, but also work on similar stains. For example, we have successfully applied the network for SGN segmentation to a calretinin (CR) stain and the network for IHC segmentation to a myosin7a stain.
In addition, CochleaNet provides networks for the segmentation of SGNs and IHCs in anisotropic data from a [commercial light-sheet microscope](https://www.miltenyibiotec.com/DE-en/products/macs-imaging-and-spatial-biology/ultramicroscope-platform.html).

For more information on CochleaNet, check out our [preprint](TODO).

## Installation

CochleaNet can be installed via `conda` (or [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html)).
To install it:
- Clone the CochleaNet GitHub repository:
```
git clone https://github.com/computational-cell-analytics/cochlea-net
```
- Go to the directory:
```
cd cochlea-net
```
- Create an environment with the required dependencies:
```
conda env create -f environment.yaml
```
- Activate the environment:
```
conda activate cochlea-net
```
- Install the cochlea-net package:
```
pip install .
```
- (Optional) If you want to use the napari plugin, you also have to install napari:
```
conda install -c conda-forge napari pyqt
```
## Usage

TODO

### Napari Plugin

TODO

### Command Line Interface

TODO

### Available Models

TODO

### Python Library

TODO
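The usage sections above are still placeholders in this commit. As a minimal post-installation check (a sketch, assuming the `cochlea-net` environment is active and the package was installed as described; the two functions are the ones exported in `flamingo_tools/__init__.py` below):

```python
# Verify that the package and its exported entry points import cleanly.
from flamingo_tools import convert_lightsheet_to_bdv, create_test_data

print("CochleaNet installed:", convert_lightsheet_to_bdv.__name__, create_test_data.__name__)
```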

environment.yaml

Lines changed: 3 additions & 2 deletions
@@ -1,17 +1,18 @@
-name: flamingo
+name: cochlea-net
 
 channels:
   - conda-forge
 
 dependencies:
   - cluster_tools
   - scikit-image
+  - pip
   - pooch
   - pybdv
   - pytorch
   - s3fs
   - torch_em
   - trimesh
   - z5py
-  # Don't install zarr v3, as we are not sure that it is compatible with MoBIE etc. yet
+  # Don't install zarr v3, which is not yet compatible with all dependencies.
   - zarr <3

flamingo_tools/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -1,2 +1,6 @@
+"""
+.. include:: ../doc/documentation.md
+"""
+
 from .data_conversion import convert_lightsheet_to_bdv, convert_lightsheet_to_bdv_cli
 from .test_data import create_test_data

flamingo_tools/file_utils.py

Lines changed: 14 additions & 0 deletions
@@ -1,8 +1,10 @@
+import os
 import warnings
 from typing import Optional, Union
 
 import imageio.v3 as imageio
 import numpy as np
+import pooch
 import tifffile
 import zarr
 from elf.io import open_file
@@ -15,6 +17,18 @@
 from zarr._storage.store import BaseStore as Store
 
 
+def get_cache_dir() -> str:
+    """Get the cache directory of CochleaNet.
+
+    The default cache directory is "$HOME/cochlea-net".
+
+    Returns:
+        The cache directory.
+    """
+    cache_dir = os.path.expanduser(pooch.os_cache("cochlea-net"))
+    return cache_dir
+
+
 def _parse_shape(metadata_file):
     depth, height, width = None, None, None
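A quick illustration of the new helper (a sketch; `pooch.os_cache` resolves a platform-specific cache location, so the actual path may differ from the `$HOME/cochlea-net` default named in the docstring):

```python
from flamingo_tools.file_utils import get_cache_dir

# Pre-trained models fetched by model_utils are stored below this directory.
print(get_cache_dir())
```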

flamingo_tools/measurements.py

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@ def _default_object_features(
     # The radius passed is given in micrometer.
     # The resolution is given in micrometer per pixel.
     # So we have to divide by the resolution to obtain the radius in pixel.
-    radius_in_pixel = background_radius / resolution
+    radius_in_pixel = background_radius / resolution if isinstance(resolution, (float, int)) else resolution[1]
     measures = _normalize_background(measures, image, background_mask, center, radius_in_pixel, norm, median_only)
 
     # Do the volume and surface measurement.
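To make the new branching explicit (a standalone sketch of the committed logic, not the library API): a scalar resolution converts the micrometer radius to pixels, while a per-axis resolution sequence is reduced to its in-plane entry `resolution[1]`, which is used directly as the pixel radius:

```python
# Standalone sketch mirroring the committed conversion logic.
def radius_in_pixel(background_radius, resolution):
    if isinstance(resolution, (float, int)):
        # Scalar micrometer-per-pixel resolution: convert the radius to pixels.
        return background_radius / resolution
    # Per-axis resolution, e.g. (z, y, x): use the in-plane entry directly.
    return resolution[1]

print(radius_in_pixel(15.0, 0.38))               # 15 / 0.38 ≈ 39.5 pixels
print(radius_in_pixel(15.0, (1.0, 0.38, 0.38)))  # 0.38, per the committed behavior
```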

flamingo_tools/model_utils.py

Lines changed: 160 additions & 0 deletions
@@ -0,0 +1,160 @@

import os
from typing import Dict, Optional, Union

import pooch
import torch

from .file_utils import get_cache_dir


def _get_default_device():
    # Check if we're running in CI and use the CPU if we are.
    # Otherwise the tests may run out of memory on MAC if MPS is used.
    if os.getenv("GITHUB_ACTIONS") == "true":
        return "cpu"
    # Use a cuda enabled gpu if it's available.
    if torch.cuda.is_available():
        device = "cuda"
    # As second priority use mps.
    # See https://pytorch.org/docs/stable/notes/mps.html for details.
    elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
        device = "mps"
    # Use the CPU as fallback.
    else:
        device = "cpu"
    return device


def get_device(device: Optional[Union[str, torch.device]] = None) -> Union[str, torch.device]:
    """Get the torch device.

    If no device is passed, the default device for your system is used.
    Otherwise, it is checked whether the device you have passed is supported.

    Args:
        device: The input device.

    Returns:
        The device.
    """
    if device is None or device == "auto":
        device = _get_default_device()
    else:
        device_type = device if isinstance(device, str) else device.type
        if device_type.lower() == "cuda":
            if not torch.cuda.is_available():
                raise RuntimeError("PyTorch CUDA backend is not available.")
        elif device_type.lower() == "mps":
            if not (torch.backends.mps.is_available() and torch.backends.mps.is_built()):
                raise RuntimeError("PyTorch MPS backend is not available or is not built correctly.")
        elif device_type.lower() == "cpu":
            pass  # cpu is always available
        else:
            raise RuntimeError(f"Unsupported device: {device}. Please choose from 'cpu', 'cuda', or 'mps'.")
    return device


# FIXME: SGN-lowres seems to be the wrong model and doesn't work well on the sample data.
def get_model_registry() -> pooch.Pooch:
    """Get the model registry for downloading pre-trained CochleaNet models.

    Returns:
        The registry with the model checksums and download URLs.
    """
    registry = {
        "SGN": "3058690b49015d6210a8e8414eb341c34189fee660b8fac438f1fdc41bdfff98",
        "IHC": "89afbcca08ed302aa6dfbaba5bf2530fc13339c05a604b6f2551d97cf5f12774",
        "Synapses": "2a42712b056f082b4794f15cf41b15678aab0bec1acc922ff9f0dc76abe6747e",
        "SGN-lowres": "6accba4b4c65158fccf25623dcd0fb3b14203305d033a0d443a307114ec5dd8c",
        "IHC-lowres": "537f1d4afc5a582771b87adeccadfa5635e1defd13636702363992188ef5bdbd",
    }
    urls = {
        "SGN": "https://owncloud.gwdg.de/index.php/s/NZ2vv7hxX1imITG/download",
        "IHC": "https://owncloud.gwdg.de/index.php/s/GBBJkPQFraz1ZzU/download",
        "Synapses": "https://owncloud.gwdg.de/index.php/s/A9W5NmOeBxiyZgY/download",
        "SGN-lowres": "https://owncloud.gwdg.de/index.php/s/8hwZjBVzkuYhHLm/download",
        "IHC-lowres": "https://owncloud.gwdg.de/index.php/s/EhnV4brhpvFbSsy/download",
    }
    cache_dir = get_cache_dir()
    models = pooch.create(
        path=os.path.join(cache_dir, "models"),
        base_url="",
        registry=registry,
        urls=urls,
    )
    return models


def get_model_path(model_type: str) -> str:
    """Get the local path to a pretrained model.

    Args:
        model_type: The model type.

    Returns:
        The local path to the model.
    """
    model_registry = get_model_registry()
    model_path = model_registry.fetch(model_type)
    return model_path


def get_model(model_type: str, device: Optional[Union[str, torch.device]] = None) -> torch.nn.Module:
    """Get the model for a specific segmentation type.

    Args:
        model_type: The model for one of the following segmentation or detection tasks:
            'SGN', 'IHC', 'Synapses', 'SGN-lowres', 'IHC-lowres'.
        device: The device to use.

    Returns:
        The model.
    """
    if device is None:
        device = get_device(device)
    model_path = get_model_path(model_type)
    model = torch.load(model_path, weights_only=False)
    model.to(device)
    return model


def get_default_tiling() -> Dict[str, Dict[str, int]]:
    """Determine the tile shape and halo depending on the available VRAM.

    Returns:
        The default tiling settings for the available computational resources.
    """
    if torch.cuda.is_available():
        # The default halo size.
        halo = {"x": 64, "y": 64, "z": 16}

        # Determine the GPU RAM and derive a suitable tiling.
        vram = torch.cuda.get_device_properties(0).total_memory / 1e9

        if vram >= 80:
            tile = {"x": 640, "y": 640, "z": 80}
        elif vram >= 40:
            tile = {"x": 512, "y": 512, "z": 64}
        elif vram >= 20:
            tile = {"x": 352, "y": 352, "z": 48}
        elif vram >= 10:
            tile = {"x": 256, "y": 256, "z": 32}
            halo = {"x": 64, "y": 64, "z": 8}  # Choose a smaller halo in z.
        else:
            raise NotImplementedError(f"Inference with a GPU with {vram} GB VRAM is not supported.")

        tiling = {"tile": tile, "halo": halo}
        print(f"Determined tile size for CUDA: {tiling}")

    elif torch.backends.mps.is_available():  # Check for Apple Silicon (MPS).
        tile = {"x": 256, "y": 256, "z": 16}
        halo = {"x": 16, "y": 16, "z": 4}
        tiling = {"tile": tile, "halo": halo}
        print(f"Determined tile size for MPS: {tiling}")

    # We are not sure what is reasonable on a cpu. For now choosing a very small tiling.
    # (This will not work well on a CPU in any case.)
    else:
        tiling = {
            "tile": {"x": 96, "y": 96, "z": 16},
            "halo": {"x": 16, "y": 16, "z": 8},
        }
        print(f"Determined default tiling for CPU: {tiling}")

    return tiling
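Taken together, a minimal usage sketch of these helpers (assuming the model download succeeds; the model names are the registry keys listed above):

```python
from flamingo_tools.model_utils import get_default_tiling, get_device, get_model

device = get_device()             # "cuda", "mps", or "cpu", depending on the system
model = get_model("SGN", device)  # Downloads the checkpoint on first use, then loads it.
tiling = get_default_tiling()     # {"tile": {...}, "halo": {...}} for block-wise inference
print(type(model), tiling)
```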
