Commit 76b6699

Merge branch 'master' into sgn-density-and-volume
2 parents d1de337 + 626474b commit 76b6699

File tree: 76 files changed, +5487 −393 lines


.github/workflows/build_docs.yaml

Lines changed: 74 additions & 0 deletions
New file:

```yaml
name: Build and Deploy Docs

on:
  push:
    paths:
      - "doc/*.md"  # Trigger on changes to any markdown file
      - "flamingo_tools/**/*.py"  # Optionally include changes to Python files
    branches:
      - master  # Run the workflow only on pushes to the master branch
  workflow_dispatch:

# Security: restrict permissions for CI jobs.
permissions:
  contents: read
  pages: write     # to publish to Pages
  id-token: write  # to authenticate to Pages

concurrency:
  group: "pages"
  cancel-in-progress: true

jobs:
  build:
    name: Build Documentation
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Set up Micromamba
        uses: mamba-org/setup-micromamba@v2
        with:
          micromamba-version: "latest"
          environment-file: environment.yaml
          init-shell: bash
          cache-environment: true
          post-cleanup: 'all'

      - name: Install napari
        shell: bash -l {0}
        run: pip install napari pyqt5

      - name: Install package
        shell: bash -l {0}
        run: pip install -e .

      - name: Install pdoc
        shell: bash -l {0}
        run: pip install pdoc

      - name: Generate Documentation
        shell: bash -l {0}
        run: pdoc flamingo_tools/ -d google -o _site

      - name: Verify Documentation Output
        run: ls -la _site/

      - name: Upload Documentation Artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: _site/

  deploy:
    name: Deploy Documentation
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```
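The `paths` filters above mean a push only triggers the docs build when it touches a markdown file directly under `doc/` or a Python file anywhere under `flamingo_tools/`. A rough approximation of that filter in plain Python (GitHub's glob matching has its own rules; this only sketches the intent, and the function name is made up for illustration):

```python
def triggers_docs_build(changed_path: str) -> bool:
    """Approximate the workflow's on.push.paths filters (a sketch, not GitHub's exact glob semantics)."""
    if changed_path.startswith("doc/") and changed_path.endswith(".md"):
        # "doc/*.md" matches markdown files directly inside doc/, not in subfolders.
        return "/" not in changed_path[len("doc/"):]
    # "flamingo_tools/**/*.py" matches Python files at any depth below the package.
    return changed_path.startswith("flamingo_tools/") and changed_path.endswith(".py")


print(triggers_docs_build("doc/documentation.md"))          # True
print(triggers_docs_build("flamingo_tools/file_utils.py"))  # True
print(triggers_docs_build("README.md"))                     # False
```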

README.md

Lines changed: 10 additions & 2 deletions
```diff
@@ -1,7 +1,14 @@
-# Flamingo Tools
+# CochleaNet
 
-Data processing for light-sheet microscopy, specifically for data from [Flamingo microscopes](https://huiskenlab.com/flamingo/).
+CochleaNet is software for the analysis of cochleae imaged in light-sheet microscopy. It is based on deep neural networks for the segmentation of spiral ganglion neurons and inner hair cells, and for the detection of ribbon synapses.
+It was developed for imaging data from (clear-tissue) [flamingo microscopes](https://huiskenlab.com/flamingo/) and is also applicable to data from commercial microscopes.
 
+In addition to the analysis functionality, CochleaNet implements data pre-processing to convert data from flamingo microscopes into a format compatible with [BigStitcher](https://imagej.net/plugins/bigstitcher/) and to export image data and segmentation results to [ome.zarr](https://www.nature.com/articles/s41592-021-01326-w) and [MoBIE](https://mobie.github.io/).
+This functionality is applicable to any imaging data from flamingo microscopes, not only clear-tissue data or cochleae. We also aim to extend the segmentation and analysis functionality to other kinds of samples imaged in the flamingo in the future.
+
+For installation and usage instructions, check out [the documentation](https://computational-cell-analytics.github.io/cochlea-net/). For more details on the underlying methodology, check out [our preprint](TODO).
+
+<!---
 The `flamingo_tools` library implements functionality for:
 - converting the lightsheet data into a format compatible with [BigDataViewer](https://imagej.net/plugins/bdv/) and [BigStitcher](https://imagej.net/plugins/bigstitcher/).
 - Cell / nucleus segmentation via a 3D U-net.
@@ -48,3 +55,4 @@ You can also check out the following example scripts:
 - `load_data.py`: Example script for how to load sub-regions from the converted data into python.
 
 For advanced examples to segment data with a U-Net, check out the `scripts` folder.
+--->
```

doc/documentation.md

Lines changed: 83 additions & 0 deletions
New file:

````markdown
# CochleaNet

CochleaNet is a software tool for the analysis of cochleae imaged in light-sheet microscopy.
Its main components are:
- A deep neural network for segmenting spiral ganglion neurons (SGNs) from parvalbumin (PV) staining.
- A deep neural network for segmenting inner hair cells (IHCs) from VGlut3 staining.
- A deep neural network for detecting ribbon synapses from CtBP2 staining.

In addition, it contains functionality for data pre-processing and for different kinds of measurements based on the network predictions, including:
- Analyzing the tonotopic mapping of SGNs and IHCs in the cochlea.
- Validating gene therapies and optogenetic therapies (based on additional fluorescent stainings).
- Analyzing SGN subtypes (based on additional fluorescent stainings).
- Visualizing segmentation results and derived analyses in [MoBIE](https://mobie.github.io/).

The networks and analysis methods were primarily developed for high-resolution isotropic data from a [custom light-sheet microscope](https://www.biorxiv.org/content/10.1101/2025.02.21.639411v2.abstract).
The networks work best on the respective fluorescent stains they were trained on, but also work on similar stains. For example, we have successfully applied the SGN segmentation network to a calretinin (CR) stain and the IHC segmentation network to a myosin7a stain.
In addition, CochleaNet provides networks for the segmentation of SGNs and IHCs in anisotropic data from a [commercial light-sheet microscope](https://www.miltenyibiotec.com/DE-en/products/macs-imaging-and-spatial-biology/ultramicroscope-platform.html).

For more information on CochleaNet, check out our [preprint](TODO).

## Installation

CochleaNet can be installed via `conda` (or [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html)).
To install it:
- Download the CochleaNet github repository:
```
git clone https://github.com/computational-cell-analytics/cochlea-net
```
- Go to the directory:
```
cd cochlea-net
```
- Create an environment with the required dependencies:
```
conda env create -f environment.yaml
```
- Activate the environment:
```
conda activate cochlea-net
```
- Install the cochlea-net package:
```
pip install .
```
- (Optional) If you want to use the napari plugin, you also have to install napari:
```
conda install -c conda-forge napari pyqt
```

## Usage

CochleaNet can be used via:
- The [napari plugin](#napari-plugin): enables prediction with the pre-trained CochleaNet deep neural networks.
- The [command line interface](#command-line-interface): enables data conversion, model prediction, and selected analysis workflows for large image data.
- The [python library](#python-library): implements CochleaNet's functionality and can be used to build flexible prediction and data analysis workflows for large image data.

**Note: the napari plugin is not optimized for processing large data. To process large image data, use the CLI or the python library.**

### Napari Plugin

The napari plugin for segmentation (SGNs and IHCs) and detection (ribbon synapses) is available under `Plugins->CochleaNet->Segmentation/Detection` in napari:

The segmentation plugin offers the choice of different models under `Select Model:` (see [Available Models](#available-models) for details). `Image data` selects which image data (layer) the model is applied to. The segmentation is started by clicking the `Run Segmentation` button. After the segmentation has finished, a new segmentation layer with the result (here `IHC`) will be added:

The detection model works similarly. It currently provides the model for synapse detection. The predictions are added as a point layer (``):

TODO Video.
For more information on how to use napari, check out the tutorials at [www.napari.org](TODO).

**To use the napari plugin you have to install `napari` and `pyqt` in your environment.** See [installation](#installation) for details.

### Command Line Interface

TODO

### Python Library

TODO


## Available Models

TODO
````

environment.yaml

Lines changed: 5 additions & 3 deletions
```diff
@@ -1,17 +1,19 @@
-name: flamingo
+name: cochlea-net
 
 channels:
   - conda-forge
 
 dependencies:
   - cluster_tools
-  - scikit-image
+  - mobie_utils
+  - pip
   - pooch
   - pybdv
   - pytorch
+  - scikit-image
   - s3fs
   - torch_em
   - trimesh
   - z5py
-  # Don't install zarr v3, as we are not sure that it is compatible with MoBIE etc. yet
+  # Don't install zarr v3, which is not yet compatible with all dependencies.
   - zarr <3
```

flamingo_tools/__init__.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -1,2 +1,6 @@
+"""
+.. include:: ../doc/documentation.md
+"""
+
 from .data_conversion import convert_lightsheet_to_bdv, convert_lightsheet_to_bdv_cli
 from .test_data import create_test_data
```

flamingo_tools/classification/classification_gui.py

Lines changed: 6 additions & 3 deletions
```diff
@@ -11,9 +11,12 @@
 from joblib import dump
 from magicgui import magic_factory
 
-import micro_sam.sam_annotator.object_classifier as classifier_util
-from micro_sam.object_classification import project_prediction_to_segmentation
-from micro_sam.sam_annotator._widgets import _generate_message
+try:
+    import micro_sam.sam_annotator.object_classifier as classifier_util
+    from micro_sam.object_classification import project_prediction_to_segmentation
+    from micro_sam.sam_annotator._widgets import _generate_message
+except ImportError:
+    micro_sam = None
 
 from ..measurements import compute_object_measures_impl
```

flamingo_tools/file_utils.py

Lines changed: 25 additions & 1 deletion
```diff
@@ -1,18 +1,34 @@
+import os
 import warnings
 from typing import Optional, Union
 
 import imageio.v3 as imageio
 import numpy as np
+import pooch
 import tifffile
 import zarr
 from elf.io import open_file
 
+from .s3_utils import get_s3_path
+
 try:
     from zarr.abc.store import Store
 except ImportError:
     from zarr._storage.store import BaseStore as Store
 
 
+def get_cache_dir() -> str:
+    """Get the cache directory of CochleaNet.
+
+    The default cache directory is "$HOME/cochlea-net".
+
+    Returns:
+        The cache directory.
+    """
+    cache_dir = os.path.expanduser(pooch.os_cache("cochlea-net"))
+    return cache_dir
+
+
 def _parse_shape(metadata_file):
     depth, height, width = None, None, None
 
@@ -67,7 +83,9 @@ def read_tif(file_path: str) -> Union[np.ndarray, np.memmap]:
     return x
 
 
-def read_image_data(input_path: Union[str, Store], input_key: Optional[str]) -> np.typing.ArrayLike:
+def read_image_data(
+    input_path: Union[str, Store], input_key: Optional[str], from_s3: bool = False
+) -> np.typing.ArrayLike:
     """Read flamingo image data, stored in various formats.
 
     Args:
@@ -76,10 +94,16 @@ def read_image_data(input_path: Union[str, Store], input_key: Optional[str]) ->
             Access via S3 is only supported for a zarr container.
         input_key: The key (= internal path) for a zarr or n5 container.
             Set it to None if the data is stored in a tif file.
+        from_s3: Whether to read the data from S3.
 
     Returns:
         The data, loaded either as a numpy mem-map, a numpy array, or a zarr / n5 array.
     """
+    if from_s3:
+        assert input_key is not None
+        s3_store, fs = get_s3_path(input_path)
+        return zarr.open(s3_store, mode="r")[input_key]
+
     if input_key is None:
         input_ = read_tif(input_path)
     elif isinstance(input_path, str):
```
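The new `from_s3` flag changes how `read_image_data` dispatches between readers: S3 access requires a zarr key, a missing key implies a plain tif file, and otherwise a local zarr/n5 container is opened via its key. A minimal standalone sketch of that dispatch order (the function name and return labels are invented here for illustration; the real function returns the loaded array):

```python
def resolve_reader(input_path: str, input_key=None, from_s3: bool = False) -> str:
    """Sketch of read_image_data's dispatch order; labels are illustrative only."""
    if from_s3:
        # S3 access is only supported for zarr containers, so a key is required
        # (the real code enforces this with an assert).
        if input_key is None:
            raise ValueError("Reading from S3 requires a zarr key (input_key).")
        return "zarr-over-s3"
    if input_key is None:
        # No internal key: the data is stored as a tif file.
        return "tif"
    # A key plus a local path: open the zarr / n5 container at that key.
    return "local-zarr-or-n5"
```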

flamingo_tools/measurements.py

Lines changed: 13 additions & 10 deletions
```diff
@@ -3,7 +3,7 @@
 import warnings
 from concurrent import futures
 from functools import partial
-from typing import List, Optional, Tuple
+from typing import List, Optional, Tuple, Union
 
 import numpy as np
 import pandas as pd
@@ -25,7 +25,7 @@
 
 def _measure_volume_and_surface(mask, resolution):
     # Use marching_cubes for 3D data
-    verts, faces, normals, _ = marching_cubes(mask, spacing=resolution)
+    verts, faces, normals, _ = marching_cubes(mask, spacing=(resolution,) * 3)
 
     mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
     surface = mesh.area
@@ -60,6 +60,9 @@ def _get_bounding_box_and_center(table, seg_id, resolution, shape, dilation):
         for bmin, bmax, sh in zip(bb_min, bb_max, shape)
     )
 
+    if isinstance(resolution, float):
+        resolution = (resolution,) * 3
+
     center = (
         int(row.anchor_z.item() / resolution[0]),
         int(row.anchor_y.item() / resolution[1]),
@@ -155,7 +158,7 @@ def _default_object_features(
     # The radius passed is given in micrometer.
     # The resolution is given in micrometer per pixel.
     # So we have to divide by the resolution to obtain the radius in pixel.
-    radius_in_pixel = background_radius / resolution
+    radius_in_pixel = background_radius / (resolution if isinstance(resolution, (float, int)) else resolution[1])
     measures = _normalize_background(measures, image, background_mask, center, radius_in_pixel, norm, median_only)
 
     # Do the volume and surface measurement.
@@ -207,7 +210,7 @@ def _regionprops_features(seg_id, table, image, segmentation, resolution, backgr
     return features
 
 
-def get_object_measures_from_table(arr_seg, table, keyword="median"):
+def get_object_measures_from_table(arr_seg, table):
     """Return object measurements for label IDs within array.
     """
     # iterate through segmentation ids in reference mask
@@ -217,11 +220,11 @@ def get_object_measures_from_table(arr_seg, table):
     if len(object_ids) < len(ref_ids):
         warnings.warn(f"Not all IDs were found in measurement table. Using {len(object_ids)}/{len(ref_ids)}.")
 
-    measure_values = [table.at[table.index[table["label_id"] == label_id][0], keyword] for label_id in object_ids]
+    median_values = [table.at[table.index[table["label_id"] == label_id][0], "median"] for label_id in object_ids]
 
     measures = pd.DataFrame({
         "label_id": object_ids,
-        keyword: measure_values,
+        "median": median_values,
     })
     return measures
 
@@ -252,7 +255,7 @@ def compute_object_measures_impl(
     image: np.typing.ArrayLike,
     segmentation: np.typing.ArrayLike,
     n_threads: Optional[int] = None,
-    resolution: Optional[Tuple[float, float, float]] = (0.38, 0.38, 0.38),
+    resolution: float = 0.38,
     table: Optional[pd.DataFrame] = None,
     feature_set: str = "default",
     background_mask: Optional[np.typing.ArrayLike] = None,
@@ -323,7 +326,7 @@ def compute_object_measures(
     image_key: Optional[str] = None,
     segmentation_key: Optional[str] = None,
     n_threads: Optional[int] = None,
-    resolution: Optional[Tuple[float, float, float]] = (0.38, 0.38, 0.38),
+    resolution: Union[float, Tuple[float, ...]] = 0.38,
     force: bool = False,
     feature_set: str = "default",
     s3_flag: bool = False,
@@ -375,8 +378,8 @@ def compute_object_measures(
     table = table[table["component_labels"].isin(component_list)]
 
     # Then, open the volumes.
-    image = read_image_data(image_path, image_key)
-    segmentation = read_image_data(segmentation_path, segmentation_key)
+    image = read_image_data(image_path, image_key, from_s3=s3_flag)
+    segmentation = read_image_data(segmentation_path, segmentation_key, from_s3=s3_flag)
 
     measures = compute_object_measures_impl(
         image, segmentation, n_threads, resolution, table=table, feature_set=feature_set,
```
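Several of the changes in this file broadcast a scalar `resolution` to a per-axis 3-tuple, e.g. `spacing=(resolution,) * 3` for `marching_cubes` and the `isinstance(resolution, float)` check in `_get_bounding_box_and_center`. A small helper sketching that pattern in isolation (the function name is hypothetical, not part of the package):

```python
from typing import Tuple, Union


def as_voxel_spacing(resolution: Union[float, Tuple[float, ...]]) -> Tuple[float, ...]:
    """Broadcast a scalar voxel size to an isotropic (z, y, x) spacing tuple.

    Tuples are validated and passed through, mirroring how the updated
    measurement functions accept either a float or a per-axis tuple.
    """
    if isinstance(resolution, (int, float)):
        return (float(resolution),) * 3
    if len(resolution) != 3:
        raise ValueError("resolution must be a scalar or a 3-tuple (z, y, x)")
    return tuple(float(r) for r in resolution)


print(as_voxel_spacing(0.38))               # (0.38, 0.38, 0.38)
print(as_voxel_spacing((0.38, 0.38, 1.0)))  # (0.38, 0.38, 1.0)
```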
