# CochleaNet

CochleaNet is a software tool for the analysis of cochleae imaged in light-sheet microscopy. It is based on deep neural networks for the segmentation of spiral ganglion neurons and inner hair cells, and for the detection of ribbon synapses.

It was developed for imaging data from (clear-tissue) [flamingo microscopes](https://huiskenlab.com/flamingo/) and is also applicable to data from commercial microscopes.

In addition to the analysis functionality, CochleaNet implements data pre-processing to convert data from flamingo microscopes into a format compatible with [BigStitcher](https://imagej.net/plugins/bigstitcher/) and to export image data and segmentation results to [ome.zarr](https://www.nature.com/articles/s41592-021-01326-w) and [MoBIE](https://mobie.github.io/).
This functionality is applicable to any imaging data from flamingo microscopes, not only clear-tissue data or cochleae. In the future, we aim to extend the segmentation and analysis functionality to other kinds of samples imaged in the flamingo.

For installation and usage instructions, check out [the documentation](https://computational-cell-analytics.github.io/cochlea-net/). For more details on the underlying methodology, check out [our preprint](TODO).
<!---
The `flamingo_tools` library implements functionality for:
- Converting the light-sheet data into a format compatible with [BigDataViewer](https://imagej.net/plugins/bdv/) and [BigStitcher](https://imagej.net/plugins/bigstitcher/).
- Cell / nucleus segmentation via a 3D U-Net.

You can also check out the following example scripts:
- `load_data.py`: Example script for how to load sub-regions from the converted data into python.

For advanced examples to segment data with a U-Net, check out the `scripts` folder.
CochleaNet is a software tool for the analysis of cochleae imaged in light-sheet microscopy.
Its main components are:
- A deep neural network for segmenting spiral ganglion neurons (SGNs) from parvalbumin (PV) staining.
- A deep neural network for segmenting inner hair cells (IHCs) from VGlut3 staining.
- A deep neural network for detecting ribbon synapses from CtBP2 staining.

In addition, it contains functionality for data pre-processing and different kinds of measurements based on the network predictions, including:
- Analyzing the tonotopic mapping of SGNs and IHCs in the cochlea.
- Validating gene therapies and optogenetic therapies (based on additional fluorescent stainings).
- Analyzing SGN subtypes (based on additional fluorescent stainings).
- Visualizing segmentation results and derived analyses in [MoBIE](https://mobie.github.io/).
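Tonotopic analyses of this kind map a position along the cochlear spiral to a characteristic frequency. As a hedged illustration (the widely used Greenwood place-frequency function with its human parameters — not necessarily the exact mapping CochleaNet implements):

```python
# Illustrative only: Greenwood's place-frequency function with human
# parameters (A=165.4, a=2.1, k=0.88). The mapping actually used for
# mouse cochleae may differ.
def greenwood(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency in Hz at relative position x in [0, 1],
    measured from the apex (x=0) to the base (x=1)."""
    return A * (10 ** (a * x) - k)

print(round(greenwood(0.0), 1))  # apex, roughly 20 Hz
print(round(greenwood(1.0)))     # base, roughly 20 kHz
```

Given a segmented SGN or IHC, its fractional run length along the traced spiral can be plugged in as `x` to assign it a frequency band.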
The networks and analysis methods were primarily developed for high-resolution isotropic data from a [custom light-sheet microscope](https://www.biorxiv.org/content/10.1101/2025.02.21.639411v2.abstract).
The networks work best on the respective fluorescent stains they were trained on, but also work on similar stains. For example, we have successfully applied the SGN segmentation network to a calretinin (CR) stain and the IHC segmentation network to a myosin7a stain.
In addition, CochleaNet provides networks for the segmentation of SGNs and IHCs in anisotropic data from a [commercial light-sheet microscope](https://www.miltenyibiotec.com/DE-en/products/macs-imaging-and-spatial-biology/ultramicroscope-platform.html).

For more information on CochleaNet, check out our [preprint](TODO).
## Installation

CochleaNet can be installed via `conda` (or [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html)).
- Create an environment with the required dependencies:
```
conda env create -f environment.yaml
```
- Activate the environment:
```
conda activate cochlea-net
```
- Install the cochlea-net package:
```
pip install .
```
- (Optional) If you want to use the napari plugin, you also have to install napari:
```
conda install -c conda-forge napari pyqt
```
## Usage

CochleaNet can be used via:
- The [napari plugin](#napari-plugin): enables prediction with the pre-trained CochleaNet deep neural networks.
- The [command line interface](#command-line-interface): enables data conversion, model prediction, and selected analysis workflows for large image data.
- The [python library](#python-library): implements CochleaNet's functionality and can be used to build flexible prediction and data analysis workflows for large image data.

**Note: the napari plugin is not optimized for processing large data. To process large image data, use the CLI or python library.**
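Large volumes are typically processed blockwise rather than in one pass. A minimal sketch of that general pattern (pure NumPy, with a placeholder thresholding "model" standing in for the actual networks; the function name, block size, and halo size are illustrative, not CochleaNet's real API):

```python
# Sketch of blockwise prediction over a large volume with a halo.
# `fake_model` stands in for a real network; block and halo sizes are
# illustrative, not CochleaNet defaults.
import numpy as np

def fake_model(x):
    # Placeholder "network": foreground where intensity exceeds 0.5.
    return (x > 0.5).astype("float32")

def predict_blockwise(volume, model, block=(32, 32, 32), halo=(4, 4, 4)):
    out = np.zeros(volume.shape, dtype="float32")
    shape = volume.shape
    for z in range(0, shape[0], block[0]):
        for y in range(0, shape[1], block[1]):
            for x in range(0, shape[2], block[2]):
                start = (z, y, x)
                stop = tuple(min(s + b, sh) for s, b, sh in zip(start, block, shape))
                # Read the block plus a halo, clipped to the volume bounds.
                lo = tuple(max(s - h, 0) for s, h in zip(start, halo))
                hi = tuple(min(e + h, sh) for e, h, sh in zip(stop, halo, shape))
                chunk = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
                pred = model(chunk)
                # Write back only the inner block, discarding the halo.
                inner = tuple(slice(s - l, e - l) for s, e, l in zip(start, stop, lo))
                out[start[0]:stop[0], start[1]:stop[1], start[2]:stop[2]] = pred[inner]
    return out

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64), dtype="float32")
result = predict_blockwise(volume, fake_model)
```

The halo gives the network context beyond each block boundary, which avoids seams in the stitched prediction; only the inner block is kept when writing results back.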
### Napari Plugin

The napari plugins for segmentation (SGNs and IHCs) and detection (ribbon synapses) are available under `Plugins->CochleaNet->Segmentation/Detection` in napari:

The segmentation plugin offers a choice of different models under `Select Model:` (see [Available Models](#available-models) for details). `Image data` selects the image layer the model is applied to. The segmentation is started by clicking the `Run Segmentation` button. After the segmentation has finished, a new segmentation layer with the result (here `IHC`) will be added:

The detection plugin works similarly. It currently provides the model for synapse detection. The predictions are added as a point layer (``):

TODO Video.
For more information on how to use napari, check out the tutorials at [www.napari.org](TODO).

**To use the napari plugin you have to install `napari` and `pyqt` in your environment.** See [installation](#installation) for details.