**README.md** (0 additions, 49 deletions)
In addition to the analysis functionality, CochleaNet implements data pre-processing.
This functionality is applicable to any imaging data from flamingo microscopes, not only cleared-tissue data or cochleae. In the future, we aim to extend the segmentation and analysis functionality to other kinds of samples imaged with the flamingo microscope.
For installation and usage instructions, check out [the documentation](https://computational-cell-analytics.github.io/cochlea-net/). For more details on the underlying methodology, check out [our preprint](https://doi.org/10.1101/2025.11.16.688700).
<!---
The `flamingo_tools` library implements functionality for:
- Converting the light-sheet data into a format compatible with [BigDataViewer](https://imagej.net/plugins/bdv/) and [BigStitcher](https://imagej.net/plugins/bigstitcher/).
- Cell / nucleus segmentation via a 3D U-Net.
- ... and more functionality is planned!
This is work in progress!
## Requirements & Installation
You need a Python environment with the following dependencies: [pybdv](https://github.com/constantinpape/pybdv) and [z5py](https://github.com/constantinpape/z5).
You can install these dependencies with [mamba](https://github.com/mamba-org/mamba) or [conda](https://docs.conda.io/en/latest/) via:
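The exact command is not shown in this diff; a plausible version, assuming both packages are installed from the conda-forge channel (where they are published):

```bash
mamba install -c conda-forge pybdv z5py
```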
(for an existing conda environment). You can also set up a new environment with all required dependencies using the file `environment.yaml`:
```bash
conda env create -f environment.yaml
```
This will create the environment `flamingo`, which you can then activate via `conda activate flamingo`.
Finally, to install `flamingo_tools` into the environment, run:
```bash
pip install -e .
```
## Usage
We provide a command line tool, `convert_flamingo`, for converting data from the flamingo microscope to a data format compatible with BigDataViewer / BigStitcher:
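The exact invocation is not captured in this diff; a plausible form, using the arguments described below (the positional argument order is an assumption):

```bash
convert_flamingo /path/to/data /path/to/output.n5 --file_ext .tif
```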
Here, `/path/to/data` is the filepath to the folder with the flamingo data to be converted, `/path/to/output.n5` is the filepath where the converted data will be stored, and `--file_ext .tif` declares that the files are stored as tif stacks.
Use `--file_ext .raw` instead if the data is stored in raw files.
The data will be converted to the [bdv.n5 format](https://github.com/bigdataviewer/bigdataviewer-core/blob/master/BDV%20N5%20format.md).
It can be opened with BigDataViewer via `Plugins->BigDataViewer->Open XML/HDF5`, or with BigStitcher as described [here](https://imagej.net/plugins/bigstitcher/open-existing).
You can also check out the following example scripts:
- `create_synthetic_data.py`: Create small synthetic test data to check that the scripts work.
- `convert_flamingo_data_examples.py`: Convert flamingo data to a file format compatible with BigDataViewer / BigStitcher, with parameters defined in the Python script. Contains two example functions:
    - `convert_synthetic_data` to convert the synthetic data created via `create_synthetic_data.py`.
    - `convert_flamingo_data_moser` to convert sampled flamingo data from the Moser group.
- `load_data.py`: Example script for how to load sub-regions from the converted data into Python (a minimal sketch of this is shown below).
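A minimal sketch of such loading with [z5py](https://github.com/constantinpape/z5), assuming the dataset layout of the bdv.n5 format spec (`setup{i}/timepoint{t}/s{scale}`); the container path is a placeholder:

```python
import z5py

# Open the converted bdv.n5 container read-only.
f = z5py.File("/path/to/output.n5", mode="r")

# s0 is the full-resolution scale level of the first setup / timepoint.
ds = f["setup0/timepoint0/s0"]
print(ds.shape)

# Load a 64^3 sub-region (axis order z, y, x) into a numpy array.
sub_region = ds[:64, :64, :64]
```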
For advanced examples of segmenting data with a U-Net, check out the `scripts` folder.
**doc/documentation.md** (34 additions, 4 deletions)
CochleaNet can be used via:
### Napari Plugin
The plugins for segmentation (SGNs and IHCs) and detection (ribbon synapses) are available under `Plugins->CochleaNet->Segmentation/Detection` in napari:
<img src="https://raw.githubusercontent.com/computational-cell-analytics/cochlea-net/refs/heads/master/doc/img/cochlea-net-plugin-selection.png" alt="The CochleaNet plugins available in napari.">
The segmentation plugin offers the choice of different models under `Select Model:` (see [Available Models](#available-models) for details). `Image data` selects which image data (napari layer) the model is applied to.
The segmentation is started by clicking the `Run Segmentation` button. After the segmentation has finished, a new segmentation layer with the result (here `IHC`) will be added:
For more information on how to use napari, check out the tutorials at [www.napari.org](https://napari.org/stable/).
**To use the napari plugin, you have to install `napari` and `pyqt` in your environment. See [installation](#installation) for details.**
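For example, with mamba (both packages are available on the conda-forge channel):

```bash
mamba install -c conda-forge napari pyqt
```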
### Command Line Interface
The command line interface provides the following commands:
`flamingo_tools.convert_data`: Convert data from a flamingo microscope into the [bdv.n5 format](https://github.com/bigdataviewer/bigdataviewer-core/blob/master/BDV%20N5%20format.md) (compatible with [BigStitcher](https://imagej.net/plugins/bigstitcher/)) or into [ome.zarr format](https://ngff.openmicroscopy.org/). You can use this command as follows:
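A plausible call, mirroring the `convert_flamingo` example in the README (the exact argument names are assumptions):

```bash
flamingo_tools.convert_data /path/to/data /path/to/output.n5 --file_ext .tif
```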
Use `--file_ext .raw` instead if the data is stored in raw files. By default, the data will be exported to the n5 format. It can be opened with BigDataViewer via `Plugins->BigDataViewer->Open XML/HDF5` or with BigStitcher as described [here](https://imagej.net/plugins/bigstitcher/open-existing).
`flamingo_tools.run_segmentation`: Segment cells in volumetric light microscopy data.
`flamingo_tools.run_detection`: Detect synapses in volumetric light microscopy data.
For more information on any of these commands, run `flamingo_tools.<COMMAND> -h` (e.g. `flamingo_tools.run_segmentation -h`) in your terminal.
### Python Library
CochleaNet's functionality is implemented in the `flamingo_tools` Python library. It provides:
- `measurements`: functionality to measure morphological attributes and intensity statistics for segmented cells.
- `mobie`: functionality to export flamingo image data or segmentation results to a MoBIE project.
- `segmentation`: functionality to apply segmentation and detection models to large volumetric image data.
- `training`: functionality to train segmentation and detection networks.
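The function-level API is not documented here; a minimal, non-authoritative way to explore it from Python (the module names are taken from the list above):

```python
# Import two of the modules listed above and inspect their public functions.
import flamingo_tools.measurements
import flamingo_tools.segmentation

public = [name for name in dir(flamingo_tools.segmentation) if not name.startswith("_")]
print(public)
```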
## Available Models
CochleaNet provides five different models:
- `SGN`: for segmenting spiral ganglion neurons (SGNs) in high-resolution, isotropic light-sheet microscopy data.
    - This model was trained on image data with parvalbumin (PV) stain, with a voxel size of 0.38 micrometer.
- `IHC`: for segmenting inner hair cells (IHCs) in high-resolution, isotropic light-sheet microscopy data.
    - This model was trained on image data with Vglut3 stain, with a voxel size of 0.38 micrometer.
- `Synapses`: for detecting afferent ribbon synapses in high-resolution, isotropic light-sheet microscopy data.
    - This model was trained on image data with CtBP2 stain, with a voxel size of 0.38 micrometer.
- `SGN-lowres`: for segmenting SGNs in lower-resolution, anisotropic light-sheet microscopy data.
    - This model was trained on image data with PV stain, with a voxel size of 0.76 x 0.76 x 3.0 micrometer.
- `IHC-lowres`: for segmenting IHCs in lower-resolution, anisotropic light-sheet microscopy data.
    - This model was trained on image data with Myosin VIIa stain, with a voxel size of 0.76 x 0.76 x 3.0 micrometer.