
Commit e6d62ae

Minor iteration take 2
1 parent 77fa7fc

1 file changed: +2 -2 lines changed

paper/paper.md

Lines changed: 2 additions & 2 deletions
@@ -51,9 +51,9 @@ The initial performance testing for VST with various subjects has been reported
 ![Schematic diagram for VST](Figure 1.png)
 
 # Statement of need
-Volume Electron Microscopy (VEM) enables the capture of 3D structure beyond planar samples, which is crucial for understanding biological mechanisms. With automation, improved resolution, and increased data storage capacity, VEM has led to an explosion of large three-dimensional datasets. Large datasets offer the opportunity to generate statistical data, but analysing them often requires assigning each voxel (3D pixel) to its corresponding structure, a process known as image segmentation. Manually segmenting hundreds or thousands of image slices is tedious and time-consuming. Computer-aided, especially Machine Learning (ML) based, segmentation is now a routinely used method, with Trainable Weka Segmentation [@arganda2017trainable] and Ilastik [@berg2019ilastik] being two leading options. Emerging methods for EM image segmentation are often based on Deep Learning (DL) [@mekuvc2020automatic] because this approach has the potential to outperform traditional ML in terms of accuracy and adaptivity [@minaee2021image, @erickson2019deep].
+Volume Electron Microscopy (VEM) enables the capture of 3D structure beyond planar samples, which is crucial for understanding biological mechanisms. With automation, improved resolution, and increased data storage capacity, VEM has led to an explosion of large three-dimensional datasets. Large datasets offer the opportunity to generate statistical data, but analysing them often requires assigning each voxel (3D pixel) to its corresponding structure, a process known as image segmentation. Manually segmenting hundreds or thousands of image slices is tedious and time-consuming. Computer-aided, especially Machine Learning (ML) based, segmentation is now a routinely used method, with Trainable Weka Segmentation [@arganda2017trainable] and Ilastik [@berg2019ilastik] being two leading options. Emerging methods for EM image segmentation are often based on Deep Learning (DL) [@mekuvc2020automatic] because this approach has the potential to outperform traditional ML in terms of accuracy and adaptivity [@minaee2021image; @erickson2019deep].
 
-Many earlier DL tools are highly specific to single sample types, such as connectomics [@li2017compactness, @kamnitsas2017efficient], MRI [@milletari2016v] or X-ray tomography [@li2022auto]; they use a subject-optimised design at the cost of adaptability to non-target datasets. Dedicated DL segmentation tools for generalised VEM data are gradually becoming available, but each has shortcomings. One example is CDeep3M [@haberl2018cdeep3m], which uses cloud computing. Although easy to use, it was designed for anisotropic data (where the z-resolution is much lower than the xy-resolution), which creates limitations when applied to isotropic data [@gallusser2022deep]. Another example is DeepImageJ [@gomez2021deepimagej], which runs on local hardware and integrates easily with the ImageJ suite [@schneider2012nih]. However, it only supports pre-trained models and does not have the functionality to train new ones. ZeroCostDL4Mic [@von2021democratising] utilises premade notebooks running on Google Colab, but it requires user interaction during the entire segmentation process, which can take hours and is thus inconvenient. A more recent and advanced example is nnU-Net [@isensee2021nnu], which auto-configures itself based on dataset properties and has good support for volumetric datasets, but it focuses exclusively on semantic segmentation and lacks a user-friendly interface.
+Many earlier DL tools are highly specific to single sample types, such as connectomics [@li2017compactness; @kamnitsas2017efficient], MRI [@milletari2016v] or X-ray tomography [@li2022auto]; they use a subject-optimised design at the cost of adaptability to non-target datasets. Dedicated DL segmentation tools for generalised VEM data are gradually becoming available, but each has shortcomings. One example is CDeep3M [@haberl2018cdeep3m], which uses cloud computing. Although easy to use, it was designed for anisotropic data (where the z-resolution is much lower than the xy-resolution), which creates limitations when applied to isotropic data [@gallusser2022deep]. Another example is DeepImageJ [@gomez2021deepimagej], which runs on local hardware and integrates easily with the ImageJ suite [@schneider2012nih]. However, it only supports pre-trained models and does not have the functionality to train new ones. ZeroCostDL4Mic [@von2021democratising] utilises premade notebooks running on Google Colab, but it requires user interaction during the entire segmentation process, which can take hours and is thus inconvenient. A more recent and advanced example is nnU-Net [@isensee2021nnu], which auto-configures itself based on dataset properties and has good support for volumetric datasets, but it focuses exclusively on semantic segmentation and lacks a user-friendly interface.
 
 In short, there is a lack of tools that can handle a wide range of VEM data well for generating both semantic and instance segmentation, while at the same time being easy to use, scalable, and able to run locally. This is what motivated us to develop VST, an easy-to-use and adaptive DL tool specifically optimised for generalised VEM image segmentation.
 
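A minimal sketch of the citation syntax this commit corrects, assuming paper.md is rendered with Pandoc (the standard toolchain for JOSS papers): Pandoc separates multiple citation keys inside one bracket with semicolons, while text following a comma is parsed as a suffix of the preceding citation rather than as a second key.

```markdown
<!-- Correct: semicolons separate citation keys within one bracket -->
... in terms of accuracy and adaptivity [@minaee2021image; @erickson2019deep].

<!-- Incorrect: after a comma, "@erickson2019deep" is treated as suffix text
     attached to the first citation, not as a distinct citation key -->
... in terms of accuracy and adaptivity [@minaee2021image, @erickson2019deep].
```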
