Commit 3c46aa9: Reduce paper length and align it to new guidelines (parent: a45313b)

File tree: 2 files changed (+6, -6 lines)

joss-paper/paper.md

Lines changed: 6 additions & 6 deletions
@@ -39,39 +39,39 @@ In line-scanning microscopy, sample rotation during acquisition introduces geome
# Statement of Need
- When sample motion during acquisition is rotational, it produces characteristic "fan-like" distortions that corrupt the morphological features of the imaged structures (Figure 2), significantly complicating or preventing cell segmentation and automated region-of-interest tracking.
+ When sample motion during acquisition is rotational, it produces characteristic "fan-like" distortions that corrupt the morphological features of the imaged structures (Figure 2), significantly complicating or preventing cell segmentation.
This problem is particularly acute in systems neuroscience, where researchers increasingly combine head-fixed multiphoton calcium imaging with behavioral paradigms involving passive or active head rotation [@velez-fort_circuit_2018; @hennestad_mapping_2021; @sit_coregistration_2023; @voigts_somatic_2020]. High-speed angular motion can render imaging data unusable, especially for modalities with lower frame rates such as three-photon imaging. Head-mounted miniature microscopes avoid this issue but currently support only smaller fields of view, making stationary-objective setups, and therefore rotation correction, essential for large-area imaging. Despite multiple lab-specific solutions, no validated, open-source, and easy-to-use Python tool has been available to the broader community.
![Schematic of line-scanning microscope distortion. Left: line scanning pattern plus sample rotation lead to fan-like artifacts when imaging a grid. Right: grid imaged while still (top), while rotating at 200°/s with 7Hz frame rate (middle), and after `derotation` (bottom), showing alignment restoration.](figure2.png)
- `derotation` meets this need by providing a documented, tested, and modular solution for post hoc correction of imaging data acquired during rotation. Users need only provide a one-, two-, or three-photon movie together with the microscope's analog timing signals and the recorded rotation angles. The package has been developed openly on GitHub since June 2023, with contributions from six developers. It is distributed on PyPI under a BSD-3-Clause license, with comprehensive documentation, tutorials, and runnable examples via Binder at https://derotation.neuroinformatics.dev. By providing a robust and accessible tool, `derotation` lowers the barrier for complex behavioral experiments and improves the reproducibility of a key analysis step in a growing field of research.
+ `derotation` meets this need by providing a ready-to-use, tested, and modular solution for post hoc correction of imaging data acquired during rotation. Users need only provide a one-, two-, or three-photon movie together with the microscope's analog timing signals and the recorded rotation angles. The package is installable from PyPI, with documentation and runnable examples via Binder at https://derotation.neuroinformatics.dev. By providing a robust and accessible tool, `derotation` lowers the barrier for complex behavioral experiments and improves the reproducibility of a key analysis step in a growing field of research.
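In principle, the per-line angles the correction needs can be obtained by interpolating the recorded rotation angles onto each line's acquisition time, as derived from the microscope's analog timing signals. A minimal sketch of that step (the function and argument names here are illustrative, not the package's API):

```python
import numpy as np

def per_line_angles(line_times, rotation_times, rotation_angles):
    """Interpolate recorded rotation angles (degrees) onto the
    acquisition timestamp of each scanned line (seconds)."""
    # np.interp assumes rotation_times is monotonically increasing.
    return np.interp(line_times, rotation_times, rotation_angles)

# A rotation from 0 deg to 90 deg over one second: a line scanned
# halfway through is assigned the midpoint angle of 45 deg.
angles = per_line_angles(np.array([0.0, 0.5, 1.0]),
                         np.array([0.0, 1.0]),
                         np.array([0.0, 90.0]))
```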
The package has been validated on three-photon recordings of deep cortical neurons expressing the calcium indicator GCaMP7f in head-fixed mice (Figure 3). The corrected images showed restored cellular morphology and were successfully processed by Suite2p [@pachitariu_suite2p_2016]. Compared with frame-by-frame affine correction, line-by-line derotation preserves ROI fluorescence signals during rotation periods, eliminating the artificial dips visible in Figure 3.
![Figure 3. Validation on three-photon data. Left: mean image after line‑by‑line derotation. Red circle marks the ROI used for the plots on the right. Top right: sample $\Delta F/F_0$ timecourse for the selected ROI (pink = line‑by‑line derotation; gray = frame‑by‑frame affine correction; shaded vertical bars = rotation intervals). Bottom right: mean $\Delta F/F_0$ aligned to rotation periods for clockwise and counterclockwise rotations. Line‑by‑line derotation preserves the ROI signal during rotations and removes the artificial dips introduced by frame‑by‑frame correction. Clockwise and counterclockwise traces show a roughly mirror‑symmetric, angle‑dependent modulation of measured fluorescence with the frame-by-frame correction.](figure3.png)
# State of the Field
- Several groups have developed rotation correction procedures as part of their experimental pipelines [@velez-fort_circuit_2018; @hennestad_mapping_2021; @sit_coregistration_2023], but each solution remains embedded in a lab-specific workflow, typically applying frame-level corrections in MATLAB. The closest prior work is @voigts_somatic_2020, who described a line-by-line derotation approach for two-photon imaging during free locomotion, but their code was not designed to generalize across setups with lower frame rates. General-purpose image registration tools such as Suite2p [@pachitariu_suite2p_2016] correct for translational motion but cannot handle within-frame rotational distortions. `derotation` fills this gap by integrating with the scientific Python ecosystem and producing output that can be directly fed into standard registration pipelines.
+ Several groups have developed rotation correction procedures as part of their experimental pipelines [@velez-fort_circuit_2018; @hennestad_mapping_2021; @sit_coregistration_2023], but each solution remains embedded in a lab-specific workflow, typically applying frame-level corrections in MATLAB. The closest prior work is @voigts_somatic_2020, who described a line-by-line derotation approach for two-photon imaging during free locomotion, but their code was not designed to generalize across setups with lower frame rates. General-purpose image registration tools such as Suite2p correct for translational motion but cannot handle within-frame rotational distortions. `derotation` fills this gap by integrating with the scientific Python ecosystem and producing output that can be directly fed into standard registration pipelines.
# Software Design
`derotation` separates the core algorithm from experiment-specific logic through a layered, object-oriented design. The core function (`derotate_an_image_array_line_by_line`) takes a movie and a per-line angle array and returns the corrected stack, with no dependencies on configuration or I/O, so it can be used directly in any custom workflow.
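The core idea can be illustrated with a short sketch: each scanned line is placed back at the sample-frame position it occupied at acquisition time, by rotating its pixel coordinates about the image centre by that line's recorded angle. This is an illustrative re-implementation with nearest-neighbour placement, not the package's actual code:

```python
import numpy as np

def derotate_line_by_line(movie, angles_deg):
    """Illustrative sketch of line-by-line derotation.

    movie: array of shape (n_frames, h, w).
    angles_deg: per-line rotation angles, shape (n_frames * h,).
    """
    n_frames, h, w = movie.shape
    out = np.zeros_like(movie, dtype=float)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    xs = np.arange(w) - cx  # x offsets of the line's pixels from centre
    for f in range(n_frames):
        for row in range(h):
            theta = np.deg2rad(angles_deg[f * h + row])
            y = row - cy
            # Rotate this line's pixel coordinates about the image centre
            # by the angle recorded for the line.
            xr = xs * np.cos(theta) - y * np.sin(theta) + cx
            yr = xs * np.sin(theta) + y * np.cos(theta) + cy
            xi, yi = np.round(xr).astype(int), np.round(yr).astype(int)
            # Keep only pixels that land inside the output frame.
            ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
            out[f, yi[ok], xi[ok]] = movie[f, row, np.arange(w)[ok]]
    return out
```

With all angles at zero the output equals the input, and a constant angle across a frame reduces to a rigid rotation of that frame; the real implementation additionally handles interpolation and the estimated centre of rotation.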
- On top of this core, two pipeline classes (`FullPipeline` and `IncrementalPipeline`) orchestrate the end-to-end processing: data loading, angle interpolation, optional center-of-rotation estimation, derotation, and output saving. Each processing step is implemented as an overridable method, so users can subclass either pipeline to adapt to new experimental setups or data formats. A simulation module generates synthetic test data for systematic regression testing. Full API documentation, configuration guides, and runnable examples are available at https://derotation.neuroinformatics.dev.
+ On top of this core, two pipeline classes (`FullPipeline` and `IncrementalPipeline`) orchestrate the end-to-end processing: data loading, angle interpolation, optional center-of-rotation estimation, derotation, and output saving. Each processing step is implemented as an overridable method, so users can subclass either pipeline to adapt to new experimental setups or data formats. A simulation module generates synthetic test data for systematic regression testing.
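The layered design with overridable steps follows a template-method pattern. A generic sketch of the pattern (class and method names here are illustrative, not derotation's actual API):

```python
class BasePipeline:
    """Template-method pipeline: run() fixes the order of steps,
    while each step is a method that subclasses may override."""

    def run(self):
        movie, angles = self.load_data()
        per_line = self.interpolate_angles(angles)
        corrected = self.derotate(movie, per_line)
        self.save(corrected)
        return corrected

    def load_data(self):
        raise NotImplementedError  # setup-specific, must be provided

    def interpolate_angles(self, angles):
        return angles  # default: angles are already per-line

    def derotate(self, movie, per_line):
        return movie  # placeholder for the core correction

    def save(self, corrected):
        pass  # default: no output saving


class MySetupPipeline(BasePipeline):
    # Override only the step that differs on this (hypothetical) rig.
    def load_data(self):
        return [[1, 2], [3, 4]], [0.0, 0.0]
```

Because `run()` calls the steps through `self`, a subclass changes one stage (e.g. a new data format in `load_data`) without touching the rest of the flow.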
# Research Impact Statement
- `derotation` is in active use at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), where it processes three-photon calcium imaging data acquired during passive head rotation experiments. The corrected movies are fed into Suite2p [@pachitariu_suite2p_2016] for cell detection and signal extraction, forming a core step in the analysis pipeline of the Margrie Lab. Two additional research groups at the SWC are currently adopting the package for multiphoton imaging experiments that involve head-fixed rotation under a stationary objective. Since its first release on PyPI in 2024, the package has accumulated over 7,000 downloads, and its development has attracted contributions from six developers across two teams at UCL.
+ `derotation` is in active use at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), where it processes three-photon calcium imaging data acquired during passive head rotation experiments. The corrected movies are fed into Suite2p for cell detection and signal extraction, forming a core step in the analysis pipeline of the Margrie Lab. Two additional research groups at the SWC are currently adopting the package for multiphoton imaging experiments that involve head-fixed rotation under a stationary objective. Since its first release on PyPI in 2024, the package has accumulated over 7,000 downloads, and its development has attracted contributions from six developers across two teams at UCL.
# AI Usage Disclosure
GitHub Copilot was used for code autocompletion during development and to assist with drafting portions of this manuscript and the documentation. All AI-generated content was reviewed, tested, and validated by the authors, who carried out the algorithmic design, architectural decisions, and scientific validation.
# Methodological Appendix
- The package has been directly tested on three-photon imaging data obtained from cortical layer 6 callosal-projecting neurons expressing the calcium indicator GCaMP7f in the mouse visual cortex. More specifically, wild-type C57/B6 mice were injected with retro AAV-hSyn-Cre (1 × $10^{14}$ units per ml) in the left and with AAV2/1.syn.FLEX.GCaMP7f (1.8 × $10^{13}$ units per ml) in the right primary visual cortex. A cranial window was implanted over the right hemisphere, and then a headplate was cemented onto the skull of the animal. After 4 weeks of viral expression and recovery, animals were head-fixed on a rotation platform driven by a direct-drive motor (U-651, Physik Instrumente). 360° clockwise and counter-clockwise rotations with different peak speed profiles (50, 100, 150, 200 deg/s) were performed while imaging awake neuronal activity using a 25x objective (XLPLN25XWMP2, NA 1.05, Olympus). Imaging was conducted at 7 Hz with 256x256 pixels. All experimental procedures were approved by the Sainsbury Wellcome Centre (SWC) Animal Welfare and Ethical Review Body (AWERB). For a detailed description of the three-photon power source and imaging, please see [@cloves_vivo_2024].
+ The package has been directly tested on three-photon imaging data obtained from cortical layer 6 callosal-projecting neurons expressing the calcium indicator GCaMP7f in the mouse visual cortex. More specifically, wild-type C57/B6 mice were injected with retro AAV-hSyn-Cre (1 × $10^{14}$ units per ml) in the left and with AAV2/1.syn.FLEX.GCaMP7f (1.8 × $10^{13}$ units per ml) in the right primary visual cortex. A cranial window was implanted over the right hemisphere, and then a headplate was cemented onto the skull of the animal. After 4 weeks of viral expression and recovery, animals were head-fixed on a rotation platform driven by a direct-drive motor (U-651, Physik Instrumente). 360° clockwise and counter-clockwise rotations with different peak speed profiles (50, 100, 150, 200 deg/s) were performed while imaging awake neuronal activity using a 25x objective (XLPLN25XWMP2, NA 1.05, Olympus). Imaging was conducted at 7 Hz with 256x256 pixels. All experimental procedures were approved by the SWC Animal Welfare and Ethical Review Body (AWERB). For a detailed description of the three-photon power source and imaging, please see [@cloves_vivo_2024].
# Acknowledgements

joss-paper/paper.pdf

430 Bytes
Binary file not shown.
