The conversion of literature measurements into average densities and the fitting steps of the pipeline take (a lot of) time because they require an estimate of the volume, cell density, and neuron density of each individual region of the annotation atlas (see the function measurement_to_average_density, which leverages compute_region_volumes and calls compute_region_densities twice).
However, these estimations could be sped up if they were done together with the filtering of the regions' voxels, and if the results were stored in files (csv or json) to be re-used later (e.g. for fitting). Additionally, composition rules (parent/children region relations) can speed up the process if the regions are treated from leaf regions up to major regions (see the sketch below).
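To make the idea concrete, here is a minimal sketch of what such a one-pass pre-computation with caching could look like, assuming `annotation` is the 3D array of region ids, `voxel_volume` is the volume of a single voxel in mm3, and `hierarchy` is a mapping from each region id to its direct children (e.g. parsed from the hierarchy json). All function names below are illustrative, not the repository's existing API:

```python
# Hypothetical sketch: count voxels per region in a single pass over the
# annotation, aggregate counts bottom-up through the region hierarchy, and
# cache the resulting per-region volumes for re-use (e.g. by the fitting step).
from pathlib import Path

import numpy as np
import pandas as pd


def leaf_voxel_counts(annotation: np.ndarray) -> dict[int, int]:
    """Count voxels per annotated region id in a single pass (0 = unannotated)."""
    ids, counts = np.unique(annotation, return_counts=True)
    return {int(i): int(c) for i, c in zip(ids, counts) if i != 0}


def aggregate_bottom_up(hierarchy: dict[int, list[int]],
                        leaf_counts: dict[int, int]) -> dict[int, int]:
    """Propagate voxel counts from leaf regions up to their ancestors."""
    totals: dict[int, int] = {}

    def total(region_id: int) -> int:
        if region_id not in totals:
            own = leaf_counts.get(region_id, 0)
            totals[region_id] = own + sum(total(c) for c in hierarchy.get(region_id, []))
        return totals[region_id]

    for region_id in hierarchy:
        total(region_id)
    return totals


def cache_region_volumes(annotation: np.ndarray, voxel_volume: float,
                         hierarchy: dict[int, list[int]], output: Path) -> pd.DataFrame:
    """Compute per-region volumes once and store them in a csv for later steps."""
    counts = aggregate_bottom_up(hierarchy, leaf_voxel_counts(annotation))
    frame = pd.DataFrame({"region_id": list(counts), "voxel_count": list(counts.values())})
    frame["volume_mm3"] = frame["voxel_count"] * voxel_volume
    frame.to_csv(output, index=False)
    return frame
```

The same cached table could be extended with per-region cell and neuron counts computed in the same pass, so the fitting step only reads the file instead of re-filtering voxels for every region.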
I wonder whether this should be a separate/isolated step of the pipeline, run before the conversion of literature measurements, or directly integrated into that step. Also, is it worth creating yet more intermediate files to speed up the fitting step?