Merged

28 commits
81390b1
remove testing file
PolarBean Dec 4, 2025
4358d7b
auto publish to pypi on release
PolarBean Dec 4, 2025
d3845c3
update yml file and python versions
PolarBean Dec 4, 2025
01f922a
remove scheduled deploy
PolarBean Dec 4, 2025
690b392
remove twine api key as we can instead set the repo to be a trusted p…
PolarBean Dec 4, 2025
81a5a94
run black linter
PolarBean Dec 4, 2025
1534156
enforce line lengths
PolarBean Dec 4, 2025
e7ef967
enforce 79 char line length
PolarBean Dec 4, 2025
88d03d3
exclude files that will not be included in pypi
PolarBean Dec 4, 2025
73f9b8a
remove line length from ruff as its failing test (its still specified…
PolarBean Dec 4, 2025
fc84555
fix issue with test
PolarBean Dec 4, 2025
423fbde
update license file to avoid spdx warning
PolarBean Dec 4, 2025
cd98cdf
ruff fixes
PolarBean Dec 4, 2025
56cb5e4
correct line incorrectly removed from ingestion script and caught by …
PolarBean Dec 4, 2025
e9edb2a
remove unused variable caught by precommit
PolarBean Dec 4, 2025
3131485
make ruff happy
PolarBean Dec 4, 2025
fca46ec
ruff reformat
PolarBean Dec 4, 2025
db9f4ba
exclude utilities
PolarBean Dec 4, 2025
cfb61c4
precommit passed locally but failed remotely, updating reqs and license
PolarBean Dec 4, 2025
352bca9
move docstring to beginning of file
PolarBean Dec 4, 2025
bbcc26c
tests failing bc of triple quoted comments
PolarBean Dec 4, 2025
013347b
ruff reformat
PolarBean Dec 4, 2025
6f800d6
test off by 0.5 voxels...
PolarBean Dec 4, 2025
179fd2a
rename test case
PolarBean Dec 5, 2025
6b70855
revert test value
PolarBean Dec 5, 2025
7849626
fix empty last line
PolarBean Dec 5, 2025
e0671fb
autofix files precommit
PolarBean Dec 5, 2025
6f7beca
Auto detect same space atlases (#26)
PolarBean Feb 9, 2026
63 changes: 63 additions & 0 deletions .github/workflows/test_and_deploy.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,63 @@
name: tests

on:
push:
branches:
- "main"
tags:
- "v**"
pull_request:
workflow_dispatch:

jobs:
linting:
runs-on: ubuntu-latest
steps:
- uses: neuroinformatics-unit/actions/lint@v2

manifest:
runs-on: ubuntu-latest
steps:
- uses: neuroinformatics-unit/actions/check_manifest@v2

test:
needs: [linting, manifest]
name: ${{ matrix.os }} py${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
python-version: ["3.11", "3.12", "3.13"]
include:
- os: macos-latest
python-version: "3.11"
- os: windows-latest
python-version: "3.11"
Comment on lines +32 to +35 (Member):
We typically have these set to 3.13 (or the latest compatible version of Python being tested)


steps:
- uses: neuroinformatics-unit/actions/test@v2
with:
python-version: ${{ matrix.python-version }}
secret-codecov-token: ${{ secrets.CODECOV_TOKEN }}

build_sdist_wheels:
name: Build source distribution and wheel
needs: [test]
if: github.event_name == 'push' && github.ref_type == 'tag'
runs-on: ubuntu-latest
steps:
- uses: neuroinformatics-unit/actions/build_sdist_wheels@v2

upload_all:
name: Publish build distributions
needs: [build_sdist_wheels]
if: github.event_name == 'push' && github.ref_type == 'tag'
runs-on: ubuntu-latest
permissions:
id-token: write
steps:
- uses: actions/download-artifact@v4
with:
name: artifact
path: dist
- uses: pypa/gh-action-pypi-publish@release/v1
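The publish jobs above only run when `github.event_name == 'push' && github.ref_type == 'tag'`. A tiny, hypothetical helper (not part of this repository) mirrors that gate and makes it easy to reason about which runs will upload:

```python
# Hypothetical sketch (not part of this repository): mirrors the workflow's
# publish condition `github.event_name == 'push' && github.ref_type == 'tag'`.
def should_publish(event_name: str, ref_type: str) -> bool:
    return event_name == "push" and ref_type == "tag"

print(should_publish("push", "tag"))             # tag push: build + upload run
print(should_publish("push", "branch"))          # ordinary push: tests only
print(should_publish("pull_request", "branch"))  # PRs never publish
```

Because the `upload_all` job grants `id-token: write` and passes no password to `pypa/gh-action-pypi-publish`, the repository is expected to be configured as a trusted publisher on PyPI (consistent with the "remove twine api key" commit in this PR).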
4 changes: 2 additions & 2 deletions .gitignore
@@ -6,7 +6,7 @@ demo_data
create_transformation_for_ages/deformations_from_elastix
create_transformation_for_ages/
.vscode/

testing.py
# Byte-compiled / optimized / DLL files
__pycache__/
brainglobe_ccf_translator/__pycache__/
@@ -168,4 +168,4 @@ cython_debug/
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
#.idea/
6 changes: 3 additions & 3 deletions .pre-commit-config.yaml
@@ -1,5 +1,3 @@


# Configuring https://pre-commit.ci/
ci:
autoupdate_schedule: monthly
@@ -30,12 +28,14 @@ repos:
- id: mypy
additional_dependencies:
- types-setuptools
- types-requests
exclude: ^utilities/
- repo: https://github.com/mgedmin/check-manifest
rev: "0.50"
hooks:
- id: check-manifest
args: [--no-build-isolation]
additional_dependencies: [setuptools-scm]
additional_dependencies: [setuptools-scm, wheel]
- repo: https://github.com/codespell-project/codespell
# Configuration for codespell is in pyproject.toml
rev: v2.3.0
17 changes: 14 additions & 3 deletions PyPiDeploy.md
Comment (Member):
Is this file necessary? The instructions can be found at https://brainglobe.info/community/developers/new_releases.html

@@ -1,8 +1,19 @@
to deploy the package to PyPi
# Deploying to PyPI

## Automatic Deployment (Recommended)

The package is automatically published to PyPI when you push a version tag:

1. Update your version as needed
2. Create and push a tag: `git tag v0.1.0 && git push --tags`
3. The GitHub Action will run tests, build, and upload to PyPI

**Note:** You need to add a `TWINE_API_KEY` secret to your repository with your PyPI API token.

## Manual Deployment

```bash
pip install build twine
pip install build twine
python -m build
twine upload dist/*
```
```
39 changes: 25 additions & 14 deletions README.md
@@ -1,13 +1,13 @@
# brainglobe-ccf-translator
![PyPI - Version](https://img.shields.io/pypi/v/brainglobe-ccf-translator)

CCF translator (brainglobe-ccf-translator) is a tool for translating between common coordinate frameworks using deformation matrices.
A longstanding problem in NeuroInformatics has been the inability to easily translate data between common coordinate frameworks. CCF translator aims to solve this. By connecting each new space to an existing one, we can construct a graph of deformations. This means that data can be translated as long as there is a route from one space to another, even if that route passes through multiple other spaces. Now, when new templates for new modalities, strains, or ages are released, users will not be subdivided into unrelated spaces. As long as they are connected to a space which exists in our network, they will be fully connected to all other spaces.
CCF translator (brainglobe-ccf-translator) is a tool for translating between common coordinate frameworks using deformation matrices.
A longstanding problem in NeuroInformatics has been the inability to easily translate data between common coordinate frameworks. CCF translator aims to solve this. By connecting each new space to an existing one, we can construct a graph of deformations. This means that data can be translated as long as there is a route from one space to another, even if that route passes through multiple other spaces. Now, when new templates for new modalities, strains, or ages are released, users will not be subdivided into unrelated spaces. As long as they are connected to a space which exists in our network, they will be fully connected to all other spaces.

CCF translator can also interpolate between spaces and create a new intermediate space. This is primarily useful for development, where, for instance, the midpoint between day 5 and day 7 can be taken and used as a postnatal day 6 reference. It could also be useful for making references of disease progression.

![a graph of all the available spaces and how they are connected. the spaces are nodes with the space name written on top of them, the edges show which spaces are connected to which other spaces.]
CCF translator can also interpolate between spaces and create a new intermediate space. This is primarily useful for development, where, for instance, the midpoint between day 5 and day 7 can be taken and used as a postnatal day 6 reference. It could also be useful for making references of disease progression.

**Diagram:**
a graph of all the available spaces and how they are connected. the spaces are nodes with the space name written on top of them, the edges show which spaces are connected to which other spaces.
```mermaid
graph TD
allen_mouse_P56 --- demba_dev_mouse_P56
@@ -19,34 +19,45 @@ graph TD
demba_dev_mouse_P21 --- demba_dev_mouse_P28
demba_dev_mouse_P28 --- demba_dev_mouse_P56
demba_dev_mouse_P4 --- demba_dev_mouse_P7
dorr_mouse_mri_P84 --- perens_stereotaxic_mri_mouse_P56
```
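The graph-of-deformations idea described in the README can be sketched with a plain breadth-first search. The node names below come from the diagram above; `find_route` is an illustrative stand-in, not the package's actual `route_calculation` module:

```python
from collections import deque

# A few edges from the space graph shown in the README diagram.
edges = [
    ("allen_mouse_P56", "demba_dev_mouse_P56"),
    ("demba_dev_mouse_P56", "demba_dev_mouse_P28"),
    ("demba_dev_mouse_P28", "demba_dev_mouse_P21"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)  # deformations are traversable both ways

def find_route(source, target):
    # BFS returns the shortest chain of spaces connecting source to target,
    # or None if the spaces are not connected.
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("allen_mouse_P56", "demba_dev_mouse_P21"))
```

Any two connected nodes are mutually reachable, which is why a new space only needs one link into the network to become translatable to every other space.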


## Use Cases
One way you can use CCF translator is to view data from one space, in another space. For instance the allen connectivity dataset shows projections from viral tracing studies in the adult brain. We can take any of these projection datasets and view them in the developing brain, for instance post natal day 9.
![an image which shows a viral tracing study overlaid on the allen adult ccfv3 template. it shows that same viral tracing data transformed and overlaid on a post natal day 9 template. between the two images is an arrow pointing from the adult to the post natal day 9 brain, above which is text saying CCF translator, implying that CCF translator was used to transform the data from adult to post natal day 9.](https://raw.githubusercontent.com/brainglobe/brainglobe-ccf-translator/main/media/allen_connectivity_transform.png)
## Installation
CCF translator can be installed by running
CCF translator can be installed by running
```
pip install brainglobe-ccf-translator
```
Or by cloning this repository and running
Or by cloning this repository and running
```
pip install -e .
```
while in the root of the repository.
## Currently supported spaces
the name in CCF translator usually copies the name of the atlas in the brainglobe atlasapi.
the name in CCF translator usually copies the name of the atlas in the brainglobe atlasapi.
| Framework Name | name in api | supported age range
| -------------- | ----------- | -----------
| -------------- | ----------- | -----------
| Allen mouse CCFv3 | allen_mouse | 56
| Demba developmental mouse | demba_dev_mouse| 4-56
| Gubra lightsheet mouse | perens_multimodal_lsfm| 56
| Gubra MRI mouse | perens_stereotaxic_mri_mouse| 56
| Princeton lightsheet mouse | princeton_mouse| 56
| Dorr MRI mouse | dorr_mouse_mri | 84


We also support brainglobe atlas api names which are in existing coordinate frameworks. For instance you can specify osten_mouse and CCF translator will autoconvert this to allen_mouse.
| atlas api name | converts to | supported age range
| -------------- | ----------- | -----------
| osten_mouse | allen_mouse | 56
| allen_mouse_bluebrain_barrels | allen_mouse| 56
| kim_mouse | allen_mouse | 56
| demba_allen_seg_dev_mouse | demba_dev_mouse | 4-56

## Usage
**Transforming points**
To take a coordinate in one volume and find the equivalent coordinate in a second volume is quite simple in CCF translator.
To take a coordinate in one volume and find the equivalent coordinate in a second volume is quite simple in CCF translator.
```python
import numpy as np
import brainglobe_ccf_translator
@@ -61,13 +72,13 @@ print(f"new points are {pset.values}")
new points are [[267 250 286] [452 247 414]]
```
**Transforming volumes**
All of our transforms assume you retrieved the atlas from the brianglobe-atlasapi.
All of our transforms assume you retrieved the atlas from the brianglobe-atlasapi.
To run the volume examples you will want to install brainglobe-atlasapi using the following
```
pip install brainglobe-atlasapi
```

Transforming a volume is equally simple, here we get the volume from the brainglobe api, but you can load it however you like. In the Demba space the valid ages are from 4 to 56, and all of these are valid targets for transformation.
Transforming a volume is equally simple, here we get the volume from the brainglobe api, but you can load it however you like. In the Demba space the valid ages are from 4 to 56, and all of these are valid targets for transformation.
```python
from brainglobe_atlasapi.bg_atlas import BrainGlobeAtlas
import brainglobe_ccf_translator
@@ -91,7 +102,7 @@ ccft_vol.transform(target_age, 'demba_dev_mouse')
ccft_vol.save(rf"demo_data/P{target_age}_template_{voxel_size_micron}um.nii.gz")
```
## Contributing
If you would like to add a new space or connect an existing one, please create a deformation matrix and/or describe the required reorientation, flipping, cropping, and padding of the axis between this space and one that already exists in the network, and then open an issue in this repository. Ideally, choose a space which covers all the areas which are covered in your space. While the Allen CCFv3 is very popular, it is missing the anterior olfactory bulb and the caudal portion of the cerebellum and brain stem, so it is not the optimal choice.
If you would like to add a new space or connect an existing one, please create a deformation matrix and/or describe the required reorientation, flipping, cropping, and padding of the axis between this space and one that already exists in the network, and then open an issue in this repository. Ideally, choose a space which covers all the areas which are covered in your space. While the Allen CCFv3 is very popular, it is missing the anterior olfactory bulb and the caudal portion of the cerebellum and brain stem, so it is not the optimal choice.

## Citation
CCF translator was first described in [DeMBA: a developmental atlas for navigating the mouse brain in space and time](https://www.biorxiv.org/content/10.1101/2024.06.14.598876v1). If you use CCF translator, please cite that paper.
19 changes: 14 additions & 5 deletions brainglobe_ccf_translator/PointSet.py
@@ -1,16 +1,18 @@
from .deformation import apply_deformation, route_calculation
import pandas as pd
import os

import numpy as np
import pandas as pd

from . import config
from .deformation import apply_deformation, route_calculation
from .space_utils import validate_space_name

base_path = os.path.dirname(__file__)


class PointSet:
def __init__(self, values, space, voxel_size_micron, age_PND):
self.values = np.array(values).astype(float)
self.space = space
self.voxel_size_micron = voxel_size_micron
self.age_PND = age_PND
# Setup brainglobe dir
@@ -22,17 +24,21 @@ def __init__(self, values, space, voxel_size_micron, age_PND):
try:
metadata = pd.read_csv(metadata_path)
except FileNotFoundError:
raise FileNotFoundError(f"Metadata file not found at {metadata_path}")
raise FileNotFoundError(
f"Metadata file not found at {metadata_path}"
)
except pd.errors.ParserError:
raise ValueError(f"Error parsing metadata file at {metadata_path}")

self.metadata = metadata
self.space = validate_space_name(space, self.metadata)

def transform(self, target_age=None, target_space=None):
values = self.values
if len(values.shape) == 1:
values = values.reshape(1, -1)
source = f"{self.space}_P{self.age_PND}"
target_space = validate_space_name(target_space, self.metadata)
target = f"{target_space}_P{target_age}"
row_template = "{}_physical_size_micron"
source = f"{self.space}_P{self.age_PND}"
@@ -41,7 +47,10 @@ def transform(self, target_age=None, target_space=None):
route = route_calculation.calculate_route(target, source, G)
deform_arr, pad_sum, flip_sum, dim_order_sum, final_voxel_size = (
apply_deformation.combine_route(
route, self.voxel_size_micron, self.deformation_dir, self.metadata
route,
self.voxel_size_micron,
self.deformation_dir,
self.metadata,
)
)
previous = "_".join(route[1].split("_")[:-1])
46 changes: 27 additions & 19 deletions brainglobe_ccf_translator/Volume.py
@@ -1,13 +1,3 @@
import numpy as np
from .deformation import apply_deformation, route_calculation
from . import config
import pandas as pd
import json
import os
import nibabel as nib
from typing import Any


"""
At present the order of transformations is:
transpose
@@ -18,6 +8,17 @@
So the transform should be in the shape of the output
"""

import os
from typing import Any

import nibabel as nib
import numpy as np
import pandas as pd

from . import config
from .deformation import apply_deformation, route_calculation
from .space_utils import validate_space_name


class Volume:
def __init__(
@@ -39,7 +40,6 @@ def __init__(
segmentation_file (bool): Flag indicating if a segmentation file is used.
"""
self.values = values
self.space = space
self.voxel_size_micron = voxel_size_micron
self.age_PND = age_PND
self.segmentation_file = segmentation_file
@@ -53,15 +53,19 @@ def __init__(
try:
metadata = pd.read_csv(metadata_path)
except FileNotFoundError:
raise FileNotFoundError(f"Metadata file not found at {metadata_path}")
raise FileNotFoundError(
f"Metadata file not found at {metadata_path}"
)
except pd.errors.ParserError:
raise ValueError(f"Error parsing metadata file at {metadata_path}")

self.metadata = metadata
self.space = validate_space_name(space, self.metadata)

def transform(self, target_age, target_space):
array = self.values
source = f"{self.space}_P{self.age_PND}"
target_space = validate_space_name(target_space, self.metadata)
target = f"{target_space}_P{target_age}"
if source == target:
print("volume is already in that space")
@@ -70,28 +74,32 @@ def transform(self, target_age, target_space):
route = route_calculation.calculate_route(source, target, G)
deform_arr, pad_sum, flip_sum, dim_order_sum, final_voxel_size = (
apply_deformation.combine_route(
route, self.voxel_size_micron, self.deformation_dir, self.metadata
route,
self.voxel_size_micron,
self.deformation_dir,
self.metadata,
)
)
array = np.transpose(array, dim_order_sum)
for i in range(len(flip_sum)):
if flip_sum[i]:
array = np.flip(array, axis=i)
if deform_arr is not None:

# original_input_shape = np.array([456.0, 668.0, 320.0])
if final_voxel_size != self.voxel_size_micron:
original_input_shape = np.shape(array)
original_input_shape = np.array(original_input_shape)[dim_order_sum]
new_input_shape = np.array(array.shape) * (
final_voxel_size / self.voxel_size_micron
)
original_input_shape = np.array(original_input_shape)[
dim_order_sum
]

deform_arr = apply_deformation.resize_transform(
deform_arr,
(1, *([final_voxel_size / self.voxel_size_micron] * 3)),
)
order = 0 if self.segmentation_file else 1
array = apply_deformation.apply_transform(array, deform_arr, order=order)
array = apply_deformation.apply_transform(
array, deform_arr, order=order
)
else:
array = apply_deformation.pad_neg(array, pad_sum, mode="constant")
self.values = array
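The `order = 0 if self.segmentation_file else 1` choice in the `Volume.transform` diff above matters because linear interpolation blends label IDs into values that are not valid labels. A minimal sketch (scipy is assumed here purely for illustration, independent of the package's own implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# A 1D "segmentation" with label IDs 0 and 2; sample halfway between them.
labels = np.array([0.0, 0.0, 2.0, 2.0])
coords = np.array([[1.5]])

nearest = map_coordinates(labels, coords, order=0)  # order=0 picks a real label
linear = map_coordinates(labels, coords, order=1)   # order=1 blends to 1.0

print(nearest[0], linear[0])
```

Nearest-neighbour sampling always returns an ID that exists in the volume, while linear interpolation invents label 1.0, which is why segmentation files take `order=0` and intensity templates take `order=1`.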