
Conversation

@effigies effigies commented Mar 1, 2025

antsBrainExtraction.sh passes the original N4-corrected image to Atropos, whereas we have been running a masked N4 before Atropos. We have examples where this difference leads to failures that do not occur in the original ANTs workflow. Further, if we do run Atropos, that result is then discarded and N4 is rerun using a white-matter mask.

This patch therefore simply passes the original N4 image to the Atropos workflow, but otherwise leaves the workflow unchanged.

Addresses #928. This targets the LTS branch maint/1.3.x. Will leave that issue open until this or another patch is merged into master.

effigies commented Mar 4, 2025

@yohanchatelain I wonder if it would be possible for you to see if this change would have a significant impact on the metrics that you were calculating for the paper. You should be able to use the following Dockerfile:

FROM nipreps/fmriprep:20.2.8

RUN pip install --no-cache-dir git+https://github.com/nipreps/niworkflows.git@refs/pull/929/head

I believe the impact should be minimal, but if we could confirm or disconfirm that, it would help justify whether we can make a release in the LTS series or whether I need to find some other way to make this patch available to users.
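For context, a sketch of how that Dockerfile might be used (the image tag and bind-mount paths below are illustrative placeholders, not anything prescribed by the PR):

```shell
# Build the patched image from the Dockerfile above
docker build -t fmriprep:20.2.8-pr929 .

# Run it like any other fMRIPrep image (paths are placeholders)
docker run --rm \
    -v /path/to/bids:/data:ro \
    -v /path/to/output:/out \
    fmriprep:20.2.8-pr929 /data /out participant
```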

effigies commented Mar 4, 2025

@bpinsard @pbellec Are you still using fMRIPrep LTS? I don't know if you have any process for testing whether a new release is similar enough to an old release.

@yohanchatelain

Hi @effigies,
Sure, I can do it. Which version should we compare 20.2.8 to? The previous version, 20.2.7?

effigies commented Mar 4, 2025

Just to be clear, 20.2.8 was released last July, so this will be a patch on that. If you don't have stats for that, however, comparing to 20.2.7 would be fine.

@yohanchatelain

OK, I see. So we need to check this patch against the 20.2.8 release.
I'll launch that.

@effigies

@yohanchatelain Just checking in. Do you have an estimate of how long it will take to analyze the changes?

@yohanchatelain

Hi @effigies, the longest subject takes approximately 5 days to run with fuzzy, so I should have the results next week.

@effigies

Awesome, thanks!

@effigies

@yohanchatelain Just checking in today since I'll be out tomorrow.

yohanchatelain commented Mar 21, 2025

@effigies, I’ve finished the numerical stability test. Here is the notebook summarising the results.

The reference was built from 30 Random Rounding repetitions computed with fMRIPrep 20.2.8, and the test set consists of one IEEE execution with 20.2.8_PR-929 for each subject used in the paper.

  • The 20.2.8 + PR #929 build does not pass the test when using version 20.2.8 as the reference.

  • Additionally, I’d say the numerical quality is slightly better in version 20.2.8 than in 20.2.1.

  • Here are the results for version 20.2.1 from the paper for comparison:
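For readers unfamiliar with the fuzzy/Random Rounding methodology: the pipeline is run repeatedly under stochastic rounding, and the spread of the outputs bounds how many bits of each result are numerically meaningful. A minimal illustrative sketch of that idea follows; this is not the actual tooling used above, and the metric shown (s = -log2(sigma / |mu|)) is a simplified version of the significant-digits measure.

```python
import math
import statistics

def significant_bits(samples):
    """Estimate significant bits of agreement across repeated runs,
    using the simplified metric s = -log2(sigma / |mu|)."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    if sigma == 0.0:
        return math.inf  # all repetitions agree exactly
    return -math.log2(sigma / abs(mu))

# Toy stand-in for 30 random-rounding repetitions of one output value,
# jittered around 1.0 at roughly the 1e-6 level.
reps = [1.0 + 1e-6 * ((i % 7) - 3) for i in range(30)]
print(f"{significant_bits(reps):.1f} significant bits")
```

A patched build "passes" a test of this kind when its (deterministic, IEEE) output stays within the spread of the reference repetitions; the results above indicate the PR falls outside that envelope.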

@effigies

Thanks @yohanchatelain. That was very helpful; given these results, we have decided not to include this change in an LTS release.

@effigies effigies closed this Apr 25, 2025
