Commit b10da66
feat(docs): move to pydata theme.
1 parent 7da7077

10 files changed: +224 additions, -193 deletions

docs/api.rst
Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+API
+===
+
 .. autosummary::
    :toctree: generated/_autosummary
    :recursive:

docs/backend.rst
Lines changed: 108 additions & 0 deletions

@@ -0,0 +1,108 @@
+Choosing a NUFFT Backend
+========================
+
+To perform the Non-Uniform Fast Fourier Transform for MRI, you need to install a computation library backend.
+
+
+Supported Libraries
+-------------------
+
+These libraries need to be installed separately from this package.
+
+.. Don't touch the spacing ! ..
+
+==================== ============ =================== =============== =================
+Backend              Hardware     Batch computation   Precision       Array Interface
+==================== ============ =================== =============== =================
+cufinufft_           GPU (CUDA)   ✔                   single          cupy/torch/numpy
+finufft_             CPU          ✔                   single/double   numpy/torch
+gpunufft_            GPU          ✔                   single/double   numpy/torch/cupy
+tensorflow-nufft_    GPU (CUDA)   ✘                   single          tensorflow
+pynufft-cpu_         CPU          ✘                   single/double   numpy
+pynfft_              CPU          ✘                   single/double   numpy
+bart_                CPU/GPU      ✔                   single          numpy
+sigpy_               CPU          ✔                   single          numpy
+stacked (*)          CPU/GPU      ✔                   single/double   numpy
+==================== ============ =================== =============== =================
+
+
+.. _cufinufft: https://github.com/flatironinstitute/finufft
+.. _finufft: https://github.com/flatironinstitute/finufft
+.. _tensorflow-nufft: https://github.com/mrphys/tensorflow-nufft
+.. _gpunufft: https://github.com/chaithyagr/gpuNUFFT
+.. _pynufft-cpu: https://github.com/jyhmiinlin/pynufft
+.. _pynfft: https://github.com/pynfft/pynfft
+.. _bart: https://github.com/mrirecon/bart
+.. _sigpy: https://github.com/sigpy/sigpy
+
+- (*) stacked-nufft lets any supported backend perform a stack of 2D NUFFTs, with an added FFT along the z-axis (using scipy or cupy).
+
+
+**The NUFFT operation alone is often not enough to provide decent image quality (even with density compensation)**.
+For improved image quality, use a Compressed Sensing reconstruction. For this, check pysap-mri_ for MRI-dedicated solutions and deepinv_ for Deep Learning based solutions.
+
+.. _pysap-mri: https://github.com/CEA-COSMIC/pysap-mri/
+.. _Modopt: https://github.com/CEA-COSMIC/ModOpt/
+.. _deepinv: https://github.com/deepinv/deepinv/
+
+Backend Installations
+---------------------
+
+To get the most out of certain backends, we recommend the following instructions.
+
+finufft / cufinufft
+~~~~~~~~~~~~~~~~~~~
+
+These are developed by the `Flatiron Institute <https://github.com/flatironinstitute/finufft>`_ and are installable with `pip install finufft` and `pip install cufinufft`.
+
+.. warning::
+
+    For cufinufft, a working installation of CUDA and cupy is required.
+
+gpuNUFFT
+~~~~~~~~
+
+An active gpuNUFFT fork is maintained by `chaithyagr <https://github.com/chaithyagr/gpunufft/>`_.
+
+
+To install it, use `pip install gpuNUFFT`, or for local development:
+
+.. code-block:: sh
+
+    git clone https://github.com/chaithyagr/gpuNUFFT
+    cd gpuNUFFT
+    python setup.py install
+
+.. warning::
+
+    If you are using ``uv`` as your package installer, you will need to do:
+
+    .. code-block:: sh
+
+        uv pip install wheel pip pybind11
+        uv pip install mri-nufft[gpunufft] --no-build-isolation
+
+BART
+~~~~
+
+BART has to be installed separately, and the `bart` command needs to be runnable from your `PATH`.
+See the `installation instructions <https://mrirecon.github.io/bart/installation.html>`_.
+
+
+PyNFFT
+~~~~~~
+
+PyNFFT requires Cython<3.0.0 and can be installed using:
+
+.. code-block:: sh
+
+    pip install "cython<3.0.0" pynfft2
+
+Which backend to use
+--------------------
+
+We provide an extensive benchmark of computation time and memory usage at https://github.com/mind-inria/mri-nufft-benchmark/
+
+.. tip::
+
+    Overall, we recommend ``finufft`` on CPU, and ``cufinufft`` or ``gpunufft`` when a CUDA GPU is available.

docs/conf.py
Lines changed: 2 additions & 6 deletions

@@ -121,7 +121,7 @@
 # a list of builtin themes.
 #

-html_theme = "sphinx_book_theme"
+html_theme = "pydata_sphinx_theme"

 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
@@ -133,12 +133,8 @@
 ]

 html_theme_options = {
-    "repository_url": GITHUB_REPO,
-    "use_repository_button": True,
-    "use_issues_button": True,
     "use_edit_page_button": True,
-    "use_download_button": False,
-    "home_page_in_toc": True,
+    "header_links_before_dropdown": 4,
 }

 html_logo = "_static/logos/mri-nufft.png"

docs/explanations/index.rst
Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
+
+Explanations
+============
+
+
+.. toctree::
+
+    nufft
+    mrinufft_convention
+    trajectory_gradspec

docs/mrinufft_convention.rst renamed to docs/explanations/mrinufft_convention.rst
Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ Moreover, the two following methods should be implemented for each backend
 * ``op(image)``: Forward Operation (image to k-space)
 * ``adj_op(kspace)``: Adjoint Operation (k-space to image)

-After initialization, defaults for the following methods are available, as well as a range of QoL properties (``uses_sense``, ``uses_density``, ``ndim``, etc.).
+After initialization, defaults for the following methods are available, as well as a range of useful properties (``uses_sense``, ``uses_density``, ``ndim``, etc.).

 * ``data_consistency(image, obs_data)``: perform the data consistency step :math:`\mathcal{F}^*(\mathcal{F} x - y)`
 * ``get_lipschitz_cst(max_iter)``: estimate the spectral radius of the auto-adjoint operator :math:`\mathcal{F}^*\mathcal{F}`
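The contract above (an ``op``/``adj_op`` pair, from which ``data_consistency`` follows) can be illustrated with a tiny dense non-uniform DFT. This toy class is purely illustrative and is not part of MRI-NUFFT; only the method names follow the convention:

```python
import numpy as np

class ToyFourierOperator:
    """A dense (slow) non-uniform DFT following the op/adj_op convention."""

    def __init__(self, samples, shape):
        # samples: (M, d) k-space locations in cycles/pixel, in [-0.5, 0.5)
        self.shape = shape
        grid = np.stack(
            np.meshgrid(*[np.arange(n) for n in shape], indexing="ij"), axis=-1
        ).reshape(-1, len(shape))
        # (M, N) forward DFT matrix: one row per k-space sample
        self._mat = np.exp(-2j * np.pi * (samples @ grid.T))

    def op(self, image):
        """Forward operation: image -> k-space."""
        return self._mat @ image.ravel()

    def adj_op(self, kspace):
        """Adjoint operation: k-space -> image."""
        return (self._mat.conj().T @ kspace).reshape(self.shape)

    def data_consistency(self, image, obs_data):
        """The default F^*(F x - y) step, derived from op and adj_op."""
        return self.adj_op(self.op(image) - obs_data)
```

Because ``adj_op`` is built from the conjugate transpose of the same matrix, the adjoint identity ⟨op(x), y⟩ = ⟨x, adj_op(y)⟩ holds exactly, which is the property the convention relies on.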

docs/nufft.rst renamed to docs/explanations/nufft.rst
Lines changed: 12 additions & 1 deletion

@@ -98,7 +98,7 @@ Extension of the Acquisition model
 The MRI acquisition model can be extended in two main ways. First by taking into account Parallel Imaging, where multiple coils are receiving data, each with a dedicated sensitivity profile.

 .. tip::
-    MRI-NUFFT provides the `FourierOperator` interface to implement all the physical models described below. See :ref:`nufft-interface` for the standard, and :class:`FourierOperatorBase <mrinufft.operators.base.FourierOperatorBase>`
+    MRI-NUFFT provides the `FourierOperator` interface to implement all the physical models described below. See :ref:`mri-nufft-interface` for the standard, and :class:`FourierOperatorBase <mrinufft.operators.base.FourierOperatorBase>`

 Parallel Imaging Model
@@ -235,6 +235,17 @@ the projection operator :math:`\boldsymbol{\Phi}` commutes with the Fourier transform
 that is, computation now involves :math:`K \ll T` Fourier Transform operations, each with the same sampling trajectory, which can be computed by leveraging efficient NUFFT implementations for conventional static MRI.

+
+
+Stacked NUFFT
+~~~~~~~~~~~~~
+
+If the k-space trajectory consists of a stack of equally spaced 2D planes of the 3D k-space (or a subsampling of such a stack), the NUFFT operator can be optimized by performing a 2D NUFFT on each plane, followed by a 1D FFT along the third dimension. The resulting 2.5D NUFFT operator lowers both the computational cost and the memory footprint.
+
+.. note::
+
+    You can use the stacked NUFFT operator by selecting a ``stacked-*`` backend and providing a 3D stacked trajectory. See :py:mod:`mrinufft.operators.stacked` for more details.
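The decomposition behind the stacked operator can be sanity-checked on a fully Cartesian stack with plain NumPy: a 2D transform of every plane followed by a 1D FFT along z equals a full 3D FFT. This is an illustration only (the library applies the same idea with a 2D NUFFT per plane instead of an FFT):

```python
import numpy as np

rng = np.random.default_rng(42)
vol = rng.standard_normal((8, 8, 6)) + 1j * rng.standard_normal((8, 8, 6))

# Stage 1: 2D FFT of every z-plane; Stage 2: 1D FFT along the stack axis.
per_plane = np.fft.fft2(vol, axes=(0, 1))
stacked = np.fft.fft(per_plane, axis=2)

full_3d = np.fft.fftn(vol)  # the two-stage result matches the full 3D FFT
```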
+
 .. _nufft-algo:

 The Non Uniform Fast Fourier Transform in practice
File renamed without changes.

docs/getting_started.rst
Lines changed: 23 additions & 157 deletions

@@ -1,178 +1,44 @@
+===============
 Getting Started
 ===============

-Installing MRI-NUFFT
---------------------
-
-mri-nufft is available on `PyPi <https://pypi.org/project/mri-nufft/>`_
-
-
-.. tip::
-
-    TLDR: If you have a GPU and CUDA>=12.0, you probably want to install MRI-NUFFT like so:
-    ``pip install mri-nufft[cufinufft]`` or ``pip install mri-nufft[gpunufft]``
-    For CPU only setup we recommend ``pip install mri-nufft[finufft]``
-
-    Then, use the ``get_operator(backend=<your backend>, ... )`` to initialize your MRI-NUFFT operator.
-
-    For more information, check the :ref:`general_examples`
-
-
-.. code-block:: sh
-
-    pip install mri-nufft
-
-
-However, if you want to use some specific backends or develop on mri-nufft, you can install it with extra dependencies, notably `extra`, `io`, and `autodiff`:
-
-.. code-block:: sh
-
-    pip install mri-nufft[extra,io,autodiff]
-
-
-Using ``uv``
-~~~~~~~~~~~~
-
-If you are using ``uv`` as your package installer you will need to do:
-
-.. code-block:: sh
-
-    uv pip install mri-nufft[extra,io,autodiff] --no-build-isolation
-
-
-Development Version
-~~~~~~~~~~~~~~~~~~~
-
-If you want to modify the mri-nufft code base:
-
-.. code-block:: sh
-
-    git clone https://github.com/mind-inria/mri-nufft
-    pip install -e ./mri-nufft[dev,doc,extra,io,autodiff,tests,cufinufft,gpunufft,finufft]
-
-or using ``uv``:
-
-.. code-block:: sh
-
-    git clone https://github.com/mind-inria/mri-nufft
-    uv venv
-    uv sync --all-extras --no-build-isolation --no-extra <backend-you-don't-need>
-
[... remaining deleted lines: the former "Choosing a NUFFT Backend" section (supported-libraries table, backend installation notes, and backend recommendations), identical to the content added in docs/backend.rst above ...]

+.. toctree::
+    :maxdepth: 2
+    :hidden:
+    :titlesonly:
+
+    self
+    install
+    backend
+
+
+Welcome to MRI-NUFFT! This library provides efficient implementations of Non-Uniform Fast Fourier Transform (NUFFT) algorithms specifically designed for Magnetic Resonance Imaging (MRI) applications.
+
+Whether you are a researcher, developer, or student, this guide will help you get started with installing and using MRI-NUFFT for your MRI data processing needs.
+
+Installation
+------------
+
+To install MRI-NUFFT, follow the instructions in the :doc:`install` section. This guide covers the prerequisites, installation steps, and verification of the installation.
+
+Using MRI-NUFFT
+---------------
+
+Once you have installed MRI-NUFFT, you can start using it in your projects. The :doc:`backend` section provides more details on how to perform the NUFFT operation with a specific library backend, and :ref:`mri-nufft-interface` describes the main interface of the NUFFT operator you can use in your application.

+.. note::
+
+    We also provide a large collection of :ref:`trajectories <trajectories_examples>` and some :py:mod:`extra capabilities <mrinufft.extras>` for doing non-Cartesian MRI processing.
+
+
+What's Next?
+------------
+
+- You can explore the :ref:`general_examples` section to see practical applications of MRI-NUFFT, or refer to the :doc:`api` for detailed information on the available functions and classes.
+
+- If you want to learn more about the underlying concepts of NUFFT and its applications in MRI, check out the :doc:`explanations/index` section.
+
+- You might also be interested in :doc:`misc/related`.

0 commit comments