Getting Started
===============

Welcome to MRI-NUFFT! This library provides efficient implementations of the non-uniform fast Fourier transform (NUFFT) designed for MRI applications.

Installing MRI-NUFFT
--------------------

mri-nufft is available on `PyPI <https://pypi.org/project/mri-nufft/>`_:

.. tip::

    TL;DR: if you have a GPU and CUDA >= 12.0, you probably want to install MRI-NUFFT with
    ``pip install mri-nufft[cufinufft]`` or ``pip install mri-nufft[gpunufft]``.
    For a CPU-only setup, we recommend ``pip install mri-nufft[finufft]``.

    Then, use ``get_operator(backend=<your backend>, ...)`` to initialize your MRI-NUFFT operator.

    For more information, check the :ref:`general_examples`.

.. code-block:: sh

    pip install mri-nufft
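
Once installed, a first call looks like the following sketch. The ``get_operator`` usage follows the project README, but treat the exact constructor signature as an assumption; the random 2D trajectory and image size are illustrative, and the block skips the transform when no backend is available:

```python
import numpy as np

# k-space sampling locations: M points in 2D, normalized to [-0.5, 0.5)
samples = np.random.default_rng(0).uniform(-0.5, 0.5, size=(1000, 2)).astype(np.float32)
image = np.zeros((64, 64), dtype=np.complex64)

try:
    from mrinufft import get_operator

    nufft = get_operator("finufft")(samples, shape=image.shape)
    kspace = nufft.op(image)      # forward NUFFT: image -> non-uniform k-space
    recon = nufft.adj_op(kspace)  # adjoint NUFFT: k-space samples -> image domain
except Exception:
    kspace = None  # mri-nufft or the finufft backend is not installed
```

The same code works with any installed backend by changing the string passed to ``get_operator``.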

However, if you want to use specific backends or develop on mri-nufft, you can install it with extra dependencies, notably ``extra``, ``io``, and ``autodiff``:

.. code-block:: sh

    pip install mri-nufft[extra,io,autodiff]

Using ``uv``
~~~~~~~~~~~~

If you are using ``uv`` as your package installer, you will need to run:

.. code-block:: sh

    uv pip install mri-nufft[extra,io,autodiff] --no-build-isolation
Development Version
~~~~~~~~~~~~~~~~~~~

If you want to modify the mri-nufft code base:

.. code-block:: sh

    git clone https://github.com/mind-inria/mri-nufft
    pip install -e ./mri-nufft[dev,doc,extra,io,autodiff,tests,cufinufft,gpunufft,finufft]

or using ``uv``:

.. code-block:: sh

    git clone https://github.com/mind-inria/mri-nufft
    cd mri-nufft
    uv venv
    uv sync --all-extras --no-build-isolation --no-extra <backend-you-don't-need>

Choosing a NUFFT Backend
========================

To perform the non-uniform fast Fourier transform, you need to install a specific NUFFT computation library as a backend.

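For intuition, the operation all of these backends accelerate is the plain non-uniform discrete Fourier transform (NDFT). A naive quadratic-cost reference (numpy only, not part of mri-nufft's API) looks like this; on a uniform Cartesian frequency grid it reduces to the ordinary FFT:

```python
import numpy as np

def ndft_forward(image, samples):
    """Naive forward NDFT: evaluate k-space at arbitrary sample locations.

    `samples` is an (M, 2) array of frequency locations in centered index
    units; a NUFFT library computes the same sums for non-integer locations,
    but fast, instead of this O(M * N^2) direct evaluation.
    """
    n = image.shape[0]
    x = np.arange(n) - n // 2                    # centered pixel coordinates
    xx, yy = np.meshgrid(x, x, indexing="ij")
    coords = np.stack([xx.ravel(), yy.ravel()])  # (2, n*n)
    phase = np.exp(-2j * np.pi * (samples @ coords) / n)  # (M, n*n)
    return phase @ image.ravel()
```

Checking it against ``np.fft.fft2`` on a full Cartesian grid of frequencies is a quick sanity test of any NUFFT convention.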
Supported Libraries
-------------------

These libraries need to be installed separately from this package.

.. Don't touch the spacing ! ..

==================== ============ =================== =============== =================
Backend              Hardware     Batch computation   Precision       Array Interface
==================== ============ =================== =============== =================
cufinufft_           GPU (CUDA)   ✔                   single          cupy/torch/numpy
finufft_             CPU          ✔                   single/double   numpy/torch
gpunufft_            GPU          ✔                   single/double   numpy/torch/cupy
tensorflow-nufft_    GPU (CUDA)   ✘                   single          tensorflow
pynufft-cpu_         CPU          ✘                   single/double   numpy
pynfft_              CPU          ✘                   single/double   numpy
bart_                CPU/GPU      ✔                   single          numpy
sigpy_               CPU          ✔                   single          numpy
stacked (*)          CPU/GPU      ✔                   single/double   numpy
==================== ============ =================== =============== =================

.. _cufinufft: https://github.com/flatironinstitute/finufft
.. _finufft: https://github.com/flatironinstitute/finufft
.. _tensorflow-nufft: https://github.com/mrphys/tensorflow-nufft
.. _gpunufft: https://github.com/chaithyagr/gpuNUFFT
.. _pynufft-cpu: https://github.com/jyhmiinlin/pynufft
.. _pynfft: https://github.com/pynfft/pynfft
.. _bart: https://github.com/mrirecon/bart
.. _sigpy: https://github.com/sigpy/sigpy

- (*) The stacked backend lets you use any supported backend to perform a stack of 2D NUFFTs, adding an FFT along the z-axis (using scipy or cupy).

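The stacked idea can be sketched with Cartesian FFTs standing in for the per-slice 2D NUFFT (numpy only; the shapes are illustrative):

```python
import numpy as np

# A 3D volume as a stack of 2D slices along the first (z) axis.
vol = np.random.default_rng(1).standard_normal((4, 8, 8)).astype(np.complex64)

# Per-slice 2D transform (a real stacked backend runs a 2D NUFFT here) ...
per_slice = np.fft.fft2(vol, axes=(-2, -1))
# ... followed by an ordinary 1D FFT along the stack (z) axis.
stacked = np.fft.fft(per_slice, axis=0)
```

Because the transform is separable, this matches a full 3D FFT when the in-plane sampling is Cartesian; the stacked backend exploits the same separability for non-Cartesian in-plane sampling.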
**The NUFFT operation alone is often not enough to provide decent image quality (even with density compensation).**
For improved image quality, use a compressed-sensing reconstruction: check pysap-mri_ for MRI-dedicated solutions and deepinv_ for deep-learning-based solutions.

.. _pysap-mri: https://github.com/CEA-COSMIC/pysap-mri/
.. _deepinv: https://github.com/deepinv/deepinv/

Backend Installations
---------------------

To get the most out of certain backends, we recommend the following instructions.

finufft / cufinufft
~~~~~~~~~~~~~~~~~~~

These are developed by the `Flatiron Institute <https://github.com/flatironinstitute/finufft>`_ and are installable with ``pip install finufft`` and ``pip install cufinufft``.

.. warning::

    For cufinufft, a working installation of CUDA and cupy is required.
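
Before installing the GPU extras, you can probe for a usable CUDA device through cupy. This is a hypothetical helper, not part of mri-nufft; it simply returns ``False`` whenever cupy or a GPU is absent:

```python
def cuda_ready():
    """True if cupy is importable and at least one CUDA device is visible."""
    try:
        import cupy

        return cupy.cuda.runtime.getDeviceCount() > 0
    except Exception:
        return False
```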

gpuNUFFT
~~~~~~~~

An active gpuNUFFT fork is maintained by `chaithyagr <https://github.com/chaithyagr/gpunufft/>`_.

To install it, use ``pip install gpuNUFFT``, or for local development:

.. code-block:: sh

    git clone https://github.com/chaithyagr/gpuNUFFT
    cd gpuNUFFT
    python setup.py install

.. warning::

    If you are using ``uv`` as your package installer, you will need to run:

    .. code-block:: sh

        uv pip install wheel pip pybind11
        uv pip install mri-nufft[gpunufft] --no-build-isolation

BART
~~~~

BART has to be installed separately, and the ``bart`` command needs to be runnable from your ``PATH``.
See the `installation instructions <https://mrirecon.github.io/bart/installation.html>`_.

PyNFFT
~~~~~~

PyNFFT requires Cython < 3.0.0, and can be installed using:

.. code-block:: sh

    pip install "cython<3.0.0" pynfft2

Which backend to use
--------------------

We provide an extensive benchmark of computation time and memory usage at https://github.com/mind-inria/mri-nufft-benchmark/

.. tip::

    Overall, we recommend ``finufft`` on CPU, and ``cufinufft`` or ``gpunufft`` when a CUDA GPU is available.
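
Backend availability can also be probed at runtime. A hypothetical helper along these lines returns the first importable backend; the module names probed below (``cufinufft``, ``gpuNUFFT``, ``finufft``) are assumptions about each package's import name:

```python
import importlib.util

def pick_backend(preferred=("cufinufft", "gpuNUFFT", "finufft")):
    """Return the first entry whose Python package is importable, else None."""
    for name in preferred:
        if importlib.util.find_spec(name) is not None:
            return name
    return None
```

The returned string could then be passed to ``get_operator``.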

What's Next?
------------

- Explore the :ref:`general_examples` to see practical applications of MRI-NUFFT, or refer to the :doc:`api` for detailed information on the available functions and classes.

- We also provide a large collection of :ref:`trajectories <trajectories_examples>` and some :py:mod:`extras capabilities <mrinufft.extras>` for non-Cartesian MRI processing.

- If you want to learn more about the underlying concepts of the NUFFT and its applications in MRI, check out the :doc:`explanations/index` section.

- You may also be interested in :doc:`misc/related`.