docs/c_gpu.rst: 58 additions & 6 deletions
@@ -5,7 +5,7 @@ There are four steps needed to call cuFINUFFT from C: 1) making a plan, 2) setti
 The simplest case is to call them in order, i.e., 1234.
 However, it is possible to repeat 3 with new strength or coefficient data, or to repeat 2 to choose new nonuniform points in order to do one or more step 3's again, before destroying.
 For instance, 123334 and 1232334 are allowed.
-If non-standard algorithm options are desired, an extra function is needed before making the plan (see bottom of this page).
+If non-standard algorithm options are desired, an extra function is needed before making the plan; see bottom of this page for options.

 This API matches very closely that of the plan interface to FINUFFT (in turn modeled on those of FFTW and NFFT).
 Here is the full documentation for these functions.
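
For orientation, here is a minimal sketch of the 1-2-3-4 call sequence that the hunk above describes (a 2D type 1 transform in double precision). It is illustrative only: the argument lists are assumptions based on the current C API, the array arguments must be device (GPU) pointers, and error-code checking is omitted; see the full function documentation in this file for the authoritative signatures::

    /* Sketch of plan (1), setpts (2), execute (3), destroy (4).
       Signatures are assumed from the current cuFINUFFT C API. */
    #include <cufinufft.h>
    #include <cuda_runtime.h>
    #include <cuComplex.h>

    int main(void) {
      const int64_t N1 = 64, N2 = 64;   /* Fourier mode grid size         */
      const int M = 1000;               /* number of nonuniform points    */
      double *d_x, *d_y;                /* NU point coordinates (device)  */
      cuDoubleComplex *d_c, *d_fk;      /* strengths and modes (device)   */
      cudaMalloc((void **)&d_x, M * sizeof(double));
      cudaMalloc((void **)&d_y, M * sizeof(double));
      cudaMalloc((void **)&d_c, M * sizeof(cuDoubleComplex));
      cudaMalloc((void **)&d_fk, N1 * N2 * sizeof(cuDoubleComplex));
      /* ... fill d_x, d_y, d_c with your data here ... */

      cufinufft_plan plan;
      int64_t nmodes[3] = {N1, N2, 1};
      cufinufft_makeplan(1, 2, nmodes, +1, 1, 1e-6, &plan, NULL);     /* 1: plan    */
      cufinufft_setpts(plan, M, d_x, d_y, NULL, 0, NULL, NULL, NULL); /* 2: points  */
      cufinufft_execute(plan, d_c, d_fk);                             /* 3: execute */
      /* step 3 may be repeated with new data in d_c, or step 2 repeated with
         new points followed by more step 3's, before destroying the plan.   */
      cufinufft_destroy(plan);                                        /* 4: destroy */

      cudaFree(d_x); cudaFree(d_y); cudaFree(d_c); cudaFree(d_fk);
      return 0;
    }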
@@ -289,8 +289,8 @@ This deallocates all arrays inside the ``plan`` struct, freeing all internal mem
 Note: the plan (being just a pointer to the plan struct) is not actually "destroyed"; rather, its internal struct is destroyed.
 There is no need for further deallocation of the plan.

-Non-standard options
-~~~~~~~~~~~~~~~~~~~~
+Options for GPU code
+--------------------

 The last argument in the above plan stage accepts a pointer to an options structure, which is the same in both single and double precision.
 To create such a structure, use:
@@ -300,7 +300,59 @@ To create such a structure, use:
   cufinufft_opts opts;
   cufinufft_default_opts(&opts);

-Then you may change fields of ``opts`` by hand, finally pass ``&opts`` in as the last argument to ``cufinufft_makeplan`` or ``cufinufftf_makeplan``.
-The options fields are currently only documented in the ``include/cufinufft_opts.h``.
+Then you may change fields of ``opts`` by hand, and finally pass ``&opts`` in as the last argument to ``cufinufft_makeplan`` or ``cufinufftf_makeplan``. Here are the options, with the important user-controllable ones documented. For their default values, see below.

-For examples of this advanced usage, see ``test/cuda/cufinufft*.cu``
+Data handling options
+~~~~~~~~~~~~~~~~~~~~~
+
+**modeord**: Fourier coefficient frequency index ordering; see the CPU option of the same name :ref:`modeord<modeord>`.
+As a reminder, ``modeord=0`` selects increasing frequencies (negative through positive) in each dimension,
+while ``modeord=1`` selects FFT-style ordering starting at zero and wrapping over to negative frequencies halfway through.
+
+**gpu_device_id**: Sets the GPU device ID. Leave at default unless you know what you're doing. [To be documented]
+
+Diagnostic options
+~~~~~~~~~~~~~~~~~~
+
+**gpu_spreadinterponly**: if ``0``, do the NUFFT as intended. If ``1``, omit the FFT and kernel FT deconvolution steps and return garbage answers.
+A nonzero value is *only* to be used to aid timing tests (although currently there are no timing codes that exploit this option), and will give wrong or undefined answers for the NUFFT transforms!
+
+Algorithm performance options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**gpu_method**: Spreader/interpolator algorithm.
+
+* ``gpu_method=0`` : makes an automatic choice of one of the below methods, based on our heuristics.
+
+* ``gpu_method=1`` : uses a nonuniform-point-driven method, either unsorted (referred to as GM in our paper) or sorted (called GM-sort in our paper), depending on the option ``gpu_sort`` below.
+
+* ``gpu_method=2`` : for spreading only, i.e., type 1 transforms, uses a shared-memory output-block-driven method, referred to as SM in our paper. Has no effect for interpolation (type 2 transforms).
+
+* ``gpu_method>2`` : (various unsupported experimental methods due to Melody Shih, not for regular users. E.g. ``3`` tests an idea of Paul Springer's to group NU points when spreading, ``4`` is a block gather method of possible interest.)
+
+**gpu_sort**: ``0`` do not sort nonuniform points, ``1`` do sort nonuniform points. Only has an effect when ``gpu_method=1`` (or if this method has been internally chosen when ``gpu_method=0``). Unlike the CPU code, there is no auto-choice, since in our experience sorting is fast and always helps. For structured NU point inputs it is possible that ``gpu_sort=0`` may be the faster choice.
+
+**gpu_kerevalmeth**: ``0`` use direct (reference) kernel evaluation, which is not recommended for speed (however, it allows a nonstandard ``opts.upsampfac`` to be used). ``1`` use Horner piecewise polynomial evaluation (recommended, and enforces ``upsampfac=2.0``).
+
+**upsampfac**: set upsampling factor. For the recommended ``kerevalmeth=1`` you must choose the standard ``upsampfac=2.0``. If you are willing to risk a slower kernel evaluation, you may set any ``upsampfac>1.0``, but this is experimental and unsupported.
+
+**gpu_maxsubprobsize**: maximum number of NU points to be handled in a single subproblem in the spreading SM method (``gpu_method=2`` only).
+
+**gpu_{o}binsize{x,y,z}**: various bin sizes for the sorting (GM-sort) or SM subproblem methods. Values of ``-1`` trigger the heuristically set default values. Leave at default unless you know what you're doing. [To be documented]
+
+**gpu_maxbatchsize**: ``0`` use a heuristically defined batch size for the vectorized (many transforms with the same NU points) interface; otherwise set this batch size.
+
+**gpu_stream**: CUDA stream to use. Leave at default unless you know what you're doing. [To be documented]
+
+For all GPU option default values we refer to the source code of ``cufinufft_default_opts``.
+
+For examples of advanced options-switching usage, see ``test/cuda/cufinufft*.cu`` and ``perftest/cuda/cuperftest.cu``.
+
+You may notice a lack of debugging/timing options in the GPU code. This is to avoid CUDA writing to stdout. Please help us out by adding some of these.
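
As a concrete illustration of the options workflow added above, here is a hedged sketch that overrides a few of the documented fields before plan creation. Only the option field names are taken from the documentation itself; the helper function is hypothetical, the chosen values are examples, and the ``cufinufft_makeplan`` argument list is assumed from the current C API::

    #include <cufinufft.h>

    /* Hypothetical helper: build a 2D type 1 plan with hand-tuned options. */
    static int make_tuned_plan(int64_t N1, int64_t N2, cufinufft_plan *plan) {
      cufinufft_opts opts;
      cufinufft_default_opts(&opts);
      opts.modeord = 1;               /* FFT-style mode ordering                 */
      opts.gpu_method = 2;            /* SM (shared-memory) spreading for type 1 */
      opts.gpu_kerevalmeth = 1;       /* Horner evaluation; needs upsampfac=2.0  */
      opts.gpu_maxsubprobsize = 1024; /* example SM subproblem size              */
      int64_t nmodes[3] = {N1, N2, 1};
      /* &opts goes in as the last argument, as described above */
      return cufinufft_makeplan(1, 2, nmodes, +1, 1, 1e-6, plan, &opts);
    }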
docs/julia.rst: 5 additions & 4 deletions
@@ -1,9 +1,10 @@
-Julia interfaces
-================
+Julia interfaces (CPU and GPU)
+==============================

-Ludvig af Klinteberg, Libin Lu, and others, have built `FINUFFT.jl <https://github.com/ludvigak/FINUFFT.jl>`_, an interface from the `Julia <https://julialang.org/>`_ language. This package supports 32-bit and 64-bit precision, and automatically downloads and runs pre-built binaries of the FINUFFT library for Linux, macOS, Windows and FreeBSD (for a full list see `finufft_jll <https://github.com/JuliaBinaryWrappers/finufft_jll.jl>`_).
+Principal author Ludvig af Klinteberg and others have built and maintain `FINUFFT.jl <https://github.com/ludvigak/FINUFFT.jl>`_, an interface from the `Julia <https://julialang.org/>`_ language. This official Julia package supports 32-bit and 64-bit precision, now on both CPU and GPU (via `CUDA.jl`), through a common interface.
+The Julia package installation automatically downloads pre-built CPU binaries of the FINUFFT library for Linux, macOS, Windows and FreeBSD (for a full list see `finufft_jll <https://github.com/JuliaBinaryWrappers/finufft_jll.jl>`_), and the GPU binary for Linux (see `cufinufft_jll <https://github.com/JuliaBinaryWrappers/cufinufft_jll.jl>`_).

-`FINUFFT.jl` has now (in 2022) itself been wrapped as part of `NFFT.jl <https://juliamath.github.io/NFFT.jl/dev/performance/>`_, which contains an "abstract" interface
+`FINUFFT.jl` has itself been wrapped as part of `NFFT.jl <https://juliamath.github.io/NFFT.jl/dev/performance/>`_, which contains an "abstract" interface
 to any NUFFT in Julia, with FINUFFT as an example.
docs/opts.rst: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 .. _opts:

-Options parameters
-==================
+Options parameters (CPU)
+========================

 Aside from the mandatory inputs (dimension, type,
 nonuniform points, strengths or coefficients, and, in C++/C/Fortran/MATLAB,
@@ -140,7 +140,7 @@ automatically from call to call in the same executable (incidentally, also in th
 * ``spread_sort=2`` : uses a heuristic to decide whether to sort or not.

 The heuristic bakes in empirical findings such as: generally it is not worth sorting in 1D type 2 transforms, or when the number of nonuniform points is small.
-Do not change this from its default unless you obsever.
+Feel free to try experimenting here; if you have highly-structured nonuniform point ordering (such as that coming from polar-grid or propeller-type MRI k-points) it may be advantageous not to sort.

 **spread_kerevalmeth**: Kernel evaluation method in spreader/interpolator.
 This should not be changed from its default value, unless you are an
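
As a small illustration of the ``spread_sort`` advice in the hunk above, here is a minimal sketch (assuming the standard CPU C interface; the helper name is hypothetical) that disables sorting for the highly-structured-points case mentioned in the new text::

    #include <finufft.h>

    /* Hypothetical helper: options for NU points already in a cache-friendly order. */
    static void set_nosort_opts(finufft_opts *opts) {
      finufft_default_opts(opts);   /* start from the library defaults  */
      opts->spread_sort = 0;        /* skip the NU-point sort entirely  */
    }
    /* Pass the resulting opts pointer as the final argument of the FINUFFT
       call (plan or simple interface), exactly as for any other option.   */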