NonlinearSolve.jl supports GPU acceleration on a wide array of devices, such as:

To use NonlinearSolve.jl on GPUs, there are two distinctly different approaches:

 1. You can build a `NonlinearProblem`/`NonlinearLeastSquaresProblem` where the elements
    of the problem, i.e. `u0` and `p`, are defined on GPUs. This will make the evaluations
    of `f` occur on the GPU, and all internal updates of the solvers will be completely
    on the GPU as well. This is the optimal form for large systems of nonlinear equations.
 2. You can use SimpleNonlinearSolve.jl as kernels in KernelAbstractions.jl. This will build
    problem-specific GPU kernels in order to parallelize the solution of the chosen nonlinear
    system over a large number of inputs. This is useful for cases where you have a small
    `NonlinearProblem`/`NonlinearLeastSquaresProblem` which you want to solve over a large
    number of initial guesses or parameters.
For a deeper dive into the computational difference between these techniques and why it
leads to different pros/cons, see the