@@ -29,7 +29,7 @@ a few ways:
can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
3. LinearSolve.jl makes a lot of other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB. (A sketch of the caching and factorization-reuse interface follows this list.)
-4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
+4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
matrices. Which sparse matrix solver (KLU, UMFPACK, Pardiso, etc.) is optimal depends a lot on matrix sizes, sparsity patterns,
and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues (see the sparse-solver sketch after this list).
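
As an aside for readers of this FAQ (not part of the changed file): a minimal sketch of the caching and factorization-reuse workflow referenced in points 2 and 3 might look like the following, assuming the `init`/`solve!` caching interface of LinearSolve.jl; the variable names are illustrative.

```julia
using LinearSolve, LinearAlgebra

n = 4
A = rand(n, n)
b1 = rand(n)
b2 = rand(n)

prob = LinearProblem(A, b1)

# Build a cache once; the factorization of A is computed lazily on the first
# solve! and kept in the cache instead of being recomputed every time.
linsolve = init(prob)
sol1 = solve!(linsolve)

# Swap in a new right-hand side and reuse the existing factorization.
linsolve.b = b2
sol2 = solve!(linsolve)
```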
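Similarly (again as an aside, not part of the diff), the sparse-solver choice in point 4 can also be made explicit by passing a factorization algorithm to `solve`. A rough sketch, assuming `KLUFactorization` and `UMFPACKFactorization` are available in the installed LinearSolve.jl version:

```julia
using LinearSolve, LinearAlgebra, SparseArrays

n = 100
A = sprand(n, n, 0.05) + I   # random sparse matrix, shifted by I so it is (almost surely) nonsingular
b = rand(n)
prob = LinearProblem(A, b)

# Default: LinearSolve.jl's heuristics pick a solver from the matrix type and size.
sol_auto = solve(prob)

# Explicit choices, e.g. for benchmarking one against the other:
sol_klu = solve(prob, KLUFactorization())
sol_umf = solve(prob, UMFPACKFactorization())
```
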
@@ -48,7 +48,7 @@ A = rand(n,n)
b = rand(n)
prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolvers_GMRES(),Pl=Pl,Pr=Pr)
+sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
```
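
For context outside this hunk (the lines defining `Pl` and `Pr` are not shown in the diff), a self-contained version of the snippet above might look like the following; the diagonal `weights`-based preconditioners here are an illustrative assumption, not the definitions used earlier in the file.

```julia
using LinearSolve, LinearAlgebra

n = 4
A = rand(n, n)
b = rand(n)

# Illustrative left/right preconditioners: any operators supporting ldiv! work here.
weights = rand(n)
Pl = Diagonal(weights)
Pr = Diagonal(weights)

prob = LinearProblem(A, b)
# Depending on the LinearSolve.jl version, `using IterativeSolvers` may also be
# needed for the IterativeSolversJL_GMRES wrapper to be available.
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
```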
If you want to use a "real" preconditioner under the norm `weights`, then one
@@ -64,5 +64,5 @@ A = rand(n,n)
b = rand(n)
prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolvers_GMRES(),Pl=Pl,Pr=Pr)
+sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
```