joss/paper_sparse.md (4 additions, 4 deletions)
@@ -21,7 +21,7 @@ authors:
 affiliations:
   - name: Transvalor S.A., France
     index: 1
-  - name: Leibniz Centre of Supercomputing, Germany
+  - name: Leibniz Supercomputing Centre, Garching, Germany
     index: 2
   - name: Wageningen University and Research, The Netherlands
     index: 3
@@ -70,7 +70,7 @@ All sparse formats extend an abstract base derived type sparse_type, which holds
 * CSR: compressed sparse row (Yale) format.
 * CSC: compressed sparse column format.
 * ELLPACK: fixed number of nonzeros per row, suited for vectorization.
-* SELL-C: sliced ELLPACK, balancing CSR and ELLPACK trade-offs [@anzt2014implementing].
+* SELL-C: sliced ELLPACK, balancing CSR and ELLPACK trade-offs [@scellc; @anzt2014implementing].

 ## Core functionality

@@ -83,7 +83,7 @@ $$ y = \alpha op(A) * x + \beta * y$$

 ## Implementation details

-Before introducing stdlib_sparse, the core structure and API was crafted under a stand-alone project, FSPARSE [@fsparse2024]. This enabled testing and refinement of the library before integration into stdlib.
+Before introducing stdlib_sparse, the core structure and API were crafted under a standalone project, FSPARSE [@fsparse2024]. This enabled testing and refinement of the library before integration into stdlib.

 The module is designed with the following key features:

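The storage formats listed in the previous hunk and the `spmv` operation shown in the hunk header above (y = alpha * op(A) * x + beta * y) come together in a short driver. The sketch below is illustrative only: `spmv`, the format names, and `sparse_type` appear in the paper, while `COO_dp_type`, `CSR_dp_type`, `stdlib_kinds`, the conversion helpers `dense2coo`/`coo2csr`, and the optional alpha/beta arguments are assumptions taken from the stdlib documentation and should be checked against the current API.

```fortran
program spmv_demo
   ! Illustrative sketch: assemble a small matrix, convert it to CSR, and
   ! apply y = alpha*op(A)*x + beta*y with the spmv kernel from the paper.
   ! Type and helper names below are assumptions, not taken from the paper.
   use stdlib_kinds, only: dp
   use stdlib_sparse
   implicit none

   real(dp) :: dense(4,4), x(4), y(4)
   type(COO_dp_type) :: coo
   type(CSR_dp_type) :: csr

   ! Small banded test matrix.
   dense = 0.0_dp
   dense(1,1) = 4.0_dp; dense(1,2) = 1.0_dp
   dense(2,2) = 4.0_dp; dense(2,3) = 1.0_dp
   dense(3,3) = 4.0_dp; dense(3,4) = 1.0_dp
   dense(4,4) = 4.0_dp

   call dense2coo(dense, coo)             ! dense -> COO
   call coo2csr(coo, csr)                 ! COO -> CSR

   x = 1.0_dp
   y = 0.0_dp
   call spmv(csr, x, y)                   ! y = A*x (assuming defaults alpha=1, beta=0)
   call spmv(csr, x, y, 2.0_dp, 1.0_dp)   ! y = 2*A*x + y
   print *, y
end program spmv_demo
```

Because every format extends the same `sparse_type` base, the same call pattern is expected to apply to the COO, CSC, ELLPACK, and SELL-C types as well.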
@@ -132,7 +132,7 @@ end program main

 # Performance and limitations

-Sparse matrix–vector multiplication has been implemented for all formats. Preliminary tests confirm correctness and scalability to moderately large problems. However:
+Sparse matrix–vector multiplication has been implemented for all formats. Tests confirm correctness and scalability to moderately large problems. However:

 * No sparse matrix–matrix multiplication or factorizations are yet implemented.
 * For data-parallelism (multi-processing with MPI or coarrays) the `spmv` kernel can be used as basis within each process. Multi-threading or GPU acceleration is not currently supported.
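On the data-parallelism point in the last bullet: the serial `spmv` kernel is applied independently by each process to its own row block of A, with the input vector replicated. The sketch below is a hypothetical illustration of that pattern, not code from the paper; the 1D row-block layout is arbitrary, and the stdlib_sparse type and helper names are the same assumptions as in the previous sketch.

```fortran
program spmv_rowblock_mpi
   ! Hypothetical data-parallel pattern: each rank owns a block of rows of A
   ! (stored as CSR) and calls the serial spmv kernel on it; x is replicated.
   use mpi_f08
   use stdlib_kinds, only: dp
   use stdlib_sparse
   implicit none

   integer, parameter :: nloc = 3                 ! rows owned per rank
   integer :: rank, nranks, i, n
   real(dp), allocatable :: dense(:,:), x(:), y_local(:)
   type(COO_dp_type) :: coo
   type(CSR_dp_type) :: A_local

   call MPI_Init()
   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
   call MPI_Comm_size(MPI_COMM_WORLD, nranks)
   n = nloc*nranks                                ! global number of columns

   ! Assemble this rank's row block: a simple bidiagonal pattern.
   allocate(dense(nloc, n)); dense = 0.0_dp
   do i = 1, nloc
      dense(i, rank*nloc + i) = 2.0_dp
      if (rank*nloc + i < n) dense(i, rank*nloc + i + 1) = -1.0_dp
   end do
   call dense2coo(dense, coo)
   call coo2csr(coo, A_local)

   allocate(x(n)); x = 1.0_dp                     ! replicated input vector
   allocate(y_local(nloc)); y_local = 0.0_dp
   call spmv(A_local, x, y_local)                 ! local rows of y = A*x

   print '(a,i0,a,*(f7.2))', 'rank ', rank, ': y_local =', y_local
   call MPI_Finalize()
end program spmv_rowblock_mpi
```

Gathering the distributed pieces of y, if needed, is left to the application; the kernel itself stays serial within each process, which matches the limitation stated above.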