\subsubsection*{Installation on Linux}
\begin{enumerate}
\item Install prerequisites: Ensure you have a C++11-compatible compiler (e.g.\ \texttt{g++} 4.8.2 or later) with OpenMP support, CMake (version 3.13 or later) and GNU Make, plus the development headers for libNUMA and POSIX threads.
This API is non-blocking only in the sense that, internally, ALP may overlap operations (such as sparse matrix-vector multiplications and vector updates) and use asynchronous execution for performance. To the caller, however, the above functions behave synchronously: for example, sparse\_cg\_solve only returns after the solve is complete (there is no separate ``wait'' call exposed in this C interface). The benefit of ALP's approach is that you, the developer, do not need to manage threads or message passing at all; ALP's GraphBLAS engine handles parallelism behind the scenes, and you simply call these routines as you would any other library function. Now, let's put these functions into practice with a concrete example.
\subsection{Example: Solving a Linear System with ALP’s CG Solver}
Suppose we want to solve a small system $Ax = b$ to familiarize ourselves with the CG interface. We will use the following $3\times3$ symmetric positive-definite matrix $A$:
$$ A = \begin{pmatrix} 4 & 1 & 0\\ 1 & 3 & -1\\ 0 & -1 & 2 \end{pmatrix}, $$
and we choose a right-hand side vector $b$ such that the true solution is easy to verify. If we take the solution to be $x = (1,\;2,\;3)$, then $b = A x$ can be calculated as:
$$ b = \begin{pmatrix} 6\\ 4\\ 4 \end{pmatrix}, $$
since $4\cdot1 + 1\cdot2 + 0\cdot3 = 6$, $1\cdot1 + 3\cdot2 + (-1)\cdot3 = 4$, and $0\cdot1 + (-1)\cdot2 + 2\cdot3 = 4$. Our goal is to see if the CG solver recovers $x = (1,2,3)$ from $A$ and $b$.
We will hard-code $A$ in CRS format (also called CSR: Compressed Sparse Row) for the solver. In CRS, the matrix is stored row by row, using parallel arrays for the nonzero values and their column indices, plus an array of row offsets that marks where each row starts. For matrix $A$ above:
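\begin{itemize}
  \item values: $[\,4,\ 1,\ 1,\ 3,\ -1,\ -1,\ 2\,]$,
  \item column indices: $[\,0,\ 1,\ 0,\ 1,\ 2,\ 1,\ 2\,]$,
  \item row offsets: $[\,0,\ 2,\ 5,\ 7\,]$, i.e.\ row $i$ occupies positions $\mathtt{offs}[i]$ through $\mathtt{offs}[i+1]-1$ of the values and column-index arrays.
\end{itemize}
These are exactly the \texttt{A\_vals}, \texttt{A\_cols}, and \texttt{A\_offs} arrays used in the code below. The offsets array is what drives any row-wise traversal of the matrix. As a minimal, ALP-independent sketch of such a traversal (the helper name \texttt{crs\_spmv} is ours, purely for illustration), the following function multiplies a CRS matrix by a dense vector; applied to the arrays above with $x = (1,2,3)$, it reproduces $b = (6,4,4)$:

\begin{verbatim}
#include <stddef.h>

/* y = A * x for a matrix in CRS format: the nonzeros of row i live at
 * positions offs[i] .. offs[i+1]-1 of the vals and cols arrays. */
void crs_spmv(size_t n, const double *vals, const int *cols, const int *offs,
              const double *x, double *y) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = 0.0;
        for (int k = offs[i]; k < offs[i + 1]; ++k) {
            y[i] += vals[k] * x[cols[k]];
        }
    }
}
\end{verbatim}

The complete example program using ALP's CG solver then reads: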
#include <stdio.h>
#include <stdlib.h>

#include <graphblas/solver.h>

int main(){
    // Define the 3x3 test matrix in CRS format
    const size_t n = 3;

    double A_vals[] = {
        4.0, 1.0,        // row 0 values
        1.0, 3.0, -1.0,  // row 1 values
        -1.0, 2.0        // row 2 values
    };

    int A_cols[] = {
        0, 1,            // row 0 column indices
        0, 1, 2,         // row 1 column indices
        1, 2             // row 2 column indices
    };

    int A_offs[] = { 0, 2, 5, 7 }; // row start offsets: 0, 2, 5; total nnz = 7

    // Right-hand side b and solution vector x
    double b[] = { 6.0, 4.0, 4.0 }; // b = A * [1,2,3]^T
    double x[] = { 0.0, 0.0, 0.0 }; // initial guess x = 0 (will hold the solution)

    // Solver handle
    sparse_cg_handle_t handle;

    int err = sparse_cg_init_dii(&handle, n, A_vals, A_cols, A_offs);
    if (err != 0) {
        fprintf(stderr, "CG init failed with error %d\n", err);
        return EXIT_FAILURE;
    }

    // (Optional) set a preconditioner here if needed, e.g. Jacobi or others.
    // We skip this, so no preconditioner is used (effectively M = Identity).
    err = sparse_cg_solve_dii(handle, x, b);
    if (err != 0) {
        fprintf(stderr, "CG solve failed with error %d\n", err);
        sparse_cg_destroy_dii(handle);
        return EXIT_FAILURE;
    }

    // Print the solution vector x
    printf("Solution x = [%.2f, %.2f, %.2f]\n", x[0], x[1], x[2]);

    // Clean up
    sparse_cg_destroy_dii(handle);

    return 0;
}
\begin{itemize}
\item We included \texttt{<graphblas/solver.h>} (the exact include path might be \texttt{alp/solver.h} or similar depending on ALP's install, but typically it resides in the GraphBLAS include directory of ALP). This header defines the \texttt{sparse\_cg\_*} functions and the \textbf{sparse\_cg\_handle\_t} type.
\item We set up the matrix data in CRS format. For clarity, the values and indices are grouped by row in the code. The offsets array \{0, 2, 5, 7\} indicates that row 0 uses \texttt{A\_vals[0..1]}, row 1 uses \texttt{A\_vals[2..4]}, and row 2 uses \texttt{A\_vals[5..6]}. The matrix dimension \texttt{n} is 3.
\item We prepare the vectors \texttt{b} and \texttt{x}. \texttt{b} is initialized to \{6, 4, 4\} as computed above, and \texttt{x} is initialized to all zeros as a starting guess. In a real scenario you could start from a different guess, but zero is a common default.
\item We create a \textbf{sparse\_cg\_handle\_t} and call \textbf{sparse\_cg\_init}. This hands the matrix to ALP's solver. Under the hood, ALP will likely copy or reference this data and possibly analyze $A$ for the CG algorithm. We check the return code \texttt{err}; if it is non-zero, we print an error and exit. (For example, an error might occur if \texttt{n} or the offsets are inconsistent. In our case, it should succeed with \texttt{err == 0}.)
\item We do not call \textbf{sparse\_cg\_set\_preconditioner} in this example, which means the CG will run unpreconditioned. If we wanted to, we could implement a simple preconditioner. For instance, a Jacobi preconditioner would use the diagonal of $A$: we'd create an array with $\text{diag}(A) = [4,3,2]$ and a function that divides the residual by this diagonal, and then call \textbf{sparse\_cg\_set\_preconditioner(handle, my\_prec\_func, diag\_data)}; a minimal sketch is given after this list. For brevity, we skip this here, and ALP will just use the identity preconditioner by default (no acceleration).
\item Next, we call \textbf{sparse\_cg\_solve(handle, x, b)}. ALP iterates internally to solve $Ax=b$; when this function returns, \texttt{x} should contain the solution. We again check \texttt{err}. A non-zero code could indicate that the solver failed to converge (though typically it would still return 0, and one would check convergence via a status code or the residual; ALP's API may evolve to provide more information). In our small case it should converge in at most 3 iterations, since in exact arithmetic CG terminates after at most $n$ iterations for an $n \times n$ SPD system, and here $n = 3$.
\item We print the resulting \texttt{x}. We expect to see something close to [1.00, 2.00, 3.00]. Because our matrix and $b$ were consistent with the exact solution $(1,2,3)$, the CG method should find it exactly (up to floating-point rounding). You can compare this output with the known true solution to verify that the solver worked correctly.
\item Finally, we call \textbf{sparse\_cg\_destroy(handle)} to free ALP's resources for the solver. This is especially important for larger problems, to avoid memory leaks. After this, we return from \texttt{main}.
\end{itemize}
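As mentioned in the preconditioner bullet above, here is a minimal sketch of what a Jacobi (diagonal) preconditioner for this $3\times3$ example could look like. The exact callback signature and function name expected by ALP should be checked against the \texttt{solver.h} header of your installation; the sketch below assumes a callback of the form \texttt{int prec(void *data, double *out, const double *in)} that applies $M^{-1} = \text{diag}(A)^{-1}$ to its input, and the names \texttt{jacobi\_prec} and \texttt{diag\_A} are ours, purely for illustration.

\begin{verbatim}
/* Assumed callback shape: int prec(void *data, double *out, const double *in),
 * applying M^{-1} to in; verify against solver.h of your ALP version. */
static double diag_A[3] = { 4.0, 3.0, 2.0 };  /* diag(A) = [4,3,2] */

static int jacobi_prec(void *data, double *out, const double *in) {
    const double *d = (const double *) data;
    for (int i = 0; i < 3; ++i) {
        out[i] = in[i] / d[i];  /* divide the residual entrywise by diag(A) */
    }
    return 0;  /* 0 signals success (assumed convention) */
}

/* Usage, before the solve call (verify the exact name and signature):
 *   sparse_cg_set_preconditioner(handle, jacobi_prec, (void *) diag_A);
 */
\end{verbatim}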
\section*{Building and Running the Example}
To compile the above code with ALP, we will use the direct linking option as discussed.