Replies: 2 comments 6 replies
-
Hi @lj-cug, I've converted your issue into a discussion since I think that fits the topic of your questions better.
Feel free to reach out if you encounter any issues.
-
Referring to the PETSc_AmgXWrapper code (PetscErrorCode AmgXSolver::setA(const Mat &A)) and ginkgo's distributed-multigrid-preconditioned-solver example: if I already have the partition in my code, is it still necessary to create the partition the way the example does? I want to inject PETSc's local matrix and vector into Ginkgo's distributed matrix and vector. The steps I have in mind are:
// Assemble the matrix A and fill the right-hand-side b and unknown vector x
// Use .emplace_back to create the matrix and vector data
// Create the distributed Matrix and Vector on the host
// Transfer the matrix and vector data into the corresponding ginkgo array
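The emplace_back-then-convert pattern from the steps above can be sketched in standalone C++ (no Ginkgo dependency, so the names `Triplet`, `Csr`, and `triplets_to_csr` are illustrative only; in real code the triplets would go into `gko::matrix_data` and be read into a Ginkgo distributed matrix):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <tuple>
#include <vector>

// Collect local (row, col, value) triplets with emplace_back, then convert
// them to the CSR arrays a distributed matrix would be assembled from.
struct Triplet {
    std::int64_t row, col;
    double val;
};

struct Csr {
    std::vector<std::int64_t> row_ptrs;  // size num_rows + 1
    std::vector<std::int64_t> col_idxs;  // size nnz
    std::vector<double> values;          // size nnz
};

Csr triplets_to_csr(std::vector<Triplet> t, std::int64_t num_rows) {
    // Sort by (row, col) so the CSR arrays come out row-major.
    std::sort(t.begin(), t.end(), [](const Triplet& a, const Triplet& b) {
        return std::tie(a.row, a.col) < std::tie(b.row, b.col);
    });
    Csr m;
    // Count entries per row, then prefix-sum into row offsets.
    m.row_ptrs.assign(num_rows + 1, 0);
    for (const auto& e : t) {
        ++m.row_ptrs[e.row + 1];
    }
    for (std::int64_t r = 0; r < num_rows; ++r) {
        m.row_ptrs[r + 1] += m.row_ptrs[r];
    }
    // Entries are already sorted, so columns/values can be appended in order.
    for (const auto& e : t) {
        m.col_idxs.push_back(e.col);
        m.values.push_back(e.val);
    }
    return m;
}
```

Since you already hold the partition and the assembled local data from PETSc, the conversion step is the only part the Ginkgo example adds; the partition creation in the example exists only for codes that do not bring their own.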
-
Dear Developers,
ginkgo is a powerful linear solver library that can be ported to different processors, including Nvidia, Intel, and AMD. I have integrated PETSc and AmgX into my hydrodynamic code, and ginkgo has also attracted me. Before using ginkgo, I have some questions for the developers and hope you can guide me:
(1) I can see the distributed multi-grid algorithm implementation in the example code, and I want to know how to inject a PETSc Mat A and Vec b into ginkgo's A and b.
My hydrodynamic code uses PETSc and integrates the external linear algebra library AmgX (Nvidia) via the AmgXWrapper project. In setA.cpp of AmgXWrapper, the following function easily uploads my assembled PETSc CSR matrix into AmgX (the Vector b operation is similar):
AMGX_matrix_upload_distributed(
AmgXA, nGlobalRows, nLocalRows, row[nLocalRows],
1, 1, row.data(), col.data(), data.data(),
nullptr, dist);
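For reference, the row/col/data arrays in that call are plain local CSR arrays; in particular, row[nLocalRows] is the local number of nonzeros, which is why it is passed as the nnz argument above. A standalone illustration (no AmgX or PETSc needed; the struct and function names are mine, not library API) for a 2x2 local block [[4, -1], [0, 3]]:

```cpp
#include <cassert>
#include <vector>

// Local CSR block as consumed by AMGX_matrix_upload_distributed:
// row  - CSR row-offset array of size nLocalRows + 1
// col  - global column indices, size row[nLocalRows]
// data - nonzero values, same length as col
struct LocalCsr {
    std::vector<int> row;
    std::vector<int> col;
    std::vector<double> data;
};

// Example 2x2 local block:
//   [ 4 -1 ]
//   [ 0  3 ]
LocalCsr make_example_block() {
    return LocalCsr{{0, 2, 3}, {0, 1, 1}, {4.0, -1.0, 3.0}};
}
```

The same three arrays (offsets, column indices, values) are what any CSR-based backend, Ginkgo included, ultimately needs, so the data you already hand to AmgX is sufficient input for a Ginkgo matrix as well.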
(2) Has the MPI-parallelized ginkgo (the distributed solvers) reached production-level maturity? And has it been applied on multi-GPU clusters?
(3) Can the example code in external-lib-interfacing be used to implement something like the above-mentioned AmgXWrapper?
Sincerely,
Li Jian
China University of Geoscience