[femutils] multi-CPU for direct PETSc solver #320
mohd-afeef-badri merged 10 commits into arcaneframework:main from
Conversation
…nd + renamed options to petsc ones
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main     #320      +/-   ##
==========================================
+ Coverage   69.44%   69.52%   +0.08%
==========================================
  Files         114      114
  Lines       14493    14487       -6
  Branches     1967     1974       +7
==========================================
+ Hits        10064    10072       +8
+ Misses       3979     3967      -12
+ Partials      450      448       -2
- void setPreconditioner(String v) { m_preconditionner_method = std::string{v.localstr()}; }
+ void setSolver(String v) { m_ksp_type = std::string{v.localstr()}; }
+ void setPreconditioner(String v) { m_pc_type = std::string{v.localstr()}; }
+ void setMatrixType(String v) { m_mat_type = std::string{v.localstr()}; }
@jojoasticot maybe we can also add m_vec_type, since we control both the vector and matrix types for execution.
  <simple name="solver" type="string" default="cg" />
- <simple name="preconditioner" type="string" default="jacobi" />
+ <simple name="pc-type" type="string" default="jacobi" />
@jojoasticot better to keep it as bjacobi; it has a better chance of converging.
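If the default is changed as suggested, the option line might read as follows (a sketch following the `<simple>` schema shown in the diff; only the `default` value changes):

```xml
<simple name="pc-type" type="string" default="bjacobi" />
```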
arcanefem_add_gpu_test(NAME [testlab]petsc_direct_2p NB_MPI 2 COMMAND ./Testlab ARGS inputs/Test.petsc_direct.arc)
arcanefem_add_gpu_test(NAME [testlab]petsc_direct_2p_gpu NB_MPI 2 COMMAND ./Testlab ARGS -A,petsc_flags='-vec_type cuda -mat_type mpiaijcusparse -use_gpu_aware_mpi 0' inputs/Test.petsc_direct.arc)
arcanefem_add_gpu_test(NAME [testlab]petsc_direct_4p NB_MPI 4 COMMAND ./Testlab ARGS inputs/Test.petsc_direct.arc)
arcanefem_add_gpu_test(NAME [testlab]petsc_direct_4p_gpu NB_MPI 4 COMMAND ./Testlab ARGS -A,petsc_flags='-vec_type cuda -mat_type mpiaijcusparse -use_gpu_aware_mpi 0' inputs/Test.petsc_direct.arc)
@jojoasticot I guess it is better to remove the gpu-aware-mpi flag.
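If the flag is dropped as suggested, one of the GPU test lines from the diff would become (a sketch: the line is copied from the diff with only the `-use_gpu_aware_mpi 0` option removed):

```cmake
arcanefem_add_gpu_test(NAME [testlab]petsc_direct_2p_gpu NB_MPI 2 COMMAND ./Testlab ARGS -A,petsc_flags='-vec_type cuda -mat_type mpiaijcusparse' inputs/Test.petsc_direct.arc)
```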
  elapsedTime = platform::getRealTime() - TimeStart;
  _printArcaneFemTime("[ArcaneFem-Timer] solving-linear-system-3", elapsedTime);

  elapsedTime = platform::getRealTime() - TimeStart;
  _printArcaneFemTime("[ArcaneFem-Timer] solving-linear-system2", elapsedTime);
grospelliergilles
left a comment
Some parts of the code are not indented with clang-format and should be re-indented
- PetscInt global_rows = dof_family->nbItem(); // total rows across all ranks
+ PetscInt global_rows = m_nb_total_row; // total rows across all ranks

  PetscCallAbort(mpi_comm, MatCreate(mpi_comm, &m_petsc_matrix));
Do not use PetscCallAbort() in case of error.
Instead, use a function like the one here: https://github.com/arcaneframework/framework/blob/c906ad0c50b5ebbee12d975211e02a15a2ec1d0e/arcane/src/arcane/aleph/petsc/AlephPETSc.cc#L56.
Using ARCANE_FATAL instead of PetscCallAbort() will let us display the stack trace in case of error, and the error message will also be reported in the logs.
  const Real* rhs_data = rhs_variable.asArray().data();
  const Real* result_data = dof_variable.asArray().data();
  const Int32* rows_index_data = rows_index_span.data();
It is better to stay with SmallSpan instead of a raw pointer. Using SmallSpan adds bounds checking:
SmallSpan<const Real> rhs_data = rhs_variable.asArray();
...
  ENUMERATE_ (DoF, idof, dof_family->allItems()) {
    DoF dof = *idof;
    if (!dof.isOwn())
      continue;
    m_parallel_rows_index[index] = rows_index_span[idof.index()];
    ++index;
You can use dof_family->ownItems(), and then you do not need the if (!dof.isOwn()) check.
Changes can be added in another PR, as Gilles suggested.
mpi for petsc_direct
No description provided.