Conversation

@1tnguyen
Contributor

A top-level MPI_Init in XACC::Initialize() is not ideal, since we may want to use an MPI-enabled backend without HPC Virtualization; a global MPI_Init there can lead to a double MPI_Init and is therefore problematic. Hence, move MPI_Init back within the scope of the HPCVirt decorator.

Fixing an MPI_Finalize race condition when MPI-enabled ExaTN is present in the installation.
ExaTN keeps an `exatnInitializedMPI` flag to determine whether it should perform the MPI_Finalize step. The HPC Virt decorator needs the same mechanism to prevent it from finalizing MPI prematurely and causing MPI errors during ExaTN::Finalize(), which may still call MPI APIs.
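For reference, a minimal sketch of the ownership-tracking pattern described above (class and member names are illustrative, not the actual XACC source):

```cpp
#include <mpi.h>

class HPCVirtDecorator {
public:
  void initialize() {
    int mpiInitialized = 0;
    MPI_Initialized(&mpiInitialized);
    if (!mpiInitialized) {
      // We initialized MPI here, so we are the ones responsible for finalizing it.
      MPI_Init(nullptr, nullptr);
      m_hpcVirtInitializedMPI = true;
    }
  }

  void finalize() {
    int mpiFinalized = 0;
    MPI_Finalized(&mpiFinalized);
    // Only finalize if we initialized MPI and it has not been finalized yet;
    // otherwise leave MPI_Finalize to the component that owns it (e.g. ExaTN).
    if (m_hpcVirtInitializedMPI && !mpiFinalized) {
      MPI_Finalize();
    }
  }

private:
  bool m_hpcVirtInitializedMPI = false;
};
```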

Signed-off-by: Thien Nguyen <[email protected]>
1tnguyen self-assigned this on Jul 22, 2022
@1tnguyen
Contributor Author

@danclaudino Could you please review this PR? It fixes MPI API usage after MPI_Finalize and a double MPI_Init issue in some cases.
