
@paulromano (Contributor)

Description

This PR updates R2SManager and related classes so that they work properly with MPI parallelism. The changes include:

  • The neutron/photon transport steps are now executed via openmc.lib. Because the activation step uses mpi4py to decompose work across materials, the Python script itself has to be launched with mpiexec. Consequently, the neutron/photon transport steps cannot make a subprocess call to mpiexec ... openmc, since nested mpiexec calls are not supported (see the first sketch after this list).
  • Some results have to be broadcast across ranks in order for the full workflow to work, and when results are written to file, they should be written from only one rank (second sketch below).
  • Care has to be taken with temporary directories to ensure that all ranks see the same directory (third sketch below).
  • The TemporarySession context manager was also updated to work correctly when entered on multiple ranks in parallel.
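
A minimal sketch of the in-process transport pattern from the first bullet, assuming the standard openmc.lib entry points (init/run/finalize, with init accepting an intracomm communicator); the material list and strided decomposition are purely illustrative stand-ins, not the actual R2SManager logic:

```python
from mpi4py import MPI
import openmc.lib

comm = MPI.COMM_WORLD

# Run the transport solve in-process on all ranks instead of spawning
# `mpiexec ... openmc` from a script that was itself launched with mpiexec.
openmc.lib.init(intracomm=comm)
openmc.lib.run()
openmc.lib.finalize()

# Activation step: decompose work across materials with mpi4py.
# `material_ids` is a hypothetical placeholder for the model's materials.
material_ids = list(range(100))
my_materials = material_ids[comm.rank::comm.size]
```

The whole workflow is then launched once, e.g. `mpiexec -n 4 python r2s_script.py`, and the transport steps reuse those same ranks.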
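For the second bullet, a hedged illustration of the broadcast-then-write-once idiom; the results object and filename are placeholders, not the actual R2SManager data structures:

```python
import json

from mpi4py import MPI

comm = MPI.COMM_WORLD

# Suppose rank 0 holds the combined results after the activation step
# (placeholder dictionary for illustration).
results = {'flux': [1.0, 2.0]} if comm.rank == 0 else None

# Broadcast so every rank sees the same results for subsequent steps.
results = comm.bcast(results, root=0)

# Write to file from a single rank only, to avoid concurrent writes.
if comm.rank == 0:
    with open('results.json', 'w') as fh:
        json.dump(results, fh)
```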
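For the temporary-directory and TemporarySession bullets, the usual mpi4py idiom is for one rank to create the directory and broadcast its path so all ranks agree on it. A sketch under that assumption (a hypothetical helper, not the actual TemporarySession implementation):

```python
import shutil
import tempfile
from contextlib import contextmanager

from mpi4py import MPI

@contextmanager
def shared_tmpdir(comm=MPI.COMM_WORLD):
    """Hypothetical parallel-safe temporary directory: rank 0 creates it,
    the path is broadcast so every rank sees the same directory, and a
    barrier ensures no rank removes it while others are still using it."""
    path = comm.bcast(tempfile.mkdtemp() if comm.rank == 0 else None, root=0)
    try:
        yield path
    finally:
        comm.barrier()
        if comm.rank == 0:
            shutil.rmtree(path)
```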

Checklist

  • I have performed a self-review of my own code
  • I have run clang-format (version 15) on any C++ source files (if applicable)
  • I have followed the style guidelines for Python source files (if applicable)
  • I have made corresponding changes to the documentation (if applicable)
  • I have added tests that prove my fix is effective or that my feature works (if applicable)
