This issue is a follow-up to the fix for the bug in #144.
We have been using the lowercase send/recv methods, particularly for ghost-cell communication. With CuPy + MPI, these come at the cost of copying the GPU array to a CPU array (confirmed in discussion with the mpi4py maintainers). They strongly encourage the buffer-based Send/Recv methods instead, which require some upfront buffer allocation (and possibly synchronization) but are more performant.
For further reference, see mpi4py - communication with buffer. At some point we should revisit this part and switch to Send/Recv for the benefit of CuPy + MPI users.
mrava87