support for mpi_comm_attach_buffer and friends #13338
Conversation
The fortran binding infrastructure additions in PR #13279 need to be merged into main prior to adding fortran bindings for these functions.
b0af63d to de8d884
@edgargabriel we'll start with you for reviewing.
ompi_instance_t* instance;

/* pointer to buffer object used for buffered sends */
void *bsend_buffer;
We should probably double-check that the size of the communicator structure is still below the threshold for being backwards compatible with the 5.0 series.
Oh, good point. I think what we really need to check is sizeof(ompi_predefined_communicator_t).
Hmm, actually, looking at how that struct is defined, if sizeof(ompi_communicator_t) is greater than PREDEFINED_COMMUNICATOR_PAD I would hope one would get an error from the compiler when trying to use the communicator.h header in a source file. At any rate, we still get 512 bytes for the size of ompi_predefined_communicator_t, so I think we're good.
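For context, a simplified sketch of the padding idiom being referred to here; the struct contents and the 512-byte pad value are illustrative assumptions, not the actual communicator.h definition. An explicit static assertion would make the failure mode obvious at compile time:

/* Simplified sketch -- not the actual communicator.h definition. */
#include <assert.h>                       /* static_assert (C11) */

#define PREDEFINED_COMMUNICATOR_PAD 512   /* pad value quoted in the discussion */

struct ompi_communicator_t {
    /* ... existing members elided ... */
    void *bsend_buffer;                   /* member added by this PR */
};

/* Predefined handles embed the communicator plus padding out to the pad size,
 * so growing ompi_communicator_t past PREDEFINED_COMMUNICATOR_PAD breaks the
 * size guarantee relied on for compatibility with the 5.0 series. */
struct ompi_predefined_communicator_t {
    struct ompi_communicator_t comm;
    char padding[PREDEFINED_COMMUNICATOR_PAD - sizeof(struct ompi_communicator_t)];
};

/* An explicit assertion turns an overflow of the pad into a clear compile error. */
static_assert(sizeof(struct ompi_communicator_t) <= PREDEFINED_COMMUNICATOR_PAD,
              "ompi_communicator_t no longer fits in the predefined-handle pad");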
void *addr;
const uint64_t seg_size = (1024UL * 1024UL);

addr = malloc(seg_size);
Not necessarily for this PR, but it was discussed in the MPI Forum that one of the advantages MPI_BUFFER_AUTOMATIC could offer is that, if the memkind info object on the communicator indicates the user guarantees only GPU buffers are being used, MPI_BUFFER_AUTOMATIC could allocate the buffer in device memory. Not sure whether that would be a completely different allocator that we would have to provide, or just a way to influence the implementation of the malloc operation in this routine.
I'm putting a note near line 273 about this. I guess we could have a mechanism for specifying the allocator type by means of an info key/value associated with a communicator?
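Purely as an illustration of that idea, a rough sketch of info-key-driven allocator selection; the key name "ompi_bsend_alloc_kind" and the device branch are hypothetical and not part of this PR or of any existing Open MPI info key:

/* Hypothetical sketch: pick the automatic bsend buffer allocator based on an
 * info key attached to the communicator. Key name and device path are
 * assumptions for illustration only. */
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

static void *alloc_auto_bsend_buffer(MPI_Comm comm, size_t seg_size)
{
    MPI_Info info;
    char value[MPI_MAX_INFO_VAL];
    int len = MPI_MAX_INFO_VAL, flag = 0;

    MPI_Comm_get_info(comm, &info);
    /* "ompi_bsend_alloc_kind" is a made-up key used only for illustration. */
    MPI_Info_get_string(info, "ompi_bsend_alloc_kind", &len, value, &flag);
    MPI_Info_free(&info);

    if (flag && 0 == strcmp(value, "device")) {
        /* Here one would call a device allocator (e.g. cudaMalloc/hipMalloc)
         * instead of malloc; omitted to keep the sketch host-only. */
        return NULL;  /* placeholder for a device allocation */
    }
    return malloc(seg_size);  /* default: host memory, as in this routine today */
}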
looks good to me, thank you!
de8d884 to cf2fc8d
@edgargabriel check the comment I added in pml_base_bsend.c starting at line 272.
hmmm not sure what's going on with the NVIDIA openshmem test.
LGTM
Add support for functions added as part of the MPI 4.1 standard concerning buffer management for buffered send mode:

MPI_Comm_attach_buffer
MPI_Comm_detach_buffer
MPI_Session_attach_buffer
MPI_Session_detach_buffer
MPI_Comm_buffer_flush
MPI_Comm_buffer_iflush
MPI_Session_buffer_flush
MPI_Session_buffer_iflush

Note the MPI_BUFFER_AUTOMATIC handling in the use mpi_f08 methods requires some beefing up of the fortran binding generation code.

Full support for non-blocking flush is deferred to a subsequent PR to avoid reviewer overload.

Related to open-mpi#12074

Signed-off-by: Howard Pritchard <[email protected]>
cf2fc8d to 1f02abc
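For reference, a minimal usage sketch of the new communicator-level calls. This is illustrative only, not taken from this PR's tests, and assumes the MPI 4.1 signatures (buffer size given in bytes, detach returning the previously attached buffer and its size):

/* Illustrative sketch of the per-communicator buffer attach/detach calls
 * added in MPI 4.1; not code from this PR. */
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, msg = 42;
    int bufsize = 1024 * 1024 + MPI_BSEND_OVERHEAD;
    void *buf = malloc(bufsize);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Attach a user buffer to this communicator for buffered sends. */
    MPI_Comm_attach_buffer(MPI_COMM_WORLD, buf, bufsize);

    if (size > 1 && rank == 0) {
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Detach blocks until buffered messages have been transmitted. */
    void *detached_buf;
    int detached_size;
    MPI_Comm_detach_buffer(MPI_COMM_WORLD, &detached_buf, &detached_size);

    free(buf);
    MPI_Finalize();
    return 0;
}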