Conversation

@markalle
Contributor

This is a convenience feature for PMPI wrapper library writers: it gives them F77 language wrapping for free (something the standard allows but does not require).

E.g., if a wrapper library redefines
    int MPI_Send(...args...) { /* record stuff */ rv = PMPI_Send(...args...); return rv; }
and a Fortran app calls mpi_send(), it's possible to have the mpi_send() definition in fortran/mpif-h/send_f.c call MPI_Send(), which would resolve back out into the wrapper library.
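
A minimal sketch of such a C-only wrapper, for concreteness (the printf stands in for whatever the tool actually records):

    #include <stdio.h>
    #include <mpi.h>

    /* Intercept MPI_Send, record something, then forward to the real
     * implementation through the PMPI entry point. */
    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        printf("MPI_Send: count=%d dest=%d tag=%d\n", count, dest, tag);
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

With this PR's -mca mpi_fortcall MPI setting, a Fortran mpi_send() call routes through send_f.c into MPI_Send() and so hits this wrapper too.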

A longer discussion about this feature is here:
#3954

The main argument against this is that it's not required by the standard, and hence there's no portable way for wrapper library writers to use this functionality: if they want to support all MPI vendors they can't rely on having access to the above, and if they want to be complete they still have to write their own mpi_send() and call either pmpi_send() or PMPI_Send() from it.

The argument for this feature is that, pragmatically, these C-only wrapper tools already exist and customers try to use them with their Fortran apps, so it's a nice convenience to offer (and it isn't arbitrary; it's an optional part of the standard).

This PR changes the fortran/mpif-h/send_f.c (etc) code from a direct PMPI_Send() call to a function pointer where the function pointer defaults to PMPI_Send(), and gives runtime control:
    -mca mpi_fortcall PMPI : the default behavior (mpi_send calls PMPI_Send)
    -mca mpi_fortcall MPI  : make mpi_send call MPI_Send

The code also has an extra
    #if OMPI_FORTRAN_USE_FPTR
guard that isn't exercised right now, but is a stub for the idea of making this configurable, e.g. if someone wanted no trace of the above changes and just wanted the original direct PMPI_Send() calls.
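
To illustrate the indirection (ompi_fptr_MPI_Send and OMPI_FORTRAN_USE_FPTR are the PR's names; the OMPI_SEND_C_CALL macro is a hypothetical stand-in for the actual plumbing):

    #include <mpi.h>

    /* Pointer the Fortran wrapper calls through; it defaults to the
     * PMPI entry point and can be repointed at MPI_Send at runtime. */
    int (*ompi_fptr_MPI_Send)(const void *, int, MPI_Datatype,
                              int, int, MPI_Comm) = PMPI_Send;

    #if OMPI_FORTRAN_USE_FPTR
    /* call through the runtime-selectable pointer */
    #define OMPI_SEND_C_CALL ompi_fptr_MPI_Send
    #else
    /* configure-time opt-out: keep the original direct call */
    #define OMPI_SEND_C_CALL PMPI_Send
    #endif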

The Fortran wrappers like send_f.c have been defining mpi_send() and
calling C PMPI_Send(). This checkin defines a set of function pointers like
    ompi_fptr_MPI_Send
to be used instead; these can dynamically point to either MPI_Send or
PMPI_Send.

There's a corresponding checkin that involves a bunch of scripted changes
to make the calls happen as described above. This checkin has all the
by-hand changes.

Currently the argument lists needed to define function pointers for all the
MPI C routines come from parsing mpi.h: a script normalize_mpih.pl outputs a
file "flist" in a very uniform layout, and a second script mkcode.pl then
generates the declarations and settings for all those function pointers.
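
An illustrative excerpt of the shape of that generated code (a sketch, not mkcode.pl's literal output):

    /* declarations generated from the parsed mpi.h / flist entries */
    extern int (*ompi_fptr_MPI_Send)(const void *, int, MPI_Datatype,
                                     int, int, MPI_Comm);
    extern int (*ompi_fptr_MPI_Recv)(void *, int, MPI_Datatype,
                                     int, int, MPI_Comm, MPI_Status *);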

A small number of Fortran wrappers contain an ompi_fptr_init() call that
gives the function pointers their initial values. Then at the end of
ompi_mpi_init() there's a second initialization. The second one is there
because the MCA system is available by then, so the setting from
    --mca mpi_fortcall MPI
    --mca mpi_fortcall PMPI
is expected to be readable at that point. The earlier initialization is
based on less complete information.
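
A sketch of that two-phase setup, assuming the pointer declarations above (ompi_fptr_init() is the PR's name; the second function and its boolean parameter are hypothetical stand-ins for the MCA lookup):

    #include <stdbool.h>
    #include <mpi.h>

    /* Phase 1: called from the first Fortran wrappers to run, before
     * the MCA system is up, so only the safe default is known. */
    void ompi_fptr_init(void)
    {
        ompi_fptr_MPI_Send = PMPI_Send;
        ompi_fptr_MPI_Recv = PMPI_Recv;
    }

    /* Phase 2 (hypothetical): called at the end of ompi_mpi_init(),
     * once the mpi_fortcall MCA parameter can actually be read. */
    void ompi_fptr_init_from_mca(bool fortcall_is_mpi)
    {
        if (fortcall_is_mpi) {
            ompi_fptr_MPI_Send = MPI_Send;
            ompi_fptr_MPI_Recv = MPI_Recv;
        }
    }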

Signed-off-by: Mark Allen <[email protected]>
This enables the Fortran wrappers to choose at runtime whether to
call MPI_Foo() or PMPI_Foo() for the calls that are language
wrapped.

The code change actually uses a macro so we could put it back to
vanilla PMPI_Foo() if desired, possibly with configure options.

Signed-off-by: Mark Allen <[email protected]>
@ggouaillardet
Contributor

@markalle what happens if a PMPI_* subroutine is invoked from Fortran?
Does this PR guarantee the corresponding MPI_* C subroutine will never be invoked ?

@markalle
Contributor Author

Oh no, I think you're right. The way this is done, there's only one send_f.o, and it uses pragma weak to create both mpi_send and pmpi_send, which aren't distinguished from each other. So yeah, a Fortran app calling pmpi_send() would go back out into MPI_Send() and through any wrapper that might exist there.
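
To make the problem concrete, the single-object scheme looks roughly like this (symbol names simplified; the real generated code differs):

    #include <mpi.h>

    /* The one real body behind both Fortran entry points. */
    void ompi_send_f(char *buf, MPI_Fint *count, MPI_Fint *datatype,
                     MPI_Fint *dest, MPI_Fint *tag, MPI_Fint *comm,
                     MPI_Fint *ierr)
    {
        /* ...convert the Fortran handles and call the C layer... */
    }

    /* Both names alias the same code, so pmpi_send() from Fortran
     * takes exactly the same path as mpi_send(). */
    #pragma weak mpi_send_  = ompi_send_f
    #pragma weak pmpi_send_ = ompi_send_f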

I don't see how to fix that without a more significant change to separate the behavior of mpi_send() from pmpi_send().

Hmm, the current PR is within my threshold of "triviality" for adding convenience features like this, but I'm not so sure about the next step of doubling up on the mpi_send()/pmpi_send() definitions... I may have to retract this PR then and go back to our layered entrypoint feature which is a different solution to the same problem.

@jsquyres
Member

jsquyres commented Jan 3, 2018

@markalle Given your last reply, I marked this PR as "WIP DNM" (work in progress / do not merge). Feel free to update this PR or close it.

@awlauria
Contributor

@markalle did you still want this PR or are you planning to pick it up again?

@lanl-ompi
Contributor

Can one of the admins verify this patch?

@hppritcha
Member

Closing, no intent to merge.

@hppritcha closed this on Dec 16, 2024