
Commit f97d2f1

Added interface to MPI Ibarrier collective (#512)
* Added interface to MPI Ibarrier collective. Adds the ability to call MPI Ibarrier, which is useful for allowing ranks to continue listening for messages, or otherwise processing, while waiting at a barrier.
* Creates a test for Ibarrier. Ensures the Ibarrier capability works: each rank > 0 hits the Ibarrier and then cycles, waiting for messages from rank 0 until they have all been processed; only then does rank 0 hit the barrier. This represents the usage where ranks finish their own work at the barrier but still need to listen for requests from other ranks.
* Update collective.md. Added a reference to Ibarrier.
1 parent c7bed59 commit f97d2f1
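
The usage pattern described in the commit message, where a rank enters the barrier but keeps servicing incoming messages until every rank has arrived, could look roughly like the sketch below. It uses only calls that the commit itself exercises (`MPI.Ibarrier`, `MPI.Test!`, `MPI.Iprobe`, `MPI.Recv!`); the `serve_while_waiting` helper and the message handling are illustrative, not part of the commit.

```julia
using MPI

# Hypothetical helper: poll a nonblocking barrier while continuing to
# service incoming point-to-point messages from other ranks.
function serve_while_waiting(comm::MPI.Comm)
    req = MPI.Ibarrier(comm)        # enter the barrier without blocking
    done, _ = MPI.Test!(req)        # has every rank arrived yet?
    while !done
        flag, status = MPI.Iprobe(MPI.MPI_ANY_SOURCE, MPI.MPI_ANY_TAG, comm)
        if flag
            buf = [0]
            MPI.Recv!(buf, status.source, status.tag, comm)
            # ... handle the incoming request here ...
        end
        done, _ = MPI.Test!(req)    # re-check the barrier request
    end
end

MPI.Init()
serve_while_waiting(MPI.COMM_WORLD)
MPI.Finalize()
```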

File tree

3 files changed, +95 -0 lines


docs/src/collective.md

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@
 
 ```@docs
 MPI.Barrier
+MPI.Ibarrier
 ```
 
 ## Broadcast

src/collective.jl

Lines changed: 20 additions & 0 deletions
@@ -16,6 +16,26 @@ function Barrier(comm::Comm)
     return nothing
 end
 
+"""
+    Ibarrier(comm::Comm)
+
+Starts a nonblocking barrier synchronization on `comm` and immediately returns a `Request`, which completes once `comm` is synchronized.
+
+If `comm` is an intracommunicator, the request completes once all members of the group have called it.
+
+If `comm` is an intercommunicator, the request completes once all members of the other group have called it.
+
+# External links
+$(_doc_external("MPI_Ibarrier"))
+"""
+function Ibarrier(comm::Comm)
+    req = Request()
+    # int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
+    @mpichk ccall((:MPI_Ibarrier, libmpi), Cint, (MPI_Comm, Ptr{MPI_Request}), comm, req)
+    return req
+end
+
+
 """
     Bcast!(buf, root::Integer, comm::Comm)
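
Basic usage of the new function might look like the following minimal sketch; it assumes the package's existing `MPI.Wait!` request API for completing the returned `Request` (the commit's own test polls with `MPI.Test!` instead).

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD

req = MPI.Ibarrier(comm)   # start the barrier; returns a Request immediately
# ... overlap other local work, or keep handling messages, here ...
MPI.Wait!(req)             # returns once every rank in comm has entered the barrier

MPI.Finalize()
```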

test/test_Ibarrier.jl

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
using Test
using MPI

function check_for_query(comm)
    is_message, status = MPI.Iprobe(MPI.MPI_ANY_SOURCE, MPI.MPI_ANY_TAG, comm)
    if is_message
        recv_id = status.source
        tag_ind = status.tag
        return true, recv_id, tag_ind
    else
        return false, -1, -1
    end
end

MPI.Init()
comm = MPI.COMM_WORLD
myrank = MPI.Comm_rank(comm)
mysize = MPI.Comm_size(comm)

# Rank 0 will send, one at a time, the rank + 4 to each other rank.
# They will then sum these and test if the sum is correct. Each rank > 0
# will be waiting at the Ibarrier for messages from 0 in order to test
# that the Ibarrier is working properly.
# They then communicate back to 0 when they have received all messages
# to allow it to reach the barrier.
#
# This is a contrived example but does test to ensure Ibarrier works

if myrank == 0
    for reps in 1:10
        for ii in 1:(mysize-1)
            smsg = MPI.Send(ii + 4, ii, ii, comm)
        end
    end
    for ii in 1:(mysize-1)
        dummy = [0]
        rmsg = MPI.Recv!(dummy, ii, ii, comm)
    end
end



all_done = false
localsum = 0
msg_num = 0

barrier_req = MPI.Ibarrier(comm)


all_done, barrier_status = MPI.Test!(barrier_req)

while !all_done
    global all_done
    global msg_num
    global myrank
    is_request, recv_id, tag_ind = check_for_query(comm)
    if is_request
        dummy = [0]
        rmsg = MPI.Recv!(dummy, recv_id, tag_ind, comm)
        msg_num += 1
        global localsum += dummy[1]
    end # is_request
    if msg_num == 10
        smsg = MPI.Send(tag_ind, 0, myrank, comm)
    end
    all_done, barrier_status = MPI.Test!(barrier_req)
end # !all_done

if myrank > 0
    @test localsum == 10 * (myrank + 4)
end

MPI.Finalize()
@test MPI.Finalized()
