Commit ca420f6 (2 parents: 5bcb514 + 55c7cd4)

Merge pull request #355 from JuliaParallel/sb/has_cuda: expose interface to check CUDA support

File tree: 6 files changed, 44 additions, 2 deletions

docs/src/environment.md
Lines changed: 1 addition & 2 deletions

@@ -9,6 +9,5 @@ MPI.Initialized
 MPI.Finalize
 MPI.Finalized
 MPI.universe_size
+MPI.has_cuda
 ```
-
-
docs/src/installation.md
Lines changed: 1 addition & 0 deletions

@@ -34,6 +34,7 @@ controlled with the optional environment variables:
 - `JULIA_MPICC`: MPI C compiler (default: `mpicc`)
 - `JULIA_MPIEXEC`: MPI launcher command (default: `mpiexec`)
 - `JULIA_MPIEXEC_ARGS`: Additional arguments to be passed to MPI launcher (only used in the build step and tests).
+- `JULIA_MPI_HAS_CUDA`: override the [`MPI.has_cuda`](@ref) function.

 If your MPI installation changes (e.g. it is upgraded by the system, or you switch
 libraries), you will need to re-run `build MPI` at the package prompt.
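Since the override is read back with `parse(Bool, flag)` (see the `has_cuda` implementation in `src/environment.jl` in this commit), only values Julia can parse as a `Bool` are accepted. A quick illustration, pure Julia and no MPI needed:

```julia
# Values accepted for JULIA_MPI_HAS_CUDA: anything parse(Bool, x) understands.
parse(Bool, "true")    # → true
parse(Bool, "false")   # → false
parse(Bool, "1")       # → true ("0"/"1" also parse as Bool)
# parse(Bool, "yes")   # would throw ArgumentError
```

Any other string (e.g. `"yes"` or `"on"`) makes `has_cuda()` throw rather than silently default.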

docs/src/usage.md
Lines changed: 3 additions & 0 deletions

@@ -33,6 +33,9 @@ If your MPI implementation has been compiled with CUDA support, then `CuArray`s
 send and receive buffers for point-to-point and collective operations (they may also work
 with one-sided operations, but these are not often supported).

+If using Open MPI, the status of CUDA support can be checked via the
+[`MPI.has_cuda()`](@ref) function.
+
 ## Finalizers

 In order to ensure MPI routines are called in the correct order at finalization time,
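The usage note above can be sketched as a small driver that picks the buffer type based on the new check. This is a hypothetical sketch, not part of this commit; it assumes MPI.jl with CuArrays installed, a CUDA-aware Open MPI, and the `MPI.Bcast!(buf, root, comm)` convenience form:

```julia
using MPI
MPI.Init()

if MPI.has_cuda()
    using CuArrays
    buf = CuArrays.zeros(Float64, 4)   # device buffer, handed directly to MPI
else
    buf = zeros(Float64, 4)            # host buffer fallback
end

MPI.Bcast!(buf, 0, MPI.COMM_WORLD)     # collective accepts either buffer type
MPI.Finalize()
```

The same branch pattern appears in `test/test_basic.jl` below via the `JULIA_MPI_TEST_ARRAYTYPE` variable.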

src/cuda.jl
Lines changed: 1 addition & 0 deletions

@@ -2,6 +2,7 @@ import .CuArrays: CuArray
 import .CuArrays.CUDAdrv: CuPtr, synchronize
 import .CuArrays.CUDAdrv.Mem: DeviceBuffer

+
 function Base.cconvert(::Type{MPIPtr}, buf::CuArray{T}) where T
     Base.cconvert(CuPtr{T}, buf) # returns DeviceBuffer
 end
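The `Base.cconvert` method in src/cuda.jl hooks Julia's two-step `ccall` argument conversion so that a `CuArray` yields a device pointer. A minimal sketch of that mechanism using plain host arrays (no CUDA required; `Ptr{Float64}` stands in for `MPIPtr`):

```julia
# ccall argument conversion happens in two steps: cconvert produces a
# GC-rooted intermediate object, and unsafe_convert extracts the raw pointer.
v = [1.0, 2.0, 3.0]
tmp = Base.cconvert(Ptr{Float64}, v)          # GC-rooted intermediate (the array itself)
p   = Base.unsafe_convert(Ptr{Float64}, tmp)  # raw Ptr{Float64}
unsafe_load(p, 2)                             # → 2.0
```

Overloading only the first step, as the diff does for `CuArray`, keeps the buffer rooted for the duration of the underlying MPI call.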

src/environment.jl
Lines changed: 25 additions & 0 deletions

@@ -158,3 +158,28 @@ function Wtime()
 end
 end

+
+"""
+    MPI.has_cuda()
+
+Check if the MPI implementation is known to have CUDA support. Currently only Open MPI
+provides a mechanism to check, so it will return `false` with other implementations
+(unless overridden).
+
+This can be overridden by setting the `JULIA_MPI_HAS_CUDA` environment variable to `true`
+or `false`.
+"""
+function has_cuda()
+    flag = get(ENV, "JULIA_MPI_HAS_CUDA", nothing)
+    if flag === nothing
+        # Only Open MPI provides a function to check CUDA support
+        @static if startswith(MPI_LIBRARY_VERSION, "Open MPI")
+            # int MPIX_Query_cuda_support(void)
+            return 0 != ccall((:MPIX_Query_cuda_support, libmpi), Cint, ())
+        else
+            return false
+        end
+    else
+        return parse(Bool, flag)
+    end
+end
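The decision logic above can be exercised without an MPI installation by passing in the library version string and the query result explicitly. A pure-Julia sketch (`has_cuda_sketch` and `query_result` are hypothetical stand-ins for the real function and the `MPIX_Query_cuda_support` ccall):

```julia
# Stand-in for has_cuda: version and query_result replace MPI_LIBRARY_VERSION
# and the MPIX_Query_cuda_support ccall, so the branches can be tested locally.
function has_cuda_sketch(version::AbstractString, query_result::Integer;
                         flag = get(ENV, "JULIA_MPI_HAS_CUDA", nothing))
    if flag === nothing
        # Only Open MPI exposes a query; other implementations report false
        startswith(version, "Open MPI") ? query_result != 0 : false
    else
        parse(Bool, flag)   # explicit override wins
    end
end

has_cuda_sketch("Open MPI v4.0.1", 1; flag = nothing)   # → true
has_cuda_sketch("MPICH 3.3", 1; flag = nothing)         # → false (no query mechanism)
has_cuda_sketch("MPICH 3.3", 0; flag = "true")          # → true (override wins)
```

Note that `MPIX_Query_cuda_support` is an Open MPI extension, hence the `@static` guard on the library version in the real implementation.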

test/test_basic.jl
Lines changed: 13 additions & 0 deletions

@@ -1,12 +1,25 @@
 using Test
 using MPI

+if get(ENV,"JULIA_MPI_TEST_ARRAYTYPE","") == "CuArray"
+    using CuArrays
+    ArrayType = CuArray
+else
+    ArrayType = Array
+end
+
 @test !MPI.Initialized()
 MPI.Init()
 @test MPI.Initialized()

 @test 0 <= MPI.Comm_rank(MPI.COMM_WORLD) < MPI.Comm_size(MPI.COMM_WORLD)

+@test MPI.has_cuda() isa Bool
+
+if ArrayType != Array
+    @test MPI.has_cuda()
+end
+
 @test !MPI.Finalized()
 MPI.Finalize()
 @test MPI.Finalized()
