
Commit a851bfd

ranocha and luraess authored
fix typos in docs (#618)
* fix typos
* Update docs/src/usage.md
* add another missing backslash
* update link to alltoall_test_rocm_multigpu.jl

Co-authored-by: Ludovic Räss <[email protected]>
1 parent 393c5b8 commit a851bfd

File tree

1 file changed: +4 −1 lines


docs/src/usage.md

Lines changed: 4 additions & 1 deletion
@@ -82,6 +82,9 @@ If using Open MPI, the status of CUDA support can be checked via the

  If your MPI implementation has been compiled with ROCm support (AMDGPU), then `AMDGPU.ROCArray`s (from the
  [AMDGPU.jl](https://github.com/JuliaGPU/AMDGPU.jl) package) can be passed directly as send and receive buffers for point-to-point and collective operations (they may also work with one-sided operations, but these are not often supported).

- Successfully running the [alltoall_test_rocm.jl](https://gist.github.com/luraess/c228ec08629737888a18c6a1e397643c) should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. Moreover, successfully running the [alltoall_test_rocm_mulitgpu.jl](https://gist.github.com/luraess/d478b3f98eae984931fd39a7158f4b9e) should confirm your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).
+ Successfully running the [alltoall\_test\_rocm.jl](https://gist.github.com/luraess/c228ec08629737888a18c6a1e397643c)
+ should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. Moreover, successfully running the
+ [alltoall\_test\_rocm\_multigpu.jl](https://gist.github.com/luraess/a47931d7fb668bd4348a2c730d5489f4) should confirm
+ your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).

  The status of ROCm (AMDGPU) support cannot currently be queried.
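The documentation text edited above states that `AMDGPU.ROCArray`s can be passed directly as MPI buffers. As an illustrative sketch (not part of this commit), a minimal ring exchange with MPI.jl might look like the following; it assumes a ROCm-aware MPI build, at least one AMD GPU, and two or more ranks, e.g. launched via `mpiexecjl -n 2 julia ring.jl`:

```julia
# Hedged sketch: requires a ROCm-aware MPI implementation and an AMD GPU.
using MPI
using AMDGPU

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Device-resident buffers passed directly to MPI (no host staging).
send = ROCArray(fill(Float64(rank), 4))
recv = similar(send)

# Send to the next rank in the ring, receive from the previous one.
dst = mod(rank + 1, nranks)
src = mod(rank - 1, nranks)
MPI.Sendrecv!(send, recv, comm; dest=dst, source=src)

println("rank $rank received from rank $src: $(Array(recv))")
MPI.Finalize()
```

If the MPI build is not ROCm-aware, passing device pointers like this typically crashes or corrupts data, which is why the linked `alltoall_test_rocm.jl` gists serve as a practical capability check.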
