
Commit 5f25dcc

simonbyrne, giordano, and sloede authored
revamp MPIPreferences docs (#594)
* revamp MPIPreferences docs
* expand section for cluster admins
* move mpipref docs
* split notes
* Update lib/MPIPreferences/src/MPIPreferences.jl
  Co-authored-by: Mosè Giordano <[email protected]>
* Update docs/src/configuration.md
  Co-authored-by: Michael Schlottke-Lakemper <[email protected]>
* typos
* mention installation, and indent
* Fix indentation of code blocks in enumerated list
* remove MPIPreferences exports from MPI
* update CUDA known issues

Co-authored-by: Mosè Giordano <[email protected]>
Co-authored-by: Michael Schlottke-Lakemper <[email protected]>
1 parent 89875d4 commit 5f25dcc

19 files changed, +186 -169 lines changed

docs/make.jl

Lines changed: 11 additions & 11 deletions
````diff
@@ -59,17 +59,17 @@ makedocs(
         "knownissues.md",
         "Examples" => EXAMPLES,
         "Reference" => [
-            "mpipreferences.md",
-            "library.md",
-            "environment.md",
-            "comm.md",
-            "buffers.md",
-            "pointtopoint.md",
-            "collective.md",
-            "onesided.md",
-            "topology.md",
-            "io.md",
-            "advanced.md",
+            "reference/mpipreferences.md",
+            "reference/library.md",
+            "reference/environment.md",
+            "reference/comm.md",
+            "reference/buffers.md",
+            "reference/pointtopoint.md",
+            "reference/collective.md",
+            "reference/onesided.md",
+            "reference/topology.md",
+            "reference/io.md",
+            "reference/advanced.md",
         ],
         "refindex.md",
     ],
````
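For orientation, the reference pages now live under `docs/src/reference/`. The same "Reference" entry could equally be generated programmatically rather than spelled out — a minimal sketch, assuming the page names in the diff above (`reference_pages` is a hypothetical variable, not part of the commit):

```julia
# Sketch: build the "Reference" page list for `makedocs` from the new
# reference/ directory; names mirror the diff above, paths are relative to docs/src.
reference_pages = "Reference" => [
    "reference/$(name).md" for name in [
        "mpipreferences", "library", "environment", "comm", "buffers",
        "pointtopoint", "collective", "onesided", "topology", "io", "advanced",
    ]
]
```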

docs/src/configuration.md

Lines changed: 80 additions & 124 deletions
````diff
@@ -9,94 +9,21 @@ clusters or multi-GPU machines, you will probably want to configure against a
 system-provided MPI implementation in order to exploit features such as fast network
 interfaces and CUDA-aware MPI interfaces.
 
-## Julia wrapper for `mpiexec`
-
-Since you can configure `MPI.jl` to use one of several MPI implementations, you
-may have different Julia projects using different implementation. Thus, it may
-be cumbersome to find out which `mpiexec` executable is associated to a specific
-project. To make this easy, on Unix-based systems `MPI.jl` comes with a thin
-project-aware wrapper around `mpiexec`, called `mpiexecjl`.
-
-### Installation
-
-You can install `mpiexecjl` with [`MPI.install_mpiexecjl()`](@ref). The default
-destination directory is `joinpath(DEPOT_PATH[1], "bin")`, which usually
-translates to `~/.julia/bin`, but check the value on your system. You can also
-tell `MPI.install_mpiexecjl` to install to a different directory.
-
-```sh
-$ julia
-julia> using MPI
-julia> MPI.install_mpiexecjl()
-```
-
-To quickly call this wrapper we recommend you to add the destination directory
-to your [`PATH`](https://en.wikipedia.org/wiki/PATH_(variable)) environment
-variable.
-
-### Usage
-
-`mpiexecjl` has the same syntax as the `mpiexec` binary that will be called, but
-it takes in addition a `--project` option to call the specific binary associated
-to the `MPI.jl` version in the given project. If no `--project` flag is used,
-the `MPI.jl` in the global Julia environment will be used instead.
-
-After installing `mpiexecjl` and adding its directory to `PATH`, you can run it
-with:
-
-```sh
-$ mpiexecjl --project=/path/to/project -n 20 julia script.jl
-```
-
-## Using MPIPreferences.jl
-
-MPI.jl uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
-allow the user to choose which MPI implementation to use for a project. This provides
+The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
+configure the MPI backend for each project separately. This provides
 a single source of truth that can be used for JLL packages (Julia packages providing C libraries)
-that link against MPI, localizes the choice of MPI implementation to a project.
-
-Users can use the provided [`use_system_binary`](@ref MPIPreferences.use_system_binary) or
-[`use_jll_binary`](@ref MPIPreferences.use_jll_binary) to switch MPI implementations. By
-default, the JLL-provided binaries are used.
-
-### Migration from MPI.jl `v0.19`
-
-Prior to MPI.jl `v0.20` environment variables were used to configure which MPI
-library to use. These have now been removed and have no effect anymore:
-
-- `JULIA_MPI_BINARY`
-- `JULIA_MPIEXEC`
-- `JULIA_MPI_INCLUDE_PATH`
-- `JULIA_MPI_CFLAGS`
-- `JULIA_MPICC`
-
-### Using a system-provided MPI backend
-
-#### Requirements
-
-MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
-standard or later.
-
-### Configuration
-
-To use the system MPI library, run `MPI.use_system_binary()`.
-This will attempt to locate and to identify any available MPI implementation, and create
-a file called `LocalPreferences.toml` adjacent to the current `Project.toml`.
-Use `Base.active_project()` to obtain the location of the currently active project.
+that link against MPI. It can be installed with:
 
 ```sh
-julia --project -e 'using MPI; MPI.use_system_binary()'
+julia -e 'using Pkg; Pkg.add("MPIPreferences")'
 ```
 
-!!! note
-    You can copy `LocalPreferences.toml` to a different project folder, but you must list
-    `MPIPreferences` in the `[extras]` section of the `Project.toml` for the settings
-    to take effect. Due to a bug in Julia (until `v1.6.5` and `v1.7.1`), getting preferences
-    from transitive dependencies is broken (https://github.com/JuliaPackaging/Preferences.jl/issues/24).
-    To fix this update your version of Julia, or add `MPIPreferences` as a direct dependency to your project.
+## Using a system-provided MPI backend
 
+### Requirements
 
-The following MPI implementations should work out-of-the-box with MPI.jl:
+MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
+standard or later. The following MPI implementations should work out-of-the-box with MPI.jl:
 
 - [Open MPI](http://www.open-mpi.org/)
 - [MPICH](http://www.mpich.org/) (v3.1 or later)
````
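Once `MPIPreferences` is installed, the backend it resolved can be inspected directly. A minimal sketch, assuming MPIPreferences is available in the active project (`binary` and `abi` are constants the package sets when it loads):

```julia
# Sketch: check which MPI backend MPIPreferences has selected for this project.
using MPIPreferences
@show MPIPreferences.binary   # "system", or a JLL name such as "MPICH_jll"
@show MPIPreferences.abi      # e.g. "OpenMPI", "MPICH", "MicrosoftMPI", "MPItrampoline"
```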
````diff
@@ -108,67 +35,83 @@ The following MPI implementations should work out-of-the-box with MPI.jl:
 - [Fujitsu MPI](https://www.fujitsu.com/global/about/resources/publications/technicalreview/2020-03/article07.html#cap-03)
 - [HPE MPT/HMPT](https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=a00105727en_us)
 
-If the implementation is changed, you will need to use [`MPI.use_system_binary()`](@ref MPIPreferences.use_system_binary)
-or [`MPI.use_jll_binary()`](@ref MPIPreferences.use_jll_binary).
+### Configuration
 
-#### Advanced options
+Run `MPIPreferences.use_system_binary()`. This will attempt to locate and identify any available MPI implementation, and create a file called `LocalPreferences.toml` adjacent to the current `Project.toml`.
 
-```@doc
-MPI.use_system_binary
+```sh
+julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
 ```
 
-You can use the argument `mpiexec` to provide the name (or full path) of the MPI launcher executable. The default is
-`mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
-on Slurm, `aprun` on PBS). If the MPI library has an uncommon name you can provide it in `library_names`.
-The MPI standard does not specify the exact application binary interface (ABI).
-In case ABI detection fails you can provide a manual choice (either `MPICH`, `MPItrampoline`, `OpenMPI`, or `MicrosoftMPI`),
-but also open an issue such that the automatic detection can be improved.
-`export_prefs=true` can be used to copy the preferences into the `Project.toml` instead of creating a
-`LocalPreferences.toml` file to hold them.
+If the implementation is changed, you will need to call this function again. See the [`MPIPreferences.use_system_binary`](@ref) documentation for specific options.
+
+!!! note
+    You can copy `LocalPreferences.toml` to a different project folder, but you must list
+    `MPIPreferences` in the `[extras]` or `[deps]` section of the `Project.toml` for the settings
+    to take effect.
+
+!!! note
+    Due to a bug in Julia (until `v1.6.5` and `v1.7.1`), getting preferences
+    from transitive dependencies is broken (https://github.com/JuliaPackaging/Preferences.jl/issues/24).
+    To fix this, update your version of Julia, or add `MPIPreferences` as a direct dependency to your project.
+
 
-#### Notes to HPC cluster adminstators
+### Notes to HPC cluster administrators
 
 Preferences are merged across the Julia load path, such that it is feasible to provide a module file that appends a path to
-`JULIA_LOAD_PATH` variable that contains system-wide preferences.
+the `JULIA_LOAD_PATH` variable that contains system-wide preferences. The steps are as follows:
 
-As an example you can use [`MPI.use_system_binary()`](@ref MPIPreferences.use_system_binary)
-to create a file `LocalPreferences.toml` containing:
+1. Run [`MPIPreferences.use_system_binary()`](@ref MPIPreferences.use_system_binary), which will generate a file `LocalPreferences.toml` containing something like the following:
 
-```toml
-[MPIPreferences]
-abi = "OpenMPI"
-binary = "system"
-libmpi = "/software/mpi/lib/libmpi.so"
-mpiexec = "/software/mpi/bin/mpiexec"
-```
+   ```toml
+   [MPIPreferences]
+   abi = "OpenMPI"
+   binary = "system"
+   libmpi = "/software/mpi/lib/libmpi.so"
+   mpiexec = "/software/mpi/bin/mpiexec"
+   ```
 
-Copying this `LocalPreferences.toml` to a central location such as `/software/mpi/julia` and
-create adjacent to it a `Project.toml` containing:
+2. Create a file called `Project.toml` or `JuliaProject.toml` in a central location, for example `/software/mpi/julia` or in the same directory as the MPI library module, and add the following contents:
 
-```toml
-[extras]
-MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
-```
+   ```toml
+   [extras]
+   MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
+
+   [preferences.MPIPreferences]
+   abi = "OpenMPI"
+   binary = "system"
+   libmpi = "/software/mpi/lib/libmpi.so"
+   mpiexec = "/software/mpi/bin/mpiexec"
+   ```
+
+   updating the contents of the `[preferences.MPIPreferences]` section to match those of the `[MPIPreferences]` section in `LocalPreferences.toml`.
 
-Now exporting the environment variable `JULIA_LOAD_PATH=":/software/mpi/julia"`
-(note the `:` before the path) in the corresponding
-module file (preferably the module file for the MPI installation or for Julia),
-will cause MPI.jl to default to your cluster MPI installation.
+3. Append the directory containing the file to the [`JULIA_LOAD_PATH`](https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_LOAD_PATH) environment variable, with a colon (`:`) separator.
 
-The user can still provide differing MPI configurations for each Julia project that
-will take precedent by modifying the local `Project.toml` or by providing a `LocalPreferences.toml` file.
+   If this variable is _not_ already set, it should be prefixed with a colon to ensure correct
+   behavior of the Julia load path, e.g. `JULIA_LOAD_PATH=":/software/mpi/julia"`.
+   If using environment modules, this can be achieved with
+   ```
+   append-path -d {} JULIA_LOAD_PATH :/software/mpi/julia
+   ```
+   in the corresponding module file (preferably the module file for the MPI installation or for Julia).
 
-### Using a different JLL provided MPI library
+The user can still provide differing MPI configurations for each Julia project
+that will take precedence by modifying the local `Project.toml` or by providing a
+`LocalPreferences.toml` file.
+
+## Using an alternative JLL-provided MPI library
 
 The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:
 
-- `MicrosoftMPI_jll`: Default for Windows
-- `MPICH_jll`: Default for all Unix-like systems
-- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): Binaries built against MPItrampoline can be efficiently retargetted to a system MPI implementation.
-- `OpenMPI_jll`:
+- `MicrosoftMPI_jll`: [Microsoft MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi). Default for Windows.
+- `MPICH_jll`: [MPICH](https://www.mpich.org/). Default for all other systems.
+- `OpenMPI_jll`: [Open MPI](https://www.open-mpi.org/)
+- `MPItrampoline_jll`: [MPItrampoline](https://github.com/eschnett/MPItrampoline): an MPI forwarding layer.
 
-```@doc
-MPI.use_jll_binary
+Call [`MPIPreferences.use_jll_binary`](@ref), for example:
+```sh
+julia --project -e 'using MPIPreferences; MPIPreferences.use_jll_binary("MPItrampoline_jll")'
 ```
 
 ## Configuration of the MPI.jl testsuite
````
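For the cluster-admin setup in the diff above, a quick way to confirm that the shared environment is actually picked up is to inspect the load path and query the preference from a user session. A minimal sketch, assuming the example paths above and that Preferences.jl is installed:

```julia
# Sketch: verify the system-wide preferences are visible from a user session.
using Preferences, MPIPreferences
@show Base.load_path()                            # should include "/software/mpi/julia"
@show load_preference(MPIPreferences, "binary")   # expected: "system"
@show load_preference(MPIPreferences, "libmpi")   # expected: "/software/mpi/lib/libmpi.so"
```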
````diff
@@ -195,3 +138,16 @@ The test suite can also be modified by the following variables:
   [`CUDA.CuArray](https://github.com/JuliaGPU/CUDA.jl) buffers.
 - `JULIA_MPI_TEST_BINARY`: Check that the specified MPI binary is used for the tests
 - `JULIA_MPI_TEST_ABI`: Check that the specified MPI ABI is used for the tests
+
+
+## Migration from MPI.jl version 0.19 or earlier
+
+Prior to MPI.jl version 0.20, environment variables were used to configure which MPI
+library to use. These have now been removed and no longer have any effect:
+
+- `JULIA_MPI_BINARY`
+- `JULIA_MPIEXEC`
+- `JULIA_MPIEXEC_ARGS`
+- `JULIA_MPI_INCLUDE_PATH`
+- `JULIA_MPI_CFLAGS`
+- `JULIA_MPICC`
````
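The migration itself is a one-liner: the removed environment variables are replaced by a single recorded preference. A minimal before/after sketch (the old commands reflect the removed v0.19 workflow shown above):

```julia
# MPI.jl ≤ v0.19 (no longer works):
#   ENV["JULIA_MPI_BINARY"] = "system"
#   using Pkg; Pkg.build("MPI"; verbose=true)

# MPI.jl ≥ v0.20: record the choice once via MPIPreferences.
using MPIPreferences
MPIPreferences.use_system_binary()   # writes LocalPreferences.toml; restart Julia afterwards
```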

docs/src/knownissues.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -100,17 +100,17 @@ Make sure to:
 - Make sure to have:
   ```
   export JULIA_CUDA_MEMORY_POOL=none
-  export JULIA_MPI_BINARY=system
   export JULIA_CUDA_USE_BINARYBUILDER=false
   ```
-- Add CUDA and MPI packages in Julia. Build MPI.jl in verbose mode to check whether correct versions are built/used:
+- Add CUDA, MPIPreferences, and MPI packages in Julia. Switch to using the system binary:
   ```
-  julia -e 'using Pkg; pkg"add CUDA"; pkg"add MPI"; Pkg.build("MPI"; verbose=true)'
+  julia -e 'using Pkg; pkg"add CUDA, MPIPreferences, MPI"'
+  julia -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
   ```
 - Then in Julia, upon loading MPI and CUDA modules, you can check
   - CUDA version: `CUDA.versioninfo()`
   - If MPI has CUDA: `MPI.has_cuda()`
-  - If you are using correct MPI implementation: `MPI.identify_implementation()`
+  - If you are using the correct MPI library: `MPI.libmpi`
 
 After that, it may be preferred to run the Julia MPI script (as suggested [here](https://discourse.julialang.org/t/cuda-aware-mpi-works-on-system-but-not-for-julia/75060/11)) launching it from a shell script (as suggested [here](https://discourse.julialang.org/t/cuda-aware-mpi-works-on-system-but-not-for-julia/75060/4)).
````
7 files renamed without changes.
