
Commit 2b93e68

Merge pull request #566 from JuliaParallel/vc/rewrite_docs
Rewrite configuration docs for MPIPreferences
2 parents 690378f + dfa92bc

File tree: 3 files changed, +126 −45 lines

docs/src/configuration.md
Lines changed: 118 additions & 39 deletions
````diff
@@ -9,10 +9,6 @@ clusters or multi-GPU machines, you will probably want to configure against a
 system-provided MPI implementation in order to exploit features such as fast network
 interfaces and CUDA-aware MPI interfaces.
 
-MPI.jl will attempt to detect when you are running on a HPC cluster, and warn the user
-about this. To disable this warning, set the environment variable
-`JULIA_MPI_CLUSTER_WARN=n`.
-
 ## Julia wrapper for `mpiexec`
 
 Since you can configure `MPI.jl` to use one of several MPI implementations, you
````
````diff
@@ -52,64 +48,147 @@ with:
 $ mpiexecjl --project=/path/to/project -n 20 julia script.jl
 ```
 
-## Using a system-provided MPI
+## Using MPIPreferences.jl
+
+MPI.jl uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
+allow the user to choose which MPI implementation to use for a project. This provides
+a single source of truth that can be used for JLL packages (Julia packages providing C libraries)
+that link against MPI, and localizes the choice of MPI implementation to a project.
+
+Users can use the provided [`use_system_binary`](@ref) or [`use_jll_binary`](@ref) functions
+to switch MPI implementations. By default, the JLL-provided binaries are used.
+
+### Migration from MPI.jl `v0.19`
 
-### Requirements
+Prior to MPI.jl `v0.20`, environment variables were used to configure which MPI
+library to use. These have now been removed and no longer have any effect:
+
+- `JULIA_MPI_BINARY`
+- `JULIA_MPIEXEC`
+- `JULIA_MPI_INCLUDE_PATH`
+- `JULIA_MPI_CFLAGS`
+- `JULIA_MPICC`
+
+### Using a system-provided MPI backend
+
+#### Requirements
 
 MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
 standard or later.
 
-### Building
+#### Configuration
 
-To use the the system MPI, set the environment variable `JULIA_MPI_BINARY=system` and run
-`Pkg.build("MPI")`. This can be done by:
-```
-julia --project -e 'ENV["JULIA_MPI_BINARY"]="system"; using Pkg; Pkg.build("MPI"; verbose=true)'
+To use the system MPI library, run `MPI.use_system_binary()`.
+This will attempt to locate and identify any available MPI implementation, and create
+a file called `LocalPreferences.toml` adjacent to the current `Project.toml`.
+Use `Base.active_project()` to obtain the location of the currently active project.
+
+```sh
+julia --project -e 'using MPI; MPI.use_system_binary()'
 ```
-This will attempt find and identify any available MPI implementation.
 
-The MPI standard doesn't specify the exact application binary interface (ABI).
-The build script will attempt to build a small C program to
-determine the appropriate type definitions and constants. This requires a compatible C
-compiler (`mpicc` by default).
+!!! note
+    You can copy `LocalPreferences.toml` to a different project folder, but you must list
+    `MPIPreferences` in the `[extras]` section of the `Project.toml` for the settings
+    to take effect. Due to a bug in Julia (until `v1.6.5` and `v1.7.1`), getting preferences
+    from transitive dependencies is broken (https://github.com/JuliaPackaging/Preferences.jl/issues/24).
+    To fix this, update your version of Julia, or add `MPIPreferences` as a direct dependency to your project.
 
-The following implementations should work:
+
+The following MPI implementations should work out-of-the-box with MPI.jl:
 
 - [Open MPI](http://www.open-mpi.org/)
 - [MPICH](http://www.mpich.org/) (v3.1 or later)
 - [Intel MPI](https://software.intel.com/en-us/mpi-library)
 - [Microsoft MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
 - [IBM Spectrum MPI](https://www.ibm.com/us-en/marketplace/spectrum-mpi)
+- [MVAPICH](http://mvapich.cse.ohio-state.edu/)
+- [Cray MPICH](https://docs.nersc.gov/development/compilers/wrappers/)
+- [Fujitsu MPI](https://www.fujitsu.com/global/about/resources/publications/technicalreview/2020-03/article07.html#cap-03)
+
+If the MPI implementation is changed, you will need to call [`MPI.use_system_binary()`](@ref) or
+[`MPI.use_jll_binary()`](@ref) again.
 
-If the implementation is changed, you will need to re-run `Pkg.build("MPI")`.
+#### Advanced options
 
-### [Environment variables](@id environment_variables)
+```@docs
+MPI.use_system_binary
+```
+
+You can use the argument `mpiexec` to provide the name (or full path) of the MPI launcher executable. The default is
+`mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
+on Slurm, `aprun` on PBS). If the MPI library has an uncommon name, you can provide it in `library_names`.
+The MPI standard does not specify the exact application binary interface (ABI).
+In case ABI detection fails, you can provide a manual choice (either `MPICH`, `MPItrampoline`, `OpenMPI`, or `MicrosoftMPI`),
+but please also open an issue so that the automatic detection can be improved.
+`export_prefs=true` can be used to store the preferences in the `Project.toml` instead of creating a
+`LocalPreferences.toml` file to hold them.
 
-The following optional environment variables can be used to control certain aspects of the
-build script and other library behaviour. The results of these will be cached in a
-configuration file located at `~/.julia/prefs/MPI.toml` and so can be used for subsequent
-MPI builds.
+#### Notes to HPC cluster administrators
 
-- `JULIA_MPI_BINARY`: can be set to either the empty string (to use the default implementations
-  above) or `system` (to use a system-provided implementation).
-- `JULIA_MPI_PATH`: the top-level installation directory of MPI. i.e. the library should
-  be located in `${JULIA_MPI_PATH}/lib` and `mpiexec` in `${JULIA_MPI_PATH}/bin`
-- `JULIA_MPI_LIBRARY`: the library name or full path of the MPI shared library. By
-  default, it will attempt to look for common MPI library names in the standard library
-  paths (e.g. `libmpi`, `libmpich`, `msmpi`).
-- `JULIA_MPIEXEC`: the name (or full path) of the MPI launcher executable. The default is
-  `mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
-  on Slurm, `aprun` on PBS).
-- `JULIA_MPIEXEC_ARGS`: Additional arguments to be passed to MPI launcher.
+Preferences are merged across the Julia load path, so it is feasible to provide a module file
+that appends to the `JULIA_LOAD_PATH` variable a path containing system-wide preferences.
 
-The following variables are also queried:
+As an example, you can use [`MPI.use_system_binary()`](@ref) to create a file `LocalPreferences.toml` containing:
+
+```toml
+[MPIPreferences]
+abi = "OpenMPI"
+binary = "system"
+libmpi = "/software/mpi/lib/libmpi.so"
+mpiexec = "/software/mpi/bin/mpiexec"
+```
 
-- `JULIA_MPI_INCLUDE_PATH`: the directory containing the MPI header files.
-- `JULIA_MPI_CFLAGS`: C flags passed to the constant generation build (default: `-lmpi`)
-- `JULIA_MPICC`: MPI C compiler (default: `mpicc`)
+Copy this `LocalPreferences.toml` to a central location such as `/software/mpi/julia` and
+create adjacent to it a `Project.toml` containing:
+
+```toml
+[extras]
+MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
+```
+
+Now exporting the environment variable `JULIA_LOAD_PATH=":/software/mpi/julia"`
+(note the `:` before the path) in the corresponding
+module file (preferably the module file for the MPI installation or for Julia)
+will cause MPI.jl to default to your cluster MPI installation.
+
+The user can still provide a different MPI configuration for each Julia project, which
+will take precedence, by modifying the local `Project.toml` or by providing a `LocalPreferences.toml` file.
+
+### Using a different JLL-provided MPI library
+
+The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:
+
+- `MicrosoftMPI_jll`: default for Windows
+- `MPICH_jll`: default for all Unix-like systems
+- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): binaries built against MPItrampoline can be efficiently retargeted to a system MPI implementation.
+- `OpenMPI_jll`
+
+```@docs
+MPI.use_jll_binary
+```
+
+## Configuration of the MPI.jl testsuite
+
+### Testing against a different MPI implementation
+
+The `LocalPreferences.toml` must be located within the `test` folder; you can
+either create it in place or copy it into place.
+
+```
+~/MPI> julia --project=test
+julia> using MPIPreferences
+julia> MPIPreferences.use_system_binary()
+~/MPI> rm test/Manifest.toml
+~/MPI> julia --project
+(MPI) pkg> test
+```
 
+### Environment variables
 The test suite can also be modified by the following variables:
 
-- `JULIA_MPIEXEC_TEST_ARGS`: Additional arguments to be passed to the MPI launcher for the tests only.
+- `JULIA_MPI_TEST_NPROCS`: How many ranks to use within the tests
 - `JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` to test the CUDA-aware interface with
 [`CUDA.CuArray](https://github.com/JuliaGPU/CUDA.jl) buffers.
+- `JULIA_MPI_TEST_BINARY`: Check that the specified MPI binary is used for the tests
+- `JULIA_MPI_TEST_ABI`: Check that the specified MPI ABI is used for the tests
````
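The cluster-administrator section above relies on Preferences.jl merging preferences across the Julia load path, with entries earlier in the load path taking precedence over the system-wide directory appended via `JULIA_LOAD_PATH`. A minimal sketch of that merge rule (illustrative Python, not Preferences.jl's actual code; all names and paths here are made up):

```python
# Illustrative sketch of load-path preference merging: each entry in
# JULIA_LOAD_PATH may contribute a preferences dict, and entries earlier
# in the load path override later (system-wide) ones.

def merge_preferences(prefs_by_load_path):
    """prefs_by_load_path: dicts ordered as in JULIA_LOAD_PATH,
    highest-priority (project-local) first."""
    merged = {}
    # Apply lowest-priority entries first so later updates overwrite them.
    for prefs in reversed(prefs_by_load_path):
        merged.update(prefs)
    return merged

# A project-local LocalPreferences.toml overriding the library path...
project = {"binary": "system", "libmpi": "/opt/ompi/lib/libmpi.so"}
# ...and the cluster-wide file appended via JULIA_LOAD_PATH.
system = {
    "binary": "system",
    "libmpi": "/software/mpi/lib/libmpi.so",
    "mpiexec": "/software/mpi/bin/mpiexec",
}

merged = merge_preferences([project, system])
print(merged["libmpi"])   # /opt/ompi/lib/libmpi.so  (project wins)
print(merged["mpiexec"])  # /software/mpi/bin/mpiexec (inherited)
```

This matches the behaviour described above: a per-project `Project.toml` or `LocalPreferences.toml` takes precedence, while unset keys fall back to the cluster-wide defaults.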

src/MPI.jl
Lines changed: 2 additions & 0 deletions

````diff
@@ -35,6 +35,8 @@ end
 
 
 import MPIPreferences
+const use_jll_binary = MPIPreferences.use_jll_binary
+const use_system_binary = MPIPreferences.use_system_binary
 
 if MPIPreferences.binary == "MPICH_jll"
     import MPICH_jll: libmpi, mpiexec
````
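The hunk above re-exports the two MPIPreferences entry points under the `MPI` module and then dispatches on `MPIPreferences.binary` to decide which JLL package supplies `libmpi` and `mpiexec`. That dispatch can be sketched as follows (illustrative Python; the function and the placeholder library names are hypothetical, only the backend keys come from the docs):

```python
# Sketch of the backend dispatch in src/MPI.jl: a single preference string
# selects which package provides the MPI library and launcher.
# The mapped values are placeholders, not real file names.

KNOWN_BACKENDS = {
    "MPICH_jll": ("libmpich.so", "mpiexec"),
    "OpenMPI_jll": ("libmpi.so", "mpiexec"),
    "MicrosoftMPI_jll": ("msmpi.dll", "mpiexec.exe"),
    "MPItrampoline_jll": ("libmpitrampoline.so", "mpiexec"),
}

def select_backend(binary_preference):
    # Mirrors the chain of `if MPIPreferences.binary == ...` checks.
    try:
        return KNOWN_BACKENDS[binary_preference]
    except KeyError:
        raise ValueError(f"unsupported binary preference: {binary_preference!r}")

libmpi, mpiexec = select_backend("MPICH_jll")
print(libmpi)  # libmpich.so
```

Aliasing `use_jll_binary`/`use_system_binary` as `const` bindings means users can call `MPI.use_system_binary()` without adding MPIPreferences as a direct dependency themselves.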

test/runtests.jl
Lines changed: 6 additions & 6 deletions

````diff
@@ -11,12 +11,6 @@ else
     ArrayType = Array
 end
 
-if Sys.isunix()
-    # This test doesn't need to be run with mpiexec. `mpiexecjl` is currently
-    # available only on Unix systems
-    include("mpiexecjl.jl")
-end
-
 nprocs_str = get(ENV, "JULIA_MPI_TEST_NPROCS", "")
 nprocs = nprocs_str == "" ? clamp(Sys.CPU_THREADS, 2, 4) : parse(Int, nprocs_str)
 
@@ -29,6 +23,12 @@ if haskey(ENV,"JULIA_MPI_TEST_ABI")
     @test ENV["JULIA_MPI_TEST_ABI"] == MPIPreferences.abi
 end
 
+if Sys.isunix()
+    # This test doesn't need to be run with mpiexec. `mpiexecjl` is currently
+    # available only on Unix systems
+    include("mpiexecjl.jl")
+end
+
 testdir = @__DIR__
 istest(f) = endswith(f, ".jl") && startswith(f, "test_")
 testfiles = sort(filter(istest, readdir(testdir)))
````
