Commit 0557fcb
Rewrite configuration docs for MPIPreferences
1 parent 1389fcc

1 file changed: +82 −37 lines changed


docs/src/configuration.md

@@ -9,10 +9,6 @@ clusters or multi-GPU machines, you will probably want to configure against a
 system-provided MPI implementation in order to exploit features such as fast network
 interfaces and CUDA-aware MPI interfaces.
 
-MPI.jl will attempt to detect when you are running on a HPC cluster, and warn the user
-about this. To disable this warning, set the environment variable
-`JULIA_MPI_CLUSTER_WARN=n`.
-
 ## Julia wrapper for `mpiexec`
 
 Since you can configure `MPI.jl` to use one of several MPI implementations, you
@@ -52,26 +48,51 @@ with:
 $ mpiexecjl --project=/path/to/project -n 20 julia script.jl
 ```
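
The `mpiexecjl` wrapper ships with MPI.jl and can be installed with `MPI.install_mpiexecjl`. A minimal sketch (the destination `~/.julia/bin` is the package default, shown here as an assumption):

```sh
# Sketch: install the mpiexecjl wrapper script (default destination: ~/.julia/bin),
# then make sure that directory is on your PATH.
julia --project -e 'using MPI; MPI.install_mpiexecjl()'
export PATH=$HOME/.julia/bin:$PATH
```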
 
-## Using a system-provided MPI
+## Using MPIPreferences.jl
+
+MPI.jl uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
+allow the user to choose which MPI implementation to use for a project. This provides
+a single source of truth that can be used by JLL packages (Julia packages providing C libraries)
+that link against MPI, and localizes the choice of MPI implementation to a project.
+
+Users can call the provided [`use_system_binary`](@ref) or [`use_jll_binary`](@ref)
+to switch MPI implementations.
+
+### Migration from MPI.jl `v0.19`
+
+Prior to MPI.jl `v0.20`, environment variables were used to configure which MPI
+library to use. The following have now been removed:
 
-### Requirements
+- `JULIA_MPI_BINARY`
+- `JULIA_MPIEXEC`
+- `JULIA_MPI_INCLUDE_PATH`
+- `JULIA_MPI_CFLAGS`
+- `JULIA_MPICC`
+
+### Using a system-provided MPI
+
+#### Requirements
 
 MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
 standard or later.
 
-### Building
+#### Configuration
 
-To use the the system MPI, set the environment variable `JULIA_MPI_BINARY=system` and run
-`Pkg.build("MPI")`. This can be done by:
-```
-julia --project -e 'ENV["JULIA_MPI_BINARY"]="system"; using Pkg; Pkg.build("MPI"; verbose=true)'
+To use the system MPI, run `MPI.use_system_binary()`.
+This will attempt to find and identify any available MPI implementation, and create
+a file called `LocalPreferences.toml` adjacent to the current `Project.toml`:
+
+```sh
+julia --project -e 'using MPI; MPI.use_system_binary()'
 ```
-This will attempt find and identify any available MPI implementation.
+
+!!! note
+    You can copy `LocalPreferences.toml` to a different project, but you must list
+    `MPIPreferences` in the `Project.toml`, either under `[deps]` or `[extras]`, for the settings
+    to take effect. Due to a bug in Julia (fixed in `v1.6.5` and `v1.7.1`), retrieving preferences
+    from transitive dependencies is broken ([Preferences.jl#24](https://github.com/JuliaPackaging/Preferences.jl/issues/24)).
+    To fix this, either update your version of Julia, or add `MPIPreferences` as a direct dependency of your project.
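
For instance, a downstream project consuming a copied `LocalPreferences.toml` might declare the following in its `Project.toml` (a sketch; the UUIDs shown are, to the best of my knowledge, the registered UUIDs of MPI.jl and MPIPreferences):

```toml
[deps]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"

[extras]
MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
```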
 
 The MPI standard doesn't specify the exact application binary interface (ABI).
-The build script will attempt to build a small C program to
-determine the appropriate type definitions and constants. This requires a compatible C
-compiler (`mpicc` by default).
 
 The following implementations should work:
 
@@ -81,35 +102,59 @@ The following implementations should work:
 - [Microsoft MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
 - [IBM Spectrum MPI](https://www.ibm.com/us-en/marketplace/spectrum-mpi)
 
-If the implementation is changed, you will need to re-run `Pkg.build("MPI")`.
+#### Advanced options
 
-### [Environment variables](@id environment_variables)
+```@docs
+MPI.use_system_binary
+```
+
+The `mpiexec` argument provides the name (or full path) of the MPI launcher executable. The default is
+`mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
+on Slurm, `aprun` on PBS). If the MPI library has an uncommon name, you can provide it via `library_names`.
+In case ABI detection fails, you can provide a manual choice,
+but please also open an issue so that the auto-detection can be improved. `export_prefs=true` can be used
+to copy the preferences into the `Project.toml` instead of creating a
+`LocalPreferences.toml` file to hold them.
+
+#### Notes for HPC cluster administrators
+
+Preferences are merged across the Julia load path, so it is feasible to provide a module file that appends to the
+`JULIA_LOAD_PATH` variable a path containing system-wide preferences.
 
-The following optional environment variables can be used to control certain aspects of the
-build script and other library behaviour. The results of these will be cached in a
-configuration file located at `~/.julia/prefs/MPI.toml` and so can be used for subsequent
-MPI builds.
+As an example, you can use [`MPI.use_system_binary(; export_prefs=true)`](@ref) to create a `Project.toml` file containing:
 
-- `JULIA_MPI_BINARY`: can be set to either the empty string (to use the default implementations
-  above) or `system` (to use a system-provided implementation).
-- `JULIA_MPI_PATH`: the top-level installation directory of MPI. i.e. the library should
-  be located in `${JULIA_MPI_PATH}/lib` and `mpiexec` in `${JULIA_MPI_PATH}/bin`
-- `JULIA_MPI_LIBRARY`: the library name or full path of the MPI shared library. By
-  default, it will attempt to look for common MPI library names in the standard library
-  paths (e.g. `libmpi`, `libmpich`, `msmpi`).
-- `JULIA_MPIEXEC`: the name (or full path) of the MPI launcher executable. The default is
-  `mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
-  on Slurm, `aprun` on PBS).
-- `JULIA_MPIEXEC_ARGS`: Additional arguments to be passed to MPI launcher.
+```toml
+[extras]
+MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
+
+[preferences.MPIPreferences]
+abi = "OpenMPI"
+binary = "system"
+libmpi = "/software/mpi/lib/libmpi.so"
+mpiexec = "/software/mpi/bin/mpiexec"
+```
+
+Copying that file to a central location such as `/software/mpi/julia`, and setting `JULIA_LOAD_PATH=":/software/mpi/julia"`
+(note the `:` before the path) in the corresponding module file (preferably the MPI installation's, or the Julia module),
+will cause users to default to your cluster's MPI installation.
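
As a sketch, the relevant line in a shell-style module or profile script might be (paths hypothetical):

```sh
# The leading ':' leaves an empty entry, which Julia expands to the default
# load path, so user projects continue to work as usual.
export JULIA_LOAD_PATH=":/software/mpi/julia"
```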
 
-The following variables are also queried:
+Users can still create a differing per-project preference, which
+will take precedence.
 
-- `JULIA_MPI_INCLUDE_PATH`: the directory containing the MPI header files.
-- `JULIA_MPI_CFLAGS`: C flags passed to the constant generation build (default: `-lmpi`)
-- `JULIA_MPICC`: MPI C compiler (default: `mpicc`)
+### Using a different JLL-provided MPI library
+
+The following JLL implementations are provided:
+
+- `MicrosoftMPI_jll`: default on Windows
+- `MPICH_jll`: default on all Unix systems
+- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): binaries built against MPItrampoline can be efficiently retargeted to a system MPI implementation.
+- `OpenMPI_jll`
+
+```@docs
+MPI.use_jll_binary
+```
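
For example, switching a project to MPItrampoline might look like this (a sketch, assuming `use_jll_binary` accepts the JLL name as its argument):

```sh
julia --project -e 'using MPI; MPI.use_jll_binary("MPItrampoline_jll")'
```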
 
+## Environment variables for the testsuite
+
 The test suite can also be modified by the following variables:
 
-- `JULIA_MPIEXEC_TEST_ARGS`: Additional arguments to be passed to the MPI launcher for the tests only.
+- `JULIA_MPI_TEST_NPROCS`: how many ranks to use within the tests.
 - `JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` to test the CUDA-aware interface with
   [`CUDA.CuArray`](https://github.com/JuliaGPU/CUDA.jl) buffers.
+- `JULIA_MPI_TEST_BINARY`: check that the correct MPI binary is used for the tests.
+- `JULIA_MPI_TEST_ABI`: check that the correct MPI ABI is used for the tests.
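
For example, a sketch of running the testsuite on 4 ranks while asserting the system binary is used (the `system` value mirrors the `binary = "system"` preference above and is an assumption here):

```sh
JULIA_MPI_TEST_NPROCS=4 JULIA_MPI_TEST_BINARY=system julia --project -e 'using Pkg; Pkg.test("MPI")'
```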
