@@ -9,10 +9,6 @@ clusters or multi-GPU machines, you will probably want to configure against a
system-provided MPI implementation in order to exploit features such as fast network
interfaces and CUDA-aware MPI interfaces.

- MPI.jl will attempt to detect when you are running on a HPC cluster, and warn the user
- about this. To disable this warning, set the environment variable
- `JULIA_MPI_CLUSTER_WARN=n`.
-

## Julia wrapper for `mpiexec`

Since you can configure `MPI.jl` to use one of several MPI implementations, you
@@ -52,64 +48,147 @@ with:
$ mpiexecjl --project=/path/to/project -n 20 julia script.jl
```

- ## Using a system-provided MPI
+ ## Using MPIPreferences.jl
+
+ MPI.jl uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
+ allow the user to choose which MPI implementation to use for a project. This provides
+ a single source of truth that can be used for JLL packages (Julia packages providing C libraries)
+ that link against MPI, and localizes the choice of MPI implementation to a project.
+
+ Users can use the provided [`use_system_binary`](@ref) or [`use_jll_binary`](@ref)
+ functions to switch MPI implementations. By default, the JLL-provided binaries are used.
+
+ ### Migration from MPI.jl `v0.19`
- ### Requirements
+ Prior to MPI.jl `v0.20`, environment variables were used to configure which MPI
+ library to use. These have been removed and no longer have any effect:
+
+ - `JULIA_MPI_BINARY`
+ - `JULIA_MPIEXEC`
+ - `JULIA_MPI_INCLUDE_PATH`
+ - `JULIA_MPI_CFLAGS`
+ - `JULIA_MPICC`
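+
+ For instance, where a project previously set `JULIA_MPI_BINARY=system` and rebuilt the
+ package, it would now call the preferences-based switch instead (a minimal sketch of the
+ migration):
+
+ ```julia
+ # old (v0.19): ENV["JULIA_MPI_BINARY"] = "system"; Pkg.build("MPI")
+ # new (v0.20 and later):
+ using MPI
+ MPI.use_system_binary()
+ ```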
+
+ ### Using a system-provided MPI backend
+
+ #### Requirements

MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
standard or later.

- ### Building
+ #### Configuration
- To use the the system MPI, set the environment variable `JULIA_MPI_BINARY=system` and run
- `Pkg.build("MPI")`. This can be done by:
- ```
- julia --project -e 'ENV["JULIA_MPI_BINARY"]="system"; using Pkg; Pkg.build("MPI"; verbose=true)'
+ To use the system MPI library, run `MPI.use_system_binary()`.
+ This will attempt to locate and identify any available MPI implementation, and create
+ a file called `LocalPreferences.toml` adjacent to the current `Project.toml`.
+ Use `Base.active_project()` to obtain the location of the currently active project.
+
+ ```sh
+ julia --project -e 'using MPI; MPI.use_system_binary()'
```
- This will attempt find and identify any available MPI implementation.

- The MPI standard doesn't specify the exact application binary interface (ABI).
- The build script will attempt to build a small C program to
- determine the appropriate type definitions and constants. This requires a compatible C
- compiler (`mpicc` by default).
+ !!! note
+     You can copy `LocalPreferences.toml` to a different project folder, but you must list
+     `MPIPreferences` in the `[extras]` section of the `Project.toml` for the settings
+     to take effect. Due to a bug in Julia (up to `v1.6.5` and `v1.7.1`), getting preferences
+     from transitive dependencies is broken (https://github.com/JuliaPackaging/Preferences.jl/issues/24).
+     To fix this, update your version of Julia, or add `MPIPreferences` as a direct dependency of your project.

- The following implementations should work:
+
+ The following MPI implementations should work out-of-the-box with MPI.jl:

- [Open MPI](http://www.open-mpi.org/)
- [MPICH](http://www.mpich.org/) (v3.1 or later)
- [Intel MPI](https://software.intel.com/en-us/mpi-library)
- [Microsoft MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
- [IBM Spectrum MPI](https://www.ibm.com/us-en/marketplace/spectrum-mpi)
+ - [MVAPICH](http://mvapich.cse.ohio-state.edu/)
+ - [Cray MPICH](https://docs.nersc.gov/development/compilers/wrappers/)
+ - [Fujitsu MPI](https://www.fujitsu.com/global/about/resources/publications/technicalreview/2020-03/article07.html#cap-03)
+
+ If the implementation is changed, you will need to call [`MPI.use_system_binary()`](@ref) or
+ [`MPI.use_jll_binary()`](@ref) again.

- If the implementation is changed, you will need to re-run `Pkg.build("MPI")`.
+ #### Advanced options

- ### [Environment variables](@id environment_variables)
+ ```@docs
+ MPI.use_system_binary
+ ```
+
+ You can use the argument `mpiexec` to provide the name (or full path) of the MPI launcher executable. The default is
+ `mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
+ on Slurm, `aprun` on PBS). If the MPI library has an uncommon name, you can provide it in `library_names`.
+ The MPI standard does not specify the exact application binary interface (ABI).
+ In case ABI detection fails, you can provide a manual choice (either `MPICH`, `MPItrampoline`, `OpenMPI`, or `MicrosoftMPI`),
+ but please also open an issue so that the automatic detection can be improved.
+ `export_prefs=true` can be used to copy the preferences into the `Project.toml` instead of creating a
+ `LocalPreferences.toml` file to hold them.
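+
+ For example, on a Slurm cluster with an unusually named library, the call might look like
+ the following (the launcher, library name, and ABI here are illustrative assumptions, not
+ defaults):
+
+ ```julia
+ using MPI
+ MPI.use_system_binary(;
+     mpiexec = "srun",                # use the scheduler launcher interface
+     library_names = ["libmpi_ibm"],  # hypothetical uncommon library name
+     abi = "MPICH",                   # manual ABI choice if detection fails
+     export_prefs = true,             # store the preferences in Project.toml
+ )
+ ```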

- The following optional environment variables can be used to control certain aspects of the
- build script and other library behaviour. The results of these will be cached in a
- configuration file located at `~/.julia/prefs/MPI.toml` and so can be used for subsequent
- MPI builds.
+ #### Notes to HPC cluster administrators

- - `JULIA_MPI_BINARY`: can be set to either the empty string (to use the default implementations
- above) or `system` (to use a system-provided implementation).
- - `JULIA_MPI_PATH`: the top-level installation directory of MPI. i.e. the library should
- be located in `${JULIA_MPI_PATH}/lib` and `mpiexec` in `${JULIA_MPI_PATH}/bin`
- - `JULIA_MPI_LIBRARY`: the library name or full path of the MPI shared library. By
- default, it will attempt to look for common MPI library names in the standard library
- paths (e.g. `libmpi`, `libmpich`, `msmpi`).
- - `JULIA_MPIEXEC`: the name (or full path) of the MPI launcher executable. The default is
- `mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
- on Slurm, `aprun` on PBS).
- - `JULIA_MPIEXEC_ARGS`: Additional arguments to be passed to MPI launcher.
+ Preferences are merged across the Julia load path, so it is feasible to provide a module file that appends a path
+ containing system-wide preferences to the `JULIA_LOAD_PATH` variable.

- The following variables are also queried:
+ As an example, you can use [`MPI.use_system_binary()`](@ref) to create a `LocalPreferences.toml` file containing:
+
+ ```toml
+ [MPIPreferences]
+ abi = "OpenMPI"
+ binary = "system"
+ libmpi = "/software/mpi/lib/libmpi.so"
+ mpiexec = "/software/mpi/bin/mpiexec"
+ ```

- - `JULIA_MPI_INCLUDE_PATH`: the directory containing the MPI header files.
- - `JULIA_MPI_CFLAGS`: C flags passed to the constant generation build (default: `-lmpi`)
- - `JULIA_MPICC`: MPI C compiler (default: `mpicc`)
+ Copy this `LocalPreferences.toml` to a central location such as `/software/mpi/julia` and
+ create adjacent to it a `Project.toml` containing:
+
+ ```toml
+ [extras]
+ MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
+ ```
+
+ Now exporting the environment variable `JULIA_LOAD_PATH=":/software/mpi/julia"`
+ (note the `:` before the path) in the corresponding
+ module file (preferably the module file for the MPI installation or for Julia)
+ will cause MPI.jl to default to your cluster MPI installation.
+
+ The user can still provide a different MPI configuration for each Julia project, which
+ will take precedence, by modifying the local `Project.toml` or by providing a `LocalPreferences.toml` file.

+ ### Using a different JLL-provided MPI library
+
+ The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:
+
+ - `MicrosoftMPI_jll`: Default for Windows
+ - `MPICH_jll`: Default for all Unix-like systems
+ - [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): Binaries built against MPItrampoline can be efficiently retargeted to a system MPI implementation.
+ - `OpenMPI_jll`
+
+ ```@docs
+ MPI.use_jll_binary
+ ```
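+
+ For example, to switch back from a system MPI to the MPItrampoline-based binaries, a call
+ along these lines should work (a sketch; the accepted names are the JLL packages listed
+ above):
+
+ ```julia
+ using MPI
+ MPI.use_jll_binary("MPItrampoline_jll")
+ ```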

+ ## Configuration of the MPI.jl testsuite
+
+ ### Testing against a different MPI implementation
+
+ The `LocalPreferences.toml` must be located within the `test` folder; you can
+ either create it in place or copy it into place.
+
+ ```
+ ~/MPI> julia --project=test
+ julia> using MPIPreferences
+ julia> MPIPreferences.use_system_binary()
+ ~/MPI> rm test/Manifest.toml
+ ~/MPI> julia --project
+ (MPI) pkg> test
+ ```

+ ### Environment variables

The test suite can also be modified by the following variables:

- - `JULIA_MPIEXEC_TEST_ARGS`: Additional arguments to be passed to the MPI launcher for the tests only.
+ - `JULIA_MPI_TEST_NPROCS`: How many ranks to use within the tests
- `JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` to test the CUDA-aware interface with
[`CUDA.CuArray`](https://github.com/JuliaGPU/CUDA.jl) buffers.
+ - `JULIA_MPI_TEST_BINARY`: Check that the specified MPI binary is used for the tests
+ - `JULIA_MPI_TEST_ABI`: Check that the specified MPI ABI is used for the tests
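+
+ For example, to run the testsuite on 4 ranks against a system binary with CUDA-aware
+ buffers, the invocation might look like this (a sketch combining the variables above):
+
+ ```sh
+ export JULIA_MPI_TEST_NPROCS=4
+ export JULIA_MPI_TEST_BINARY=system
+ export JULIA_MPI_TEST_ARRAYTYPE=CuArray
+ julia --project -e 'using Pkg; Pkg.test("MPI")'
+ ```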