You can use the argument `mpiexec` to provide the name (or full path) of the MPI launcher
executable. The default is `mpiexec`, but some clusters require using the scheduler launcher
interface (e.g. `srun` on Slurm, `aprun` on PBS). If the MPI library has an uncommon name,
you can provide it via `library_names`. In case ABI detection fails you can provide a manual
choice, but please also open an issue so that the auto-detection can be improved.
`export_prefs=true` can be used to copy the preferences into the `Project.toml` instead of
creating a `LocalPreferences.toml` file to hold them.
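For illustration, a call along the lines of the sketch below would record a system MPI using the options just described; the library name and launcher shown are placeholder values, not defaults, so adjust them for your cluster.

```julia
using MPI

# A sketch based on the keyword arguments described above; the values are
# illustrative examples, not defaults.
MPI.use_system_binary(;
    library_names = ["libmpi_custom"],  # only needed if the library has an uncommon name
    mpiexec = "srun",                   # use the Slurm launcher instead of `mpiexec`
    export_prefs = true,                # store the preferences in Project.toml
)
```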
#### Notes to HPC cluster administrators
Preferences are merged across the Julia load path, so it is feasible to provide a module file
that appends a path containing system-wide preferences to the `JULIA_LOAD_PATH` variable.
As an example you can use [`MPI.use_system_binary(;export_prefs)`](@ref) to create a file `Project.toml` containing:
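something along the lines of the following sketch. The exact contents are generated by the call and will differ between systems and MPI.jl versions; the UUID, preference key names, and paths below are illustrative placeholders rather than exact output.

```toml
# Illustrative sketch only: MPI.use_system_binary writes the actual key
# names and values, which will differ on your system.

[extras]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"  # MPI.jl's UUID

[preferences.MPI]
binary = "system"
libmpi = "/software/mpi/lib/libmpi.so"    # path to the system MPI library
mpiexec = "/software/mpi/bin/mpiexec"     # launcher recorded for this installation
```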
Copying that file to a central location such as `/software/mpi/julia`, and setting `JULIA_LOAD_PATH=":/software/mpi/julia"` (note the `:` before the path) in the corresponding module file (preferably that of the MPI installation, or of Julia itself), will cause users to default to your cluster MPI installation.
The user can still create differing preferences in their own Julia project, and these will
take precedence.
### Using a different JLL-provided MPI library
The following JLL implementations are provided:
- `MicrosoftMPI_jll`: Default on Windows
- `MPICH_jll`: Default on all Unix systems
- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): Binaries built against MPItrampoline can be efficiently retargeted to a system MPI implementation.
- `OpenMPI_jll`
```@docs
MPI.use_jll_binary
```
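For example, switching the binary provider to MPItrampoline could look like the sketch below; the string-argument form is an assumption here (check the docstring above for the exact interface), and Julia typically needs to be restarted for the new preference to take effect.

```julia
using MPI

# A sketch: select MPItrampoline_jll as the MPI provider.
# The accepted argument form is documented above; the string name used
# here is an assumption. Restart Julia afterwards so the preference applies.
MPI.use_jll_binary("MPItrampoline_jll")
```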
## Environment variables for the testsuite
The test suite can also be modified by the following variables:
- `JULIA_MPI_TEST_NPROCS`: How many ranks to use within the tests
- `JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` to test the CUDA-aware interface with
0 commit comments