clusters or multi-GPU machines, you will probably want to configure against a
system-provided MPI implementation in order to exploit features such as fast network
interfaces and CUDA-aware MPI interfaces.

## Julia wrapper for `mpiexec`

Since you can configure `MPI.jl` to use one of several MPI implementations, you
may have different Julia projects using different implementations. Thus, it may
be cumbersome to find out which `mpiexec` executable is associated with a specific
project. To make this easy, on Unix-based systems `MPI.jl` comes with a thin
project-aware wrapper around `mpiexec`, called `mpiexecjl`.

### Installation

You can install `mpiexecjl` with [`MPI.install_mpiexecjl()`](@ref). The default
destination directory is `joinpath(DEPOT_PATH[1], "bin")`, which usually
translates to `~/.julia/bin`, but check the value on your system. You can also
tell `MPI.install_mpiexecjl` to install to a different directory.

```sh
$ julia
julia> using MPI
julia> MPI.install_mpiexecjl()
```
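
For instance, installing to a custom location might look like the following sketch (it assumes the `destdir` keyword argument of [`MPI.install_mpiexecjl`](@ref); the path is only an example):

```sh
$ julia
julia> using MPI
julia> MPI.install_mpiexecjl(destdir = "/opt/julia/bin")  # hypothetical destination
```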

To quickly call this wrapper, we recommend adding the destination directory
to your [`PATH`](https://en.wikipedia.org/wiki/PATH_(variable)) environment
variable, for example as shown below.
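
A minimal sketch for a `bash`-like shell, assuming the default destination directory:

```sh
# e.g. in ~/.bashrc
export PATH="$HOME/.julia/bin:$PATH"
```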

### Usage

`mpiexecjl` has the same syntax as the `mpiexec` binary that will be called, but
it additionally takes a `--project` option to call the specific binary associated
with the `MPI.jl` version in the given project. If no `--project` flag is used,
the `MPI.jl` in the global Julia environment will be used instead.

After installing `mpiexecjl` and adding its directory to `PATH`, you can run it
with:

```sh
$ mpiexecjl --project=/path/to/project -n 20 julia script.jl
```

## Using MPIPreferences.jl

The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
configure the MPI backend for each project separately. This provides
a single source of truth that can be used for JLL packages (Julia packages providing C libraries)
that link against MPI. It can be installed by

```sh
julia -e 'using Pkg; Pkg.add("MPIPreferences")'
```

Users can use the provided [`use_system_binary`](@ref MPIPreferences.use_system_binary) or
[`use_jll_binary`](@ref MPIPreferences.use_jll_binary) to switch MPI implementations. By
default, the JLL-provided binaries are used.

### Migration from MPI.jl `v0.19`

Prior to MPI.jl `v0.20`, environment variables were used to configure which MPI
library to use. These have now been removed and have no effect anymore:

- `JULIA_MPI_BINARY`
- `JULIA_MPIEXEC`
- `JULIA_MPI_INCLUDE_PATH`
- `JULIA_MPI_CFLAGS`
- `JULIA_MPICC`

## Using a system-provided MPI backend

### Requirements

MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0
standard or later. The following MPI implementations should work out-of-the-box with MPI.jl:

- [Open MPI](http://www.open-mpi.org/)
- [MPICH](http://www.mpich.org/) (v3.1 or later)

### Configuration

Run `MPIPreferences.use_system_binary()`. This will attempt to locate and identify any available MPI implementation, and create a file called `LocalPreferences.toml` adjacent to the current `Project.toml`.

```sh
julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
```

If the implementation is changed, you will need to call this function again. See the [`MPIPreferences.use_system_binary`](@ref) documentation for specific options.
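
For example, on a Slurm cluster with an uncommonly named MPI library, a call along the following lines could be used. This is only a sketch with site-specific example values; the keyword arguments are described in the [`MPIPreferences.use_system_binary`](@ref) documentation:

```julia
using MPIPreferences

MPIPreferences.use_system_binary(;
    library_names = ["libmpi_cray"],  # hint for an uncommonly named library
    mpiexec = "srun",                 # scheduler launcher instead of mpiexec
    abi = "MPICH",                    # manual choice if ABI detection fails
    export_prefs = true,              # store preferences in Project.toml
)
```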

!!! note
    You can copy `LocalPreferences.toml` to a different project folder, but you must list
    `MPIPreferences` in the `[extras]` or `[deps]` section of the `Project.toml` for the settings
    to take effect.

!!! note
    Due to a bug in Julia (until `v1.6.5` and `v1.7.1`), getting preferences
    from transitive dependencies is broken (https://github.com/JuliaPackaging/Preferences.jl/issues/24).
    To fix this, update your version of Julia, or add `MPIPreferences` as a direct dependency to your project.

### Notes to HPC cluster administrators

Preferences are merged across the Julia load path, such that it is feasible to provide a module file that appends a path containing system-wide preferences to the
`JULIA_LOAD_PATH` variable. The steps are as follows:

1. Run [`MPIPreferences.use_system_binary()`](@ref MPIPreferences.use_system_binary), which will generate a file `LocalPreferences.toml` containing something like the following:

   ```toml
   [MPIPreferences]
   abi = "OpenMPI"
   binary = "system"
   libmpi = "/software/mpi/lib/libmpi.so"
   mpiexec = "/software/mpi/bin/mpiexec"
   ```

2. Create a file called `Project.toml` or `JuliaProject.toml` in a central location, for example `/software/mpi/julia` or in the same directory as the MPI library module, and add the following contents:
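
   ```toml
   # Sketch of the system-wide project file. The [extras] entry makes the
   # preferences visible (replace the placeholder with MPIPreferences'
   # registered UUID); the preferences section mirrors LocalPreferences.toml.
   [extras]
   MPIPreferences = "<UUID of MPIPreferences>"

   [preferences.MPIPreferences]
   abi = "OpenMPI"
   binary = "system"
   libmpi = "/software/mpi/lib/libmpi.so"
   mpiexec = "/software/mpi/bin/mpiexec"
   ```
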
   updating the contents of the `[preferences.MPIPreferences]` section to match those of the `[MPIPreferences]` section in `LocalPreferences.toml`.

3. Append the directory containing the file to the [`JULIA_LOAD_PATH`](https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_LOAD_PATH) environment variable, with a colon (`:`) separator.

   If this variable is _not_ already set, it should be prefixed with a colon to ensure correct
   behavior of the Julia load path, e.g. `JULIA_LOAD_PATH=":/software/mpi/julia"`.
   If using environment modules, this can be achieved with
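
   ```
   # Sketch of an Environment Modules (Tcl) fragment; the path is an example.
   # Prepend ":" when the variable is unset so Julia keeps its default load path.
   if { ![info exists ::env(JULIA_LOAD_PATH)] } {
       setenv JULIA_LOAD_PATH ":/software/mpi/julia"
   } else {
       append-path JULIA_LOAD_PATH /software/mpi/julia
   }
   ```
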
   in the corresponding module file (preferably the module file for the MPI installation or for Julia).
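
Once the module file is loaded, one way to confirm that the system-wide preferences are picked up is to query the values exported by MPIPreferences, as in this sketch:

```sh
julia -e 'using MPIPreferences; @show MPIPreferences.binary MPIPreferences.abi'
```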

The user can still provide differing MPI configurations for each Julia project
that will take precedence by modifying the local `Project.toml` or by providing a
`LocalPreferences.toml` file.

## Using an alternative JLL-provided MPI library

The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:

- `MicrosoftMPI_jll`: [Microsoft MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi). Default for Windows.
- `MPICH_jll`: [MPICH](https://www.mpich.org/). Default for all other systems.
- `OpenMPI_jll`: [Open MPI](https://www.open-mpi.org/).
- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline): binaries built against MPItrampoline can be efficiently retargeted to a system MPI implementation.
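
For example, to switch a project to the MPItrampoline backend, something along these lines should work ([`MPIPreferences.use_jll_binary`](@ref) takes the JLL package name; the project path is an example):

```sh
julia --project=/path/to/project -e 'using MPIPreferences; MPIPreferences.use_jll_binary("MPItrampoline_jll")'
```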

---

`docs/src/knownissues.md`:

Make sure to:
- Make sure to have:
  ```
  export JULIA_CUDA_MEMORY_POOL=none
  export JULIA_CUDA_USE_BINARYBUILDER=false
  ```
- Add CUDA, MPIPreferences, and MPI packages in Julia. Switch to using the system binary:
  ```
  julia -e 'using Pkg; pkg"add CUDA, MPIPreferences, MPI"'
  julia -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
  ```
- Then in Julia, upon loading MPI and CUDA modules, you can check the following (see the sketch after this list):
  - CUDA version: `CUDA.versioninfo()`
  - If MPI has CUDA: `MPI.has_cuda()`
  - If you are using the correct MPI library: `MPI.libmpi`
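
For instance, a quick sanity check along these lines (a sketch; it assumes the CUDA, MPIPreferences, and MPI packages are installed in the active project):

```
julia> using CUDA, MPI

julia> CUDA.versioninfo()    # CUDA driver/toolkit versions

julia> MPI.has_cuda()        # whether the MPI library is CUDA-aware

julia> MPI.libmpi            # which MPI library is in use
```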

After that, it may be preferred to run the Julia MPI script (as suggested [here](https://discourse.julialang.org/t/cuda-aware-mpi-works-on-system-but-not-for-julia/75060/11)) launching it from a shell script (as suggested [here](https://discourse.julialang.org/t/cuda-aware-mpi-works-on-system-but-not-for-julia/75060/4)).
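
For example, a minimal wrapper script of the kind suggested there (a sketch; the launcher, project path, process count, and script name are site-specific):

```
#!/bin/bash
# run the CUDA-aware Julia MPI job via the project-aware wrapper
mpiexecjl --project=/path/to/project -n 4 julia script.jl
```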