docs/src/configuration.md (5 additions, 4 deletions)
@@ -110,8 +110,9 @@ You can use the argument `mpiexec` to provide the name (or full path) of the MPI
 `mpiexec`, but some clusters require using the scheduler launcher interface (e.g. `srun`
 on Slurm, `aprun` on PBS). If the MPI library has an uncommon name you can provide it in `library_names`.
 The MPI standard does not specify the exact application binary interface (ABI).
-In case ABI detection fails you can provide a manual choice,
-but also open an issue so that the auto-detection can be improved. `export_prefs=true` can be used to copy the preferences into the `Project.toml` instead of creating a
+In case ABI detection fails you can provide a manual choice (either `MPICH`, `MPItrampoline`, `OpenMPI`, or `MicrosoftMPI`),
+but also open an issue so that the automatic detection can be improved.
+`export_prefs=true` can be used to copy the preferences into the `Project.toml` instead of creating a
 `LocalPreferences.toml` file to hold them.

 #### Notes to HPC cluster administrators
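
For reference, here is a minimal sketch of how these options can be combined, assuming they map onto the `MPIPreferences.use_system_binary` call that this configuration page documents; the launcher, library name, and ABI values below are placeholder assumptions for a hypothetical Slurm system, not recommendations:

```julia
using MPIPreferences

# Configure MPI.jl to use the system MPI installation. All values below are
# illustrative assumptions -- adjust them to match your cluster.
MPIPreferences.use_system_binary(;
    library_names = ["libmpi_cray"],  # only needed if the library has an uncommon name
    mpiexec       = "srun",           # scheduler launcher interface (e.g. Slurm)
    abi           = "MPICH",          # manual ABI choice if auto-detection fails
    export_prefs  = true,             # write preferences into Project.toml
)
```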
@@ -158,5 +159,5 @@ The test suite can also be modified by the following variables:
-`JULIA_MPI_TEST_NPROCS`: How many ranks to use within the tests
-`JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` to test the CUDA-aware interface with
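
As a usage sketch, these environment variables can be set from within Julia before invoking the package tests; the specific values (4 ranks, `CuArray` buffers) are assumptions for illustration and depend on the available hardware:

```julia
using Pkg

# Run the MPI.jl test suite with 4 ranks and CUDA-aware (CuArray) buffers.
ENV["JULIA_MPI_TEST_NPROCS"] = "4"
ENV["JULIA_MPI_TEST_ARRAYTYPE"] = "CuArray"

Pkg.test("MPI")
```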