doc/sources/distributed-mode.rst
39 additions & 12 deletions
@@ -26,30 +26,57 @@ Several :doc:`GPU-supported algorithms <oneapi-gpu>`
also provide distributed, multi-GPU computing capabilities via integration with |mpi4py|. The prerequisites
match those of GPU computing, along with an MPI backend of your choice (`Intel MPI recommended
<https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html>`_, available
- via ``impi_rt`` python package) and the |mpi4py| python package. If using |sklearnex|
+ via the ``impi_rt`` python/conda package) and the |mpi4py| python package. If using |sklearnex|
`installed from sources <https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/INSTALL.md#build-from-sources>`_,
ensure that the spmd_backend is built.
.. important::
- SMPD mode requires the |mpi4py| package used at runtime to be compiled with the same MPI backend as the |sklearnex|. The PyPI and Conda distributions of |sklearnex| both use Intel's MPI as backend, and hence require an |mpi4py| also built with Intel's MPI - it can be easily installed from Intel's conda channel as follows::
+ SPMD mode requires the |mpi4py| package used at runtime to be compiled with the same MPI backend as the |sklearnex|, or with an ABI-compatible MPI backend. The PyPI and Conda distributions of |sklearnex| are both built with Intel's MPI as backend, which follows the MPICH ABI, and hence require an |mpi4py| that is also built with either Intel's MPI or another MPICH-compatible MPI backend (such as MPICH itself). Versions of |mpi4py| built with Intel's MPI can be installed as follows:
- .. warning:: Packages from the Intel channel are meant to be compatible with dependencies from ``conda-forge``, and might not work correctly in environments that have packages installed from the ``anaconda`` channel.
+ .. tabs::
+     .. tab:: From conda-forge
+         ::
- It also requires the MPI runtime executable (``mpiexec`` / ``mpirun``) to be from the same library that was used to compile |sklearnex|. Intel's MPI runtime library is offered as a Python package ``impi_rt`` and will be installed together with the ``mpi4py`` package if executing the command above, but otherwise, it can be installed separately from different distribution channels:
+ .. tip:: ``impi_rt`` is also available from the Intel channel: ``https://software.repos.intel.com/python/conda``.
+ .. warning:: Packages from the Intel channel are meant to be compatible with dependencies from ``conda-forge``, and might not work correctly in environments that have packages installed from the ``anaconda`` channel.
+ It also requires the MPI runtime executable (``mpiexec`` / ``mpirun``) to be from the same library that was used to compile |sklearnex| or from a compatible library. Intel's MPI runtime library is offered as a Python package ``impi_rt`` and will be installed together with the ``mpi4py`` package if executing the commands above, but otherwise, it can be installed separately from different distribution channels:
+ Using other MPI backends that are not MPICH-compatible (e.g. OpenMPI) requires building |sklearnex| from source with that backend, and using an |mpi4py| built with that same backend.
- Using other MPI backends (e.g. OpenMPI) requires building |sklearnex| from source with that backend.
Note that |sklearnex| supports GPU offloading to speed up MPI operations. This is supported automatically with
some MPI backends, but in order to use GPU offloading with Intel MPI, it is required to set the environment variable ``I_MPI_OFFLOAD`` to ``1`` (providing
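
As an illustration of the backend-matching requirement described in the hunk above, here is a minimal sketch (assuming |mpi4py| is installed; ``MPI.Get_library_version()`` reports which MPI library the package was compiled against, and the script name below is hypothetical): ::

    # Minimal sketch: print the MPI backend mpi4py was compiled against, so it
    # can be compared with the backend expected by the sklearnex build
    # (Intel MPI / MPICH ABI for the PyPI and conda distributions).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # Get_library_version() returns a descriptive string such as
    # "Intel(R) MPI Library ..." or "MPICH Version: ...".
    backend = MPI.Get_library_version().splitlines()[0]
    print(f"rank {comm.Get_rank()} of {comm.Get_size()}: {backend}")

Launching it with the matching runtime, e.g. ``mpiexec -n 2 python check_mpi.py``, and setting ``I_MPI_OFFLOAD=1`` in the environment when GPU offloading with Intel MPI is wanted, exercises both requirements discussed above.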
doc/sources/distributed_daal4py.rst
24 additions & 11 deletions
@@ -43,29 +43,42 @@ same algorithms to much larger problem sizes.
Just like SPMD mode in ``sklearnex``, using distributed mode in ``daal4py`` requires
the MPI runtime library managing the computations to be the same MPI backend library
- with which the |sklearnex| library was compiled. Distributions of the |sklearnex| in
- PyPI and conda are both compiled with `Intel's MPI <https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html>`__
+ with which the |sklearnex| library was compiled, or to be ABI compatible with it.
+ Distributions of the |sklearnex| in PyPI and conda-forge are both compiled with `Intel's MPI <https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html>`__
as MPI backend (offered as Python package ``impi_rt`` in both PyPI and conda): ::
+ .. warning:: Packages from the Intel channel are meant to be compatible with dependencies from ``conda-forge``, and might not work correctly in environments that have packages installed from the ``anaconda`` channel.
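
To make the runtime requirement concrete, a minimal sketch of a distributed ``daal4py`` computation is shown below; the names used (``daalinit``/``daalfini``, ``my_procid``, and the ``distributed=True`` flag) follow daal4py's SPMD examples and should be verified against the installed version: ::

    # Minimal sketch: distributed K-Means in daal4py. Launch it with the MPI
    # runtime matching the sklearnex/daal4py build, e.g.:
    #   mpirun -n 4 python kmeans_spmd.py   (script name hypothetical)
    import numpy as np
    import daal4py as d4p

    d4p.daalinit()  # start the distributed (SPMD) engine

    # Each rank computes on its own local chunk of the data.
    local_data = np.random.rand(1000, 10)

    # distributed=True makes the algorithms combine partial results across ranks.
    init = d4p.kmeans_init(nClusters=4, method="plusPlusDense", distributed=True)
    centroids = init.compute(local_data).centroids
    result = d4p.kmeans(nClusters=4, maxIterations=10, distributed=True).compute(
        local_data, centroids)

    if d4p.my_procid() == 0:
        print(result.centroids)

    d4p.daalfini()  # shut down the distributed engine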