Under the hood, PyLops-MPI uses both an MPI communicator and an NCCL communicator to manage distributed operations. Each GPU is logically bound to
one MPI process. Generally speaking, small operations such as querying array shapes and sizes still go through MPI, while collective calls
like AllReduce are carried out through NCCL.
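
A minimal sketch of this setup from user code could look as follows; note that the ``initialize_nccl_comm`` helper and the ``base_comm_nccl`` argument reflect the in-development API and may change. Run with one MPI process per GPU, e.g. ``mpiexec -n 2 python example.py``.

.. code-block:: python

   import pylops_mpi
   from pylops_mpi.utils._nccl import initialize_nccl_comm

   # Create the NCCL communicator; MPI (COMM_WORLD) is still used
   # under the hood for small metadata exchanges such as shapes and sizes
   nccl_comm = initialize_nccl_comm()

   # Each rank holds a CuPy partition of the global array on its own GPU
   x = pylops_mpi.DistributedArray(global_shape=128,
                                   base_comm_nccl=nccl_comm,
                                   engine="cupy")
   x[:] = 1.0

   # The global dot product needs a collective sum across ranks, which is
   # performed with an NCCL AllReduce rather than its MPI counterpart
   print(x.dot(x))
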
.. note::
   The CuPy and NCCL backends are in active development, with many examples not yet included in the docs.
   You can find many `other examples <https://github.com/PyLops/pylops_notebooks/tree/master/developement-mpi/Cupy_MPI>`_ in the `PyLops Notebooks repository <https://github.com/PyLops/pylops_notebooks>`_.

Support for NCCL Backend
------------------------

In the following, we provide a list of modules that operate on :class:`pylops_mpi.DistributedArray`