[NCCL](https://developer.nvidia.com/nccl) is an optimized inter-GPU communication library for NVIDIA GPUs.
It is commonly used in machine learning frameworks, but traditional scientific applications can also benefit from NCCL.

## Using NCCL

To use the Slingshot network on Alps, the [`aws-ofi-nccl`](https://github.com/aws/aws-ofi-nccl) plugin must be used.
With the container engine, the [AWS OFI NCCL hook][ref-ce-aws-ofi-hook] can be used to load the plugin into the container and configure NCCL to use it.

Most uenvs, like [`prgenv-gnu`][ref-uenv-prgenv-gnu], also contain the NCCL plugin.
When using e.g. the `default` view of `prgenv-gnu`, the `aws-ofi-nccl` plugin will be available in the environment.
Alternatively, loading the `aws-ofi-nccl` module with the `modules` view also makes the plugin available in the environment.
The environment variables described below must be set to ensure that NCCL uses the plugin.
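
As a sketch, assuming the `modules` view and an illustrative uenv version (the image name and version below are examples, not a guaranteed tag), the plugin can be loaded and its presence checked like this:

```bash
# Start a uenv shell with the modules view (image name and version are examples)
uenv start prgenv-gnu/24.11:v1 --view=modules

# Make the aws-ofi-nccl plugin available in the environment
module load aws-ofi-nccl

# The module should prepend a path containing the plugin to the library path
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep aws-ofi-nccl
```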

The container engine sets these automatically when using the NCCL hook; otherwise, the following environment variables should always be set for correctness and optimal performance when using NCCL:

```bash
export NCCL_NET="AWS Libfabric" # (1)!
export NCCL_NET_GDR_LEVEL=PHB # (2)!
export FI_CXI_DEFAULT_CQ_SIZE=131072 # (3)!
export FI_CXI_DEFAULT_TX_SIZE=32768
export FI_CXI_DISABLE_HOST_REGISTER=1
export FI_CXI_RX_MATCH_MODE=software
export FI_MR_CACHE_MONITOR=userfaultfd
export MPICH_GPU_SUPPORT_ENABLED=0 # (4)!
```

1. This forces NCCL to use the libfabric plugin, enabling full use of the Slingshot network. If the plugin cannot be found, applications will fail to start. With the default value, applications would instead fall back to e.g. TCP, which would be significantly slower than with the plugin. [More information about `NCCL_NET`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-net).
2. Use GPU Direct RDMA when the GPU and NIC are on the same NUMA node. [More information about `NCCL_NET_GDR_LEVEL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-net-gdr-level-formerly-nccl-ib-gdr-level).
3. This and the other `FI` (libfabric) environment variables have been found to give the best performance on the Alps network across a wide range of applications. Specific applications may perform better with other values.
4. Explicitly disable GPU-aware MPI to avoid potential deadlocks between MPI and NCCL.
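
Putting these together, a minimal Slurm batch script might look like the following sketch. The uenv name and version, resource counts, and application binary are placeholders, and the `--uenv`/`--view` options assume the uenv Slurm integration is available on the cluster:

```bash
#!/bin/bash
#SBATCH --job-name=nccl-app
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4            # e.g. one rank per GPU; adjust for your system
#SBATCH --uenv=prgenv-gnu/24.11:v1     # illustrative uenv name and version
#SBATCH --view=default                 # the default view provides aws-ofi-nccl

# Force NCCL onto the libfabric plugin and apply the recommended settings
export NCCL_NET="AWS Libfabric"
export NCCL_NET_GDR_LEVEL=PHB
export FI_CXI_DEFAULT_CQ_SIZE=131072
export FI_CXI_DEFAULT_TX_SIZE=32768
export FI_CXI_DISABLE_HOST_REGISTER=1
export FI_CXI_RX_MATCH_MODE=software
export FI_MR_CACHE_MONITOR=userfaultfd
export MPICH_GPU_SUPPORT_ENABLED=0     # avoid MPI/NCCL deadlocks

srun ./my_nccl_app                     # placeholder application binary
```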

!!! warning "Using NCCL with uenvs"
    The environment variables listed above are not set automatically when using uenvs.

!!! warning "GPU-aware MPI with NCCL"
    Using GPU-aware MPI together with NCCL [can easily lead to deadlocks](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/mpi.html#inter-gpu-communication-with-cuda-aware-mpi).
    Unless care is taken to ensure that the two methods of communication are not used concurrently, we recommend not using GPU-aware MPI with NCCL.
    To disable GPU-aware MPI with Cray MPICH, explicitly set `MPICH_GPU_SUPPORT_ENABLED=0`.
    Note that this option may be set to `1` by default on some Alps clusters.
    See [the Cray MPICH documentation][ref-communication-cray-mpich] for more details on GPU-aware MPI with Cray MPICH.

!!! warning "`invalid usage` error with `NCCL_NET="AWS Libfabric"`"
    If you are getting error messages such as:
    ```console
    nid006352: Test NCCL failure common.cu:958 'invalid usage (run with NCCL_DEBUG=WARN for details)
    ```
    this may be due to the plugin not being found by NCCL.
    If this is the case, running the application with the recommended `NCCL_DEBUG=WARN` should print something similar to the following:
    ```console
    nid006352:34157:34217 [1] net.cc:626 NCCL WARN Error: network AWS Libfabric not found.
    ```
    When using uenvs like `prgenv-gnu`, make sure you are either using the `default` view, which loads `aws-ofi-nccl` automatically, or, if using the `modules` view, load the `aws-ofi-nccl` module with `module load aws-ofi-nccl`.
    If the plugin is found correctly, running the application with `NCCL_DEBUG=INFO` should print:
    ```console
    nid006352:34610:34631 [0] NCCL INFO Using network AWS Libfabric
    ```

!!! warning "Do not use `NCCL_NET_PLUGIN="ofi"` with uenvs"
    NCCL has an alternative way of specifying which plugin to use: `NCCL_NET_PLUGIN`.
    When using uenvs, do not set `NCCL_NET_PLUGIN="ofi"` instead of, or in addition to, `NCCL_NET="AWS Libfabric"`.
    If you do, your application will fail to start, since NCCL will:

    1. fail to find the plugin because of the name of the shared library in the uenv, and
    2. prefer `NCCL_NET_PLUGIN` over `NCCL_NET`, so it will fail to find the plugin even if `NCCL_NET="AWS Libfabric"` is correctly set.

    When both environment variables are set, the error message with `NCCL_DEBUG=WARN` will look similar to when the plugin isn't available:
    ```console
    nid006365:179857:179897 [1] net.cc:626 NCCL WARN Error: network AWS Libfabric not found.
    ```

    With `NCCL_DEBUG=INFO`, NCCL will print:
    ```console
    nid006365:180142:180163 [0] NCCL INFO NET/Plugin: Could not find: ofi libnccl-net-ofi.so. Using internal network plugin.
    ...
    nid006365:180142:180163 [0] net.cc:626 NCCL WARN Error: network AWS Libfabric not found.
    ```

    If you only set `NCCL_NET_PLUGIN="ofi"`, NCCL may silently fail to load the plugin but fall back to the default implementation.