Commit 734213a

Merge pull request #212395 from fannyou/patch-4
Update hb-hc-known-issues.md
2 parents 27a554a + 61c5112 commit 734213a

File tree

1 file changed (+0, −13 lines)

articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md

Lines changed: 0 additions & 13 deletions
@@ -29,19 +29,6 @@ To prevent low-level hardware access that can result in security vulnerabilities
On Ubuntu-18.04 based marketplace VM images with kernel versions `5.4.0-1039-azure #42` and newer, some older Mellanox OFED versions are incompatible, causing an increase in VM boot time of up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1.

If it is necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image, or an older one, and not to update the kernel.
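
As an illustration (not from the original article), pinning a VM to that image version with the Azure CLI could look roughly like the following; the resource group, VM name, and size are placeholders:

```bash
# Create a VM from the pinned Ubuntu 18.04 Gen2 marketplace image (placeholder names).
az vm create \
  --resource-group my-hpc-rg \
  --name my-hb-vm \
  --size Standard_HB120rs_v2 \
  --image Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290 \
  --admin-username azureuser \
  --generate-ssh-keys

# Inside the VM, optionally hold the kernel so it is not upgraded past the compatible version
# (this assumes the Ubuntu Azure kernel meta-package is named linux-azure).
sudo apt-mark hold linux-azure
```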

-## MPI QP creation errors
-
-If, in the midst of running any MPI workloads, InfiniBand QP creation errors such as the one shown below are thrown, we suggest rebooting the VM and retrying the workload. This issue will be fixed in the future.
-
-```bash
-ib_mlx5_dv.c:150 UCX ERROR mlx5dv_devx_obj_create(QP) failed, syndrome 0: Invalid argument
-```
-
-You may verify the maximum number of queue pairs when the issue is observed as follows.
-
-```bash
-[user@azurehpc-vm ~]$ ibv_devinfo -vv | grep qp
-max_qp: 4096
-```
## Accelerated Networking on HB, HC, HBv2, HBv3 and NDv2

[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md) and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and improved latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact the behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to the earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
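
As a hypothetical example (not from the original article), if the InfiniBand adapter shows up as mlx5_1, an OpenMPI/HPC-X launch over UCX might pin the device explicitly; the host file and `./my_mpi_app` below are placeholders:

```bash
# List InfiniBand device names to check whether the adapter is enumerated as mlx5_0 or mlx5_1.
ibstat -l

# Example OpenMPI/HPC-X launch that pins UCX to device mlx5_1, port 1.
# The process count, host file, and application binary are placeholders.
mpirun -np 120 --hostfile hosts \
    -x UCX_NET_DEVICES=mlx5_1:1 \
    ./my_mpi_app
```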
