+---
+features:
+  - |
+    The libvirt driver has added support for hardware-offloaded OVS
+    with vDPA (vhost Data Path Acceleration) type interfaces.
+    vDPA allows virtio net interfaces to be presented to the guest while
+    the datapath can be offloaded to a software or hardware implementation.
+    This enables high-performance networking with the portability of
+    standard virtio interfaces.
+issues:
+  - |
+    Nova currently does not support the following lifecycle operations when
+    combined with an instance using vDPA ports: shelve, resize, cold
+    migration, live migration, evacuate, suspend or interface attach/detach.
+    Attempting to use one of the above operations will result in an HTTP 409
+    (Conflict) error. While some operations, such as "resize to same host",
+    shelve or attach interface, technically work, they have been blocked
+    because unshelve and detach interface currently do not. Resize to a
+    different host has been blocked because it is untested, and evacuate has
+    been blocked for the same reason. These limitations may be removed in
+    the future as testing is improved. Live migration is currently not
+    supported with vDPA interfaces by QEMU and therefore cannot be enabled
+    in OpenStack at this time.
+
+    Like SR-IOV, vDPA leverages DMA transfers between the guest and the
+    hardware. This requires the DMA buffers to be locked in memory. As the
+    DMA buffers are allocated by the guest and can be allocated anywhere in
+    the guest RAM, QEMU locks **all** guest RAM. By default the
+    ``RLIMIT_MEMLOCK`` for a normal QEMU instance is set to 0, so QEMU is
+    not allowed to lock guest memory. In the case of SR-IOV, libvirt
+    automatically sets the limit to guest RAM + 1G, which enables QEMU to
+    lock the memory. This does not happen today with vDPA ports. As a
+    result, if you use vDPA ports without enabling locking of the guest
+    memory, you will get DMA errors. To work around this issue until libvirt
+    is updated, you must set ``hw:cpu_realtime=yes`` and define a valid
+    realtime CPU mask, e.g. ``hw:cpu_realtime_mask=^0``, or define
+    ``hw:emulator_threads_policy=share|isolate``. Note that since we are
+    just using ``hw:cpu_realtime`` for its side effect of locking the guest
+    memory, this usage does not require the guest or host to use realtime
+    kernels. However, all other requirements of ``hw:cpu_realtime``, such as
+    requiring ``hw:cpu_policy=dedicated``, still apply. It is also strongly
+    recommended that hugepages be enabled for all instances with locked
+    memory. This can be done by setting ``hw:mem_page_size``, which enables
+    Nova to correctly account for the fact that the memory is unswappable.
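
As a rough sketch of the workaround described in the note above, the flavor
extra specs can be applied with the OpenStack client. The flavor, network,
port and server names below are purely illustrative, the image is a
placeholder, and the ``vdpa`` vnic type assumes a Neutron deployment that
supports vDPA ports:

    # Hypothetical flavor name; the extra specs come from the note above.
    # hw:cpu_realtime=yes plus a realtime mask (or, alternatively,
    # hw:emulator_threads_policy) is used only for its side effect of
    # locking guest memory; hw:cpu_policy=dedicated is still required.
    openstack flavor set vdpa.medium \
      --property hw:cpu_policy=dedicated \
      --property hw:cpu_realtime=yes \
      --property hw:cpu_realtime_mask=^0 \
      --property hw:mem_page_size=large

    # Illustrative vDPA port and server creation, assuming the network
    # "physnet-vdpa" is backed by vDPA-capable hardware.
    openstack port create --network physnet-vdpa --vnic-type vdpa vdpa-port0
    openstack server create --flavor vdpa.medium --image <image> \
      --nic port-id=<vdpa-port0-uuid> vdpa-vm0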