
Commit 861341a

Zuul authored and openstack-gerrit committed
Merge "docs: Document virtio-net multiqueue"
2 parents b47f356 + 9515731 commit 861341a

File tree

1 file changed (+90, -0 lines)

doc/source/admin/networking.rst

Lines changed: 90 additions & 0 deletions
@@ -199,3 +199,93 @@ As with the L2-type networks, this configuration will ensure instances using
one or more L3-type networks must be scheduled on host cores from NUMA node 0.
It is also possible to define more than one NUMA node, in which case the
instance must be split across these nodes.

virtio-net Multiqueue
---------------------

.. versionadded:: 12.0.0 (Liberty)

.. versionchanged:: 24.0.0 (Xena)

   Support for configuring multiqueue via the ``hw:vif_multiqueue_enabled``
   flavor extra spec was introduced in the Xena (24.0.0) release.

.. important::

   The functionality described below is currently only supported by the
   libvirt/KVM driver.

Virtual NICs using the virtio-net driver support the multiqueue feature. By
default, these vNICs will only use a single virtio-net TX/RX queue pair,
meaning guests will not transmit or receive packets in parallel. As a result,
the scale of the protocol stack in a guest may be restricted, since network
performance will not scale as the number of vCPUs increases and per-queue data
processing limits in the underlying vSwitch are encountered. The solution to
this issue is to enable virtio-net multiqueue, which allows guest instances to
increase total network throughput by scaling the number of receive and
transmit queue pairs with CPU count.

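Inside a running guest, you can see the effect of this default by inspecting
the vNIC's channel configuration with :command:`ethtool`. This is an
illustrative check only; ``eth0`` is an example device name. With multiqueue
disabled, the combined channel count reported under the current settings is
typically 1:

.. code-block:: bash

   $ ethtool -l eth0
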
Multiqueue virtio-net isn't always necessary, but it can provide a significant
performance benefit when:

- Traffic packets are relatively large.
- The guest is active on many connections at the same time, with traffic
  running between guests, guest to host, or guest to an external system.
- The number of queues is equal to the number of vCPUs. This is because
  multi-queue support optimizes RX interrupt affinity and TX queue selection in
  order to make a specific queue private to a specific vCPU.

However, while the virtio-net multiqueue feature will often provide a welcome
performance benefit, it has some limitations and therefore should not be
unconditionally enabled:

- Enabling virtio-net multiqueue increases the total network throughput, but it
  also increases CPU consumption.
- Enabling virtio-net multiqueue in the host QEMU config does not enable the
  functionality in the guest OS. The guest OS administrator needs to manually
  turn it on for each guest NIC that requires this feature, using
  :command:`ethtool`.
- If the number of vNICs in a guest instance is proportional to the number of
  vCPUs, enabling the multiqueue feature is less important.

Having considered these points, multiqueue can be enabled or explicitly
disabled using either the :nova:extra-spec:`hw:vif_multiqueue_enabled` flavor
extra spec or equivalent ``hw_vif_multiqueue_enabled`` image metadata property.
For example, to enable virtio-net multiqueue for a chosen flavor:

.. code-block:: bash

   $ openstack flavor set --property hw:vif_multiqueue_enabled=true $FLAVOR

Alternatively, to explicitly disable multiqueue for a chosen image:

.. code-block:: bash

   $ openstack image set --property hw_vif_multiqueue_enabled=false $IMAGE

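If you want to confirm that the property has been applied, one option is to
inspect the flavor extra specs or image properties. This is an illustrative
check rather than a required step; ``$FLAVOR`` and ``$IMAGE`` are placeholders
as above:

.. code-block:: bash

   $ openstack flavor show $FLAVOR -c properties
   $ openstack image show $IMAGE -c properties
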
.. note::

   If both the flavor extra spec and image metadata property are provided,
   their values must match or an error will be raised.

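After booting an instance with multiqueue enabled, you can optionally confirm
on the compute host that the generated libvirt guest XML requests multiple
queues; the vNIC's ``<driver>`` sub-element typically carries a ``queues``
attribute. This is an illustrative check; ``$SERVER`` is a placeholder for the
libvirt domain name or UUID of the instance:

.. code-block:: bash

   $ virsh dumpxml $SERVER | grep queues
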
Once the guest has started, you must enable multiqueue using
:command:`ethtool`. For example:

.. code-block:: bash

   $ ethtool -L $devname combined $N

where ``$devname`` is the name of the network device, and ``$N`` is the number
of TX/RX queue pairs to configure, corresponding to the number of instance
vCPUs. Alternatively, you can configure this persistently using udev. For
example, to configure four TX/RX queue pairs for network device ``eth0``:

.. code-block:: bash

   # cat /etc/udev/rules.d/50-ethtool.rules
   ACTION=="add", SUBSYSTEM=="net", NAME=="eth0", RUN+="/sbin/ethtool -L eth0 combined 4"

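After the change has been applied, either directly or by the udev rule at the
next device add event, you can re-check the channel configuration; the
combined channel count in use should now match the requested number of queue
pairs. Again, ``eth0`` is an example device name:

.. code-block:: bash

   $ ethtool -l eth0
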
For more information on this feature, refer to the `original spec`__.

.. __: https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html
