Commit 457c39d

Zuul authored and openstack-gerrit committed
Merge "enable blocked VDPA move operations" into stable/xena
2 parents d0d474f + c3092e3 commit 457c39d

File tree

5 files changed: +385 −26 lines


doc/source/admin/index.rst

Lines changed: 1 addition & 0 deletions
@@ -114,6 +114,7 @@ instance for these kind of workloads.
     virtual-gpu
     file-backed-memory
     ports-with-resource-requests
+    vdpa
     virtual-persistent-memory
     emulated-tpm
     uefi

doc/source/admin/vdpa.rst

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@ (new file)

============================
Using ports vnic_type='vdpa'
============================

.. versionadded:: 23.0.0 (Wallaby)

   Introduced support for vDPA.

.. important::

   The functionality described below is only supported by the
   libvirt/KVM virt driver.

The kernel vDPA (virtio Data Path Acceleration) framework provides a
vendor-independent framework for offloading data-plane processing to
software or hardware virtio device backends. While the kernel vDPA
framework supports many types of vDPA devices, at this time nova only
supports ``virtio-net`` devices using the ``vhost-vdpa`` front-end
driver. Support for ``virtio-blk`` or ``virtio-gpu`` may be added in
the future but is not currently planned for any specific release.
vDPA device tracking
~~~~~~~~~~~~~~~~~~~~

When implementing support for vDPA-based neutron ports, one of the first
decisions nova had to make was how to model the availability of vDPA
devices and the capability to virtualize them. As the initial use case
for this technology was to offload networking to hardware-offloaded OVS
via neutron ports, the decision was made to extend the existing PCI
tracker, which is used for SR-IOV and PCI passthrough, to support vDPA
devices. As a result, a simplifying assumption was made that the parent
device of a vDPA device is an SR-IOV Virtual Function (VF); software-only
vDPA devices, such as those created by the kernel ``vdpa-sim`` sample
module, are therefore not supported.

To make vDPA devices available to be scheduled to guests, the operator
should include the device, using the PCI address or the vendor ID and
product ID of the parent VF, in the PCI ``device_spec``.
See :nova-doc:`pci-passthrough <admin/pci-passthrough>` for details.
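For illustration, a ``device_spec`` entry for such parent VFs might look
like the ``nova.conf`` fragment below. The PCI address and physical
network name are placeholders, not values from this change; substitute
the ones matching your own hardware and neutron configuration:

```ini
[pci]
# Match every VF of a hypothetical NIC at PCI bus 0000:65:00 and tag it
# with the physical network its ports are attached to. Replace both
# values to suit your deployment.
device_spec = {"address": "0000:65:00.*", "physical_network": "physnet1"}
```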
Nova will not create the VFs or vDPA devices automatically. It is
expected that the operator will allocate them before starting the
nova-compute agent. While no specific mechanism is prescribed for doing
this, udev rules or systemd service files are generally the recommended
approach to ensure the devices are created consistently across reboots.
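As a sketch of the systemd approach, a oneshot unit such as the
following could create a vDPA device on a previously configured VF at
boot. The unit name, device name, and PCI address are illustrative; the
``vdpa`` management tool ships with iproute2:

```ini
# /etc/systemd/system/vdpa-dev0.service (hypothetical unit)
[Unit]
Description=Create a vDPA device on the VF at 0000:65:00.2
# Make sure the device exists before the nova-compute agent starts.
Before=openstack-nova-compute.service

[Service]
Type=oneshot
RemainAfterExit=yes
# iproute2's vdpa tool creates a vhost-vdpa device on the parent VF.
ExecStart=/usr/sbin/vdpa dev add name vdpa0 mgmtdev pci/0000:65:00.2

[Install]
WantedBy=multi-user.target
```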
.. note::

   As vDPA is an offload of the data plane only, and not the control
   plane, a vDPA control plane is required to properly support vDPA
   device passthrough. At the time of writing, only hardware-offloaded
   OVS is supported when using vDPA with nova. Because of this, vDPA
   devices cannot be requested using the PCI alias. While nova could
   allow vDPA devices to be requested by the flavor using a PCI alias,
   it would not be able to correctly configure the device, as there
   would be no suitable control plane. For this reason vDPA devices are
   currently only consumable via neutron ports.
Virt driver support
~~~~~~~~~~~~~~~~~~~

Supporting neutron ports with ``vnic_type=vdpa`` depends on the
capability of the virt driver. At this time only the ``libvirt`` virt
driver with KVM is fully supported. QEMU without KVM may also work but
is untested.

vDPA support depends on kernel 5.7+, libvirt 6.9.0+ and QEMU 5.1+.
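The minimum-version requirement above reduces to a tuple comparison. The
helper below is an illustrative sketch, not part of nova, that checks
reported component versions against those minimums:

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '6.9.0' into a comparable tuple.

    A real implementation would also strip suffixes such as
    '-generic' from kernel release strings before parsing.
    """
    return tuple(int(part) for part in version.split("."))

# Minimum versions required for vDPA, per the documentation above.
VDPA_MINIMUMS = {
    "kernel": parse_version("5.7"),
    "libvirt": parse_version("6.9.0"),
    "qemu": parse_version("5.1"),
}

def supports_vdpa(kernel: str, libvirt: str, qemu: str) -> bool:
    """Return True if every reported version meets its vDPA minimum."""
    reported = {"kernel": kernel, "libvirt": libvirt, "qemu": qemu}
    return all(
        parse_version(reported[name]) >= minimum
        for name, minimum in VDPA_MINIMUMS.items()
    )
```

For example, ``supports_vdpa("5.15.0", "8.0.0", "7.2.0")`` is true,
while a host on kernel 5.4 fails the check regardless of its libvirt
and QEMU versions.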
vDPA lifecycle operations
~~~~~~~~~~~~~~~~~~~~~~~~~

At this time vDPA ports can only be added to a VM when it is first
created. To do this the normal SR-IOV workflow is used, whereby the port
is first created in neutron and then passed into nova as part of the
server create request.

.. code-block:: bash

   openstack port create --network <my network> --vnic-type vdpa vdpa-port
   openstack server create --flavor <my-flavor> --image <my-image> --port <vdpa-port uuid> vdpa-vm

When vDPA support was first introduced, no move operations were
supported. As this documentation was added in the change that enabled
some move operations, the following should be interpreted as both a
retrospective and a forward-looking viewpoint, and treated as a living
document which will be updated as functionality evolves.

* 23.0.0: initial support is added for creating a VM with vDPA ports;
  move operations are blocked in the API but implemented in code.
* 26.0.0: support for all move operations except live migration is
  tested and the API blocks are removed.
* 25.x.y: (planned) API block removal backported to stable/yoga.
* 24.x.y: (planned) API block removal backported to stable/xena.
* 23.x.y: (planned) API block removal backported to stable/wallaby.
* 26.0.0: (in progress) interface attach/detach, suspend/resume and
  hot plug live migration are implemented to fully support all
  lifecycle operations on instances with vDPA ports.

.. note::

   The ``(planned)`` and ``(in progress)`` qualifiers will be removed
   when those items are completed. If your current version of this
   document contains those qualifiers then those lifecycle operations
   are unsupported.

nova/compute/api.py

Lines changed: 0 additions & 8 deletions
@@ -4029,9 +4029,6 @@ def _validate_host_for_cold_migrate(
     # finally split resize and cold migration into separate code paths
     @block_extended_resource_request
     @block_port_accelerators()
-    # FIXME(sean-k-mooney): Cold migrate and resize to different hosts
-    # probably works but they have not been tested so block them for now
-    @reject_vdpa_instances(instance_actions.RESIZE)
     @block_accelerators()
     @check_instance_lock
     @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED])
@@ -4250,10 +4247,7 @@ def _allow_resize_to_same_host(self, cold_migrate, instance):
         allow_same_host = CONF.allow_resize_to_same_host
         return allow_same_host

-    # FIXME(sean-k-mooney): Shelve works but unshelve does not due to bug
-    # #1851545, so block it for now
     @block_port_accelerators()
-    @reject_vdpa_instances(instance_actions.SHELVE)
     @reject_vtpm_instances(instance_actions.SHELVE)
     @block_accelerators(until_service=54)
     @check_instance_lock
@@ -5391,8 +5385,6 @@ def live_migrate_abort(self, context, instance, migration_id,

     @block_extended_resource_request
     @block_port_accelerators()
-    # FIXME(sean-k-mooney): rebuild works but we have not tested evacuate yet
-    @reject_vdpa_instances(instance_actions.EVACUATE)
     @reject_vtpm_instances(instance_actions.EVACUATE)
     @block_accelerators(until_service=SUPPORT_ACCELERATOR_SERVICE_FOR_REBUILD)
     @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
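The ``@reject_vdpa_instances`` decorators removed above follow a common
nova pattern: a decorator that inspects the instance before the API
method runs and raises if the operation is blocked. The sketch below is
a simplified illustration of that pattern, not nova's actual
implementation; the exception class and the ``has_vdpa_port`` attribute
are stand-ins for how nova really inspects an instance's network info:

```python
import functools


class OperationNotSupportedForVDPAInterface(Exception):
    """Illustrative stand-in for nova's real exception class."""

    def __init__(self, instance_uuid, operation):
        super().__init__(
            f"Operation {operation} is not supported for instance "
            f"{instance_uuid} because it has vDPA interfaces attached.")


def reject_vdpa_instances(operation):
    """Decorator factory: block `operation` on instances with vDPA ports.

    Assumes the wrapped API method takes (self, context, instance, ...)
    and that `instance` exposes a `has_vdpa_port` attribute -- a
    simplification for illustration purposes.
    """
    def outer(func):
        @functools.wraps(func)
        def inner(self, context, instance, *args, **kwargs):
            if getattr(instance, "has_vdpa_port", False):
                raise OperationNotSupportedForVDPAInterface(
                    instance.uuid, operation)
            return func(self, context, instance, *args, **kwargs)
        return inner
    return outer
```

Because the check lives entirely in the API-layer decorator, deleting
the decorator, as this commit does for RESIZE, SHELVE and EVACUATE,
unblocks the operation without touching the underlying compute code.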
