
Commit f4f0f8e

openstack-gerrit authored and Zuul committed
Merge "docs: Remove duplicated PCI passthrough extra spec info"
2 parents 232fa8b + c5ebaef, commit f4f0f8e

File tree

2 files changed: +94 -105 lines

doc/source/admin/pci-passthrough.rst

Lines changed: 94 additions & 32 deletions
@@ -16,11 +16,12 @@ different guests. In the case of PCI passthrough, the full physical device is
 assigned to only one guest and cannot be shared.
 
 PCI devices are requested through flavor extra specs, specifically via the
-``pci_passthrough:alias=<alias>`` flavor extra spec. This guide demonstrates
-how to enable PCI passthrough for a type of PCI device with a vendor ID of
-``8086`` and a product ID of ``154d`` - an Intel X520 Network Adapter - by
-mapping them to the alias ``a1``. You should adjust the instructions for other
-devices with potentially different capabilities.
+:nova:extra-spec:`pci_passthrough:alias` flavor extra spec.
+This guide demonstrates how to enable PCI passthrough for a type of PCI device
+with a vendor ID of ``8086`` and a product ID of ``154d`` - an Intel X520
+Network Adapter - by mapping them to the alias ``a1``.
+You should adjust the instructions for other devices with potentially different
+capabilities.
 
 .. note::
 
@@ -50,9 +51,12 @@ devices with potentially different capabilities.
 Nova will ignore PCI devices reported by the hypervisor if the address is
 outside of these ranges.
 
-Configure host (Compute)
+Enabling PCI passthrough
 ------------------------
 
+Configure compute host
+~~~~~~~~~~~~~~~~~~~~~~
+
 To enable PCI passthrough on an x86, Linux-based compute node, the following
 are required:
 
@@ -83,9 +87,8 @@ passthrough`__.
 
 .. __: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/
 
-
-Configure ``nova-compute`` (Compute)
-------------------------------------
+Configure ``nova-compute``
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Once PCI passthrough has been configured for the host, :program:`nova-compute`
 must be configured to allow the PCI device to pass through to VMs. This is done
@@ -115,9 +118,10 @@ In addition, it is necessary to configure the :oslo.config:option:`pci.alias`
 option, which is a JSON-style configuration option that allows you to map a
 given device type, identified by the standard PCI ``vendor_id`` and (optional)
 ``product_id`` fields, to an arbitrary name or *alias*. This alias can then be
-used to request a PCI device using the ``pci_passthrough:alias=<alias>`` flavor
-extra spec, as discussed previously. For our sample device with a vendor ID of
-``0x8086`` and a product ID of ``0x154d``, this would be:
+used to request a PCI device using the :nova:extra-spec:`pci_passthrough:alias`
+flavor extra spec, as discussed previously.
+For our sample device with a vendor ID of ``0x8086`` and a product ID of
+``0x154d``, this would be:
 
 .. code-block:: ini
 
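The hunk above describes ``pci.alias`` as a JSON-style option. As a quick illustration (this is not nova's config loader, just Python's ``json`` module applied to the sample alias from this guide), each alias value is a JSON object whose ``name`` field is what flavors later reference:

```python
import json

# Sample alias entry from this guide (Intel X520: vendor 8086, product 154d).
# Parsing it with the stdlib json module is illustrative only; nova performs
# its own validation of pci.alias entries.
alias_value = (
    '{ "vendor_id":"8086", "product_id":"154d",'
    ' "device_type":"type-PF", "name":"a1" }'
)

alias = json.loads(alias_value)
assert alias["name"] == "a1"          # referenced later via the flavor extra spec
assert alias["vendor_id"] == "8086"   # PCI vendor ID, hex digits without "0x"
print(alias["name"], alias["vendor_id"], alias["product_id"])
```

Note that the ``vendor_id``/``product_id`` values inside the JSON omit the ``0x`` prefix used when discussing the IDs in prose.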
@@ -152,9 +156,8 @@ Refer to :oslo.config:option:`pci.alias` for syntax information.
 
 Once configured, restart the :program:`nova-compute` service.
 
-
-Configure ``nova-scheduler`` (Controller)
------------------------------------------
+Configure ``nova-scheduler``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The :program:`nova-scheduler` service must be configured to enable the
 ``PciPassthroughFilter``. To do this, add this filter to the list of filters
@@ -170,11 +173,8 @@ specified in :oslo.config:option:`filter_scheduler.enabled_filters` and set
 
 Once done, restart the :program:`nova-scheduler` service.
 
-
-.. _pci-passthrough-alias:
-
-Configure ``nova-api`` (Controller)
------------------------------------
+Configure ``nova-api``
+~~~~~~~~~~~~~~~~~~~~~~
 
 It is necessary to also configure the :oslo.config:option:`pci.alias` config
 option on the controller. This configuration should match the configuration
@@ -186,13 +186,14 @@ found on the compute nodes. For example:
    alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1", "numa_policy":"preferred" }
 
 Refer to :oslo.config:option:`pci.alias` for syntax information.
-Refer to :ref:`Affinity <pci_numa_affinity_policy>` for ``numa_policy`` information.
+Refer to :ref:`Affinity <pci-numa-affinity-policy>` for ``numa_policy``
+information.
 
 Once configured, restart the :program:`nova-api` service.
 
 
-Configure a flavor (API)
-------------------------
+Configuring a flavor or image
+-----------------------------
 
 Once the alias has been configured, it can be used for a flavor extra spec.
 For example, to request two of the PCI devices referenced by alias ``a1``, run:
@@ -202,15 +203,76 @@ For example, to request two of the PCI devices referenced by alias ``a1``, run:
    $ openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"
 
 For more information about the syntax for ``pci_passthrough:alias``, refer to
-:ref:`Flavors <extra-spec-pci-passthrough>`.
-
-
-Create instances with PCI passthrough devices
----------------------------------------------
-
-The :program:`nova-scheduler` service selects a destination host that has PCI
-devices available that match the ``alias`` specified in the flavor.
+:doc:`the documentation </configuration/extra-specs>`.
+
+
+.. _pci-numa-affinity-policy:
+
+PCI-NUMA affinity policies
+--------------------------
+
+By default, the libvirt driver enforces strict NUMA affinity for PCI devices,
+be they PCI passthrough devices or neutron SR-IOV interfaces. This means that
+by default a PCI device must be allocated from the same host NUMA node as at
+least one of the instance's CPUs. This isn't always necessary, however, and you
+can configure this policy using the
+:nova:extra-spec:`hw:pci_numa_affinity_policy` flavor extra spec or equivalent
+image metadata property. There are four possible values allowed:
+
+**required**
+  This policy means that nova will boot instances with PCI devices **only**
+  if at least one of the NUMA nodes of the instance is associated with these
+  PCI devices. It means that if NUMA node info for some PCI devices could not
+  be determined, those PCI devices wouldn't be consumable by the instance.
+  This provides maximum performance.
+
+**socket**
+  This policy means that the PCI device must be affined to the same host
+  socket as at least one of the guest NUMA nodes. For example, consider a
+  system with two sockets, each with two NUMA nodes, numbered node 0 and node
+  1 on socket 0, and node 2 and node 3 on socket 1. There is a PCI device
+  affined to node 0. A PCI instance with two guest NUMA nodes and the
+  ``socket`` policy can be affined to either:
+
+  * node 0 and node 1
+  * node 0 and node 2
+  * node 0 and node 3
+  * node 1 and node 2
+  * node 1 and node 3
+
+  The instance cannot be affined to node 2 and node 3, as neither of those
+  are on the same socket as the PCI device. If the other nodes are consumed
+  by other instances and only nodes 2 and 3 are available, the instance
+  will not boot.
+
+**preferred**
+  This policy means that ``nova-scheduler`` will choose a compute host
+  with minimal consideration for the NUMA affinity of PCI devices.
+  ``nova-compute`` will attempt a best effort selection of PCI devices
+  based on NUMA affinity, however, if this is not possible then
+  ``nova-compute`` will fall back to scheduling on a NUMA node that is not
+  associated with the PCI device.
+
+**legacy**
+  This is the default policy and it describes the current nova behavior.
+  Usually we have information about association of PCI devices with NUMA
+  nodes. However, some PCI devices do not provide such information. The
+  ``legacy`` value will mean that nova will boot instances with PCI device
+  if either:
+
+  * The PCI device is associated with at least one NUMA node on which the
+    instance will be booted
+
+  * There is no information about PCI-NUMA affinity available
+
+For example, to configure a flavor to use the ``preferred`` PCI NUMA affinity
+policy for any neutron SR-IOV interfaces attached by the user:
 
 .. code-block:: console
 
-   # openstack server create --flavor m1.large --image cirros-0.3.5-x86_64-uec --wait test-pci
+   $ openstack flavor set $FLAVOR \
+     --property hw:pci_numa_affinity_policy=preferred
+
+You can also configure this for PCI passthrough devices by specifying the
+policy in the alias configuration via :oslo.config:option:`pci.alias`. For more
+information, refer to :oslo.config:option:`the documentation <pci.alias>`.
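The ``socket`` example in the added text above can be checked mechanically. The following sketch (illustrative only, not nova's placement code) enumerates the two-node guest placements on the described topology and keeps those with at least one node on the device's socket, reproducing the five allowed pairs:

```python
from itertools import combinations

# Topology from the example: sockets 0 and 1, two host NUMA nodes each,
# with a PCI device affined to host node 0. This is a sketch of the
# ``socket`` policy's rule, not nova's actual scheduling code.
socket_of = {0: 0, 1: 0, 2: 1, 3: 1}   # host NUMA node -> socket
device_socket = socket_of[0]            # device is affined to node 0

# A two-node guest placement satisfies ``socket`` if at least one chosen
# host node is on the same socket as the device.
allowed = [
    pair for pair in combinations(sorted(socket_of), 2)
    if any(socket_of[node] == device_socket for node in pair)
]
print(allowed)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)] - (2, 3) excluded
```

The only rejected placement is nodes 2 and 3, matching the text: both sit on socket 1 while the device sits on socket 0.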

doc/source/user/flavors.rst

Lines changed: 0 additions & 73 deletions
@@ -184,61 +184,6 @@ Performance Monitoring Unit (vPMU)
 required, such workloads should set ``hw:pmu=False``. For most workloads
 the default of unset or enabling the vPMU ``hw:pmu=True`` will be correct.
 
-.. _pci_numa_affinity_policy:
-
-PCI NUMA Affinity Policy
-  For the libvirt driver, you can specify the NUMA affinity policy for
-  PCI passthrough devices and neutron SR-IOV interfaces via the
-  ``hw:pci_numa_affinity_policy`` flavor extra spec or
-  ``hw_pci_numa_affinity_policy`` image property. The allowed values are
-  ``required``, ``socket``, ``preferred`` or ``legacy`` (default).
-
-  **required**
-    This value will mean that nova will boot instances with PCI devices
-    **only** if at least one of the NUMA nodes of the instance is associated
-    with these PCI devices. It means that if NUMA node info for some PCI
-    devices could not be determined, those PCI devices wouldn't be consumable
-    by the instance. This provides maximum performance.
-
-  **socket**
-    This means that the PCI device must be affined to the same host socket as
-    at least one of the guest NUMA nodes. For example, consider a system with
-    two sockets, each with two NUMA nodes, numbered node 0 and node 1 on
-    socket 0, and node 2 and node 3 on socket 1. There is a PCI device
-    affined to node 0. An PCI instance with two guest NUMA nodes and the
-    ``socket`` policy can be affined to either:
-
-    * node 0 and node 1
-    * node 0 and node 2
-    * node 0 and node 3
-    * node 1 and node 2
-    * node 1 and node 3
-
-    The instance cannot be affined to node 2 and node 3, as neither of those
-    are on the same socket as the PCI device. If the other nodes are consumed
-    by other instances and only nodes 2 and 3 are available, the instance
-    will not boot.
-
-  **preferred**
-    This value will mean that ``nova-scheduler`` will choose a compute host
-    with minimal consideration for the NUMA affinity of PCI devices.
-    ``nova-compute`` will attempt a best effort selection of PCI devices
-    based on NUMA affinity, however, if this is not possible then
-    ``nova-compute`` will fall back to scheduling on a NUMA node that is not
-    associated with the PCI device.
-
-  **legacy**
-    This is the default value and it describes the current nova behavior.
-    Usually we have information about association of PCI devices with NUMA
-    nodes. However, some PCI devices do not provide such information. The
-    ``legacy`` value will mean that nova will boot instances with PCI device
-    if either:
-
-    * The PCI device is associated with at least one NUMA nodes on which the
-      instance will be booted
-
-    * There is no information about PCI-NUMA affinity available
-
 .. _extra-specs-memory-encryption:
 
 Hardware encryption of guest memory
@@ -251,24 +196,6 @@ Hardware encryption of guest memory
    $ openstack flavor set FLAVOR-NAME \
      --property hw:mem_encryption=True
 
-.. _extra-spec-pci-passthrough:
-
-PCI passthrough
-  You can assign PCI devices to a guest by specifying them in the flavor.
-
-  .. code:: console
-
-     $ openstack flavor set FLAVOR-NAME \
-       --property pci_passthrough:alias=ALIAS:COUNT
-
-  Where:
-
-  - ALIAS: (string) The alias which correspond to a particular PCI device class
-    as configured in the nova configuration file (see
-    :oslo.config:option:`pci.alias`).
-  - COUNT: (integer) The amount of PCI devices of type ALIAS to be assigned to
-    a guest.
-
 .. _extra-specs-hiding-hypervisor-signature:
 
 Hiding hypervisor signature
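The ``ALIAS:COUNT`` syntax documented in the deleted block above (now covered by the extra-specs reference) is simple enough to sketch. This hypothetical helper, not nova's actual parser, splits a single extra-spec value such as ``a1:2`` into its alias and device count:

```python
# Sketch of the ALIAS:COUNT extra-spec value format removed above,
# e.g. "pci_passthrough:alias"="a1:2" requests two devices of alias "a1".
# parse_alias_spec is a hypothetical helper for illustration; nova's real
# parser also handles comma-separated lists of alias specs.
def parse_alias_spec(value: str) -> tuple[str, int]:
    alias, _, count = value.partition(":")
    return alias, int(count) if count else 1

print(parse_alias_spec("a1:2"))  # ('a1', 2)
print(parse_alias_spec("a1"))    # ('a1', 1) - count defaults to one device
```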
