Each PCI device and each PF is a separate RP in Placement, and the
scheduler allocates them specifically, so the PCI filtering and
claiming code also needs to handle these devices individually.
Previously nova pooled PCI devices together if they had the same
device_spec, device type, and NUMA node. This is changed so that VFs
are now only pooled together if they also share the same parent PF.
Fortunately nova already supported consuming devices for a single
InstancePCIRequest from multiple PCI pools, so this change does not
affect the device consumption code path.
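As a rough illustration of the new pooling rule, consider the sketch
below. It is not nova's actual PciDeviceStats code; the field names
and key structure are simplified assumptions. The point is that the
parent PF address becomes part of the pool key for VFs:

```python
# Simplified sketch of PCI device pooling. NOT nova's actual
# implementation; field names here are illustrative assumptions.

def pool_key(dev):
    """Build the key that decides which pool a device lands in."""
    key = {
        'vendor_id': dev['vendor_id'],
        'product_id': dev['product_id'],
        'numa_node': dev['numa_node'],
        'dev_type': dev['dev_type'],  # e.g. 'type-PF' or 'type-VF'
    }
    # The change described above: VFs are only pooled together if
    # they share the same parent PF, so the parent address becomes
    # part of the pool key for VFs.
    if dev['dev_type'] == 'type-VF':
        key['parent_addr'] = dev['parent_addr']
    return tuple(sorted(key.items()))


devs = [
    {'address': '0000:81:00.1', 'parent_addr': '0000:81:00.0',
     'dev_type': 'type-VF', 'vendor_id': '8086',
     'product_id': '1515', 'numa_node': 0},
    {'address': '0000:81:01.1', 'parent_addr': '0000:81:01.0',
     'dev_type': 'type-VF', 'vendor_id': '8086',
     'product_id': '1515', 'numa_node': 0},
]
# Two otherwise identical VFs from different parent PFs now land
# in different pools.
assert pool_key(devs[0]) != pool_key(devs[1])
```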
The test_live_migrate_server_with_neutron test needed to be changed.
Originally this test used a compute with the following config:
* PF 81.00.0
** VFs 81.00.[1-4]
* PF 81.01.0
** VFs 81.01.[1-4]
* PF 82.00.0
The test then booted a VM that requested one VF and one PF. This
request has two substantially different solutions:
1) allocate the VF from under 81.00, thereby consuming the 81.00.0 PF,
and allocate the 82.00.0 PF.
This is what the test asserted would happen.
2) allocate the VF from under 81.00, thereby consuming the 81.00.0 PF,
and allocate the 81.01.0 PF, thereby consuming all the VFs under it.
This results in a different number of free devices than #1), as the
sketch below illustrates.
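To make the difference concrete, here is a small, self-contained
sketch (a hypothetical helper, not nova or test code) counting the
devices left free under each solution, given that consuming a VF makes
its parent PF unavailable and consuming a PF makes all its child VFs
unavailable:

```python
# Hypothetical sketch of the two allocation outcomes; not nova code.

TREE = {
    '81.00.0': ['81.00.1', '81.00.2', '81.00.3', '81.00.4'],
    '81.01.0': ['81.01.1', '81.01.2', '81.01.3', '81.01.4'],
    '82.00.0': [],
}

def free_after(vf, pf):
    """Return the devices still allocatable after taking one VF and
    one PF from TREE."""
    unavailable = {vf, pf}
    for parent, children in TREE.items():
        if vf in children:
            unavailable.add(parent)       # VF consumes its parent PF
        if pf == parent:
            unavailable.update(children)  # PF consumes all its VFs
    all_devs = set(TREE) | {c for cs in TREE.values() for c in cs}
    return all_devs - unavailable

# Solution #1: VF from under 81.00, PF 82.00.0 -> 8 devices stay free.
print(len(free_after('81.00.1', '82.00.0')))  # 8
# Solution #2: VF from under 81.00, PF 81.01.0 -> 4 devices stay free.
print(len(free_after('81.00.1', '81.01.0')))  # 4
```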
AFAIK nova does not have any implemented preference for consuming PFs
that have no VFs. The test only worked by chance (some internal device
and pool ordering made it so). However, once the PCI pools were split
per parent PF, nova started choosing solution #2), making the test
fail. As both solutions are equally good from the perspective of
nova's scheduling contract, I don't consider this a behavior change.
Therefore the test is updated so that it no longer creates a situation
where two different scheduling solutions are possible.
blueprint: pci-device-tracking-in-placement
Change-Id: I4b67cca3807fbda9e9b07b220a28e331def57624