Commit 7459213
Create RequestGroups from InstancePCIRequests

This patch adds logic to the RequestSpec creation to translate
InstancePCIRequests created from Flavor to RequestGroup objects. This is
just the first step to provide the full scheduling support for PCI in
placement. The missing pieces will come in subsequent patches:

* support user defined resource class and traits in the PCI alias
* filter the allocation candidates returned by placement during
  PciDeviceStats.support_requests() during scheduling and PCI claim
* make sure that the PCI claim always consumes the device that was
  allocated during scheduling

blueprint: pci-device-tracking-in-placement
Change-Id: Ied63221451e2412d8ee2d6b0ba6ec9cd796878b7
1 parent: fc05e91

File tree

3 files changed: +517 −33 lines


nova/objects/request_spec.py

Lines changed: 80 additions & 0 deletions

@@ -473,6 +473,84 @@ def to_legacy_filter_properties_dict(self):
             filt_props['requested_destination'] = self.requested_destination
         return filt_props
 
+    @staticmethod
+    def _rc_from_request(pci_request: 'objects.InstancePCIRequest') -> str:
+        # FIXME(gibi): refactor this and the copy of the logic from the
+        # translator to a common function
+        # FIXME(gibi): handle directly requested resource_class
+        # ??? can there be more than one spec???
+        spec = pci_request.spec[0]
+        rc = f"CUSTOM_PCI_{spec['vendor_id']}_{spec['product_id']}".upper()
+        return rc
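The naming convention in `_rc_from_request` can be illustrated with a standalone sketch. Note this is not Nova code: the helper name `rc_from_spec` and the vendor/product IDs below are made-up examples chosen to mirror the logic above.

```python
# Standalone sketch of the resource class naming used by _rc_from_request:
# a PCI device's vendor and product IDs are folded into a custom placement
# resource class name. The IDs here are illustrative only.
def rc_from_spec(spec: dict) -> str:
    return f"CUSTOM_PCI_{spec['vendor_id']}_{spec['product_id']}".upper()

print(rc_from_spec({'vendor_id': '8086', 'product_id': '154d'}))
# CUSTOM_PCI_8086_154D
```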
+
+    # This is here temporarily until the PCI placement scheduling is under
+    # implementation. When that is done there will be a config option
+    # [scheduler]pci_in_placement to configure this. Now we add this as a
+    # function to allow tests to selectively enable the WIP feature.
+    @staticmethod
+    def _pci_in_placement_enabled():
+        return False
+
+    def _generate_request_groups_from_pci_requests(self):
+        if not self._pci_in_placement_enabled():
+            return
+
+        for pci_request in self.pci_requests.requests:
+            if pci_request.source == objects.InstancePCIRequest.NEUTRON_PORT:
+                # TODO(gibi): Handle neutron based PCI requests here in a
+                # later cycle.
+                continue
+
+            # The goal is to translate InstancePCIRequest to RequestGroup.
+            # Each InstancePCIRequest can be fulfilled from the whole RP
+            # tree. And a flavor based InstancePCIRequest might request more
+            # than one device (if count > 1) and those devices still need to
+            # be placed independently to RPs. So we have two options to
+            # translate an InstancePCIRequest object to RequestGroup objects:
+            # 1) put all the requested resources from every
+            #    InstancePCIRequest into the unsuffixed RequestGroup.
+            # 2) generate a separate RequestGroup for each individual device
+            #    request
+            #
+            # While #1) feels simpler it has a big downside. The unsuffixed
+            # group will have a bulk resource provider mapping returned from
+            # placement. So there would be no easy way to later untangle
+            # which InstancePCIRequest is fulfilled by which RP, and
+            # therefore which PCI device should be used to allocate a
+            # specific device on the hypervisor during the PCI claim. Note
+            # that there could be multiple PF RPs providing the same type of
+            # resources but still we need to make sure that if a resource is
+            # allocated in placement from a specific RP (representing a
+            # physical device) then the PCI claim consumes resources from
+            # the same physical device.
+            #
+            # So we need at least a separate RequestGroup per
+            # InstancePCIRequest. However, for an InstancePCIRequest(count=2)
+            # that would mean a RequestGroup(RC:2), which would mean both
+            # resources should come from the same RP in placement. This is
+            # impossible for PF or PCI type requests and over-restrictive
+            # for VF type requests. Therefore we need to generate one
+            # RequestGroup per requested device. So for
+            # InstancePCIRequest(count=2) we need to generate two separate
+            # RequestGroup(RC:1) objects.
+
+            # FIXME(gibi): make sure that if we have count=2 requests then
+            # group_policy=none is in the request as group_policy=isolate
+            # would prevent allocating two VFs from the same PF.
+
+            for i in range(pci_request.count):
+                rg = objects.RequestGroup(
+                    use_same_provider=True,
+                    # we need to generate a unique ID for each group, so we
+                    # use a counter
+                    requester_id=f"{pci_request.request_id}-{i}",
+                    # as we split count >= 2 requests into independent
+                    # groups, each group will have a resource request of one
+                    resources={
+                        self._rc_from_request(pci_request): 1}
+                    # FIXME(gibi): handle traits requested from the alias
+                )
+                self.requested_resources.append(rg)
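The fan-out described in the comments above can be sketched in isolation. This is not Nova code: `Group`, `fan_out`, and the request id below are simplified stand-ins showing why count=2 becomes two one-resource groups with unique, suffixed requester ids.

```python
# Minimal sketch of splitting a count=N PCI request into N single-resource
# groups. Each group gets a unique requester_id (request id plus a counter
# suffix) so the placement allocation can later be mapped back to one
# individual device. The request id "a0b1" is a made-up example.
from dataclasses import dataclass, field


@dataclass
class Group:
    requester_id: str
    resources: dict = field(default_factory=dict)


def fan_out(request_id: str, count: int, rc: str) -> list:
    # one group per requested device, each asking for exactly one unit
    return [Group(requester_id=f"{request_id}-{i}", resources={rc: 1})
            for i in range(count)]


groups = fan_out("a0b1", 2, "CUSTOM_PCI_8086_154D")
print([g.requester_id for g in groups])  # ['a0b1-0', 'a0b1-1']
```

Keeping each group at a single unit is what allows the two devices to land on different resource providers; a single group of two units would force both onto one RP.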
+
     @classmethod
     def from_components(
             cls, context, instance_uuid, image, flavor,

@@ -539,6 +617,8 @@ def from_components(
         if port_resource_requests:
            spec_obj.requested_resources.extend(port_resource_requests)
 
+        spec_obj._generate_request_groups_from_pci_requests()
+
         # NOTE(gibi): later the scheduler adds more request level params but
         # never overrides existing ones so we can initialize them here.
         if request_level_params is None:
