@@ -19,20 +19,28 @@ features:
   - |
     A new configuration option, ``[compute] cpu_dedicated_set``, has been
     added. This can be used to configure the host CPUs that should be used for
-    ``PCPU`` inventory.
+    ``PCPU`` inventory. Refer to the help text of the ``[compute]
+    cpu_dedicated_set`` config option for more information.
+  - |
+    The ``[compute] cpu_shared_set`` configuration option will now be used to
+    configure the host CPUs that should be used for ``VCPU`` inventory,
+    replacing the deprecated ``vcpu_pin_set`` option. Refer to the help text of
+    the ``[compute] cpu_shared_set`` config option for more information.
   - |
     A new configuration option, ``[workarounds] disable_fallback_pcpu_query``,
     has been added. When creating or moving pinned instances, the scheduler will
     attempt to provide a ``PCPU``-based allocation, but can also fall back to a
     legacy ``VCPU``-based allocation. This fallback behavior is enabled by
     default to ensure it is possible to upgrade without having to modify compute
-    node configuration but it results in an additional request for allocation
+    node configuration, but it results in an additional request for allocation
     candidates from placement. This can have a slight performance impact and is
-    unnecessary on new or upgraded deployments where the compute nodes have been
+    unnecessary on new or upgraded deployments where all compute nodes have been
     correctly configured to report ``PCPU`` inventory. The ``[workarounds]
     disable_fallback_pcpu_query`` config option can be used to disable this
     fallback allocation candidate request, meaning only ``PCPU``-based
-    allocation candidates will be retrieved.
+    allocation candidates will be retrieved. Refer to the help text of the
+    ``[workarounds] disable_fallback_pcpu_query`` config option for more
+    information.
 deprecations:
   - |
     The ``vcpu_pin_set`` configuration option has been deprecated. You should
@@ -41,8 +49,17 @@ deprecations:
     text of these config options for more information.
 upgrade:
   - |
-    Previously, if ``vcpu_pin_set`` was not defined, the libvirt driver would
-    count all available host CPUs when calculating ``VCPU`` inventory,
-    regardless of whether those CPUs were online or not. The driver will now
-    only report the total number of online CPUs. This should result in fewer
-    build failures on hosts with offlined CPUs.
+    Previously, if the ``vcpu_pin_set`` configuration option was not defined,
+    the libvirt driver would count all available host CPUs when calculating
+    ``VCPU`` inventory, regardless of whether those CPUs were online or not.
+    The driver will now only report the total number of online CPUs. This
+    should result in fewer build failures on hosts with offlined CPUs.
+  - |
+    Previously, if an instance was using the ``isolate`` CPU thread policy on a
+    host with SMT (hyperthreading) enabled, the libvirt driver would fake a
+    non-SMT host by marking the thread sibling(s) for each host CPU used by the
+    instance as reserved and unusable. This is no longer the case. Instead,
+    instances using the policy will be scheduled only to hosts that do not
+    report the ``HW_CPU_HYPERTHREADING`` trait. If you have workloads that
+    require the ``isolate`` policy, you should configure some or all of your
+    hosts to disable SMT.
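
To make the two new ``[compute]`` options in the features notes above concrete, here is a minimal nova.conf sketch for a compute node that serves both pinned and unpinned instances; the CPU ranges are purely illustrative, must match the host's actual topology, and the two sets must not overlap::

    [compute]
    # Host CPUs reserved for pinned instance vCPUs; reported to placement
    # as PCPU inventory.
    cpu_dedicated_set = 4-15

    # Host CPUs shared by unpinned instance vCPUs; reported to placement
    # as VCPU inventory.
    cpu_shared_set = 0-3

A host that only runs pinned workloads would set ``cpu_dedicated_set`` alone, and a host that only runs unpinned workloads would set ``cpu_shared_set`` alone.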
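Once every compute node has been reconfigured to report ``PCPU`` inventory, the fallback query described in the features note above can be disabled. A sketch, assuming the whole deployment has already been converted::

    [workarounds]
    # Only request PCPU-based allocation candidates for pinned instances;
    # skips the extra VCPU-based fallback request to placement.
    disable_fallback_pcpu_query = True

Leave this at its default (``False``) while any compute node still reports pinned CPUs as ``VCPU`` inventory.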
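The deprecation note above points at the same two options as replacements for ``vcpu_pin_set``. A hypothetical before/after for a host whose restricted CPUs should now provide ``PCPU`` inventory; the values are illustrative only::

    # Before (deprecated):
    [DEFAULT]
    vcpu_pin_set = 4-15

    # After:
    [compute]
    cpu_dedicated_set = 4-15

Whether the old ``vcpu_pin_set`` value maps to ``cpu_dedicated_set``, ``cpu_shared_set``, or a mix of both depends on whether the host actually runs pinned instances.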
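The ``isolate`` thread policy note in the upgrade section concerns host capabilities rather than nova.conf: hosts with SMT enabled report the ``HW_CPU_HYPERTHREADING`` trait, so keeping hosts available for ``isolate`` workloads means disabling SMT on some of them. A hedged sketch of one way to do that on a Linux host with a recent kernel, as root (the runtime sysfs switch is not persistent; the ``nosmt`` kernel command-line parameter is the usual permanent approach)::

    # Turn off SMT at runtime; sibling threads are taken offline.
    echo off > /sys/devices/system/cpu/smt/control

    # Confirm SMT is now inactive (prints 0).
    cat /sys/devices/system/cpu/smt/active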