@@ -91,32 +91,31 @@ steps:
 needs to track how many slots are available and used in order to
 avoid attempting to exceed that limit in the hardware.
 
-At the time of writing (September 2019), work is in progress to
-allow QEMU and libvirt to expose the number of slots available on
-SEV hardware; however until this is finished and released, it will
-not be possible for Nova to programmatically detect the correct
-value.
-
-So this configuration option serves as a stop-gap, allowing the
-cloud operator the option of providing this value manually. It may
-later be demoted to a fallback value for cases where the limit
-cannot be detected programmatically, or even removed altogether when
-Nova's minimum QEMU version guarantees that it can always be
-detected.
+Since version 8.0.0, libvirt exposes the maximum number of SEV guests
+which can run concurrently on its host, so the limit is automatically
+detected using this feature.
+
+However, if an older version of libvirt is used, it is not possible for
+Nova to programmatically detect the correct value, and Nova imposes no
+limit. So this configuration option serves as a stop-gap, allowing the
+cloud operator the option of providing this value manually.
+
+This option also allows the cloud operator to set the limit lower than
+the actual hard limit.
 
 .. note::
 
-When deciding whether to use the default of ``None`` or manually
-impose a limit, operators should carefully weigh the benefits
-vs. the risk. The benefits of using the default are a) immediate
-convenience since nothing needs to be done now, and b) convenience
-later when upgrading compute hosts to future versions of Nova,
-since again nothing will need to be done for the correct limit to
-be automatically imposed. However the risk is that until
-auto-detection is implemented, users may be able to attempt to
-launch guests with encrypted memory on hosts which have already
-reached the maximum number of guests simultaneously running with
-encrypted memory. This risk may be mitigated by other limitations
+If libvirt older than 8.0.0 is used, operators should carefully weigh
+the benefits vs. the risk when deciding whether to use the default of
+``None`` or manually impose a limit.
+The benefits of using the default are a) immediate convenience since
+nothing needs to be done now, and b) convenience later when upgrading
+compute hosts to future versions of libvirt, since again nothing will
+need to be done for the correct limit to be automatically imposed.
+However, the risk is that without auto-detection, users may be able
+to attempt to launch guests with encrypted memory on hosts which
+have already reached the maximum number of guests simultaneously
+running with encrypted memory. This risk may be mitigated by other
+limitations
 which operators can impose, for example if the smallest RAM
 footprint of any flavor imposes a maximum number of simultaneously
 running guests which is less than or equal to the SEV limit.
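As an illustration of the stop-gap described above, the manual limit would be set per compute host in ``nova.conf`` via the ``libvirt.num_memory_encrypted_guests`` option from the diff; the value shown here is hypothetical:

```ini
[libvirt]
# Cap the number of concurrent memory-encrypted (SEV) guests on this
# host.  Only needed when libvirt < 8.0.0 cannot report the hardware
# limit itself; may also deliberately be set below the detected limit.
num_memory_encrypted_guests = 15
```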
@@ -221,16 +220,6 @@ features:
 include using ``hw_disk_bus=scsi`` with
 ``hw_scsi_model=virtio-scsi``, or ``hw_disk_bus=sata``.
 
-- QEMU and libvirt cannot yet expose the number of slots available for
-  encrypted guests in the memory controller on SEV hardware. Until
-  this is implemented, it is not possible for Nova to programmatically
-  detect the correct value. As a short-term workaround, operators can
-  optionally manually specify the upper limit of SEV guests for each
-  compute host, via the new
-  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
-  configuration option :ref:`described above
-  <num_memory_encrypted_guests>`.
-
 Permanent limitations
 ~~~~~~~~~~~~~~~~~~~~~
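The auto-detection added in the first hunk relies on libvirt 8.0.0 reporting the SEV guest limit in its domain capabilities XML. A minimal sketch of reading that value, assuming the ``<maxGuests>`` element of the ``<sev>`` feature block (the sample XML and its values are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Illustrative excerpt of the domain capabilities XML that
# libvirt >= 8.0.0 reports for an SEV-capable host.
DOMCAPS_XML = """
<domainCapabilities>
  <features>
    <sev supported='yes'>
      <cbitpos>47</cbitpos>
      <reducedPhysBits>1</reducedPhysBits>
      <maxGuests>15</maxGuests>
      <maxESGuests>15</maxESGuests>
    </sev>
  </features>
</domainCapabilities>
"""

def max_sev_guests(domcaps_xml: str):
    """Return the reported SEV guest limit, or None when the host does
    not expose it (e.g. libvirt older than 8.0.0)."""
    sev = ET.fromstring(domcaps_xml).find("./features/sev")
    if sev is None or sev.get("supported") != "yes":
        return None
    node = sev.find("maxGuests")
    return int(node.text) if node is not None else None

print(max_sev_guests(DOMCAPS_XML))  # 15
```

When the function returns ``None``, the operator-supplied ``num_memory_encrypted_guests`` value is the only remaining way to impose a limit, which is exactly the fallback role the rewritten text describes.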