Commit 49767be

docs: Add SEV guide

This was previously hidden in the hypervisor configuration guide. Make it
a top-level document.

Change-Id: If402522c859c1413f0d90912e357496a0a67c5cf
Signed-off-by: Stephen Finucane <[email protected]>

1 parent c5ebaef commit 49767be

File tree

4 files changed: +273 -277 lines changed

doc/source/admin/configuration/hypervisor-kvm.rst

Lines changed: 0 additions & 265 deletions

@@ -525,271 +525,6 @@ See `the KVM documentation
<https://www.linux-kvm.org/page/Nested_Guests#Limitations>`_ for more
information on these limitations.

.. _amd-sev:

AMD SEV (Secure Encrypted Virtualization)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`Secure Encrypted Virtualization (SEV)`__ is a technology from AMD which
enables the memory for a VM to be encrypted with a key unique to the VM.
SEV is particularly applicable to cloud computing since it can reduce the
amount of trust VMs need to place in the hypervisor and administrator of
their host system.

__ https://developer.amd.com/sev/

Nova supports SEV from the Train release onwards.

Requirements for SEV
--------------------

First the operator will need to ensure the following prerequisites are met:

- At least one of the Nova compute hosts must be AMD hardware capable
  of supporting SEV. It is entirely possible for the compute plane to
  be a mix of hardware which can and cannot support SEV, although as
  per the section on `Permanent limitations`_ below, the maximum
  number of simultaneously running guests with SEV will be limited by
  the quantity and quality of SEV-capable hardware available.

- An appropriately configured software stack on those compute hosts,
  so that the various layers are all SEV ready:

  - kernel >= 4.16
  - QEMU >= 2.12
  - libvirt >= 4.5
  - ovmf >= commit 75b7aa9528bd 2018-07-06
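
One way to verify these prerequisites on a given compute host is to check
whether the kernel and hypervisor stack report SEV support. This is a
sketch only; exact paths and output vary by distribution and kernel
version:

.. code-block:: console

   # Check whether the kvm_amd kernel module has SEV enabled
   # (prints "1" or "Y" on SEV-capable kernels)
   $ cat /sys/module/kvm_amd/parameters/sev

   # Check whether libvirt/QEMU report SEV support for this host
   $ virsh domcapabilities | grep -A 2 '<sev'
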
.. _deploying-sev-capable-infrastructure:

Deploying SEV-capable infrastructure
------------------------------------

In order for users to be able to use SEV, the operator will need to
perform the following steps:

- Ensure that sufficient memory is reserved on the SEV compute hosts
  for host-level services to function correctly at all times. This is
  particularly important when hosting SEV-enabled guests, since they
  pin pages in RAM, preventing any memory overcommit which may be in
  normal operation on other compute hosts.

  It is `recommended`__ to achieve this by configuring an ``rlimit`` at
  the ``/machine.slice`` top-level ``cgroup`` on the host, with all VMs
  placed inside that. (For extreme detail, see `this discussion on the
  spec`__.)

  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-reservation-solutions
  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167
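
  For example, on hosts where all VMs run under the systemd-managed
  ``machine.slice`` cgroup, one way to impose such a ceiling is to set a
  memory limit on that slice. This is a sketch only, assuming systemd
  with cgroup v2; the ``200G`` value is illustrative and must be derived
  from the host's total RAM minus the reservation needed by host
  services:

  .. code-block:: console

     # Illustrative value: cap all VM memory under machine.slice at 200 GiB
     $ systemctl set-property machine.slice MemoryMax=200G
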
  An alternative approach is to configure the
  :oslo.config:option:`reserved_host_memory_mb` option in the
  ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
  maximum number of SEV guests simultaneously running on the host, and
  the details provided in `an earlier version of the AMD SEV spec`__
  regarding memory region sizes, which cover how to calculate it
  correctly.

  __ https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#proposed-change
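
  A minimal sketch of this alternative; the value is illustrative only
  and should be calculated as described in the spec:

  .. code-block:: ini

     [DEFAULT]
     # Illustrative value: reserve 16 GiB for host-level services
     reserved_host_memory_mb = 16384
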
  See `the Memory Locking and Accounting section of the AMD SEV spec`__
  and `previous discussion for further details`__.

  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-locking-and-accounting
  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167

- A cloud administrator will need to define one or more SEV-enabled
  flavors :ref:`as described in the user guide
  <extra-specs-memory-encryption>`, unless it is sufficient for users
  to define SEV-enabled images.
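
  For example, an SEV-enabled flavor could be defined as follows. This
  is a sketch only; the flavor name and sizing are illustrative:

  .. code-block:: console

     # The flavor name and sizing below are illustrative
     $ openstack flavor create --ram 4096 --disk 20 --vcpus 2 m1.sev
     $ openstack flavor set m1.sev --property hw:mem_encryption=True
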
Additionally the cloud operator should consider the following optional
steps:

.. _num_memory_encrypted_guests:

- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
  option in :file:`nova.conf` to represent the number of guests an SEV
  compute node can host concurrently with memory encrypted at the
  hardware level. For example:

  .. code-block:: ini

     [libvirt]
     num_memory_encrypted_guests = 15

  This option exists because on AMD SEV-capable hardware, the memory
  controller has a fixed number of slots for holding encryption keys,
  one per guest. For example, at the time of writing, earlier
  generations of hardware only have 15 slots, thereby limiting the
  number of SEV guests which can be run concurrently to 15. Nova
  needs to track how many slots are available and used in order to
  avoid attempting to exceed that limit in the hardware.

  At the time of writing (September 2019), work is in progress to
  allow QEMU and libvirt to expose the number of slots available on
  SEV hardware; however until this is finished and released, it will
  not be possible for Nova to programmatically detect the correct
  value.

  So this configuration option serves as a stop-gap, allowing the
  cloud operator the option of providing this value manually. It may
  later be demoted to a fallback value for cases where the limit
  cannot be detected programmatically, or even removed altogether when
  Nova's minimum QEMU version guarantees that it can always be
  detected.

  .. note::

     When deciding whether to use the default of ``None`` or manually
     impose a limit, operators should carefully weigh the benefits
     vs. the risk. The benefits of using the default are a) immediate
     convenience since nothing needs to be done now, and b) convenience
     later when upgrading compute hosts to future versions of Nova,
     since again nothing will need to be done for the correct limit to
     be automatically imposed. However the risk is that until
     auto-detection is implemented, users may be able to attempt to
     launch guests with encrypted memory on hosts which have already
     reached the maximum number of guests simultaneously running with
     encrypted memory. This risk may be mitigated by other limitations
     which operators can impose, for example if the smallest RAM
     footprint of any flavor imposes a maximum number of simultaneously
     running guests which is less than or equal to the SEV limit.

- Configure :oslo.config:option:`libvirt.hw_machine_type` on all
  SEV-capable compute hosts to include ``x86_64=q35``, so that all
  x86_64 images use the ``q35`` machine type by default. (Currently
  Nova defaults to the ``pc`` machine type for the ``x86_64``
  architecture, although `it is expected that this will change in the
  future`__.)

  Changing the default from ``pc`` to ``q35`` makes the creation and
  configuration of images by users more convenient by removing the
  need for the ``hw_machine_type`` property to be set to ``q35`` on
  every image for which SEV booting is desired.

  .. caution::

     Consider carefully whether to set this option. It is
     particularly important since a limitation of the implementation
     prevents the user from receiving an error message with a helpful
     explanation if they try to boot an SEV guest when neither this
     configuration option nor the image property are set to select
     a ``q35`` machine type.

     On the other hand, setting it to ``q35`` may have other
     undesirable side-effects on other images which were expecting to
     be booted with ``pc``, so it is suggested to set it on a single
     compute node or aggregate, and perform careful testing of typical
     images before rolling out the setting to all SEV-capable compute
     hosts.

  __ https://bugs.launchpad.net/nova/+bug/1780138
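
  A minimal sketch of this setting in :file:`nova.conf`:

  .. code-block:: ini

     [libvirt]
     hw_machine_type = x86_64=q35
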
Launching SEV instances
-----------------------

Once an operator has covered the above steps, users can launch SEV
instances either by requesting a flavor for which the operator set the
``hw:mem_encryption`` extra spec to ``True``, or by using an image
with the ``hw_mem_encryption`` property set to ``True``.

These do not inherently cause a preference for SEV-capable hardware,
but for now SEV is the only way of fulfilling the requirement for
memory encryption. However in the future, support for other
hardware-level guest memory encryption technology such as Intel MKTME
may be added. If a guest specifically needs to be booted using SEV
rather than any other memory encryption technology, it is possible to
ensure this by adding ``trait:HW_CPU_X86_AMD_SEV=required`` to the
flavor extra specs or image properties.
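
For example, to request memory encryption via an image and additionally
require SEV specifically via a flavor. This is a sketch only; the image
and flavor names are illustrative:

.. code-block:: console

   # "my-sev-image" and "m1.sev" are illustrative names
   $ openstack image set my-sev-image --property hw_mem_encryption=True
   $ openstack flavor set m1.sev \
       --property trait:HW_CPU_X86_AMD_SEV=required
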
In all cases, SEV instances can only be booted from images which have
the ``hw_firmware_type`` property set to ``uefi``, and only when the
machine type is set to ``q35``. This can be set per image by setting
the image property ``hw_machine_type=q35``, or per compute node by
the operator via :oslo.config:option:`libvirt.hw_machine_type` as
explained above.
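
A sketch of setting both image properties at once (the image name is
illustrative):

.. code-block:: console

   # "my-sev-image" is an illustrative name
   $ openstack image set my-sev-image \
       --property hw_firmware_type=uefi \
       --property hw_machine_type=q35
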
Impermanent limitations
-----------------------

The following limitations may be removed in the future as the
hardware, firmware, and various layers of software receive new
features:

- SEV-encrypted VMs cannot yet be live-migrated or suspended,
  therefore they will need to be fully shut down before migrating off
  an SEV host, e.g. if maintenance is required on the host.

- SEV-encrypted VMs cannot contain directly accessible host devices
  (PCI passthrough). So for example mdev vGPU support will not
  currently work. However technologies based on `vhost-user`__ should
  work fine.

  __ https://wiki.qemu.org/Features/VirtioVhostUser

- The boot disk of SEV-encrypted VMs can only be ``virtio``.
  (``virtio-blk`` is typically the default for libvirt disks on x86,
  but can also be explicitly set e.g. via the image property
  ``hw_disk_bus=virtio``.) Valid alternatives for the disk
  include using ``hw_disk_bus=scsi`` with
  ``hw_scsi_model=virtio-scsi``, or ``hw_disk_bus=sata``.
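
  For example, to use the ``virtio-scsi`` alternative on an image (the
  image name is illustrative):

  .. code-block:: console

     # "my-sev-image" is an illustrative name
     $ openstack image set my-sev-image \
         --property hw_disk_bus=scsi \
         --property hw_scsi_model=virtio-scsi
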
- QEMU and libvirt cannot yet expose the number of slots available for
  encrypted guests in the memory controller on SEV hardware. Until
  this is implemented, it is not possible for Nova to programmatically
  detect the correct value. As a short-term workaround, operators can
  optionally manually specify the upper limit of SEV guests for each
  compute host, via the new
  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
  configuration option :ref:`described above
  <num_memory_encrypted_guests>`.

Permanent limitations
---------------------

The following limitations are expected long-term:

- The number of SEV guests allowed to run concurrently will always be
  limited. `On the first generation of EPYC machines it will be
  limited to 15 guests`__; however this limit becomes much higher with
  the second generation (Rome).

  __ https://www.redhat.com/archives/libvir-list/2019-January/msg00652.html

- The operating system running in an encrypted virtual machine must
  contain SEV support.

Non-limitations
---------------

For the sake of eliminating any doubt, the following actions are *not*
expected to be limited when SEV encryption is used:

- Cold migration or shelve, since they power off the VM before the
  operation at which point there is no encrypted memory (although this
  could change since there is work underway to add support for `PMEM
  <https://pmem.io/>`_)

- Snapshot, since it only snapshots the disk

- ``nova evacuate`` (despite the name, more akin to resurrection than
  evacuation), since this is only initiated when the VM is no longer
  running

- Attaching any volumes, as long as they do not require attaching via
  an IDE bus

- Use of spice / VNC / serial / RDP consoles

- `VM guest virtual NUMA (a.k.a. vNUMA)
  <https://www.suse.com/documentation/sles-12/singlehtml/article_vt_best_practices/article_vt_best_practices.html#sec.vt.best.perf.numa.vmguest>`_

For further technical details, see `the nova spec for SEV support`__.

__ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html

Guest agent support
~~~~~~~~~~~~~~~~~~~

doc/source/admin/index.rst

Lines changed: 1 addition & 0 deletions

@@ -116,6 +116,7 @@ instance for these kind of workloads.
    emulated-tpm
    uefi
    secure-boot
+   sev
    managing-resource-providers
    resource-limits