diff --git a/content/en/blog/2024/building-trust-into-os-images-for-coco.md b/content/en/blog/2024/building-trust-into-os-images-for-coco.md index 5b4b603..c090328 100644 --- a/content/en/blog/2024/building-trust-into-os-images-for-coco.md +++ b/content/en/blog/2024/building-trust-into-os-images-for-coco.md @@ -25,7 +25,7 @@ This option is appealing for certain CoCo deployments. If we have a Trusted Exec An expected SEV-SNP launch measurement for Linux direct boot with Qemu can be calculated using trusted artifacts (firmware, kernel & initrd) and a few platform parameters. Please note that the respective kernel/fw components and tools are still being actively developed. The [AMDESE/AMDSEV](https://github.com/AMDESE/AMDSEV/tree/snp-latest) repository provides instructions and pointers to a working set of revisions. -```bash +```console $ sev-snp-measure \ --mode snp \ --vcpus=1 \ @@ -49,7 +49,7 @@ A rootfs can comfortably host the infrastructure components and we can still pac DM-Verity volumes feature a hash tree and a root hash in addition to the actual data. The hash tree can be stored on disk next to the verity volume or as a local file. We'll store the hash-tree as file for brevity and write a string `CoCo` into a file `/coco` on the formatted volume: -```bash +```console $ dd if=/dev/zero of=rootfs.raw bs=1M count=100 $ DEVICE="$(sudo losetup --show -f rootfs.raw)" $ sudo cfdisk "$DEVICE" @@ -77,7 +77,7 @@ $ export ROOT_HASH=ad86ff8492be2ee204cb54d70c84412c2dc89cefd34e263184f4e00295a41 Now we toggle a bit on the raw image (`CoCo` => `DoCo` in `/coco`). If the image is attached as a block device via dm-verity, there will be IO errors and respective entries in the kernel log, once we attempt to read the file. 
-```bash +```console $ hexdump -C rootfs.raw | grep CoCo 06000000 43 6f 43 6f 0a 00 00 00 00 00 00 00 00 00 00 00 |CoCo............| $ printf '\x44' | dd of=rootfs.raw bs=1 seek="$((16#06000000))" count=1 conv=notrunc @@ -113,7 +113,7 @@ To retrieve the expected measurements, for a dm-verity protected OS image, we ca In a TEE the vTPM would have to be isolated from both the Host and the Guest OS. We use `swtpm` to retrieve reference values here. {{% /alert %}} -```bash +```console $ swtpm socket \ --tpmstate dir=/tmp/vtpm \ --ctrl type=unixio,path=/tmp/vtpm/swtpm.sock \ @@ -123,8 +123,8 @@ $ swtpm socket \ We retrieve VM firmware from debian's repository and attach the vTPM socket as character device: -```bash -# retrieve vm firmware from debian's repo +```console +$ # retrieve vm firmware from debian's repo $ wget http://security.debian.org/debian-security/pool/updates/main/e/edk2/ovmf_2022.11-6+deb12u1_all.deb $ mkdir fw $ dpkg-deb -x ovmf_2022.11-6+deb12u1_all.deb fw/ @@ -146,7 +146,7 @@ $ qemu-system-x86_64 \ Once logged into the VM we can retrieve the relevant measurements in the form of PCRs (the package `tpm2_tools` needs to be available): -```bash +```console $ tpm2_pcrread sha256:0,1,2,3,4,5,6,7,8,9,10,11 sha256: 0 : 0x61E3B90D0862D052BF6C802E0FD2A44A671A37FE2EB67368D89CB56E5D23014E @@ -165,7 +165,7 @@ $ tpm2_pcrread sha256:0,1,2,3,4,5,6,7,8,9,10,11 If we boot the same image on a Confidential VM in Azure's cloud, we'll see different measurements. This is expected since the early boot stack does not match our reference setup: -```bash +```console $ tpm2_pcrread sha256:0,1,2,3,4,5,6,7,8,9,10,11 sha256: 0 : 0x782B20B10F55CC46E2142CC2145D548698073E5BEB82752C8D7F9279F0D8A273 @@ -184,7 +184,7 @@ $ tpm2_pcrread sha256:0,1,2,3,4,5,6,7,8,9,10,11 We can identify the common PCRs between the measurements in a cloud VM and those that we gathered in our reference setup. 
Those are good candidates to include as [reference values](https://confidentialcontainers.org/docs/attestation/reference-values/) in a relying party against which a TEE's evidence can be verified. -```bash +```console $ grep -F -x -f pcr_reference.txt pcr_cloud.txt 3 : 0x3D458CFE55CC03EA1F443F1562BEEC8DF51C75E14A9FCF9A7234A13F198E7969 8 : 0x0000000000000000000000000000000000000000000000000000000000000000 diff --git a/content/en/blog/2024/coco-without-confidential-hardware.md b/content/en/blog/2024/coco-without-confidential-hardware.md index 7f8caa5..6b7d21f 100644 --- a/content/en/blog/2024/coco-without-confidential-hardware.md +++ b/content/en/blog/2024/coco-without-confidential-hardware.md @@ -54,7 +54,7 @@ In this section you will learn how to get the [CoCo operator](https://github.com First, you should have the `node.kubernetes.io/worker=` label on all the cluster nodes that you want the runtime installed on. This is how the cluster admin instructs the operator controller about what nodes, in a multi-node cluster, need the runtime. Use the command `kubectl label node NODE_NAME "node.kubernetes.io/worker="` as in the listing below to add the label: -```shell +```console $ kubectl get nodes NAME STATUS ROLES AGE VERSION coco-demo Ready control-plane 87s v1.30.1 @@ -64,7 +64,7 @@ node/coco-demo labeled Once the target worker nodes are properly labeled, the next step is to install the operator controller. You should first ensure, however, that SELinux is disabled or in permissive mode, because the operator controller will attempt to restart services on your system and SELinux may deny that.
Using the following sequence of commands we set SELinux to permissive and install the operator controller: -```shell +```console $ sudo setenforce 0 $ kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0 ``` @@ -72,7 +72,7 @@ $ kubectl apply -k github.com/confidential-containers/operator/config/release?re This will create a series of resources in the `confidential-containers-system` namespace. In particular, it creates a deployment with pods that all need to be running before you continue the installation, as shown below: -```shell +```console $ kubectl get pods -n confidential-containers-system NAME READY STATUS RESTARTS AGE cc-operator-controller-manager-557b5cbdc5-q7wk7 2/2 Running 0 2m42s @@ -85,7 +85,7 @@ The operator controller is capable of managing the installation of different [Co Now it is time to install the ccruntime runtime. You should run the following commands and wait a few minutes while it downloads and installs Kata Containers and configures your node for CoCo: -```shell +```console $ kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0 ccruntime.confidentialcontainers.org/ccruntime-sample created $ kubectl get pods -n confidential-containers-system --watch @@ -97,7 +97,7 @@ cc-operator-pre-install-daemon-d55v2 1/1 Running 0 8m35s You will notice that a couple of [Kubernetes runtimeclasses](https://kubernetes.io/docs/concepts/containers/runtime-class/) get installed, as shown in the listing below. Each class defines a container runtime configuration; for example, **kata-qemu-tdx** should be used to launch QEMU/KVM for Intel TDX hardware (similarly **kata-qemu-snp** for AMD SEV-SNP). For the purpose of creating a confidential pod in a non-TEE environment we will be using the **kata-qemu-coco-dev** runtime class.
-```shell +```console $ kubectl get runtimeclasses NAME HANDLER AGE kata kata-qemu 26m @@ -137,7 +137,7 @@ spec: Then you should apply that manifest and wait for the pod to be `RUNNING` as shown below: -```shell +```console $ kubectl apply -f coco-demo-01.yaml pod/coco-demo-01 created $ kubectl get pods @@ -157,7 +157,7 @@ Our confidential containers implementation is built on [Kata Containers](https:/ Currently CoCo supports launching pods with [QEMU](https://www.qemu.org/) only, despite Kata Containers supporting other hypervisors. An instance of QEMU was launched to run the coco-demo-01, as you can see below: -```shell +```console $ ps aux | grep /opt/kata/bin/qemu-system-x86_64 root 15892 0.8 3.6 2648004 295424 ? Sl 20:36 0:04 /opt/kata/bin/qemu-system-x86_64 -name sandbox-baabb31ff0c798a31bca7373f2abdbf2936375a5729a3599799c0a225f3b9612 -uuid e8a3fb26-eafa-4d6b-b74e-93d0314b6e35 -machine q35,accel=kvm,nvdimm=on -cpu host,pmu=off -qmp unix:fd=3,server=on,wait=off -m 2048M,slots=10,maxmem=8961M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=true,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/baabb31ff0c798a31bca7373f2abdbf2936375a5729a3599799c0a225f3b9612/console.sock,server=on,wait=off -device nvdimm,id=nv0,memdev=mem0,unarmed=on -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-ubuntu-latest-confidential.image,size=268435456,readonly=on -device virtio-scsi-pci,id=scsi0,disable-modern=true -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=true,vhostfd=4,id=vsock-1515224306,guest-cid=1515224306 -netdev tap,id=network-0,vhost=on,vhostfds=5,fds=6 -device driver=virtio-net-pci,netdev=network-0,mac=6a:e6:eb:34:52:32,disable-modern=true,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host 
-global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -object memory-backend-ram,id=dimm1,size=2048M -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinuz-6.7-136-confidential -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 console=hvc0 console=hvc1 quiet systemd.show_status=false panic=1 nr_cpus=4 selinux=0 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none -pidfile /run/vc/vm/baabb31ff0c798a31bca7373f2abdbf2936375a5729a3599799c0a225f3b9612/pid -smp 1,cores=1,threads=1,sockets=4,maxcpus=4 ``` @@ -166,7 +166,7 @@ The launched kernel (`/opt/kata/share/kata-containers/vmlinuz-6.7-136-confidenti If you run `uname -a` inside the coco-demo-01 and compare with the value obtained from the host then you will notice the container is isolated by a different kernel, as shown below: -```shell +```console $ kubectl exec coco-demo-01 -- uname -a Linux 6.7.0 #1 SMP Mon Sep 9 09:48:13 UTC 2024 x86_64 GNU/Linux $ uname -a @@ -183,7 +183,7 @@ Oversimplifying, in a normal Kata Containers pod the container image is pulled b If you have the `ctr` command in your environment then you can check that only the **quay.io/prometheus/busybox**'s manifest was cached in containerd's storage as well as no rootfs directory exists in `/run/kata-containers/shared/sandboxes/` as shown below: -```shell +```console $ sudo ctr -n "k8s.io" image check name==quay.io/prometheus/busybox:latest REF TYPE DIGEST STATUS SIZE UNPACKED quay.io/prometheus/busybox:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:dfa54ef35e438b9e71ac5549159074576b6382f95ce1a434088e05fd6b730bc4 incomplete (1/3) 1.0 KiB/1.2 MiB false @@ -205,7 +205,7 
@@ Points if you noticed on the *coco-demo-01* pod example that the host owner can As an example, let’s show how to block the ExecProcessRequest endpoint of the kata-agent to deny the execution of commands in the container. First you need to encode in base64 a [Rego policy file](https://www.openpolicyagent.org/docs/latest/policy-language/) as shown below: -```shell +```console $ curl -s https://raw.githubusercontent.com/kata-containers/kata-containers/refs/heads/main/src/kata-opa/allow-all-except-exec-process.rego | base64 -w 0 IyBDb3B5cmlnaHQgKGMpIDIwMjMgTWljcm9zb2Z0IENvcnBvcmF0aW9uCiMKIyBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogQXBhY2hlLTIuMAojCgpwYWNrYWdlIGFnZW50X3BvbGljeQoKZGVmYXVsdCBBZGRBUlBOZWlnaGJvcnNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBBZGRTd2FwUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ2xvc2VTdGRpblJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENvcHlGaWxlUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlQ29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlU2FuZGJveFJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IERlc3Ryb3lTYW5kYm94UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR2V0TWV0cmljc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IEdldE9PTUV2ZW50UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR3Vlc3REZXRhaWxzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTGlzdEludGVyZmFjZXNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBMaXN0Um91dGVzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTWVtSG90cGx1Z0J5UHJvYmVSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBPbmxpbmVDUFVNZW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBQYXVzZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFB1bGxJbWFnZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFJlYWRTdHJlYW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVTdGFsZVZpcnRpb2ZzU2hhcmVNb3VudHNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXNlZWRSYW5kb21EZXZSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXN1bWVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTZXRHdWVzdERhdGVUaW1lUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2V0UG9saWN5UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2lnbmFsUHJvY2Vzc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFN0YXJ0Q29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhcnRUcmFjaW5nUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhdHNDb250YWluZXJSZXF1ZX
N0IDo9IHRydWUKZGVmYXVsdCBTdG9wVHJhY2luZ1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFR0eVdpblJlc2l6ZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUVwaGVtZXJhbE1vdW50c1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUludGVyZmFjZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZVJvdXRlc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFdhaXRQcm9jZXNzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgV3JpdGVTdHJlYW1SZXF1ZXN0IDo9IHRydWUKCmRlZmF1bHQgRXhlY1Byb2Nlc3NSZXF1ZXN0IDo9IGZhbHNlCg== ``` @@ -219,23 +219,23 @@ kind: Pod metadata: name: coco-demo-02 annotations: - "io.containerd.cri.runtime-handler": "kata-qemu-coco-dev" - io.katacontainers.config.agent.policy: IyBDb3B5cmlnaHQgKGMpIDIwMjMgTWljcm9zb2Z0IENvcnBvcmF0aW9uCiMKIyBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogQXBhY2hlLTIuMAojCgpwYWNrYWdlIGFnZW50X3BvbGljeQoKZGVmYXVsdCBBZGRBUlBOZWlnaGJvcnNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBBZGRTd2FwUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ2xvc2VTdGRpblJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENvcHlGaWxlUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlQ29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlU2FuZGJveFJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IERlc3Ryb3lTYW5kYm94UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR2V0TWV0cmljc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IEdldE9PTUV2ZW50UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR3Vlc3REZXRhaWxzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTGlzdEludGVyZmFjZXNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBMaXN0Um91dGVzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTWVtSG90cGx1Z0J5UHJvYmVSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBPbmxpbmVDUFVNZW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBQYXVzZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFB1bGxJbWFnZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFJlYWRTdHJlYW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVTdGFsZVZpcnRpb2ZzU2hhcmVNb3VudHNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXNlZWRSYW5kb21EZXZSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXN1bWVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTZXRHdWVzdERhdGVUaW1lUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2V0UG9saWN5UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2lnbmFsUHJvY2Vzc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFN0YXJ0Q29udG
FpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhcnRUcmFjaW5nUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhdHNDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTdG9wVHJhY2luZ1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFR0eVdpblJlc2l6ZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUVwaGVtZXJhbE1vdW50c1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUludGVyZmFjZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZVJvdXRlc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFdhaXRQcm9jZXNzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgV3JpdGVTdHJlYW1SZXF1ZXN0IDo9IHRydWUKCmRlZmF1bHQgRXhlY1Byb2Nlc3NSZXF1ZXN0IDo9IGZhbHNlCg== + io.containerd.cri.runtime-handler: "kata-qemu-coco-dev" + io.katacontainers.config.agent.policy: IyBDb3B5cmlnaHQgKGMpIDIwMjMgTWljcm9zb2Z0IENvcnBvcmF0aW9uCiMKIyBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogQXBhY2hlLTIuMAojCgpwYWNrYWdlIGFnZW50X3BvbGljeQoKZGVmYXVsdCBBZGRBUlBOZWlnaGJvcnNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBBZGRTd2FwUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ2xvc2VTdGRpblJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENvcHlGaWxlUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlQ29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQ3JlYXRlU2FuZGJveFJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IERlc3Ryb3lTYW5kYm94UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR2V0TWV0cmljc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IEdldE9PTUV2ZW50UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR3Vlc3REZXRhaWxzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTGlzdEludGVyZmFjZXNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBMaXN0Um91dGVzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTWVtSG90cGx1Z0J5UHJvYmVSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBPbmxpbmVDUFVNZW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBQYXVzZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFB1bGxJbWFnZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFJlYWRTdHJlYW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVTdGFsZVZpcnRpb2ZzU2hhcmVNb3VudHNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXNlZWRSYW5kb21EZXZSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXN1bWVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTZXRHdWVzdERhdGVUaW1lUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2V0UG9saWN5UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2lnbmFsUHJvY2Vzc1JlcXVlc3QgOj
0gdHJ1ZQpkZWZhdWx0IFN0YXJ0Q29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhcnRUcmFjaW5nUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhdHNDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTdG9wVHJhY2luZ1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFR0eVdpblJlc2l6ZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUVwaGVtZXJhbE1vdW50c1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUludGVyZmFjZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZVJvdXRlc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFdhaXRQcm9jZXNzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgV3JpdGVTdHJlYW1SZXF1ZXN0IDo9IHRydWUKCmRlZmF1bHQgRXhlY1Byb2Nlc3NSZXF1ZXN0IDo9IGZhbHNlCg== spec: runtimeClassName: kata-qemu-coco-dev containers: - - name: busybox - image: quay.io/prometheus/busybox:latest - imagePullPolicy: Always - command: - - sleep - - "infinity" + - name: busybox + image: quay.io/prometheus/busybox:latest + imagePullPolicy: Always + command: + - sleep + - "infinity" restartPolicy: Never ``` Create the pod, wait for it to be RUNNING, then check that kubectl cannot exec in the container. As a matter of comparison, run exec on coco-demo-01 as shown below: -```shell +```console $ kubectl apply -f coco-demo-02.yaml pod/coco-demo-02 created $ kubectl get pod @@ -264,7 +264,7 @@ In this blog we will be deploying a development/test version of the Key Broker S The following instructions will end up with KBS installed on your cluster and having its service exposed via [nodeport](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport). For further information about deploying the KBS on Kubernetes, see this [README](https://github.com/confidential-containers/trustee/tree/v0.10.1/kbs/config/kubernetes). 
So do: -```shell +```console $ git clone https://github.com/confidential-containers/trustee --single-branch -b v0.10.1 $ cd trustee/kbs/config/kubernetes $ echo "somesecret" > overlays/$(uname -m)/key.bin @@ -276,7 +276,7 @@ $ export KBS_PRIVATE_KEY="${PWD}/base/kbs.key" Wait for the KBS deployment to be ready and running, just like below: -```shell +```console $ kubectl -n coco-tenant get deployments NAME READY UP-TO-DATE AVAILABLE AGE kbs 1/1 1 1 26m @@ -284,14 +284,14 @@ kbs 1/1 1 1 26m You will need the KBS host and port to configure the pod. These values can be obtained as in the listing below: -```shell +```console $ export KBS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' -n coco-tenant) $ export KBS_PORT=$(kubectl get svc "kbs" -n "coco-tenant" -o jsonpath='{.spec.ports[0].nodePort}') ``` At this point KBS is up and running but lacks policies and resources. To facilitate its configuration we will be using the [kbs-client](https://github.com/confidential-containers/trustee/tree/v0.10.1/tools/kbs-client) tool.
Use the [ORAS tool](https://oras.land) to download a build of kbs-client: -```shell +```console $ curl -LOs "https://github.com/oras-project/oras/releases/download/v1.2.0/oras_1.2.0_linux_amd64.tar.gz" $ tar xvzf oras_1.2.0_linux_amd64.tar.gz $ ./oras pull ghcr.io/confidential-containers/staged-images/kbs-client:sample_only-x86_64-linux-gnu-68607d4300dda5a8ae948e2562fd06d09cbd7eca @@ -309,17 +309,17 @@ kind: Pod metadata: name: coco-demo-03 annotations: - "io.containerd.cri.runtime-handler": "kata-qemu-coco-dev" - io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" + io.containerd.cri.runtime-handler: "kata-qemu-coco-dev" + io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" spec: runtimeClassName: kata-qemu-coco-dev containers: - - name: busybox - image: quay.io/prometheus/busybox:latest - imagePullPolicy: Always - command: - - sleep - - "infinity" + - name: busybox + image: quay.io/prometheus/busybox:latest + imagePullPolicy: Always + command: + - sleep + - "infinity" restartPolicy: Never ``` @@ -331,7 +331,7 @@ The [Confidential Data Hub](https://github.com/confidential-containers/guest-com Let’s add the to-be-fetched resource to the KBS first. Think of that resource as a secret key required to decrypt an important file for data processing.
Using `kbs-client`, do the following (`KBS_HOST`, `KBS_PORT` and `KBS_PRIVATE_KEY` are previously defined variables): -```shell +```console $ echo "MySecretKey" > secret.txt $ ./kbs-client --url "http://$KBS_HOST:$KBS_PORT" config --auth-private-key "$KBS_PRIVATE_KEY" set-resource --path default/secret/1 --resource-file secret.txt Set resource success @@ -347,25 +347,25 @@ kind: Pod metadata: name: coco-demo-04 annotations: - "io.containerd.cri.runtime-handler": "kata-qemu-coco-dev" - io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" + io.containerd.cri.runtime-handler: "kata-qemu-coco-dev" + io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" spec: runtimeClassName: kata-qemu-coco-dev containers: - - name: busybox - image: quay.io/prometheus/busybox:latest - imagePullPolicy: Always - command: - - sh - - -c - - | - wget -O- http://127.0.0.1:8006/cdh/resource/default/secret/1; sleep infinity + - name: busybox + image: quay.io/prometheus/busybox:latest + imagePullPolicy: Always + command: + - sh + - -c + - | + wget -O- http://127.0.0.1:8006/cdh/resource/default/secret/1; sleep infinity restartPolicy: Never ``` Apply **coco-demo-04.yaml** and wait for it to get into `RUNNING` state. 
Checking the pod logs you will notice that wget failed to fetch the secret: -```shell +```console $ kubectl apply -f coco-demo-04.yaml pod/coco-demo-04 created $ kubectl wait --for=condition=Ready pod/coco-demo-04 @@ -377,7 +377,7 @@ wget: server returned error: HTTP/1.1 500 Internal Server Error Looking at the KBS logs we can find that the problem was caused by `Resource not permitted` denial: -```shell +```console $ kubectl logs -l app=kbs -n coco-tenant Defaulted container "kbs" out of: kbs, copy-config (init) [2024-11-07T20:04:32Z INFO kbs::http::resource] Get resource from kbs:///default/secret/1 @@ -400,7 +400,7 @@ package policy default allow = false allow { - input["tee"] == "sample" + input["tee"] == "sample" } ``` @@ -408,7 +408,7 @@ The `GetResource` request to CDH is an attested operation. The policy in **resou Apply the resources_policy.rego policy to the KBS, then respin the coco-demo-04 pod, and you will see `MySecretKey` is now fetched: -```shell +```console $ ./kbs-client --url "http://$KBS_HOST:$KBS_PORT" config --auth-private-key "$KBS_PRIVATE_KEY" set-resource-policy --policy-file resources_policy.rego Set resource policy success policy: cGFja2FnZSBwb2xpY3kKCmRlZmF1bHQgYWxsb3cgPSBmYWxzZQoKYWxsb3cgewogICAgaW5wdXRbInRlZSJdID09ICJzYW1wbGUiCn0K @@ -426,7 +426,7 @@ MySecretKey In the KBS log messages below you can see that the Attestation Service (AS) was involved in the request. A sample verifier was invoked in the place of a real hardware-oriented one for the sake of emulating the verification process. The generated attestation token (see `Attestation Token (Simple) generated` message in the log) is passed all the way back to the CDH on the confidential VM, which then can finally request the resource (the `Get resource from kbs:///default/secret/1` message) from KBS. 
-```shell +```console $ kubectl logs -l app=kbs -n coco-tenant Defaulted container "kbs" out of: kbs, copy-config (init) [2024-11-07T22:04:22Z INFO actix_web::middleware::logger] 10.244.0.1 "POST /kbs/v0/auth HTTP/1.1" 200 74 "-" "attestation-agent-kbs-client/0.1.0" 0.000185 @@ -456,8 +456,8 @@ kind: Pod metadata: name: coco-demo-05 annotations: - "io.containerd.cri.runtime-handler": "kata-qemu-coco-dev" - io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" + io.containerd.cri.runtime-handler: "kata-qemu-coco-dev" + io.katacontainers.config.hypervisor.kernel_params: " agent.aa_kbc_params=cc_kbc::http://192.168.122.153:31491" spec: runtimeClassName: kata-qemu-coco-dev containers: @@ -472,7 +472,7 @@ spec: Apply the pod, wait a little bit, and you will see that it failed to start with `StartError` status: -```shell +```console $ kubectl describe pods/coco-demo-05 Name: coco-demo-05 Namespace: default @@ -597,7 +597,7 @@ Stack backtrace: It failed because the decryption key wasn’t found in the KBS. So let’s insert the key: -```shell +```console $ echo "HUlOu8NWz8si11OZUzUJMnjiq/iZyHBJZMSD3BaqgMc=" | base64 -d > image_key.txt $ ./kbs-client --url "http://$KBS_HOST:$KBS_PORT" config --auth-private-key "$KBS_PRIVATE_KEY" set-resource --path default/key/ssh-demo --resource-file image_key.txt Set resource success @@ -608,7 +608,7 @@ Then restart the coco-demo-05 pod and it should start running just fine. As demonstrated by the listing below, you can inspect the image with skopeo. Note that each of its layers is encrypted (`MIMEType` is `tar+gzip+encrypted`) and annotated with `org.opencontainers.image.enc.*` tags. In particular, the `org.opencontainers.image.enc.keys.provider.attestation-agent` annotation encodes the decryption key path (e.g.
`kbs:///default/key/ssh-demo`) in the KBS: -```shell +```console $ skopeo inspect --raw docker://ghcr.io/confidential-containers/test-container:multi-arch-encrypted { "schemaVersion": 2, diff --git a/content/en/blog/2024/policing-a-sandbox.md b/content/en/blog/2024/policing-a-sandbox.md index 1779fba..2af2306 100644 --- a/content/en/blog/2024/policing-a-sandbox.md +++ b/content/en/blog/2024/policing-a-sandbox.md @@ -23,7 +23,7 @@ For the implementers of such a solution, this choice comes with a few challenges This would be the sequence of RPC calls that are issued to a Kata agent in the Guest VM (for brevity we'll refer to it as _Agent_ in the text below), if we launch a simple Nginx Pod. There are 2 containers being launched, because a Pod includes the implicit `pause` container: -``` +```text create_sandbox get_guest_details copy_file @@ -62,7 +62,6 @@ In order to preserve the integrity of a Confidential Pod, we need to observe clo Kata-Containers currently features an implementation of a policy engine using the popular [Rego](https://www.openpolicyagent.org/docs/latest/policy-language) language. Convenience tooling can assist and automate aspects of authoring a policy for a workload. 
The following would be an example policy (hand-crafted for brevity; real policy bodies would be larger) in which we allow the launch of specific OCI images, the execution of certain commands, and access to Kata management endpoints, but disallow pretty much everything else during runtime: ```rego -""" package agent_policy import future.keywords.in @@ -85,13 +84,13 @@ default ExecProcessRequest := false CreateContainerRequest if { every storage in input.storages { - some allowed_image in policy_data.allowed_images - storage.source == allowed_image - } + some allowed_image in policy_data.allowed_images + storage.source == allowed_image + } } ExecProcessRequest if { - input_command = concat(" ", input.process.Args) + input_command = concat(" ", input.process.Args) some allowed_command in policy_data.allowed_commands input_command == allowed_command } @@ -180,8 +179,8 @@ Host-Data is a field in a TEE’s evidence that is passed into a confidential Gu Example: Producing a SHA256 digest of the Init-Data file -```bash -openssl dgst -sha256 --binary init-data.toml | xxd -p -c32 +```console +$ openssl dgst -sha256 --binary init-data.toml | xxd -p -c32 bdc9a7390bb371258fb7fb8be5a8de5ced6a07dd077d1ce04ec26e06eaf68f60 ``` @@ -191,10 +190,10 @@ Instead of seeding the Init-Data hash into a Host-Data field at launch, we can a Example: Extending an empty SHA256 runtime measurement register with the digest of an Init-Data file -```bash -dd if=/dev/zero of=zeroes bs=32 count=1 -openssl dgst -sha256 --binary init-data.toml > init-data.digest -openssl dgst -sha256 --binary <(cat zeroes init-data.digest) | xxd -p -c32 +```console +$ dd if=/dev/zero of=zeroes bs=32 count=1 +$ openssl dgst -sha256 --binary init-data.toml > init-data.digest +$ openssl dgst -sha256 --binary <(cat zeroes init-data.digest) | xxd -p -c32 7aaf19294adabd752bf095e1f076baed85d4b088fa990cb575ad0f3e0569f292 ``` @@ -231,46 +230,46 @@ cat nginx-cc.yaml | jq \ If the Pod came up successfully, it passed the initial policy check for
the image already. -```bash -kubectl get pod +```console +$ kubectl get pod NAME READY STATUS RESTARTS AGE nginx-cc-694cc48b65-lklj7 1/1 Running 0 83s ``` According to the policy only certain commands are allowed to be executed in the container. Executing `whoami` should be fine, while `ls` should be rejected: -```bash -kubectl exec -it deploy/nginx-cc -- whoami +```console +$ kubectl exec -it deploy/nginx-cc -- whoami root ``` -```bash -kubectl exec -it deploy/nginx-cc -- ls +```console +$ kubectl exec -it deploy/nginx-cc -- ls error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "e2d8bad68b64d6918e6bda08a43f457196b5f30d6616baa94a0be0f443238980": cannot enter container 914c589fe74d1fcac834d0dcfa3b6a45562996661278b4a8de5511366d6a4609, with err rpc error: code = PermissionDenied desc = "ExecProcessRequest is blocked by policy: ": unknown ``` -In our example we tie the Init-Data measurement to the TEE evidence using a Runtime Measurement into PCR8 of a vTPM. Assuming a 0-initalized SHA256 register, we can calculate the expected value by extend the zeroes with the SHA256 digest of the Init-Data file: +In our example we tie the Init-Data measurement to the TEE evidence using a Runtime Measurement into PCR8 of a vTPM. 
Assuming a 0-initialized SHA256 register, we can calculate the expected value by extending the zeroes with the SHA256 digest of the Init-Data file: -```bash -dd if=/dev/zero of=zeroes bs=32 count=1 -openssl dgst -sha256 --binary init-data.toml > init-data.digest -openssl dgst -sha256 --binary <(cat zeroes init-data.digest) | xxd -p -c32 +```console +$ dd if=/dev/zero of=zeroes bs=32 count=1 +$ openssl dgst -sha256 --binary init-data.toml > init-data.digest +$ openssl dgst -sha256 --binary <(cat zeroes init-data.digest) | xxd -p -c32 765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f ``` As part of the policy we also allow-listed a specific command that can request a KBS token using an endpoint that is exposed to a container by a [specific Guest Component](https://github.com/confidential-containers/guest-components/tree/41ad96d4b2e5e9dc205c6d41f7b550629cea677f/api-server-rest). Note: this is not something a user would typically want to enable, since this token is used to retrieve confidential secrets and we would not want it to leak outside the Guest. We are using it here to illustrate that we _could_ retrieve a secret in the container, since we passed remote attestation including the verification of the Init-Data digest. -```bash -kubectl exec deploy/nginx-cc -- curl -s http://127.0.0.1:8006/aa/token\?token_type=kbs | jq -c 'keys' +```console +$ kubectl exec deploy/nginx-cc -- curl -s http://127.0.0.1:8006/aa/token\?token_type=kbs | jq -c 'keys' ["tee_keypair","token"] ``` Since this has been successful we can inspect the logs of the Attestation Service (bundled into a KBS here) to confirm it has been considered in the appraisal.
The first text block shows the claims from the (successfully verified) TEE evidence, the second block is displaying the acceptable reference values for a PCR8 measurement: -```bash -kubectl logs deploy/kbs -n coco-tenant | grep -C 2 765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f +```console +$ kubectl logs deploy/kbs -n coco-tenant | grep -C 2 765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f ... "aztdxvtpm.tpm.pcr06": String("65f0a56c41416fa82d573df151746dc1d6af7bd8d4a503b2ab07664305d01e59"), "aztdxvtpm.tpm.pcr07": String("124daf47b4d67179a77dc3c1bcca198ae1ee1d094a2a879974842e44ab98bb06"), @@ -282,7 +281,7 @@ kubectl logs deploy/kbs -n coco-tenant | grep -C 2 765156eda5fe806552610f2b6e828 "7aaf19294adabd752bf095e1f076baed85d4b088fa990cb575ad0f3e0569f292", "765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f", ], - "aztdxvtpm.tpm.pcr10": [], + "aztdxvtpm.tpm.pcr10": [], ``` ### Size Limitations diff --git a/content/en/blog/2025/coco-and-slsa.md b/content/en/blog/2025/coco-and-slsa.md index dca0b86..0b75293 100644 --- a/content/en/blog/2025/coco-and-slsa.md +++ b/content/en/blog/2025/coco-and-slsa.md @@ -90,7 +90,7 @@ Following is the current state of provenance generation of the CoCo projects: Formerly exclusively a place to host layers for container images, OCI registries today can serve a multitude of use cases, such as Helm Charts or Attestation data. A registry is a content-addressable store, which means that it is named after a digest of its content. -```sh +```console $ DIGEST="84ec2a70279219a45d327ec1f2f112d019bc9dcdd0e19f1ba7689b646c2de0c2" $ oras manifest fetch "quay.io/curl/curl@sha256:${DIGEST}" | sha256sum 84ec2a70279219a45d327ec1f2f112d019bc9dcdd0e19f1ba7689b646c2de0c2 - @@ -99,15 +99,15 @@ $ oras manifest fetch "quay.io/curl/curl@sha256:${DIGEST}" | sha256sum We also use OCI registries to distribute and cache artifacts in the CoCo project. 
There is a convention of specifying upstream dependencies in a `versions.yaml` file this: -```sh +```yaml oci: - ... + # ... kata-containers: - registry: ghcr.io/kata-containers/cached-artefacts - reference: 3.13.0 + registry: ghcr.io/kata-containers/cached-artefacts + reference: 3.13.0 guest-components: - registry: ghcr.io/confidential-containers/guest-components - reference: 3df6c412059f29127715c3fdbac9fa41f56cfce4 + registry: ghcr.io/confidential-containers/guest-components + reference: 3df6c412059f29127715c3fdbac9fa41f56cfce4 ``` Note that the `reference` in this case is a tag, sometimes a version, and sometimes a reference to the digest of a given git commit, not the digest of the OCI artefact. What do we express from this specification, and what do we want to verify? @@ -115,7 +115,7 @@ We might want to resolve a tag to a git digest first, so tag 3.13.0 resolves to This is what the SLSA attestation that we created during the artefact's build can substantiate. It wouldn't make sense to attest against an artifact using an OCI tag alias (those are not immutable, and one can move it to point to something else); the attestations are tied to an OCI artifact referenced by its digest and conveniently stored alongside this in the same repo. We can find it manually if we search for referrers of our OCI artifact. 
-```sh +```console $ GIT_DGST="2777b13db748f9ba785c7d2be4fcb6ac9c9af265" $ oras resolve "ghcr.io/kata-containers/cached-artefacts/agent:${GIT_DGST}-x86_64" sha256:c127db93af2fcefddebbe98013e359a7c30b9130317a96aab50093af0dbe8464 @@ -132,7 +132,7 @@ As mentioned previously, Github provides a command line option to verify the [pr The following snippet from the cloud-api-adaptor project's [Makefile](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/src/cloud-api-adaptor/podvm/Makefile.inc) shows an example usage: -```sh +```bash define pull_agent_artifact $(eval $(call generate_tag,tag,$(KATA_REF),$(ARCH))) $(eval OCI_IMAGE := $(KATA_REGISTRY)/agent) diff --git a/content/en/blog/2026/trustee-deployment.md b/content/en/blog/2026/trustee-deployment.md index 2b69ba3..a3b4476 100644 --- a/content/en/blog/2026/trustee-deployment.md +++ b/content/en/blog/2026/trustee-deployment.md @@ -83,7 +83,7 @@ kubectl get csv -n operators We should expect something like: -```bash +```text NAME DISPLAY VERSION REPLACES PHASE trustee-operator.v0.17.0 Trustee Operator 0.17.0 trustee-operator.v0.5.0 Succeeded ``` @@ -178,7 +178,7 @@ EOF Permissive Mode TrusteeConfig CR creation: -```bash +```yaml apiVersion: confidentialcontainers.org/v1alpha1 kind: TrusteeConfig metadata: @@ -308,8 +308,8 @@ kubectl config set-context --current --namespace=operators ### Check if the PODs are running -```bash -kubectl get pods -n operators +```console +$ kubectl get pods -n operators NAME READY STATUS RESTARTS AGE trustee-deployment-7bdc6858d7-bdncx 1/1 Running 0 69s trustee-operator-controller-manager-6c584fc969-8dz2d 1/1 Running 0 4h7m @@ -317,9 +317,9 @@ trustee-operator-controller-manager-6c584fc969-8dz2d 1/1 Running 0 Also, the log should report something like: -```bash -POD_NAME=$(kubectl get pods -l app=kbs -o jsonpath='{.items[0].metadata.name}' -n operators) -kubectl logs -n operators $POD_NAME +```console +$ export POD_NAME=$(kubectl get pods -l app=kbs -o 
jsonpath='{.items[0].metadata.name}' -n operators) +$ kubectl logs -n operators $POD_NAME [2026-02-10T15:21:47Z INFO kbs] Using config file /etc/kbs-config/kbs-config.toml [2026-02-10T15:21:47Z INFO tracing::span] Initialize RVPS; [2026-02-10T15:21:47Z INFO attestation_service::rvps] launch a built-in RVPS. @@ -362,16 +362,16 @@ Finally we are able to test the entire attestation protocol, when fetching one o Note: Make sure the resource-policy is permissive for testing purposes. For example: -``` - package policy - default allow = true +```rego +package policy +default allow = true ``` -```bash -kubectl get secret trustee-tls-cert -n operators -o json | jq -r '.data."tls.crt"' | base64 --decode > https.crt -kubectl cp -n operators https.crt kbs-client:/ -kubectl exec -it -n operators kbs-client -- kbs-client --cert-file https.crt --url https://kbs-service:8080 get-resource --path default/kbsres1/key1 +```console +$ kubectl get secret trustee-tls-cert -n operators -o json | jq -r '.data."tls.crt"' | base64 --decode > https.crt +$ kubectl cp -n operators https.crt kbs-client:/ +$ kubectl exec -it -n operators kbs-client -- kbs-client --cert-file https.crt --url https://kbs-service:8080 get-resource --path default/kbsres1/key1 cmVzMXZhbDE= ``` @@ -385,4 +385,4 @@ We’ll get *res1val1*, the secret we created before. ## Summary -In this blog we have shown how to use the Trustee operator for deploying Trustee and run the attestation workflow with a sample attester. \ No newline at end of file +In this blog we have shown how to use the Trustee operator to deploy Trustee and run the attestation workflow with a sample attester. 
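The `get-resource` response shown above is base64-encoded. As a quick sanity check (plain coreutils `base64`, nothing Trustee-specific assumed), decoding the returned string yields the secret value created earlier:

```shell
# Decode the base64 payload returned by kbs-client get-resource;
# it prints the plaintext secret "res1val1".
echo 'cmVzMXZhbDE=' | base64 --decode
```

The same check works in reverse when seeding secrets: base64-encode the plaintext before storing it, and compare against what the client later retrieves.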
diff --git a/content/en/docs/attestation/client-tool/_index.md b/content/en/docs/attestation/client-tool/_index.md index 5c33970..9d69458 100644 --- a/content/en/docs/attestation/client-tool/_index.md +++ b/content/en/docs/attestation/client-tool/_index.md @@ -46,7 +46,7 @@ git clone https://github.com/confidential-containers/trustee.git ``` Build the client -``` +```bash cd kbs make CLI_FEATURES=sample_only cli sudo make install-cli diff --git a/content/en/docs/attestation/coco-setup/_index.md b/content/en/docs/attestation/coco-setup/_index.md index 4a37e40..1c57678 100644 --- a/content/en/docs/attestation/coco-setup/_index.md +++ b/content/en/docs/attestation/coco-setup/_index.md @@ -14,7 +14,7 @@ If you are using Trustee with Confidential Containers, you'll need to point your CoCo workload to your Trustee. In your pod definition, add the following annotation. -```bash +```yaml io.katacontainers.config.hypervisor.kernel_params: "agent.aa_kbc_params=cc_kbc::http://:" ``` diff --git a/content/en/docs/attestation/installation/kubernetes.md b/content/en/docs/attestation/installation/kubernetes.md index 2936c83..56ab337 100644 --- a/content/en/docs/attestation/installation/kubernetes.md +++ b/content/en/docs/attestation/installation/kubernetes.md @@ -26,7 +26,7 @@ kubectl get pods -n operators --watch ``` The operator controller should be running. -```bash +```text NAME READY STATUS RESTARTS AGE trustee-operator-controller-manager-77cb448dc-7vxck 1/1 Running 0 11m ``` @@ -57,7 +57,7 @@ kubectl get pods -n operators --selector=app=kbs ``` The Trustee deployment should be running. 
-```bash +```text NAME READY STATUS RESTARTS AGE trustee-deployment-f97fb74d6-w5qsm 1/1 Running 0 25m ``` diff --git a/content/en/docs/attestation/policies/_index.md b/content/en/docs/attestation/policies/_index.md index da525df..35d71f8 100644 --- a/content/en/docs/attestation/policies/_index.md +++ b/content/en/docs/attestation/policies/_index.md @@ -81,7 +81,7 @@ The built-in policies are `--allow-all`, `--deny-all`, `--default`, `--affirming These policies are described in more detail below. The simplest possible policies either allow or reject all requests. -```opa +```rego package policy default allow = true @@ -102,14 +102,14 @@ There are 4 tiers: Contraindicated, Warning, Affirming, and None. Ideally secrets should only be released when the token affirms the guest TCB. -```opa +```rego package policy import rego.v1 default allow = false allow if { - input["submods"]["cpu0"]["ear.status"] == "affirming" + input["submods"]["cpu0"]["ear.status"] == "affirming" } ``` @@ -119,15 +119,15 @@ attestation tokens that are not contraindicated. This is described in upcoming s A more advanced policy could check that the token is not contraindicated and that the enclave is of a certain type. For example, this policy will only allow requests if the evidence is not contraindicated and comes from an SNP guest. -```opa +```rego package policy import rego.v1 default allow = false allow if { - input["submods"]["cpu0"]["ear.status"] == "affirming" - input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] + input["submods"]["cpu0"]["ear.status"] == "affirming" + input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] } ``` @@ -146,16 +146,16 @@ See the next section for how these vectors are calculated. A resource policy can check each of these values. For instance this policy builds on the previous one to make sure that in addition to not being contraindicated, the executables trust vector has a particular claim. 
-```opa +```rego package policy import rego.v1 default allow = false allow if { - input["submods"]["cpu0"]["ear.status"] == "affirming" - input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] - input["submods"]["cpu0"]["ear.status.executables"] == 2 + input["submods"]["cpu0"]["ear.status"] == "affirming" + input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] + input["submods"]["cpu0"]["ear.status.executables"] == 2 } ``` @@ -168,16 +168,16 @@ The policy also takes the requested resource URI as input so the policy can have on which resource is requested. Here is a basic policy checking which resource is requested. -```opa +```rego package policy import rego.v1 default allowed = false allowed if { - data.plugin == "resource" - count(data["resource-path"]) == 3 - data["resource-path"][1] == "red" + data.plugin == "resource" + count(data["resource-path"]) == 3 + data["resource-path"][1] == "red" } ``` @@ -185,26 +185,26 @@ This policy only allows requests to certain repositories. This technique can be combined with those above. For instance, you could write a policy that allows different resources on different platforms, or requires different trust claims for different secrets. 
-```opa +```rego package policy import rego.v1 default allowed = false allowed if { - data.plugin == "resource" - count(data["resource-path"]) == 3 - data["resource-path"][1] == "red" - input["submods"]["cpu0"]["ear.status"] == "affirming" - input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] + data.plugin == "resource" + count(data["resource-path"]) == 3 + data["resource-path"][1] == "red" + input["submods"]["cpu0"]["ear.status"] == "affirming" + input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["snp"] } allowed if { - data.plugin == "resource" - count(data["resource-path"]) == 3 - data["resource-path"][1] == "blue" - input["submods"]["cpu0"]["ear.status"] == "affirming" - input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["tdx"] + data.plugin == "resource" + count(data["resource-path"]) == 3 + data["resource-path"][1] == "blue" + input["submods"]["cpu0"]["ear.status"] == "affirming" + input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["tdx"] } ``` diff --git a/content/en/docs/attestation/reference-values/_index.md b/content/en/docs/attestation/reference-values/_index.md index 45ebf17..604c9d3 100644 --- a/content/en/docs/attestation/reference-values/_index.md +++ b/content/en/docs/attestation/reference-values/_index.md @@ -119,4 +119,4 @@ data: [ ] EOF -``` +``` diff --git a/content/en/docs/attestation/resources/_index.md b/content/en/docs/attestation/resources/_index.md index 6bd414b..907c67a 100644 --- a/content/en/docs/attestation/resources/_index.md +++ b/content/en/docs/attestation/resources/_index.md @@ -26,8 +26,8 @@ but in practice the KBS Host and KBS Port are ignored to avoid coupling a resour to a particular IP and port. Instead, resources are typically expressed as -``` -kbs://///` +```text +kbs:///// ``` Often resources are referred to just as `//`. 
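Those three path segments are what a resource policy receives as `data["resource-path"]` in the Rego examples above. A minimal bash sketch of how one such path splits into repository, type, and tag (the path `default/kbsres1/key1` is just an illustrative value from these docs, not a required layout):

```shell
# Split a resource path on "/" into the three components a resource
# policy can index: 0 = repository, 1 = type, 2 = tag.
path="default/kbsres1/key1"
IFS=/ read -r repository type tag <<< "$path"
echo "repository=$repository type=$type tag=$tag"
```

This mirrors the `count(data["resource-path"]) == 3` check in the policies: a well-formed resource reference always carries exactly these three segments.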
diff --git a/content/en/docs/attestation/resources/kbs-backed-by-akv.md b/content/en/docs/attestation/resources/kbs-backed-by-akv.md index 5f6a9d1..acfaea4 100644 --- a/content/en/docs/attestation/resources/kbs-backed-by-akv.md +++ b/content/en/docs/attestation/resources/kbs-backed-by-akv.md @@ -185,7 +185,7 @@ kubectl apply -k akv/ The KBS pod should be running, the pod events should give indication of possible errors. From a confidential pod the AKV secrets should be retrievable via Confidential Data Hub: -```bash +```console $ kubectl exec -it deploy/nginx-coco -- curl http://127.0.0.1:8006/cdh/resource/default/akv/coco_one a secret ``` diff --git a/content/en/docs/contributing/_index.md b/content/en/docs/contributing/_index.md index ae617af..eef2518 100644 --- a/content/en/docs/contributing/_index.md +++ b/content/en/docs/contributing/_index.md @@ -100,7 +100,7 @@ Every PR in every subproject is required to include a DCO. This is strictly enforced by the CI. Fortunately, it's easy to comply with this requirement. At the end of the commit message for each of your commits add something like -```bash +```text Signed-off-by: Alex Ample ``` You can add additional tags to credit other developers who worked on a commit @@ -127,8 +127,8 @@ Projects might have additional conventions that are not captured by these tools. You can install the above tools as follows. -```sh -$ rustup component add rustfmt clippy +```bash +rustup component add rustfmt clippy ``` @@ -143,7 +143,7 @@ what it does. This helps reviewers and future developers. The title of the commit should start with a subsystem. For example, -```bash +```text docs: update contributor guide ``` The "subsystem" describes the area of the code that the change applies to. 
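The DCO trailer does not have to be typed by hand: `git commit -s` appends a `Signed-off-by` line built from the configured `user.name` and `user.email`. A small sketch in a throwaway repository (the name and email are placeholders):

```shell
# Make a signed-off commit in a temporary repo; -s appends the
# Signed-off-by trailer automatically from the git identity config.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.name "Alex Ample"
git config user.email "al@example.com"
echo demo > file.txt
git add file.txt
git commit -q -s -m "docs: update contributor guide"
git log -1 --format=%B
```

The final command prints the full commit message, with the `Signed-off-by: Alex Ample <al@example.com>` trailer on its last line.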
diff --git a/content/en/docs/examples/alibaba-cloud-simple.md b/content/en/docs/examples/alibaba-cloud-simple.md index a52aded..d4265b1 100644 --- a/content/en/docs/examples/alibaba-cloud-simple.md +++ b/content/en/docs/examples/alibaba-cloud-simple.md @@ -37,7 +37,7 @@ If you want to build a pod VM image yourself, please follow the steps. 1. Create pod VM image. - ```sh + ```bash PODVM_DISTRO=alinux \ CLOUD_PROVIDER=alibabacloud \ IMAGE_URL=https://alinux3.oss-cn-hangzhou.aliyuncs.com/aliyun_3_x64_20G_nocloud_alibase_20250117.qcow2 \ @@ -50,7 +50,7 @@ If you want to build a pod VM image yourself, please follow the steps. 2. Upload to OSS storage and create ECS Image. You will then need to upload the Pod VM image to OSS (Object Storage Service). - ```sh + ```bash export REGION_ID= export IMAGE_FILE= export BUCKET= @@ -60,7 +60,7 @@ If you want to build a pod VM image yourself, please follow the steps. ``` Then, mark the image file as an ECS Image - ```sh + ```bash export IMAGE_NAME=$(basename ${IMAGE_FILE%.*}) aliyun ecs ImportImage --ImageName ${IMAGE_NAME} \ @@ -77,7 +77,7 @@ If you want to build a pod VM image yourself, please follow the steps. If you want to build CAA DaemonSet image yourself: - ```sh + ```bash export registry= export RELEASE_BUILD=true export CLOUD_PROVIDER=alibabacloud @@ -91,7 +91,7 @@ later. 1. Create ACK Managed Cluster. - ```sh + ```bash export CONTAINER_CIDR=172.18.0.0/16 export REGION_ID=cn-beijing export ZONES='["cn-beijing-i"]' @@ -117,12 +117,12 @@ later. Wait for the cluster to be created. Get the vSwitch id of the cluster. - ```sh + ```bash VSWITCH_IDS=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r ".parameters.WorkerVSwitchIds" | sed 's/^/["/; s/$/"]/; s/,/","/g') ``` Then add one worker node to the cluster. - ```sh + ```bash WORKER_NODE_COUNT=1 WORKER_NODE_TYPE="[\"ecs.g8i.xlarge\",\"ecs.g7.xlarge\"]" aliyun cs POST /clusters/${CLUSTER_ID}/nodepools \ @@ -160,7 +160,7 @@ later. 2. 
Add Internet access for the cluster VPC - ```sh + ```bash export VPC_ID=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r ".vpc_id") export VSWITCH_ID=$(echo ${VSWITCH_IDS} | sed 's/[][]//g' | sed 's/"//g') aliyun vpc CreateNatGateway \ @@ -202,7 +202,7 @@ later. 3. Grant role permissions Give role permission to the cluster to allow the worker to create ECS instances. - ```sh + ```bash export ROLE_NAME=caa-alibaba export RRSA_ISSUER=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r ".rrsa_config.issuer" | cut -d',' -f1) export RRSA_ARN=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r ".rrsa_config.oidc_arn" | cut -d',' -f1) @@ -293,7 +293,7 @@ later. ### Create the credentials file -```sh +```bash cat < install/overlays/alibabacloud/alibabacloud-cred.env # If the WorkerNode is on ACK, we use RRSA to authenticate ALIBABA_CLOUD_ROLE_ARN=${ROLE_ARN} @@ -316,7 +316,7 @@ in [`kustomization.yaml`](../install/overlays/alibabacloud/kustomization.yaml). Label the cluster nodes with `node.kubernetes.io/worker=` -```sh +```bash for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do kubectl label node $NODE_NAME node.kubernetes.io/worker= done @@ -329,7 +329,7 @@ of CoCo Operator for Alibaba Cloud. Specifically, we enabled containerd 1.7+ installation and mirrored images from `quay.io` on Alibaba Cloud to accelerate. -```sh +```bash export COCO_OPERATOR_REPO="https://github.com/AliyunContainerService/coco-operator" export COCO_OPERATOR_REF="main" export RESOURCE_CTRL=false @@ -345,7 +345,7 @@ Generic CAA deployment instructions are also described [here](../install/README. 
Verify that the `runtimeclass` is created after deploying CAA: -```sh +```bash kubectl get runtimeclass ``` @@ -361,7 +361,7 @@ kata-remote kata-remote 7m18s Create an `nginx` deployment: -```yaml +```bash echo ' apiVersion: v1 kind: Pod @@ -377,13 +377,13 @@ spec: Ensure that the pod is up and running: -```sh +```bash kubectl get pods -n default ``` You can verify that the peer-pod VM was created by running the following command: -```sh +```bash aliyun ecs DescribeInstances --RegionId ${REGION_ID} --InstanceName 'podvm-*' ``` @@ -401,12 +401,12 @@ Delete all running pods using the `runtimeClass` `kata-remote`. Verify that all peer-pod VMs are deleted. You can use the following command to list all the peer-pod VMs (VMs having prefix `podvm`) and status: -```sh +```bash aliyun ecs DescribeInstances --RegionId ${REGION_ID} --InstanceName 'podvm-*' ``` Delete the ACK cluster by running the following command: -```sh +```bash aliyun cs DELETE /clusters/${CLUSTER_ID} --region ${REGION_ID} --keep_slb false --retain_all_resources false --header "Content-Type=application/json;" --body "{}" ``` diff --git a/content/en/docs/examples/aws-simple.md b/content/en/docs/examples/aws-simple.md index f31fd84..6042521 100644 --- a/content/en/docs/examples/aws-simple.md +++ b/content/en/docs/examples/aws-simple.md @@ -368,7 +368,7 @@ kata-remote kata-remote 7m18s Create an `nginx` deployment: -```yaml +```bash cat < ``` Restart containerd after making the change: @@ -238,7 +238,7 @@ If your CoCo Pod gets an error like the one shown below, then it is likely the i Therefore, you must ensure that the image pull policy is set to **Always** for any CoCo pod. This way the images are always handled entirely by the agent inside the VM. It is worth mentioning we recognize that this behavior is sub-optimal, so the community provides solutions to avoid constant image downloads for each workload. -```bash +```text Events: Type Reason Age From Message ---- ------ ---- ---- -------