title: Confidential containers
weight: 40
aliases: /layered-zero-trust/lzt-confidential-containers/

Use case: Confidential containers

Confidential computing is a technology that protects data in use. The Confidential Containers (CoCo) feature of Red Hat OpenShift sandboxed containers uses Trusted Execution Environments (TEEs): specialized CPU features from AMD, Intel, and others that create isolated, encrypted memory regions with cryptographic proof of integrity. These hardware guarantees mean that workloads can prove they have not been tampered with and that secrets remain protected, even from infrastructure administrators.

Within the layered zero-trust pattern, confidential containers integrate with zero-trust workload identity management. The result is defense in depth: cryptographic identity verification combined with hardware-rooted trust.

Using confidential containers in the layered zero-trust pattern is optional because it imposes specific hardware constraints.

Important

Using confidential containers restricts the platform to Microsoft Azure. You also need access to and quota for specific Azure instance types in the region where the cluster is deployed.

Ensure that you check the availability of appropriate confidential virtual machines before testing.
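As a quick sanity check, you can filter a region's instance-size list for Azure confidential VM families (the DC- and EC-prefixed v5 families provide AMD SEV-SNP or Intel TDX confidential VMs). This is a minimal sketch over sample data; in practice, generate the list with the Azure CLI, for example az vm list-skus --location <region> --output tsv.

```shell
# Minimal sketch: pick out Azure confidential VM families from a list of
# instance sizes. The list below is sample data standing in for real
# 'az vm list-skus' output.
skus="Standard_D4s_v5
Standard_DC4as_v5
Standard_EC8es_v5
Standard_B2ms"

# Confidential VM sizes use the DC*/EC* family prefixes.
confidential=$(echo "$skus" | grep -E '^Standard_(DC|EC)')
echo "$confidential"
```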

Confidential containers are intentionally not the default deployment option, so deploying them requires the extra steps described in this section.

Setting up an Azure cluster

Confidential containers on Azure use peer pods. This does not impose requirements on the base cluster type beyond sufficient capacity. This pattern has been tested with Azure Red Hat OpenShift clusters and OpenShift clusters installed using the openshift-install program.

Note

To provision peer pods, the OpenShift cluster must be able to communicate with Azure APIs. The pattern uses the same Azure service account used during cluster provisioning to create:

  • VM Templates

  • The peer pod VMs

  • A NAT gateway to allow outbound traffic from the peer pods
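For reference, peer-pod provisioning on Azure is driven by a peer-pods ConfigMap consumed by the OpenShift sandboxed containers Operator. The fragment below is an illustrative sketch only: the key names follow the OpenShift sandboxed containers peer-pods documentation for Azure, every value is a placeholder, and the pattern's GitOps content manages the real object for you.

```yaml
# Illustrative sketch only; the pattern manages this object through GitOps.
# Key names follow the OpenShift sandboxed containers peer-pods documentation
# for Azure; every value below is a placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  AZURE_REGION: "<region>"
  AZURE_RESOURCE_GROUP: "<resource_group>"
  AZURE_SUBNET_ID: "<subnet_id>"
  AZURE_NSG_ID: "<nsg_id>"
  AZURE_IMAGE_ID: "<podvm_image_id>"
  AZURE_INSTANCE_SIZE: "Standard_DC2as_v5"
  PROXY_TIMEOUT: "5m"
  DISABLECVM: "false"
```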

Setting up the repository

Prerequisites
Procedure
  1. Verify that you are using my-branch:

    $ git status
    On branch my-branch
    Your branch is up to date with 'origin/my-branch'.
    
    nothing to commit, working tree clean
  2. Change clusterGroupName to coco-dev in the values-global.yaml file:

    ...
    main:
      clusterGroupName: coco-dev
    ...
  3. Commit and push the change to your branch:

    $ git add values-global.yaml
    $ git commit -m 'Change to CoCo cluster group'
    $ git push origin my-branch
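If you prefer to script step 2 rather than edit the file by hand, a minimal sketch looks like this (the heredoc content is an illustrative sample, with hub standing in for the default cluster group):

```shell
# Sketch: switch clusterGroupName non-interactively. The heredoc writes a
# minimal sample values-global.yaml; run the sed command against the real
# file in your pattern checkout instead.
cat > values-global.yaml <<'EOF'
main:
  clusterGroupName: hub
EOF

sed -i 's/clusterGroupName: .*/clusterGroupName: coco-dev/' values-global.yaml
grep clusterGroupName values-global.yaml
```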

Configuring secrets

To secure confidential containers, the Key Broker Service (provided by the Red Hat build of Trustee) requires extra secrets. Most credentials are generated automatically on the cluster where Trustee is deployed; however, you must generate the administrative credentials for Trustee off-cluster.

Note

Because the Trustee role is security-sensitive, Red Hat recommends reading the full instructions on configuring and deploying the Red Hat build of Trustee.

Procedure
  1. Create a local copy of the secret values file that can safely include credentials:

    $ cp values-secret.yaml.template ~/values-secret-layered-zero-trust.yaml
  2. Uncomment the required additional secrets for the coco-dev chart. Each required secret has a # Required for coco-dev clusterGroup comment above it.

    $ vim ~/values-secret-layered-zero-trust.yaml
  3. Generate the admin API authentication secret:

    $ cd ~/
    $ openssl genpkey -algorithm ed25519 > kbsPrivateKey
    $ openssl pkey -in kbsPrivateKey -pubout -out kbsPublicKey
    Note

    The values-secret.yaml.template file references the kbsPublicKey file name specified here. Using a different path requires changes to ~/values-secret-layered-zero-trust.yaml.
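Before wiring the keys into the secrets file, you can confirm that the generated pair is consistent. The sketch below repeats the key generation from step 3 and then re-derives the public key from the private key; matching output means the two files belong together:

```shell
# Self-contained sketch: generate the ed25519 pair as in step 3, then verify
# that kbsPublicKey is really the public half of kbsPrivateKey.
openssl genpkey -algorithm ed25519 > kbsPrivateKey
openssl pkey -in kbsPrivateKey -pubout -out kbsPublicKey

# Re-derive the public key from the private key and compare.
derived=$(openssl pkey -in kbsPrivateKey -pubout)
if [ "$derived" = "$(cat kbsPublicKey)" ]; then
  echo "key pair OK"
else
  echo "public key does not match private key" >&2
  exit 1
fi
```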

Deploying the Confidential Containers variant

The deployment of the confidential containers variant is the same as for the default version:

  1. Log in to your OpenShift Container Platform cluster:

    1. Using the oc CLI:

      • Get an API token by visiting https://oauth-openshift.apps.<your_cluster>.<domain>/oauth/token/request.

      • Log in with the retrieved token:

        $ oc login --token=<retrieved_token> --server=https://api.<your_cluster>.<domain>:6443
    2. Using KUBECONFIG:

      $ export KUBECONFIG=~/<path_to_kubeconfig>
  2. Run the pattern deployment script:

    $ ./pattern.sh make install
Note

The deployment of the OpenShift sandboxed containers Operator takes time and might result in timeouts in the installation script. This is because the ./pattern.sh make install script waits for the Argo CD applications to become healthy.

Verifying the deployment

The Layered Zero-Trust pattern provisions and manages every component through OpenShift GitOps. After you deploy the pattern, verify that all components are running correctly.

The Layered Zero-Trust pattern installs the following two OpenShift GitOps instances on your hub cluster. You can view these instances in the OpenShift Container Platform web console by using the Application Selector (the icon with nine small squares) in the top navigation bar.

  1. Cluster Argo CD: Deploys an app-of-apps application named layered-zero-trust-coco-dev. This application installs the pattern’s components.

  2. Coco-debugging Argo CD: Manages the Cluster Argo CD instance and the individual components that belong to the pattern on the hub OpenShift Container Platform instance.

If every Argo CD application reports a Healthy status, the pattern has been deployed successfully.

Troubleshooting confidential containers workloads

If you encounter any issues with the confidential containers variant of the layered zero-trust pattern, first test deploying the default hub clusterGroup.

Procedure
  1. Run a health check of Argo CD applications:

    $ ./pattern.sh make argo-healthcheck
  2. If all applications except hello-coco are healthy, the Operators have deployed, but the peer pods have not.

  3. Check whether the pod is visible in the namespace:

    $ oc get pods -n zero-trust-workload-identity-manager spire-agent-cc -o yaml
  4. If the pod manifest is not visible, the OpenShift sandboxed containers Operator has not yet deployed.

  5. If the pod is visible, check that the peer pods exist and review their events:

    $ oc get peerpods -A -o yaml
  6. The most likely cause of failure is insufficient Azure quota.
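Quota can be inspected with az vm list-usage --location <region>. The sketch below runs the comparison over tab-separated sample data; the column layout (current value, limit, family name) is an assumption for illustration, so substitute real CLI output in practice:

```shell
# Sketch: flag VM families whose vCPU usage has reached the quota limit.
# The sample data stands in for 'az vm list-usage' output; the column
# layout (current, limit, name) is an assumption for illustration.
usage="$(printf '8\t10\tStandard DCASv5 Family vCPUs\n16\t16\tStandard ECESv5 Family vCPUs\n4\t100\tStandard DSv5 Family vCPUs')"

# Any family whose current usage has reached its limit cannot host new
# peer-pod VMs until the quota is raised.
exhausted=$(echo "$usage" | awk -F'\t' '$1 >= $2 { print "quota exhausted: " $3 }')
echo "$exhausted"
```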