A Helm chart that installs the kata-deploy DaemonSet and its helper assets, enabling Kata Containers runtimes on your Kubernetes, K3s, RKE2, or K0s cluster.
# Install directly from the official ghcr.io OCI registry
# Update VERSION (X.YY.Z) to your needs, or just use the latest
export VERSION=$(curl -sSL https://api.github.com/repos/kata-containers/kata-containers/releases/latest | jq .tag_name | tr -d '"')
export CHART="oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy"
$ helm install kata-deploy "${CHART}" --version "${VERSION}"
# See everything you can configure
$ helm show values "${CHART}" --version "${VERSION}"
- Kubernetes ≥ v1.22 - v1.22 is the first release where the CRI v1 API became the default and RuntimeClass left alpha. The chart depends on those stable interfaces; earlier clusters need feature gates or CRI shims that are out of scope.
- Kata release ≥ 3.12 - v3.12.0 introduced publishing the Helm chart on the release page for easier consumption; since v3.8.0 the Helm chart has been shipped via source code in the kata-containers GitHub repository.
- CRI-compatible runtime (containerd or CRI-O). If one wants to use the multiInstallSuffix feature, one needs at least containerd 2.0, which supports drop-in config files.
- Nodes must allow loading kernel modules and installing Kata artifacts (the chart runs privileged containers to do so).
If Helm is not yet on your workstation or CI runner, install Helm v3 (v3.9 or newer recommended):
# Quick one‑liner (Linux/macOS)
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Or via your package manager
$ sudo apt-get install helm # Debian/Ubuntu
$ brew install helm # Homebrew on macOS / Linuxbrew

Verify the installation:

$ helm version

Before installing the chart, one may first consult the Configuration Reference table below for all the default values. Some default values may not fit all use-cases, so update them as needed. A prime example is `k8sDistribution`, which per default is set to `k8s`.
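For instance, on a K3s cluster one could override the distribution in a small values file (a sketch; the `k8sDistribution` key and its accepted values are listed in the Configuration Reference):

```yaml
# k3s-values.yaml - illustrative override for a K3s cluster
k8sDistribution: k3s
```

Pass it with `-f k3s-values.yaml`, or use `--set k8sDistribution=k3s` directly.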
To see which chart versions are available either use the CLI
$ helm show chart oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy

or visit kata-deploy-charts.
If one wants to wait until Helm has deployed every object in the chart, one can use `--wait --timeout 10m --atomic`. If the timeout expires or anything fails, Helm rolls the release back to its previous state.
# Install the release into the recommended namespace, kube-system
$ helm install kata-deploy \
    --namespace kube-system \
    --wait --timeout 10m --atomic \
    "${CHART}" --version "${VERSION}"

If one does not want to wait for the objects via Helm, or would rather use kubectl for that, use Helm like this:
# Install the release into the recommended namespace, kube-system
$ helm install kata-deploy \
    --namespace kube-system \
    "${CHART}" --version "${VERSION}"

Forgot to enable an option? Re-use the values already on the cluster and mutate only what you need:
# List existing releases
$ helm ls -A
# Upgrade in‑place, keeping everything else the same
$ helm upgrade kata-deploy -n kube-system \
--reuse-values \
--set env.defaultShim=qemu-runtime-rs \
"${CHART}" --version "${VERSION}"

To remove the release:

$ helm uninstall kata-deploy -n kube-system

All values can be overridden with --set key=value or a custom -f myvalues.yaml.
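For illustration, a hypothetical `myvalues.yaml` combining a few keys from the table below might look like:

```yaml
# myvalues.yaml - illustrative; keys and defaults are listed in the Configuration Reference
k8sDistribution: rke2            # default: k8s
env:
  debug: true                    # default: false
  defaultShim: qemu-runtime-rs   # default: qemu
```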
| Key | Description | Default |
|---|---|---|
| `imagePullPolicy` | Set the DaemonSet pull policy | `Always` |
| `imagePullSecrets` | Enable pulling from a private registry via pull secret | `""` |
| `image.reference` | Fully qualified image reference | `quay.io/kata-containers/kata-deploy` |
| `image.tag` | Tag of the image reference | `""` |
| `k8sDistribution` | Set the k8s distribution to use: `k8s`, `k0s`, `k3s`, `rke2`, `microk8s` | `k8s` |
| `nodeSelector` | Node labels for pod assignment; allows restricting deployment to specific nodes | `{}` |
| `runtimeClasses.enabled` | Enable Helm-managed runtimeClass creation (recommended) | `true` |
| `runtimeClasses.createDefault` | Create a default runtimeClass alias for the default shim | `false` |
| `runtimeClasses.defaultName` | Name for the default runtimeClass | `kata` |
| `env.debug` | Enable debugging in the `configuration.toml` | `false` |
| `env.shims` | List of shims to deploy | `clh cloud-hypervisor dragonball fc qemu qemu-coco-dev qemu-coco-dev-runtime-rs qemu-runtime-rs qemu-se-runtime-rs qemu-snp qemu-tdx stratovirt qemu-nvidia-gpu qemu-nvidia-gpu-snp qemu-nvidia-gpu-tdx qemu-cca` |
| `env.shims_x86_64` | List of shims to deploy for x86_64 (if set, overrides `shims`) | `""` |
| `env.shims_aarch64` | List of shims to deploy for aarch64 (if set, overrides `shims`) | `""` |
| `env.shims_s390x` | List of shims to deploy for s390x (if set, overrides `shims`) | `""` |
| `env.shims_ppc64le` | List of shims to deploy for ppc64le (if set, overrides `shims`) | `""` |
| `env.defaultShim` | The default shim to use if none specified | `qemu` |
| `env.defaultShim_x86_64` | The default shim to use for x86_64 (if set, overrides `defaultShim`) | `""` |
| `env.defaultShim_aarch64` | The default shim to use for aarch64 (if set, overrides `defaultShim`) | `""` |
| `env.defaultShim_s390x` | The default shim to use for s390x (if set, overrides `defaultShim`) | `""` |
| `env.defaultShim_ppc64le` | The default shim to use for ppc64le (if set, overrides `defaultShim`) | `""` |
| `env.createRuntimeClasses` | DEPRECATED - use `runtimeClasses.enabled` instead; script-based runtimeClass creation | `false` |
| `env.createDefaultRuntimeClass` | DEPRECATED - use `runtimeClasses.createDefault` instead | `false` |
| `env.allowedHypervisorAnnotations` | Enable the provided annotations when launching a container or pod; per default all annotations are disabled | `""` |
| `env.snapshotterHandlerMapping` | Provide the snapshotter handler for each shim | `""` |
| `env.snapshotterHandlerMapping_x86_64` | Provide the snapshotter handler for each shim for x86_64 (if set, overrides `snapshotterHandlerMapping`) | `""` |
| `env.snapshotterHandlerMapping_aarch64` | Provide the snapshotter handler for each shim for aarch64 (if set, overrides `snapshotterHandlerMapping`) | `""` |
| `env.snapshotterHandlerMapping_s390x` | Provide the snapshotter handler for each shim for s390x (if set, overrides `snapshotterHandlerMapping`) | `""` |
| `env.snapshotterHandlerMapping_ppc64le` | Provide the snapshotter handler for each shim for ppc64le (if set, overrides `snapshotterHandlerMapping`) | `""` |
| `env.agentHttpsProxy` | HTTPS proxy for the Kata agent (`HTTPS_PROXY=...`) | `""` |
| `env.agentNoProxy` | List of addresses that should bypass a configured proxy server | `""` |
| `env.pullTypeMapping` | Type of container image pulling; examples are `guest-pull` or `default` | `""` |
| `env.pullTypeMapping_x86_64` | Type of container image pulling for x86_64 (if set, overrides `pullTypeMapping`) | `""` |
| `env.pullTypeMapping_aarch64` | Type of container image pulling for aarch64 (if set, overrides `pullTypeMapping`) | `""` |
| `env.pullTypeMapping_s390x` | Type of container image pulling for s390x (if set, overrides `pullTypeMapping`) | `""` |
| `env.pullTypeMapping_ppc64le` | Type of container image pulling for ppc64le (if set, overrides `pullTypeMapping`) | `""` |
| `env.installationPrefix` | Prefix where to install the Kata artifacts | `/opt/kata` |
| `env.hostOS` | Provide a host-OS setting, e.g. `cbl-mariner`, to apply additional configuration | `""` |
| `env.multiInstallSuffix` | Enable multiple Kata installations on the same node via a suffix, e.g. `/opt/kata-PR12232` | `""` |
| `env._experimentalSetupSnapshotter` | Deploys (nydus) and/or sets up (erofs, nydus) the snapshotter(s) specified as the value (supports multiple snapshotters, comma-separated; e.g. `nydus,erofs`) | `""` |
| `env._experimentalForceGuestPull` | Enables `experimental_force_guest_pull` for the shim(s) specified as the value (supports multiple shims, comma-separated; e.g. `qemu-tdx,qemu-snp`) | `""` |
| `env._experimentalForceGuestPull_x86_64` | Enables `experimental_force_guest_pull` for the shim(s) specified as the value for x86_64 (if set, overrides `_experimentalForceGuestPull`) | `""` |
| `env._experimentalForceGuestPull_aarch64` | Enables `experimental_force_guest_pull` for the shim(s) specified as the value for aarch64 (if set, overrides `_experimentalForceGuestPull`) | `""` |
| `env._experimentalForceGuestPull_s390x` | Enables `experimental_force_guest_pull` for the shim(s) specified as the value for s390x (if set, overrides `_experimentalForceGuestPull`) | `""` |
| `env._experimentalForceGuestPull_ppc64le` | Enables `experimental_force_guest_pull` for the shim(s) specified as the value for ppc64le (if set, overrides `_experimentalForceGuestPull`) | `""` |
NEW: Starting with Kata Containers v3.23.0, a new structured configuration format is available for configuring shims. This provides better type safety, clearer organization, and per-shim configuration options.
The legacy env.* configuration format is deprecated and will be removed in 2 releases. Users are encouraged to migrate to the new structured format.
Deprecated fields (will be removed in 2 releases):
- `env.shims`, `env.shims_x86_64`, `env.shims_aarch64`, `env.shims_s390x`, `env.shims_ppc64le`
- `env.defaultShim`, `env.defaultShim_x86_64`, `env.defaultShim_aarch64`, `env.defaultShim_s390x`, `env.defaultShim_ppc64le`
- `env.allowedHypervisorAnnotations`
- `env.snapshotterHandlerMapping`, `env.snapshotterHandlerMapping_x86_64`, etc.
- `env.pullTypeMapping`, `env.pullTypeMapping_x86_64`, etc.
- `env.agentHttpsProxy`, `env.agentNoProxy`
- `env._experimentalSetupSnapshotter`
- `env._experimentalForceGuestPull`, `env._experimentalForceGuestPull_x86_64`, etc.
- `env.debug`
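As an illustration of the migration, a legacy override and a structured equivalent might look roughly like this (a sketch; the comma-separated `shim:snapshotter` mapping syntax is assumed here, and the exact structured layout is described in the format section):

```yaml
# Legacy (deprecated) env.* format:
env:
  shims: "qemu qemu-snp"
  defaultShim: "qemu"
  snapshotterHandlerMapping: "qemu-snp:nydus"
---
# Structured equivalent (v3.23.0+):
shims:
  disableAll: true
  qemu:
    enabled: true
  qemu-snp:
    enabled: true
    containerd:
      snapshotter: nydus
defaultShim:
  amd64: qemu
```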
The new format uses a shims section where each shim can be configured individually:
# Enable debug mode globally
debug: false
# Configure snapshotter setup
snapshotter:
setup: [] # ["nydus", "erofs"] or []
# Configure shims
shims:
# Disable all shims at once (useful when enabling only specific shims or custom runtimes)
disableAll: false
qemu:
enabled: true
supportedArches:
- amd64
- arm64
- s390x
- ppc64le
allowedHypervisorAnnotations: []
containerd:
snapshotter: ""
qemu-snp:
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
containerd:
snapshotter: nydus
forceGuestPull: false
crio:
guestPull: true
agent:
httpsProxy: ""
noProxy: ""
# Default shim per architecture
defaultShim:
amd64: qemu
arm64: qemu
s390x: qemu
ppc64le: qemu

- Per-shim configuration: Each shim can have its own settings for snapshotter, guest pull, agent proxy, etc.
- Architecture-aware: Shims declare which architectures they support
- Type safety: Structured format reduces configuration errors
- Easy to use: All shims are enabled by default in `values.yaml`, so you can use the chart directly without modification
- Disable all at once: Use `shims.disableAll: true` to disable all standard shims, useful when enabling only specific shims or using custom runtimes only
shims:
qemu:
enabled: true
supportedArches:
- amd64
- arm64
defaultShim:
amd64: qemu
arm64: qemu

The chart maintains full backward compatibility with the legacy env.* format. If legacy values are set, they take precedence over the new structured format. This allows for gradual migration.
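To illustrate the precedence rule, a hypothetical values file setting both formats would behave like this during migration:

```yaml
# If both formats are present, the legacy env.* value wins.
env:
  defaultShim: qemu-runtime-rs   # legacy: takes precedence
defaultShim:
  amd64: qemu                    # structured: ignored while env.defaultShim is set
```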
The default values.yaml file has all shims enabled by default, making it easy to use the chart directly without modification:
helm install kata-deploy oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy \
--version VERSION

This includes all available Kata Containers shims:

- Standard shims: `qemu`, `qemu-runtime-rs`, `clh`, `cloud-hypervisor`, `dragonball`, `fc`
- TEE shims: `qemu-snp`, `qemu-snp-runtime-rs`, `qemu-tdx`, `qemu-tdx-runtime-rs`, `qemu-se`, `qemu-se-runtime-rs`, `qemu-cca`, `qemu-coco-dev`, `qemu-coco-dev-runtime-rs`
- NVIDIA GPU shims: `qemu-nvidia-gpu`, `qemu-nvidia-gpu-snp`, `qemu-nvidia-gpu-tdx`
- Remote shims: `remote` (for peer-pods / cloud-api-adaptor, disabled by default)
To enable only specific shims, you can override the configuration:
# Custom values file - enable only qemu shim
shims:
qemu:
enabled: true
clh:
enabled: false
cloud-hypervisor:
enabled: false
# ... disable other shims as needed

For convenience, we also provide example values files that demonstrate specific use cases:
This file enables only the TEE (Trusted Execution Environment) shims for confidential computing:
helm install kata-deploy oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy \
--version VERSION \
-f try-kata-tee.values.yaml

Includes:

- `qemu-snp` - AMD SEV-SNP (amd64)
- `qemu-tdx` - Intel TDX (amd64)
- `qemu-se` - IBM Secure Execution (s390x)
- `qemu-se-runtime-rs` - IBM Secure Execution Rust runtime (s390x)
- `qemu-cca` - Arm Confidential Compute Architecture (arm64)
- `qemu-coco-dev` - Confidential Containers development (amd64, s390x)
- `qemu-coco-dev-runtime-rs` - Confidential Containers development Rust runtime (amd64, s390x)
This file enables only the NVIDIA GPU-enabled shims:
helm install kata-deploy oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy \
--version VERSION \
-f try-kata-nvidia-gpu.values.yaml

Includes:

- `qemu-nvidia-gpu` - Standard NVIDIA GPU support (amd64, arm64)
- `qemu-nvidia-gpu-snp` - NVIDIA GPU with AMD SEV-SNP (amd64)
- `qemu-nvidia-gpu-tdx` - NVIDIA GPU with Intel TDX (amd64)
Note: These example files are located in the chart directory. When installing from the OCI registry, you'll need to download them separately or clone the repository to access them.
NEW: Starting with Kata Containers v3.23.0, runtimeClasses are managed by
Helm by default, providing better lifecycle management and integration.
- Automatic Creation: `runtimeClasses` are automatically created for all configured shims
- Lifecycle Management: Helm manages creation, updates, and deletion of `runtimeClasses`
runtimeClasses:
enabled: true # Enable Helm-managed `runtimeClasses` (default)
createDefault: false # Create a default "kata" `runtimeClass`
defaultName: "kata" # Name for the default `runtimeClass`

When runtimeClasses.enabled: true (default), the Helm chart creates
runtimeClass resources for all enabled shims (either from the new structured
shims configuration or from the legacy env.shims format).
The kata-deploy script will no longer create runtimeClasses
(env.createRuntimeClasses defaults to "false").
Use shims.disableAll=true to disable all shims at once, then enable only the ones you need:
# Using --set flags (disable all, then enable qemu)
$ helm install kata-deploy \
--set shims.disableAll=true \
--set shims.qemu.enabled=true \
--set debug=true \
"${CHART}" --version "${VERSION}"

Or use a custom values file:
# custom-values.yaml
debug: true
shims:
disableAll: true
qemu:
    enabled: true

$ helm install kata-deploy \
-f custom-values.yaml \
"${CHART}" --version "${VERSION}"

# First, label the nodes where you want kata-containers to be installed
$ kubectl label nodes worker-node-1 kata-containers=enabled
$ kubectl label nodes worker-node-2 kata-containers=enabled
# Then install the chart with `nodeSelector`
$ helm install kata-deploy \
--set nodeSelector.kata-containers="enabled" \
"${CHART}" --version "${VERSION}"

You can also use a values file:
# values.yaml
nodeSelector:
kata-containers: "enabled"
node-type: "worker"

$ helm install kata-deploy -f values.yaml "${CHART}" --version "${VERSION}"

For debugging, testing, and other use-cases it is possible to deploy multiple versions of Kata on the very same node. All the needed artifacts get the multiInstallSuffix appended to distinguish each installation. BEWARE that one needs at least containerd 2.0, since this version has drop-in config support, which is a prerequisite for the multiInstallSuffix to work properly.
$ helm install kata-deploy-cicd \
-n kata-deploy-cicd \
--set env.multiInstallSuffix=cicd \
--set env.debug=true \
"${CHART}" --version "${VERSION}"

Note: runtimeClasses are automatically created by Helm (via runtimeClasses.enabled=true, which is the default).
Now verify the installation by examining the runtimeClasses:
$ kubectl get runtimeclasses
NAME HANDLER AGE
kata-clh-cicd kata-clh-cicd 77s
kata-cloud-hypervisor-cicd kata-cloud-hypervisor-cicd 77s
kata-dragonball-cicd kata-dragonball-cicd 77s
kata-fc-cicd kata-fc-cicd 77s
kata-qemu-cicd kata-qemu-cicd 77s
kata-qemu-coco-dev-cicd kata-qemu-coco-dev-cicd 77s
kata-qemu-nvidia-gpu-cicd kata-qemu-nvidia-gpu-cicd 77s
kata-qemu-nvidia-gpu-snp-cicd kata-qemu-nvidia-gpu-snp-cicd 77s
kata-qemu-nvidia-gpu-tdx-cicd kata-qemu-nvidia-gpu-tdx-cicd 76s
kata-qemu-runtime-rs-cicd kata-qemu-runtime-rs-cicd 77s
kata-qemu-se-runtime-rs-cicd kata-qemu-se-runtime-rs-cicd 77s
kata-qemu-snp-cicd kata-qemu-snp-cicd 77s
kata-qemu-tdx-cicd kata-qemu-tdx-cicd 77s
kata-stratovirt-cicd kata-stratovirt-cicd 77s