articles/aks/use-pod-sandboxing.md
This article helps you understand this new feature, and how to implement it.

- The `aks-preview` Azure CLI extension version 0.5.123 or later to select the [Mariner operating system][mariner-cluster-config] generation 2 SKU.
- Register the `KataVMIsolationPreview` feature in your Azure subscription.
- AKS supports Pod Sandboxing (preview) on version 1.24.0 and higher.
- To manage a Kubernetes cluster, use the Kubernetes command-line client [kubectl][kubectl]. Azure Cloud Shell comes with `kubectl`. You can install kubectl locally using the [az aks install-cli][az-aks-install-cmd] command.

### Install the aks-preview Azure CLI extension
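The installation steps themselves fall outside this excerpt; as a minimal sketch, installing or updating the extension usually looks like the following (the article's own steps may differ slightly):

```azurecli
# Install the aks-preview extension, then make sure it's up to date.
az extension add --name aks-preview
az extension update --name aks-preview
```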
The following are constraints with this preview of Pod Sandboxing (preview):

* AKS does not support [Container Storage Interface drivers][csi-storage-driver] and [Secrets Store CSI driver][csi-secret-store driver] in this preview release.
## How it works

To achieve this functionality on AKS, [Kata Containers][kata-containers-overview] running on the Mariner AKS Container Host (MACH) stack delivers hardware-enforced isolation. Pod Sandboxing extends the benefits of hardware isolation, such as a separate kernel for each Kata pod. Hardware isolation allocates resources for each pod and doesn't share them with other Kata Containers or namespace containers running on the same host.

The solution architecture is based on the following components:
* Integration with [Kata Container][kata-container] framework

Deploying Pod Sandboxing using Kata Containers is similar to the standard containerd workflow to deploy containers. The deployment includes kata-runtime options that you can define in the pod template.

To use this feature with a pod, the only difference is to add **runtimeClassName**: *kata-mshv-vm-isolation* to the pod spec.

When a pod uses the *kata-mshv-vm-isolation* runtimeClass, it creates a VM to serve as the pod sandbox to host the containers. The VM's default memory is 2 GB and the default CPU is one core if the [Container resource manifest][container-resource-manifest] (`containers[].resources.limits`) doesn't specify a limit for CPU and memory. When you specify a CPU or memory limit in the container resource manifest, the VM is allocated the `containers[].resources.limits.cpu` value plus one extra core, and the `containers[].resources.limits.memory` value plus an extra 2 GB. Containers can only use CPU and memory up to the limits specified for the containers. The `containers[].resources.requests` settings are ignored in this preview while we work to reduce the CPU and memory overhead.
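As a minimal sketch, a pod spec that opts into Pod Sandboxing might look like the following (the pod name `untrusted` and the `nginx` image are illustrative assumptions, not values from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted            # illustrative name
spec:
  # Request the Kata VM-isolated runtime described above.
  runtimeClassName: kata-mshv-vm-isolation
  containers:
  - name: app
    image: nginx             # illustrative image
    resources:
      limits:
        cpu: "1"             # the sandbox VM gets this plus one extra core
        memory: 2Gi          # the sandbox VM gets this plus an extra 2 GB
```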
## Deploy new cluster

Perform the following steps to deploy an AKS Mariner cluster using the Azure CLI.

1. Create an AKS cluster using the [az aks create][az-aks-create] command and specifying the following parameters:

   * **--workload-runtime**: Specify *KataMshvVmIsolation* to enable the Pod Sandboxing feature on the node pool. With this parameter, the other parameters must satisfy the following requirements. Otherwise, the command fails and reports an issue with the corresponding parameter(s).
   * **--os-sku**: *mariner*. Only the Mariner os-sku supports this feature in this preview release.
   * **--node-vm-size**: Any Azure VM size that is a generation 2 VM and supports nested virtualization works. For example, [Dsv3][dv3-series] VMs.
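Putting those parameters together, a hedged example invocation (the resource group, cluster name, and VM size are assumptions, not values from this article) might be:

```azurecli
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --os-sku mariner \
    --workload-runtime KataMshvVmIsolation \
    --node-vm-size Standard_D4s_v3
```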
Use the following command to enable Pod Sandboxing (preview) by creating a node pool to host it.

* **--resource-group**: Enter the name of an existing resource group to create the AKS cluster in.
* **--cluster-name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
* **--name**: Enter a unique name for your cluster's node pool, such as *nodepool2*.
* **--workload-runtime**: Specify *KataMshvVmIsolation* to enable the Pod Sandboxing feature on the node pool. Along with the `--workload-runtime` parameter, the other parameters must satisfy the following requirements. Otherwise, the command fails and reports an issue with the corresponding parameter(s).
* **--os-sku**: *mariner*. Only the Mariner os-sku supports this feature in the preview release.
* **--node-vm-size**: Any Azure VM size that is a generation 2 VM and supports nested virtualization works. For example, [Dsv3][dv3-series] VMs.
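Combining the parameters above, a sketch of the node pool command (cluster name, resource group, pool name, and VM size are assumptions) could be:

```azurecli
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name nodepool2 \
    --os-sku mariner \
    --workload-runtime KataMshvVmIsolation \
    --node-vm-size Standard_D4s_v3
```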
## Verify Kernel Isolation configuration

1. To access a container inside the AKS cluster, start a shell session by running the [kubectl exec][kubectl-exec] command. In this example, you're accessing the container inside the *untrusted* pod.

   ```bash
   kubectl exec -it untrusted -- /bin/bash
   ```

   Kubectl connects to your cluster, runs `/bin/bash` inside the first container within the *untrusted* pod, and forwards your terminal's input and output streams to the container's process. You can also start a shell session to the container hosting the *trusted* pod.

2. After starting a shell session to the container of the *untrusted* pod, you can run commands to verify that the *untrusted* container is running in a pod sandbox. You'll notice that it has a different kernel version compared to the *trusted* container outside the sandbox.

   To see the kernel version, run the following command:
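The kernel-version command itself falls outside this excerpt; on Linux, a typical way to check (an assumption, not confirmed by this excerpt) is `uname -r`:

```shell
# Print the running kernel release; inside the Kata sandbox this differs
# from the kernel version reported by the trusted pod on the shared host.
uname -r
```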
## Cleanup

When you're finished evaluating this feature, to avoid Azure charges, clean up your unnecessary resources. If you deployed a new cluster as part of your evaluation or testing, you can delete the cluster using the [az aks delete][az-aks-delete] command.

```azurecli
az aks delete --resource-group myResourceGroup --name myAKSCluster
```