[AppArmor](https://apparmor.net/) is a Linux kernel security module that supplements the standard Linux user and group based
permissions to confine programs to a limited set of resources. AppArmor can be configured for any
application to reduce its potential attack surface and provide greater in-depth defense. It is
configured through profiles tuned to allow the access needed by a specific program or container,
such as Linux capabilities, network access, file permissions, etc. Each profile can be run in either
*enforcing* mode, which blocks access to disallowed resources, or *complain* mode, which only reports
violations.
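For illustration, here is a sketch of what a restrictive AppArmor profile looks like. The profile
name is only an example; this policy denies all file writes while allowing the access granted by
the base abstraction:

```
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
```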
On Kubernetes, AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
do, and/or provide better auditing through system logs. However, it is important to keep in mind
that AppArmor is not a silver bullet and can only do so much to protect against exploits in your
application code. It is important to provide good, restrictive profiles, and harden your
applications and cluster from other angles as well.
## {{% heading "prerequisites" %}}
AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
nodes before proceeding:
1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
   AppArmor kernel module must be installed and enabled. Several distributions enable the module by
   default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the
   module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:

   ```shell
   cat /sys/module/apparmor/parameters/enabled
   ```
   ```
   Y
   ```

   The Kubelet will verify that AppArmor is enabled on the host before admitting a pod with AppArmor
   explicitly configured.

2. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container
   runtimes should support AppArmor, including {{< glossary_tooltip term_id="containerd" >}} and
   {{< glossary_tooltip term_id="cri-o" >}}. Please refer to the corresponding runtime
   documentation and verify that the cluster fulfills the requirements to use AppArmor.

3. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
   container should be run with. If any of the specified profiles is not already loaded in the
   kernel, the Kubelet will reject the Pod. You can view which profiles are loaded on a
   node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:

   ```shell
   sudo cat /sys/kernel/security/apparmor/profiles
   ```

   For more details on loading profiles on nodes, see
   [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
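These node-side checks can be combined into a small script. The following is a sketch; the
profile name `k8s-apparmor-example-deny-write` is only an example, and reading the profiles
file may require root:

```shell
# Check whether AppArmor is enabled and whether a given profile is loaded.
PROFILE="k8s-apparmor-example-deny-write"  # substitute the profile your Pods reference

if [ "$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null)" != "Y" ]; then
  echo "AppArmor is not enabled on this node"
elif grep -q "^${PROFILE} " /sys/kernel/security/apparmor/profiles 2>/dev/null; then
  echo "profile ${PROFILE} is loaded"
else
  echo "profile ${PROFILE} is not loaded"
fi
```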
102
82
103
-
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod
104
-
with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support
105
-
on nodes by checking the node ready condition message (though this is likely to be removed in a
106
-
later release):
107
-
108
-
```shell
109
-
kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
110
-
```
111
-
```
112
-
gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
113
-
gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
114
-
gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
115
-
```
116
-
117
-
118
-
119
83
<!-- lessoncontent -->
## Securing a Pod
AppArmor profiles are specified *per-container*. To apply an AppArmor profile to a Pod container,
add an annotation to the Pod's metadata:

```yaml
container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
```

Where `<container_name>` is the name of the container to apply the profile to, and `<profile_ref>`
specifies the profile to apply. The `profile_ref` can be one of:

* `runtime/default` to apply the runtime's default profile
* `localhost/<profile_name>` to apply the profile loaded on the host with the name `<profile_name>`
* `unconfined` to indicate that no profiles will be loaded
See the [API Reference](#api-reference) for the full details on the annotation and profile name formats.
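Putting this together, a minimal Pod manifest that applies a loaded profile might look like the
following sketch (the profile name `k8s-apparmor-example-deny-write` and the container command are
illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # Apply the profile "k8s-apparmor-example-deny-write" (already loaded on the
    # node) to the container named "hello".
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
```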
To verify that the profile was applied, you can check that the container's root process is
running with the correct profile by examining its proc attr:
```shell
kubectl exec <pod_name> -- cat /proc/1/attr/current
```

The output should look something like this:

```
k8s-apparmor-example-deny-write (enforce)
```
## Example

*This example assumes you have already set up a cluster with AppArmor support, and loaded the
example profile `k8s-apparmor-example-deny-write` onto your nodes.*

Run a Pod with the profile applied to its container:

```shell
kubectl create -f hello-apparmor.yaml
```
We can verify that the container is actually running with that profile by checking its proc attr:
```shell
kubectl exec hello-apparmor -- cat /proc/1/attr/current
```
```
k8s-apparmor-example-deny-write (enforce)
```

To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
If the referenced profile has not been loaded on the node, the container fails to start. The Pod
events (from `kubectl describe pod`) show the failure:

```
Events:
Type     Reason     Age              From               Message
----     ------     ----             ----               -------
Normal   Scheduled  10s              default-scheduler  Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf
Normal   Pulled     8s               kubelet            Successfully pulled image "busybox:1.28" in 370.157088ms (370.172701ms including waiting)
Normal   Pulling    7s (x2 over 9s)  kubelet            Pulling image "busybox:1.28"
Warning  Failed     7s (x2 over 8s)  kubelet            Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write
Normal   Pulled     7s               kubelet            Successfully pulled image "busybox:1.28" in 90.980331ms (91.005869ms including waiting)
```
An event provides the error message with the reason. Note that the specific wording is runtime-dependent:
```
Warning  Failed     7s (x2 over 8s)  kubelet            Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found
```
## Administration
### Setting up nodes with profiles
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
nodes. There are lots of ways to set up the profiles though, such as:

* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
  ensure the correct profiles are loaded. An example implementation can be found