Commit 49ace75

Refresh AppArmor documentation
1 parent 5341afa commit 49ace75

2 files changed (+41 -154 lines)


content/en/docs/tutorials/security/apparmor.md

Lines changed: 41 additions & 153 deletions
@@ -11,15 +11,15 @@ weight: 30
 {{< feature-state for_k8s_version="v1.4" state="beta" >}}
 
 
-AppArmor is a Linux kernel security module that supplements the standard Linux user and group based
+[AppArmor](https://apparmor.net/) is a Linux kernel security module that supplements the standard Linux user and group based
 permissions to confine programs to a limited set of resources. AppArmor can be configured for any
 application to reduce its potential attack surface and provide greater in-depth defense. It is
 configured through profiles tuned to allow the access needed by a specific program or container,
 such as Linux capabilities, network access, file permissions, etc. Each profile can be run in either
 *enforcing* mode, which blocks access to disallowed resources, or *complain* mode, which only reports
 violations.
 
-AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
+On Kubernetes, AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
 do, and/or provide better auditing through system logs. However, it is important to keep in mind
 that AppArmor is not a silver bullet and can only do so much to protect against exploits in your
 application code. It is important to provide good, restrictive profiles, and harden your
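As an aside on the enforce/complain distinction above: a quick way to see which mode each loaded profile is in on a node is `aa-status` — a minimal sketch, assuming the AppArmor userspace utilities are installed (this command is not part of the commit):

```shell
# Lists loaded profiles grouped by mode ("... profiles are in enforce mode",
# "... profiles are in complain mode").
sudo aa-status
```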
@@ -41,23 +41,10 @@ applications and cluster from other angles as well.
 ## {{% heading "prerequisites" %}}
 
 
-Make sure:
+AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
+nodes before proceeding:
 
-1. Kubernetes version is at least v1.4 -- Kubernetes support for AppArmor was added in
-   v1.4. Kubernetes components older than v1.4 are not aware of the new AppArmor annotations, and
-   will **silently ignore** any AppArmor settings that are provided. To ensure that your Pods are
-   receiving the expected protections, it is important to verify the Kubelet version of your nodes:
-
-   ```shell
-   kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
-   ```
-   ```
-   gke-test-default-pool-239f5d02-gyn2: v1.4.0
-   gke-test-default-pool-239f5d02-x1kf: v1.4.0
-   gke-test-default-pool-239f5d02-xwux: v1.4.0
-   ```
-
-2. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
+1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
   AppArmor kernel module must be installed and enabled. Several distributions enable the module by
   default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the
   module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:
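The hunk below shows only the expected output of that check (`Y`); the check itself sits outside the diff's context and is simply a read of the file named above:

```shell
cat /sys/module/apparmor/parameters/enabled
```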
@@ -67,24 +54,17 @@ Make sure:
    Y
    ```
 
-   If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor
-   options if the kernel module is not enabled.
-
-   {{< note >}}
-   Ubuntu carries many AppArmor patches that have not been merged into the upstream Linux
-   kernel, including patches that add additional hooks and features. Kubernetes has only been
-   tested with the upstream version, and does not promise support for other features.
-   {{< /note >}}
+   The Kubelet will verify that AppArmor is enabled on the host before admitting a pod with AppArmor
+   explicitly configured.
 
 3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container
-   runtimes should support AppArmor, like {{< glossary_tooltip term_id="docker">}},
-   {{< glossary_tooltip term_id="cri-o" >}} or {{< glossary_tooltip term_id="containerd" >}}.
-   Please refer to the corresponding runtime documentation and verify that the cluster fulfills
-   the requirements to use AppArmor.
+   runtimes should support AppArmor, including {{< glossary_tooltip term_id="containerd" >}} and
+   {{< glossary_tooltip term_id="cri-o" >}}. Please refer to the corresponding runtime
+   documentation and verify that the cluster fulfills the requirements to use AppArmor.
 
 4. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
    container should be run with. If any of the specified profiles is not already loaded in the
-   kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are loaded on a
+   kernel, the Kubelet will reject the Pod. You can view which profiles are loaded on a
    node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:
 
    ```shell
@@ -100,22 +80,6 @@ Make sure:
    For more details on loading profiles on nodes, see
    [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
 
-As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod
-with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support
-on nodes by checking the node ready condition message (though this is likely to be removed in a
-later release):
-
-```shell
-kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
-```
-```
-gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
-gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
-gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
-```
-
-
-
 <!-- lessoncontent -->
 
 ## Securing a Pod
@@ -141,24 +105,15 @@ specifies the profile to apply. The `profile_ref` can be one of:
 
 See the [API Reference](#api-reference) for the full details on the annotation and profile name formats.
 
-Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been
-met, and then forwarding the profile selection to the container runtime for enforcement. If the
-prerequisites have not been met, the Pod will be rejected, and will not run.
-
-To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:
+To verify that the profile was applied, you can check that the container's root process is
+running with the correct profile by examining its proc attr:
 
 ```shell
-kubectl get events | grep Created
-```
-```
-22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
+kubectl exec <pod_name> -- cat /proc/1/attr/current
 ```
 
-You can also verify directly that the container's root process is running with the correct profile by checking its proc attr:
+The output should look something like this:
 
-```shell
-kubectl exec <pod_name> -- cat /proc/1/attr/current
-```
 ```
 k8s-apparmor-example-deny-write (enforce)
 ```
@@ -169,7 +124,7 @@ k8s-apparmor-example-deny-write (enforce)
 
 First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
 
-```shell
+```
 #include <tunables/global>
 
 profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
@@ -187,11 +142,9 @@ nodes. For this example we'll use SSH to install the profiles, but other approac
 discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
 
 ```shell
-NODES=(
-# The SSH-accessible domain names of your nodes
-gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
-gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
-gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
+# This example assumes that node names match host names, and are reachable via SSH.
+NODES=($(kubectl get nodes -o name))
+
 for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
 #include <tunables/global>
 
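As a hedged follow-up to the loop above (not part of this commit): once `apparmor_parser` has run on every node, the profile should appear in the kernel's profile list, which can be spot-checked over the same SSH access:

```shell
# Assumes the NODES array from the snippet above is still in scope; reading
# the profiles file typically requires root.
for NODE in ${NODES[*]}; do
  ssh $NODE 'sudo grep k8s-apparmor-example-deny-write /sys/kernel/security/apparmor/profiles'
done
```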
@@ -212,21 +165,7 @@ Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
 {{% code_sample file="pods/security/hello-apparmor.yaml" %}}
 
 ```shell
-kubectl create -f ./hello-apparmor.yaml
-```
-
-If we look at the pod events, we can see that the Pod container was created with the AppArmor
-profile "k8s-apparmor-example-deny-write":
-
-```shell
-kubectl get events | grep hello-apparmor
-```
-```
-14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2
-14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
-13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
-13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet gke-test-default-pool-239f5d02-gyn2} Created container with docker id 06b6cd1c0989; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
-13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started {kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989
+kubectl create -f hello-apparmor.yaml
 ```
 
 We can verify that the container is actually running with that profile by checking its proc attr:
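The proc attr check itself falls outside this hunk's context; it is the same command shown earlier in the diff, applied to this pod:

```shell
kubectl exec hello-apparmor -- cat /proc/1/attr/current
```

As before, the expected output is `k8s-apparmor-example-deny-write (enforce)`.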
@@ -252,8 +191,6 @@ To wrap up, let's look at what happens if we try to specify a profile that hasn'
 
 ```shell
 kubectl create -f /dev/stdin <<EOF
-```
-```yaml
 apiVersion: v1
 kind: Pod
 metadata:
@@ -266,102 +203,54 @@ spec:
     image: busybox:1.28
     command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
 EOF
+```
+```
 pod/hello-apparmor-2 created
 ```
 
+Although the pod was created successfully, further examination will show that it is stuck in pending:
+
 ```shell
 kubectl describe pod hello-apparmor-2
 ```
 ```
 Name:          hello-apparmor-2
 Namespace:     default
-Node:          gke-test-default-pool-239f5d02-x1kf/
+Node:          gke-test-default-pool-239f5d02-x1kf/10.128.0.27
 Start Time:    Tue, 30 Aug 2016 17:58:56 -0700
 Labels:        <none>
 Annotations:   container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write
 Status:        Pending
-Reason:        AppArmor
-Message:       Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
-IP:
-Controllers:   <none>
-Containers:
-  hello:
-    Container ID:
-    Image:          busybox
-    Image ID:
-    Port:
-    Command:
-      sh
-      -c
-      echo 'Hello AppArmor!' && sleep 1h
-    State:          Waiting
-      Reason:       Blocked
-    Ready:          False
-    Restart Count:  0
-    Environment:    <none>
-    Mounts:
-      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dnz7v (ro)
-Conditions:
-  Type          Status
-  Initialized   True
-  Ready         False
-  PodScheduled  True
-Volumes:
-  default-token-dnz7v:
-    Type:        Secret (a volume populated by a Secret)
-    SecretName:  default-token-dnz7v
-    Optional:    false
-QoS Class:       BestEffort
-Node-Selectors:  <none>
-Tolerations:     <none>
+...
 Events:
-  FirstSeen  LastSeen  Count  From                                        SubobjectPath  Type     Reason     Message
-  ---------  --------  -----  ----                                        -------------  ----     ------     -------
-  23s        23s       1      {default-scheduler }                                       Normal   Scheduled  Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
-  23s        23s       1      {kubelet e2e-test-stclair-node-pool-t1f5}                  Warning  AppArmor   Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
+  Type     Reason     Age              From               Message
+  ----     ------     ----             ----               -------
+  Normal   Scheduled  10s              default-scheduler  Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf
+  Normal   Pulled     8s               kubelet            Successfully pulled image "busybox:1.28" in 370.157088ms (370.172701ms including waiting)
+  Normal   Pulling    7s (x2 over 9s)  kubelet            Pulling image "busybox:1.28"
+  Warning  Failed     7s (x2 over 8s)  kubelet            Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write
+  Normal   Pulled     7s               kubelet            Successfully pulled image "busybox:1.28" in 90.980331ms (91.005869ms including waiting)
 ```
 
-Note the pod status is Pending, with a helpful error message: `Pod Cannot enforce AppArmor: profile
-"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message.
+An event provides the error message with the reason. Note that the specific wording is runtime-dependent:
+```
+Warning  Failed  7s (x2 over 8s)  kubelet  Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found
+```
 
 ## Administration
 
 ### Setting up nodes with profiles
 
-Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
-nodes. There are lots of ways to set up the profiles though, such as:
-
-* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
-  ensure the correct profiles are loaded. An example implementation can be found
-  [here](https://git.k8s.io/kubernetes/test/images/apparmor-loader).
-* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
-  image.
-* By copying the profiles to each node and loading them through SSH, as demonstrated in the
-  [Example](#example).
+Kubernetes does not currently provide any built-in mechanisms for loading AppArmor profiles onto
+nodes. Profiles can be loaded through custom infrastructure or tools like the
+[Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator).
 
 The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
 must be loaded onto every node. An alternative approach is to add a node label for each profile (or
 class of profiles) on the node, and use a
 [node selector](/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a
 node with the required profile.
 
-### Disabling AppArmor
-
-If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:
-
-```
---feature-gates=AppArmor=false
-```
-
-When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden"
-error.
-
-{{<note>}}
-Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile. The
-option to disable the AppArmor feature will be removed when AppArmor graduates to general
-availability (GA).
-{{</note>}}
-
 ## Authoring Profiles
 
 Getting AppArmor profiles specified correctly can be a tricky business. Fortunately there are some
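The node-label alternative described under "Setting up nodes with profiles" could look something like the sketch below; the label key and pod are illustrative, not part of the commit:

```shell
# Hypothetical example: mark nodes that carry the deny-write profile, then
# steer a pod onto them with a nodeSelector.
kubectl label node <node_name> apparmor-profile/deny-write=loaded

kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-selector
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  nodeSelector:
    apparmor-profile/deny-write: "loaded"
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```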
@@ -393,8 +282,7 @@ Specifying the profile a container will run with:
 ### Profile Reference
 
 - `runtime/default`: Refers to the default runtime profile.
-  - Equivalent to not specifying a profile, except it still
-    requires AppArmor to be enabled.
+  - Equivalent to not specifying a profile, except it still requires AppArmor to be enabled.
   - In practice, many container runtimes use the same OCI default profile, defined here:
     https://github.com/containers/common/blob/main/pkg/apparmor/apparmor_linux_template.go
 - `localhost/<profile_name>`: Refers to a profile loaded on the node (localhost) by name.
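For the `runtime/default` form, a minimal hedged sketch (the pod name is illustrative, not from the commit):

```shell
# Request the container runtime's default AppArmor profile instead of a
# custom profile loaded on the node.
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-default
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: runtime/default
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```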

content/en/examples/pods/security/hello-apparmor.yaml

Lines changed: 0 additions & 1 deletion
@@ -4,7 +4,6 @@ metadata:
   name: hello-apparmor
   annotations:
     # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
-    # Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
     container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
 spec:
   containers:
