Commit e6be053

Make AppArmor docs conform to style guide
1 parent 49ace75 commit e6be053

File tree

1 file changed: +23 −21 lines

content/en/docs/tutorials/security/apparmor.md

Lines changed: 23 additions & 21 deletions
@@ -30,7 +30,7 @@ applications and cluster from other angles as well.
 ## {{% heading "objectives" %}}
 
 
-* See an example of how to load a profile on a node
+* See an example of how to load a profile on a Node
 * Learn how to enforce the profile on a Pod
 * Learn how to check that the profile is loaded
 * See what happens when a profile is violated
@@ -42,7 +42,7 @@ applications and cluster from other angles as well.
 
 
 AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
-nodes before proceeding:
+Nodes before proceeding:
 
 1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
    AppArmor kernel module must be installed and enabled. Several distributions enable the module by
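
As an aside, the prerequisite check this hunk refers to can be sketched as a small pre-flight script. This is an illustrative sketch, not part of the commit; it reads the standard `/sys/module/apparmor/parameters/enabled` file, which contains `Y` when the module is enabled:

```shell
# Illustrative pre-flight check (not part of this commit): report whether
# the AppArmor kernel module is enabled on this host. The kernel exposes
# "Y" in this file when the module is loaded and enabled.
enabled_file="/sys/module/apparmor/parameters/enabled"
if [ -f "$enabled_file" ] && grep -q Y "$enabled_file"; then
  echo "AppArmor enabled"
else
  echo "AppArmor not available"
fi
```

A node setup tool could run this before attempting to load any profiles.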
@@ -54,16 +54,16 @@ nodes before proceeding:
    Y
    ```
 
-   The Kubelet will verify that AppArmor is enabled on the host before admitting a pod with AppArmor
+   The Kubelet verifies that AppArmor is enabled on the host before admitting a pod with AppArmor
    explicitly configured.
 
-3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container
+3. Container runtime supports AppArmor -- All common Kubernetes-supported container
    runtimes should support AppArmor, including {{< glossary_tooltip term_id="containerd" >}} and
    {{< glossary_tooltip term_id="cri-o" >}}. Please refer to the corresponding runtime
    documentation and verify that the cluster fulfills the requirements to use AppArmor.
 
 4. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
-   container should be run with. If any of the specified profiles is not already loaded in the
+   container should be run with. If any of the specified profiles are not loaded in the
    kernel, the Kubelet will reject the Pod. You can view which profiles are loaded on a
    node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:
 
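The profile check described in this hunk could look like the following as a node setup pre-flight step. This is an illustrative sketch, not part of the commit; a temporary sample file stands in for `/sys/kernel/security/apparmor/profiles`, whose entries have the form `<profile-name> (<mode>)`:

```shell
# Illustrative sketch (not part of this commit): check that a named profile
# appears in the kernel's loaded-profile list before referencing it from a
# Pod. A temporary sample file stands in for
# /sys/kernel/security/apparmor/profiles here.
profiles_file="$(mktemp)"
printf '%s\n' \
  'k8s-apparmor-example-deny-write (enforce)' \
  'docker-default (enforce)' > "$profiles_file"

profile_loaded() {
  # Entries look like "<profile-name> (<mode>)", so anchor on the name
  # followed by a space to avoid prefix matches.
  grep -q "^$1 " "$2"
}

if profile_loaded 'k8s-apparmor-example-deny-write' "$profiles_file"; then
  echo 'profile is loaded'
fi
```

On a real node the function would be pointed at the actual securityfs path instead of a sample file.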
@@ -97,7 +97,7 @@ container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
 ```
 
 Where `<container_name>` is the name of the container to apply the profile to, and `<profile_ref>`
-specifies the profile to apply. The `profile_ref` can be one of:
+specifies the profile to apply. The `<profile_ref>` can be one of:
 
 * `runtime/default` to apply the runtime's default profile
 * `localhost/<profile_name>` to apply the profile loaded on the host with the name `<profile_name>`
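
For context, the annotation format this hunk documents might be used on a Pod like this. This is an illustrative config sketch, not part of the commit; the Pod name and the `main` container name are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod  # hypothetical Pod name
  annotations:
    # Apply the runtime's default AppArmor profile to the "main" container.
    container.apparmor.security.beta.kubernetes.io/main: runtime/default
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sleep", "3600"]
```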
@@ -122,7 +122,7 @@ k8s-apparmor-example-deny-write (enforce)
 
 *This example assumes you have already set up a cluster with AppArmor support.*
 
-First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
+First, load the profile you want to use onto your Nodes. This profile denies all file writes:
 
 ```
 #include <tunables/global>
@@ -137,8 +137,8 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
 }
 ```
 
-Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
-nodes. For this example we'll use SSH to install the profiles, but other approaches are
+The profile needs to be loaded onto all Nodes, since you don't know where the Pod will be scheduled.
+For this example you can use SSH to install the profiles, but other approaches are
 discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
 
 ```shell
@@ -160,24 +160,26 @@ EOF'
 done
 ```
 
-Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
+Next, run a simple "Hello AppArmor" Pod with the deny-write profile:
 
 {{% code_sample file="pods/security/hello-apparmor.yaml" %}}
 
 ```shell
 kubectl create -f hello-apparmor.yaml
 ```
 
-We can verify that the container is actually running with that profile by checking its proc attr:
+You can verify that the container is actually running with that profile by checking `/proc/1/attr/current`:
 
 ```shell
 kubectl exec hello-apparmor -- cat /proc/1/attr/current
 ```
+
+The output should be:
 ```
 k8s-apparmor-example-deny-write (enforce)
 ```
 
-Finally, we can see what happens if we try to violate the profile by writing to a file:
+Finally, you can see what happens if you violate the profile by writing to a file:
 
 ```shell
 kubectl exec hello-apparmor -- touch /tmp/test
@@ -187,7 +189,7 @@ touch: /tmp/test: Permission denied
 error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
 ```
 
-To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
+To wrap up, see what happens if you try to specify a profile that hasn't been loaded:
 
 ```shell
 kubectl create -f /dev/stdin <<EOF
@@ -208,7 +210,7 @@ EOF
 pod/hello-apparmor-2 created
 ```
 
-Although the pod was created successfully, further examination will show that it is stuck in pending:
+Although the Pod was created successfully, further examination shows that it is stuck in pending:
 
 ```shell
 kubectl describe pod hello-apparmor-2
@@ -232,24 +234,24 @@ Events:
   Normal   Pulled     7s    kubelet  Successfully pulled image "busybox:1.28" in 90.980331ms (91.005869ms including waiting)
 ```
 
-An event provides the error message with the reason. Note that the specific wording is runtime-dependent:
+An Event provides the error message with the reason. Note that the specific wording is runtime-dependent:
 ```
 Warning  Failed  7s (x2 over 8s)  kubelet  Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found
 ```
 
 ## Administration
 
-### Setting up nodes with profiles
+### Setting up Nodes with profiles
 
 Kubernetes does not currently provide any built-in mechanisms for loading AppArmor profiles onto
-nodes. Profiles can be loaded through custom infrastructure or tools like the
+Nodes. Profiles can be loaded through custom infrastructure or tools like the
 [Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator).
 
-The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
-must be loaded onto every node. An alternative approach is to add a node label for each profile (or
-class of profiles) on the node, and use a
+The scheduler is not aware of which profiles are loaded onto which Node, so the full set of profiles
+must be loaded onto every Node. An alternative approach is to add a Node label for each profile (or
+class of profiles) on the Node, and use a
 [node selector](/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a
-node with the required profile.
+Node with the required profile.
 
 ## Authoring Profiles
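The node-label alternative described in the last hunk could look roughly like the following. This is an illustrative sketch, not part of the commit; the `apparmor-profiles` label key and `example` value are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod  # hypothetical Pod name
  annotations:
    container.apparmor.security.beta.kubernetes.io/main: localhost/k8s-apparmor-example-deny-write
spec:
  # Only schedule onto Nodes labeled as having this class of profiles loaded,
  # e.g. after running: kubectl label node <node-name> apparmor-profiles=example
  nodeSelector:
    apparmor-profiles: example
  containers:
  - name: main
    image: busybox:1.28
```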