@@ -7,7 +7,7 @@ slug: container-image-signature-verification
**Author**: Sascha Grunert
- The Kubernetes community has been signing their container image based artifacts
+ The Kubernetes community has been signing their container image-based artifacts
since release v1.24. While the graduation of the [corresponding enhancement][kep]
from `alpha` to `beta` in v1.26 introduced signatures for the binary artifacts,
other projects followed the approach by providing image signatures for their
@@ -25,10 +25,10 @@ infrastructure for pushing images into staging buckets.
Assuming that a project now produces signed container image artifacts, how can
one actually verify the signatures? It is possible to do it manually like
- outlined in the [official Kubernetes documentation][docs]. The problem with that
+ outlined in the [official Kubernetes documentation][docs]. The problem with this
approach is that it involves no automation at all and should only be done for
testing purposes. In production environments, tools like the [sigstore
- policy-controller][policy-controller] can help with the automation. They
+ policy-controller][policy-controller] can help with the automation. These tools
provide a higher-level API by using [Custom Resource Definitions (CRD)][crd] as
well as an integrated [admission controller and webhook][admission] to verify
the signatures.
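As a point of reference for the manual path, a sketch of verifying a Kubernetes release image with `cosign` is shown below. The certificate identity and OIDC issuer are assumptions based on how the Kubernetes release images are signed and may change over time:

```console
> cosign verify registry.k8s.io/kube-apiserver-amd64:v1.24.0 \
    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
    --certificate-oidc-issuer https://accounts.google.com
```

On the automated path, a minimal sketch of a policy-controller `ClusterImagePolicy` could look like the following, where the glob and the keyless authority are illustrative assumptions rather than a recommended production setup:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  images:
    # Illustrative glob matching the images to be enforced.
    - glob: "quay.io/crio/**"
  authorities:
    # Keyless verification against the public sigstore infrastructure.
    - keyless:
        url: https://fulcio.sigstore.dev
```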
@@ -42,27 +42,26 @@ The general usage flow for an admission controller based verification is:
![flow](flow.png "Admission controller flow")
- Key benefit of this architecture is simplicity: A single instance within the
+ A key benefit of this architecture is simplicity: A single instance within the
cluster validates the signatures before any image pull can happen in the
- container runtimes on the nodes, which gets initiated by the kubelet. This
- benefit also incorporates the drawback of separation: The node which should pull
- the container image is not necessarily the same which does the admission. This
+ container runtime on the nodes, which gets initiated by the kubelet. This
+ benefit also brings along the issue of separation: The node which should pull
+ the container image is not necessarily the same node that performs the admission. This
means that if the controller is compromised, then a cluster-wide policy
- enforcement could not be possible any more.
+ enforcement may no longer be possible.
- One way to solve that issue is doing the policy evaluation directly within the
+ One way to solve this issue is to do the policy evaluation directly within the
[Container Runtime Interface (CRI)][cri] compatible container runtime. The
runtime is directly connected to the [kubelet][kubelet] on a node and does all
the tasks like pulling images. [CRI-O][cri-o] is one of those available runtimes
- and will feature full support for container image signature verification in the
- upcoming v1.28 release.
+ and will feature full support for container image signature verification in v1.28.
[cri]: /docs/concepts/architecture/cri
[kubelet]: /docs/reference/command-line-tools-reference/kubelet
[cri-o]: https://github.com/cri-o/cri-o
How does it work? CRI-O reads a file called [`policy.json`][policy.json], which
- contains all the rules defined for container images. For example, I can define a
+ contains all the rules defined for container images. For example, you can define a
policy which only allows the signed image `quay.io/crio/signed` for any tag or
digest like this:
@@ -90,7 +89,7 @@ digest like this:
}
```
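The policy body is collapsed in this diff view. Based on the fields referenced below (the fulcio and rekor keys, plus the `subjectEmail` that the jq invocation modifies), a minimal sketch of such a `policy.json` could look like this, where the `.pem` paths and the OIDC issuer are assumptions:

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "quay.io/crio/signed": [
        {
          "type": "sigstoreSigned",
          "fulcio": {
            "caPath": "/tmp/fulcio.pem",
            "oidcIssuer": "https://github.com/login/oauth",
            "subjectEmail": "[email protected]"
          },
          "rekorPublicKeyPath": "/tmp/rekor.pem"
        }
      ]
    }
  }
}
```

The `default: reject` entry makes every image not explicitly listed under `transports` fail verification, which is the safest baseline for such a policy.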
- CRI-O has to be started to use that policy as global source of truth:
+ CRI-O has to be started to use that policy as the global source of truth:
```console
> sudo crio --log-level debug --signature-policy ./policy.json
@@ -135,36 +134,36 @@ keys from the upstream [fulcio (OIDC PKI)][fulcio] and [rekor
[fulcio]: https://github.com/sigstore/fulcio
[rekor]: https://github.com/sigstore/rekor
- This means if I now invalidate the `subjectEmail` of the policy, for example to
+ This means that if you now invalidate the `subjectEmail` of the policy, for example to
`[email protected]`:

```console
> jq '.transports.docker."quay.io/crio/signed"[0].fulcio.subjectEmail = "[email protected]"' policy.json > new-policy.json
> mv new-policy.json policy.json
```
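To double-check that the modification was applied, the value can be printed back with a plain jq query:

```console
> jq '.transports.docker."quay.io/crio/signed"[0].fulcio.subjectEmail' policy.json
"[email protected]"
```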
- Then removing the image, because it already exists locally:
+ Then remove the image, since it already exists locally:
```console
> sudo crictl rmi quay.io/crio/signed
```
- Now when pulling the image, CRI-O complains that the required email is wrong:
+ Now when you pull the image, CRI-O complains that the required email is wrong:
```console
> sudo crictl pull quay.io/crio/signed
FATA[…] pulling image: rpc error: code = Unknown desc = Source image rejected: Required email [email protected] not found (got []string{"[email protected]"})
```
- It is also possible to test an unsigned image against the policy. For that we
+ It is also possible to test an unsigned image against the policy. For that you
have to modify the key `quay.io/crio/signed` to something like
`quay.io/crio/unsigned`:
```console
> sed -i 's;quay.io/crio/signed;quay.io/crio/unsigned;' policy.json
```
- If I now pull the container image, CRI-O will complain that no signature exists
+ If you now pull the container image, CRI-O will complain that no signature exists
for it:
```console
@@ -175,7 +174,7 @@ FATA[…] pulling image: rpc error: code = Unknown desc = SignatureValidationFai
The error code `SignatureValidationFailed` got [recently added to
Kubernetes][pr-117717] and will be available from v1.28. This error code allows
end-users to understand image pull failures directly from the kubectl CLI. For
- example, if I run CRI-O together with Kubernetes using the policy which requires
+ example, if you run CRI-O together with Kubernetes using the policy which requires
`quay.io/crio/unsigned` to be signed, then a pod definition like this:
[pr-117717]: https://github.com/kubernetes/kubernetes/pull/117717
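The pod definition itself is collapsed in this diff. Judging from the `kubectl` output quoted in the next hunk header (a pod named `pod` stuck in `SignatureValidationFailed`), a minimal sketch could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    # The container name is an assumption; the image matches the
    # policy key used above.
    - name: container
      image: quay.io/crio/unsigned
```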
@@ -219,41 +218,41 @@ pod 0/1 SignatureValidationFailed 0 4s
This overall behavior provides a more Kubernetes-native experience and does not
rely on third-party software to be installed in the cluster.

- There are still a few corner cases to consider: For example, what if we want to
+ There are still a few corner cases to consider: For example, what if you want to
allow policies per namespace in the same way the policy-controller supports it?
Well, there is an upcoming CRI-O feature in v1.28 for that! CRI-O will support
the `--signature-policy-dir` / `signature_policy_dir` option, which defines the
root path for pod namespace-separated signature policies. This means that CRI-O
will look up that path and assemble a policy like `<SIGNATURE_POLICY_DIR>/<NAMESPACE>.json`,
- which will be used on image pull if existing. If no pod namespace is being
+ which will be used on image pull if it exists. If no pod namespace is
provided on image pull ([via the sandbox config][sandbox-config]), or the
concatenated path is non-existent, then CRI-O's global policy will be used as
a fallback.
[sandbox-config]: https://github.com/kubernetes/cri-api/blob/e5515a5/pkg/apis/runtime/v1/api.proto#L1448
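As an illustration of the lookup (the directory and namespace names are made up), CRI-O started with `--signature-policy-dir /etc/crio/policies` would resolve policies like this:

```console
> ls /etc/crio/policies
my-app.json  team-b.json
```

A pod created in the `my-app` namespace would then be verified against `/etc/crio/policies/my-app.json`, while pods from any other namespace fall back to the global policy passed via `--signature-policy`.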
- Another corner case to consider is cricital for the correct signature
+ Another corner case to consider is critical for the correct signature
verification within container runtimes: The kubelet only invokes container image
- pulls if the image does not already exist on disk. This means, that a
+ pulls if the image does not already exist on disk. This means that an
unrestricted policy from Kubernetes namespace A can allow pulling an image,
while namespace B is not able to enforce the policy because it already exists on
the node. Finally, CRI-O has to verify the policy not only on image pull, but
also on container creation. This fact makes things even a bit more complicated,
because the CRI does not really pass down the user-specified image reference on
- container creation, but more an already resolved iamge ID or digest. A [small
+ container creation, but an already resolved image ID or digest. A [small
change to the CRI][pr-118652] can help with that.
[pr-118652]: https://github.com/kubernetes/kubernetes/pull/118652
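A hypothetical sequence that runs into this corner case (namespace and pod names are made up, and both pods are assumed to land on the same node):

```console
# Namespace a uses an unrestricted policy: the pull is allowed and
# the image ends up in the node's local storage.
> kubectl run test --namespace a --image quay.io/crio/unsigned

# Namespace b uses a restrictive policy, but the image already exists
# on the node. Without the additional check on container creation, no
# pull and therefore no verification would be triggered.
> kubectl run test --namespace b --image quay.io/crio/unsigned
```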
Now that everything happens within the container runtime, someone has to
maintain and define the policies to provide a good user experience around that
- feature. The CRDs of the policy-controller are great, while I could imagine that
+ feature. The CRDs of the policy-controller are great, while we could imagine that
a daemon within the cluster can write the policies for CRI-O per namespace. This
would make any additional hook obsolete and move the responsibility of
- verifying the image signature to the actual instance which pulls the image. [I
- was evaluating][thread] other possible paths towards a better container image
- signature verification within plain Kubernetes, but I could not find a great fit
- for a native API. This means that I believe that a CRD is the way to go, but we
+ verifying the image signature to the actual instance which pulls the image. [We
+ evaluated][thread] other possible paths toward a better container image
+ signature verification within plain Kubernetes, but we could not find a great fit
+ for a native API. This means that we believe a CRD is the way to go, but we
still need an instance which actually serves it.
[thread]: https://groups.google.com/g/kubernetes-sig-node/c/kgpxqcsJ7Vc/m/7X7t_ElsAgAJ