
Commit 4897928

Apply suggestions from code review
Co-authored-by: Marcelo Giles <[email protected]>
Signed-off-by: Sascha Grunert <[email protected]>
1 parent e51a45d commit 4897928


content/en/blog/_posts/2023-06-29-container-image-signature-verification/index.md

Lines changed: 28 additions & 29 deletions
@@ -7,7 +7,7 @@ slug: container-image-signature-verification
 
 **Author**: Sascha Grunert
 
-The Kubernetes community has been signing their container image based artifacts
+The Kubernetes community has been signing their container image-based artifacts
 since release v1.24. While the graduation of the [corresponding enhancement][kep]
 from `alpha` to `beta` in v1.26 introduced signatures for the binary artifacts,
 other projects followed the approach by providing image signatures for their
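As context for this hunk: the signed release images can also be checked by hand with [cosign][cosign]. A minimal sketch, assuming a `registry.k8s.io` release image and the certificate identity documented in the upstream verification docs (the image tag, identity, and issuer below are assumptions and may change between releases):

```console
# Keyless verification of a Kubernetes release image (values are illustrative):
> cosign verify registry.k8s.io/kube-apiserver:v1.27.3 \
    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
    --certificate-oidc-issuer https://accounts.google.com
```

[cosign]: https://github.com/sigstore/cosign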
@@ -25,10 +25,10 @@ infrastructure for pushing images into staging buckets.
 
 Assuming that a project now produces signed container image artifacts, how can
 one actually verify the signatures? It is possible to do it manually like
-outlined in the [official Kubernetes documentation][docs]. The problem with that
+outlined in the [official Kubernetes documentation][docs]. The problem with this
 approach is that it involves no automation at all and should be only done for
 testing purposes. In production environments, tools like the [sigstore
-policy-controller][policy-controller] can help with the automation. They
+policy-controller][policy-controller] can help with the automation. These tools
 provide a higher level API by using [Custom Resource Definitions (CRD)][crd] as
 well as an integrated [admission controller and webhook][admission] to verify
 the signatures.
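For context on the CRD-based approach mentioned in this hunk: the policy-controller is configured through `ClusterImagePolicy` resources. A minimal sketch of such a policy; the resource name, image glob, and identity values are illustrative assumptions, not taken from this post:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: kubernetes-release-images  # illustrative name
spec:
  images:
    # Which image references this policy applies to (assumed glob):
    - glob: "registry.k8s.io/**"
  authorities:
    # Keyless (Fulcio/Rekor) verification against an assumed identity:
    - keyless:
        url: https://fulcio.sigstore.dev
        identities:
          - issuer: https://accounts.google.com
            subject: krel-trust@k8s-releng-prod.iam.gserviceaccount.com
```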
@@ -42,27 +42,26 @@ The general usage flow for an admission controller based verification is:
 
 ![flow](flow.png "Admission controller flow")
 
-Key benefit of this architecture is simplicity: A single instance within the
+A key benefit of this architecture is simplicity: A single instance within the
 cluster validates the signatures before any image pull can happen in the
-container runtimes on the nodes, which gets initiated by the kubelet. This
-benefit also incorporates the drawback of separation: The node which should pull
-the container image is not necessarily the same which does the admission. This
+container runtime on the nodes, which gets initiated by the kubelet. This
+benefit also brings along the issue of separation: The node which should pull
+the container image is not necessarily the same node that performs the admission. This
 means that if the controller is compromised, then a cluster-wide policy
-enforcement could not be possible any more.
+enforcement can no longer be possible.
 
-One way to solve that issue is doing the policy evaluation directly within the
+One way to solve this issue is doing the policy evaluation directly within the
 [Container Runtime Interface (CRI)][cri] compatible container runtime. The
 runtime is directly connected to the [kubelet][kubelet] on a node and does all
 the tasks like pulling images. [CRI-O][cri-o] is one of those available runtimes
-and will feature full support for container image signature verification in the
-upcoming v1.28 release.
+and will feature full support for container image signature verification in v1.28.
 
 [cri]: /docs/concepts/architecture/cri
 [kubelet]: /docs/reference/command-line-tools-reference/kubelet
 [cri-o]: https://github.com/cri-o/cri-o
 
 How does it work? CRI-O reads a file called [`policy.json`][policy.json], which
-contains all the rules defined for container images. For example, I can define a
+contains all the rules defined for container images. For example, you can define a
 policy which only allows signed images `quay.io/crio/signed` for any tag or
 digest like this:
 
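The policy body itself sits outside this diff's context (lines 69–88 of the file are unchanged). A minimal sketch of what such a `policy.json` could look like, inferred from the `containers-policy.json(5)` format and the `jq` path used later in the post (`.transports.docker."quay.io/crio/signed"[0].fulcio.subjectEmail`); the default rule, file paths, OIDC issuer, and email are placeholder assumptions:

```json
{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "quay.io/crio/signed": [
        {
          "type": "sigstoreSigned",
          "fulcio": {
            "caPath": "/path/to/fulcio.crt.pem",
            "oidcIssuer": "https://issuer.example.com",
            "subjectEmail": "name@example.com"
          },
          "rekorPublicKeyPath": "/path/to/rekor.pub"
        }
      ]
    }
  }
}
```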
@@ -90,7 +89,7 @@ digest like this:
 }
 ```
 
-CRI-O has to be started to use that policy as global source of truth:
+CRI-O has to be started to use that policy as the global source of truth:
 
 ```console
 > sudo crio --log-level debug --signature-policy ./policy.json
@@ -135,36 +134,36 @@ keys from the upstream [fulcio (OIDC PKI)][fulcio] and [rekor
 [fulcio]: https://github.com/sigstore/fulcio
 [rekor]: https://github.com/sigstore/rekor
 
-This means if I now invalidate the `subjectEmail` of the policy, for example to
+This means that if you now invalidate the `subjectEmail` of the policy, for example to
 `[email protected]`:
 
 ```console
 > jq '.transports.docker."quay.io/crio/signed"[0].fulcio.subjectEmail = "[email protected]"' policy.json > new-policy.json
 > mv new-policy.json policy.json
 ```
 
-Then removing the image, because it already exists locally:
+Then remove the image, since it already exists locally:
 
 ```console
 > sudo crictl rmi quay.io/crio/signed
 ```
 
-Now when pulling the image, CRI-O complains that the required email is wrong:
+Now when you pull the image, CRI-O complains that the required email is wrong:
 
 ```console
 > sudo crictl pull quay.io/crio/signed
 FATA[…] pulling image: rpc error: code = Unknown desc = Source image rejected: Required email [email protected] not found (got []string{"[email protected]"})
 ```
 
-It is also possible to test an unsigned image against the policy. For that we
+It is also possible to test an unsigned image against the policy. For that you
 have to modify the key `quay.io/crio/signed` to something like
 `quay.io/crio/unsigned`:
 
 ```console
 > sed -i 's;quay.io/crio/signed;quay.io/crio/unsigned;' policy.json
 ```
 
-If I now pull the container image, CRI-O will complain that no signature exists
+If you now pull the container image, CRI-O will complain that no signature exists
 for it:
 
 ```console
@@ -175,7 +174,7 @@ FATA[…] pulling image: rpc error: code = Unknown desc = SignatureValidationFai
 The error code `SignatureValidationFailed` got [recently added to
 Kubernetes][pr-117717] and will be available from v1.28. This error code allows
 end-users to understand image pull failures directly from the kubectl CLI. For
-example, if I run CRI-O together with Kubernetes using the policy which requires
+example, if you run CRI-O together with Kubernetes using the policy which requires
 `quay.io/crio/unsigned` to be signed, then a pod definition like this:
 
 [pr-117717]: https://github.com/kubernetes/kubernetes/pull/117717
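The pod manifest itself falls outside this diff's context (file lines 181–218 are unchanged). A minimal sketch of what the referenced pod definition could look like; the pod name `pod` matches the `kubectl` output quoted in the next hunk header, while the container name is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod  # matches the name shown in the kubectl output below
spec:
  containers:
    - name: container  # hypothetical container name
      image: quay.io/crio/unsigned
```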
@@ -219,41 +218,41 @@ pod 0/1 SignatureValidationFailed 0 4s
 This overall behavior provides a more Kubernetes native experience and does not
 rely on third party software to be installed in the cluster.
 
-There are still a few corner cases to consider: For example, what if we want to
+There are still a few corner cases to consider: For example, what if you want to
 allow policies per namespace in the same way the policy-controller supports it?
 Well, there is an upcoming CRI-O feature in v1.28 for that! CRI-O will support
 the `--signature-policy-dir` / `signature_policy_dir` option, which defines the
 root path for pod namespace-separated signature policies. This means that CRI-O
 will look up that path and assemble a policy like `<SIGNATURE_POLICY_DIR>/<NAMESPACE>.json`,
-which will be used on image pull if existing. If no pod namespace is being
+which will be used on image pull if existing. If no pod namespace is
 provided on image pull ([via the sandbox config][sandbox-config]), or the
 concatenated path is non-existent, then CRI-O's global policy will be used as
 fallback.
 
 [sandbox-config]: https://github.com/kubernetes/cri-api/blob/e5515a5/pkg/apis/runtime/v1/api.proto#L1448
 
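To make the lookup concrete: a sketch of running CRI-O with per-namespace policies, assuming a hypothetical policy directory `/etc/crio/policies` (the directory path and namespace names are illustrative; the flags are the ones named above). A pod in namespace `team-a` would then be verified against `/etc/crio/policies/team-a.json`, while every other namespace falls back to the global `./policy.json`:

```console
> ls /etc/crio/policies
team-a.json  team-b.json
> sudo crio --log-level debug \
    --signature-policy ./policy.json \
    --signature-policy-dir /etc/crio/policies
```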
-Another corner case to consider is cricital for the correct signature
+Another corner case to consider is critical for the correct signature
 verification within container runtimes: The kubelet only invokes container image
-pulls if the image does not already exist on disk. This means, that a
+pulls if the image does not already exist on disk. This means that an
 unrestricted policy from Kubernetes namespace A can allow pulling an image,
 while namespace B is not able to enforce the policy because it already exists on
 the node. Finally, CRI-O has to verify the policy not only on image pull, but
 also on container creation. This fact makes things even a bit more complicated,
 because the CRI does not really pass down the user specified image reference on
-container creation, but more an already resolved iamge ID or digest. A [small
+container creation, but an already resolved image ID, or digest. A [small
 change to the CRI][pr-118652] can help with that.
 
 [pr-118652]: https://github.com/kubernetes/kubernetes/pull/118652
 
 Now that everything happens within the container runtime, someone has to
 maintain and define the policies to provide a good user experience around that
-feature. The CRDs of the policy-controller are great, while I could imagine that
+feature. The CRDs of the policy-controller are great, while we could imagine that
 a daemon within the cluster can write the policies for CRI-O per namespace. This
 would make any additional hook obsolete and moves the responsibility of
-verifying the image signature to the actual instance which pulls the image. [I
-was evaluating][thread] other possible paths towards a better container image
-signature verification within plain Kubernetes, but I could not find a great fit
-for a native API. This means that I believe that a CRD is the way to go, but we
+verifying the image signature to the actual instance which pulls the image. [We
+evaluated][thread] other possible paths toward a better container image
+signature verification within plain Kubernetes, but we could not find a great fit
+for a native API. This means that we believe that a CRD is the way to go, but we
 still need an instance which actually serves it.
 
 [thread]: https://groups.google.com/g/kubernetes-sig-node/c/kgpxqcsJ7Vc/m/7X7t_ElsAgAJ
