
Commit 15f730c

fsmunoz, Tim Bannister, and reylejano committed

Registry redirect blog article.

Co-authored-by: Tim Bannister <[email protected]>
Co-authored-by: Rey Lejano <[email protected]>

1 parent: 9a804d2

File tree: 2 files changed, +193 −0 lines changed

Lines changed: 190 additions & 0 deletions
@@ -0,0 +1,190 @@

---
layout: blog
title: "k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know"
date: 2023-03-10T17:00:00.000Z
slug: image-registry-redirect
---

**Authors**: Bob Killen (Google), Davanum Srinivas (AWS), Chris Short (AWS), Frederico Muñoz (SAS Institute), Tim Bannister (The Scale Factory), Ricky Sadowski (AWS), Grace Nguyen (Expo), Mahamed Ali (Rackspace Technology), Mars Toktonaliev (independent), Laura Santamaria (Dell), Kat Cosgrove (Dell)

On Monday, March 20th, the k8s.gcr.io registry [will be redirected to the community-owned registry](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/), **registry.k8s.io**.

## TL;DR: What you need to know about this change

- On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to registry.k8s.io with the eventual goal of sunsetting k8s.gcr.io.
- If you run in a restricted environment, and apply strict domain name or IP address access policies limited to k8s.gcr.io, **the image pulls will not function** after k8s.gcr.io starts redirecting to the new registry.
- A small subset of non-standard clients do not handle HTTP redirects by image registries, and will need to be pointed directly at registry.k8s.io.
- The redirect is a stopgap to assist users in making the switch. The deprecated k8s.gcr.io registry will be phased out at some point. **Please update your manifests as soon as possible to point to registry.k8s.io** (a bulk-update sketch appears after this list).
- If you host your own image registry, you can copy images you need there as well to reduce traffic to community-owned registries.
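
The following is a minimal, hedged sketch of such a bulk update, assuming your manifests are plain YAML files in a local checkout (the `manifests/` directory is a placeholder); always review the resulting diff before applying it:

```
# Rewrite the registry host everywhere under manifests/ (GNU sed shown;
# on macOS/BSD sed, use `sed -i '' ...` instead)
grep -rl 'k8s.gcr.io' manifests/ | xargs sed -i 's|k8s\.gcr\.io|registry.k8s.io|g'
```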

If you think you may be impacted, or would like to know more about this change, please keep reading.

## Why did Kubernetes change to a different image registry?

k8s.gcr.io is hosted on a custom [Google Container Registry (GCR)](https://cloud.google.com/container-registry) domain that was set up solely for the Kubernetes project. This has worked well since the inception of the project, and we thank Google for providing these resources, but today, there are other cloud providers and vendors that would like to host images to provide a better experience for the people on their platforms. In addition to Google’s [renewed commitment to donate $3 million](https://www.cncf.io/google-cloud-recommits-3m-to-kubernetes/) to support the project's infrastructure last year, Amazon Web Services announced a matching donation [during their Kubecon NA 2022 keynote in Detroit](https://youtu.be/PPdimejomWo?t=236). This will provide a better experience for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from GCR at the same time.

For more details on this change, check out [registry.k8s.io: faster, cheaper and Generally Available (GA)](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

## Why is a redirect being put in place?

The project switched to [registry.k8s.io last year with the 1.25 release](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/); however, most of the image pull traffic is still directed at the old endpoint k8s.gcr.io. This has not been sustainable for us as a project, as it is not utilizing the resources that have been donated to the project from other providers, and we are in danger of running out of funds due to the cost of serving this traffic.

A redirect will enable the project to take advantage of these new resources, significantly reducing our egress bandwidth costs. We only expect this change to impact a small subset of users running in restricted environments or using very old clients that do not respect redirects properly.

## What images will be impacted?

**ALL** images on k8s.gcr.io will be impacted by this change. k8s.gcr.io hosts many images beyond Kubernetes releases. A large number of Kubernetes subprojects host their images there as well. Some examples include the `dns/k8s-dns-node-cache`, `ingress-nginx/controller`, and `node-problem-detector/node-problem-detector` images.
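
These repositories are already served from the new host; only the registry name in an image reference changes. As a quick sanity check, listing the tags of one of the subproject repositories named above should work against registry.k8s.io; this is a sketch that assumes you have [crane](https://github.com/google/go-containerregistry) installed and that the registry allows tag listing from your network:

```
# List available tags for a subproject image on the community-owned registry
crane ls registry.k8s.io/ingress-nginx/controller
```
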
## What will happen to k8s.gcr.io?

Separate from the redirect, k8s.gcr.io will be frozen [and will not be updated with new images after April 3rd, 2023](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/). `k8s.gcr.io` will not get any new releases, patches, or security updates. It will continue to remain available to help people migrate, but it **WILL** be phased out entirely in the future.

## I run in a restricted environment. What should I do?

For impacted users that run in a restricted environment, the best option is to copy over the required images to a private registry or configure a pull-through cache in their registry.

There are several tools to copy images between registries; [crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_copy.md) is one of those tools, and images can be copied to a private registry by using `crane copy SRC DST`. There are also vendor-specific tools, such as Google’s [gcrane](https://cloud.google.com/container-registry/docs/migrate-external-containers#copy), that perform a similar function but are streamlined for their platform.
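
As an illustration, here is a minimal sketch of mirroring a single image with `crane copy`; the destination `registry.example.com/k8s` is a hypothetical private registry, so substitute your own:

```
# Copy one image from the community registry to a (hypothetical) private registry.
# Authenticate to the destination first, for example with `crane auth login registry.example.com`.
crane copy registry.k8s.io/pause:3.9 registry.example.com/k8s/pause:3.9
```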

## How can I check registry.k8s.io is accessible from my cluster?

To test connectivity to registry.k8s.io and verify that you can pull images from it, here is a sample command that can be executed in the namespace of your choosing:

```
kubectl run hello-world --tty --rm -i --image=registry.k8s.io/busybox:latest sh
```

When you run the command above, here’s what to expect when things work correctly:

```
$ kubectl run hello-world --tty --rm -i --image=registry.k8s.io/busybox:latest sh
If you don't see a command prompt, try pressing enter.
/ # exit
Session ended, resume using 'kubectl attach hello-world -c hello-world -i -t' command when the pod is running
pod "hello-world" deleted
```
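
You can also exercise the registry without scheduling a pod at all, for example from a workstation or bastion host that sits behind the same egress rules. This is a sketch that assumes you have either crane or a local container runtime CLI installed:

```
# Resolve an image digest directly against registry.k8s.io (no cluster required)
crane digest registry.k8s.io/busybox:latest

# Or simply try pulling the same image with your local runtime
docker pull registry.k8s.io/busybox:latest
```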

## What kind of errors will I see if I’m impacted?

Errors may depend on what kind of container runtime you are using, and what endpoint you are routed to, but they typically present as `ErrImagePull`, `ImagePullBackOff`, or a container failing to be created with the warning `FailedCreatePodSandBox`.
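
These conditions surface as events on the affected Pods. A quick way to look for them with standard kubectl (no extra tooling assumed; the pod and namespace names below are placeholders) is:

```
# Show recent warning events across all namespaces; image pull failures appear here
kubectl get events --all-namespaces --field-selector type=Warning

# Or inspect a specific pod that is stuck in ImagePullBackOff
kubectl describe pod <pod-name> -n <namespace>
```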

Below is an example error message showing a proxied deployment failing to pull due to an unknown certificate:

```
FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Head "https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8": x509: certificate signed by unknown authority
```

## How can I find which images are using the legacy registry, and fix them?

**Option 1**: See the one-line kubectl command in our [earlier blog post](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/#what-s-next):

```
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
```
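
If you only care about the legacy registry, a simple variation is to filter the same output down to k8s.gcr.io references:

```
# Same listing as above, restricted to images still pulled from the legacy registry
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep k8s.gcr.io
```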

**Option 2**: A `kubectl` [krew](https://krew.sigs.k8s.io/) plugin called [`community-images`](https://github.com/kubernetes-sigs/community-images#kubectl-community-images) has been developed that will scan and report any images using the k8s.gcr.io endpoint.

If you have krew installed, you can install it with:

```
kubectl krew install community-images
```

and generate a report with:

```
kubectl community-images
```

For alternate methods of install and example output, check out the repo: [kubernetes-sigs/community-images](https://github.com/kubernetes-sigs/community-images).

**Option 3**: If you do not have access to a cluster directly, or you manage many clusters, the best way is to run a search over your manifests and charts for _"k8s.gcr.io"_.
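
For example, assuming a local checkout of your manifests and charts, a plain text search is usually enough to locate every reference (the directory names here are placeholders):

```
# List every file and line that still references the legacy registry
grep -rn "k8s.gcr.io" manifests/ charts/
```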

**Option 4**: If you wish to prevent k8s.gcr.io-based images from running in your cluster, example policies for [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/) and [Kyverno](https://kyverno.io/) are available in the [AWS EKS Best Practices repository](https://github.com/aws/aws-eks-best-practices/tree/master/policies/k8s-registry-deprecation) that will block them from being pulled. You can use these third-party policies with any Kubernetes cluster.

**Option 5**: As a **LAST** possible option, you can use a [Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) to change the image address dynamically. This should only be considered a stopgap until your manifests have been updated. You can find a (third party) Mutating Webhook and Kyverno policy in [k8s-gcr-quickfix](https://github.com/abstractinfrastructure/k8s-gcr-quickfix).

## I still have questions, where should I go?

For more information on registry.k8s.io and why it was developed, see [registry.k8s.io: faster, cheaper and Generally Available](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

If you would like to know more about the image freeze and the last images that will be available there, see the blog post: [k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/).

Information on the architecture of registry.k8s.io and its [request handling decision tree](https://github.com/kubernetes/registry.k8s.io/blob/8408d0501a88b3d2531ff54b14eeb0e3c900a4f3/cmd/archeio/docs/request-handling.md) can be found in the [kubernetes/registry.k8s.io repo](https://github.com/kubernetes/registry.k8s.io).

If you believe you have encountered a bug with the new registry or the redirect, please open an issue in the [kubernetes/registry.k8s.io repo](https://github.com/kubernetes/registry.k8s.io/issues/new/choose). **Please check if there is an issue already open similar to what you are seeing before you create a new issue**.

static/_redirects

Lines changed: 3 additions & 0 deletions

@@ -369,6 +369,9 @@
 /dockershim /blog/2022/02/17/dockershim-faq/ 302
 /dockershim/ /blog/2022/02/17/dockershim-faq/ 302

+/image-registry-redirect /blog/2023/03/10/image-registry-redirect/ 302
+/image-registry-redirect/ /blog/2023/03/10/image-registry-redirect/ 302
 /docs/setup/release/notes/ /releases/notes/ 302
 /docs/setup/release/ /releases/ 301
 /docs/setup/version-skew-policy/ /releases/version-skew-policy/ 301
