Commit 7ea13b6

Merge branch 'kubernetes:main' into update-assign-pod-node
2 parents 7698554 + 87103a1

18 files changed: +436 -308 lines changed
Lines changed: 187 additions & 0 deletions
@@ -0,0 +1,187 @@
---
layout: blog
title: "k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know"
date: 2023-03-10T17:00:00.000Z
slug: image-registry-redirect
---

**Authors**: Bob Killen (Google), Davanum Srinivas (AWS), Chris Short (AWS), Frederico Muñoz (SAS Institute), Tim Bannister (The Scale Factory), Ricky Sadowski (AWS), Grace Nguyen (Expo), Mahamed Ali (Rackspace Technology), Mars Toktonaliev (independent), Laura Santamaria (Dell), Kat Cosgrove (Dell)

On Monday, March 20th, the k8s.gcr.io registry [will be redirected to the community-owned registry](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/), **registry.k8s.io**.
## TL;DR: What you need to know about this change

- On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to registry.k8s.io, with the eventual goal of sunsetting k8s.gcr.io.
- If you run in a restricted environment and apply strict domain name or IP address access policies limited to k8s.gcr.io, **image pulls will not function** after k8s.gcr.io starts redirecting to the new registry.
- A small subset of non-standard clients do not handle HTTP redirects by image registries and will need to be pointed directly at registry.k8s.io.
- The redirect is a stopgap to assist users in making the switch. The deprecated k8s.gcr.io registry will be phased out at some point. **Please update your manifests as soon as possible to point to registry.k8s.io**.
- If you host your own image registry, you can copy the images you need there as well to reduce traffic to community-owned registries.

If you think you may be impacted, or would like to know more about this change, please keep reading.
## How can I check if I am impacted?

To test connectivity to registry.k8s.io and verify that you are able to pull images from there, here is a sample command that can be executed in the namespace of your choosing:

```
kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date
```

When you run the command above, here's what to expect when things work correctly:

```
$ kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date
Fri Feb 3 07:07:07 UTC 2023
pod "hello-world" deleted
```
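If you administer nodes directly, you can also check pull connectivity from a node itself. A minimal sketch, assuming `crictl` is installed on the node and configured for your container runtime (the image and tag here are only examples):

```
crictl pull registry.k8s.io/pause:3.9
```

A successful pull means the node can reach the new registry; a TLS or connection error points at the same access-policy problems described below.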
## What kind of errors will I see if I'm impacted?

The errors may depend on the kind of container runtime you are using and the endpoint you are routed to, but they typically present as `ErrImagePull`, `ImagePullBackOff`, or a container failing to be created with the warning `FailedCreatePodSandBox`.

Below is an example error message showing a proxied deployment failing to pull due to an unknown certificate:

```
FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Head "https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8": x509: certificate signed by unknown authority
```
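To spot affected workloads across a whole cluster, one quick check (a sketch, not the only approach) is to scan Pod statuses for the image pull failure states mentioned above:

```
kubectl get pods --all-namespaces | grep -E 'ErrImagePull|ImagePullBackOff'
```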
## What images will be impacted?

**ALL** images on k8s.gcr.io will be impacted by this change. k8s.gcr.io hosts many images beyond Kubernetes releases. A large number of Kubernetes subprojects host their images there as well. Some examples include the `dns/k8s-dns-node-cache`, `ingress-nginx/controller`, and `node-problem-detector/node-problem-detector` images.
## I am impacted. What should I do?

For impacted users that run in a restricted environment, the best option is to copy over the required images to a private registry, or to configure a pull-through cache in their registry.

There are several tools for copying images between registries; [crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_copy.md) is one of them, and images can be copied to a private registry by using `crane copy SRC DST`. There are also vendor-specific tools, such as Google's [gcrane](https://cloud.google.com/container-registry/docs/migrate-external-containers#copy), that perform a similar function but are streamlined for their platform.
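For example, a minimal sketch of mirroring a single image with crane, where `registry.example.com/k8s` is a stand-in for a private registry you control (the image and tag are placeholders as well):

```
crane copy registry.k8s.io/pause:3.9 registry.example.com/k8s/pause:3.9
```

Repeat this, or loop over a list, for every image your workloads currently pull from k8s.gcr.io.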
## How can I find which images are using the legacy registry, and fix them?

**Option 1**: See the one-line kubectl command in our [earlier blog post](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/#what-s-next):

```
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
```
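To narrow that output down to images that still reference the legacy registry, you can append a filter; a small, optional extension of the same one-liner:

```
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep k8s.gcr.io
```

Note that this only inspects `spec.containers`; init containers and ephemeral containers would need a similar pass over their respective fields.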
**Option 2**: A `kubectl` [krew](https://krew.sigs.k8s.io/) plugin has been developed called [`community-images`](https://github.com/kubernetes-sigs/community-images#kubectl-community-images) that will scan for and report any images using the k8s.gcr.io endpoint.

If you have krew installed, you can install the plugin with:

```
kubectl krew install community-images
```

and generate a report with:

```
kubectl community-images
```

For alternate methods of installation and example output, check out the repo: [kubernetes-sigs/community-images](https://github.com/kubernetes-sigs/community-images).
**Option 3**: If you do not have direct access to a cluster, or you manage many clusters, the best way is to run a search over your manifests and charts for _"k8s.gcr.io"_.
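A recursive search might look like the following sketch, assuming your manifests and charts are checked out in the current directory:

```
grep -rn 'k8s.gcr.io' .
```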
**Option 4**: If you wish to prevent k8s.gcr.io-based images from running in your cluster, example policies for [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/) and [Kyverno](https://kyverno.io/) are available in the [AWS EKS Best Practices repository](https://github.com/aws/aws-eks-best-practices/tree/master/policies/k8s-registry-deprecation) that will block them from being pulled. You can use these third-party policies with any Kubernetes cluster.
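To illustrate the general shape of such a policy, below is a hypothetical constraint, not the exact policy from the repository linked above. It assumes Gatekeeper and the `K8sAllowedRepos` constraint template from the Gatekeeper library are already installed:

```
kubectl apply -f - <<EOF
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allow-only-registry-k8s-io
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.k8s.io/"
EOF
```

Because this allow-lists registry.k8s.io rather than block-listing k8s.gcr.io, you would extend `repos` with any other registries your workloads legitimately pull from.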
**Option 5**: As a **LAST** possible option, you can use a [Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) to change the image address dynamically. This should only be considered a stopgap until your manifests have been updated. You can find a (third-party) Mutating Webhook and Kyverno policy in [k8s-gcr-quickfix](https://github.com/abstractinfrastructure/k8s-gcr-quickfix).
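As a rough sketch of what such a mutation can look like, here is a hypothetical Kyverno policy that rewrites the registry prefix on Pod container images. It is based on Kyverno's documented mutation features, not on the policy in the repository above, so treat it as a starting point and test it before relying on it:

```
kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: replace-k8s-gcr-io
spec:
  background: false
  rules:
    - name: rewrite-image-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    image: "{{ regex_replace_all_literal('^k8s.gcr.io/', '{{element.image}}', 'registry.k8s.io/') }}"
EOF
```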
## Why did Kubernetes change to a different image registry?

k8s.gcr.io is hosted on a custom [Google Container Registry (GCR)](https://cloud.google.com/container-registry) domain that was set up solely for the Kubernetes project. This has worked well since the inception of the project, and we thank Google for providing these resources, but today there are other cloud providers and vendors that would like to host images to provide a better experience for the people on their platforms. In addition to Google's [renewed commitment to donate $3 million](https://www.cncf.io/google-cloud-recommits-3m-to-kubernetes/) to support the project's infrastructure last year, Amazon Web Services announced a matching donation [during their KubeCon NA 2022 keynote in Detroit](https://youtu.be/PPdimejomWo?t=236). This will provide a better experience for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from GCR at the same time.

For more details on this change, check out [registry.k8s.io: faster, cheaper and Generally Available (GA)](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

## Why is a redirect being put in place?

The project switched to [registry.k8s.io last year with the 1.25 release](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/); however, most of the image pull traffic is still directed at the old endpoint, k8s.gcr.io. This has not been sustainable for us as a project, as it does not utilize the resources that have been donated to the project by other providers, and we are in danger of running out of funds due to the cost of serving this traffic.

A redirect will enable the project to take advantage of these new resources, significantly reducing our egress bandwidth costs. We only expect this change to impact a small subset of users running in restricted environments or using very old clients that do not respect redirects properly.

## What will happen to k8s.gcr.io?

Separate from the redirect, k8s.gcr.io will be frozen [and will not be updated with new images after April 3rd, 2023](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/). `k8s.gcr.io` will not get any new releases, patches, or security updates. It will remain available to help people migrate, but it **WILL** be phased out entirely in the future.

## I still have questions, where should I go?

For more information on registry.k8s.io and why it was developed, see [registry.k8s.io: faster, cheaper and Generally Available](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

If you would like to know more about the image freeze and the last images that will be available there, see the blog post: [k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/).

Information on the architecture of registry.k8s.io and its [request handling decision tree](https://github.com/kubernetes/registry.k8s.io/blob/8408d0501a88b3d2531ff54b14eeb0e3c900a4f3/cmd/archeio/docs/request-handling.md) can be found in the [kubernetes/registry.k8s.io repo](https://github.com/kubernetes/registry.k8s.io).

If you believe you have encountered a bug with the new registry or the redirect, please open an issue in the [kubernetes/registry.k8s.io repo](https://github.com/kubernetes/registry.k8s.io/issues/new/choose). **Please check whether a similar issue is already open before creating a new one**.

content/en/docs/reference/labels-annotations-taints/_index.md

Lines changed: 14 additions & 4 deletions
```diff
@@ -168,6 +168,7 @@ Automanaged APIService objects are deleted by kube-apiserver when it has no buil
 {{< /note >}}
 
 There are two possible values:
+
 - `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise.
 - `true`: The API server should reconcile this APIService continuously.
 
@@ -191,7 +192,6 @@ The Kubelet populates this label with the hostname. Note that the hostname can b
 
 This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.
 
-
 ### kubernetes.io/change-cause {#change-cause}
 
 Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"`
@@ -409,6 +409,7 @@ A zone represents a logical failure domain. It is common for Kubernetes cluster
 A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
 
 Kubernetes makes a few assumptions about the structure of zones and regions:
+
 1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
 2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b"
 
@@ -539,7 +540,6 @@ a request where the client authenticated using the service account token.
 If a legacy token was last used before the cluster gained the feature (added in Kubernetes v1.26), then
 the label isn't set.
 
-
 ### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}
 
 Example: `endpointslice.kubernetes.io/managed-by: "controller"`
@@ -625,6 +625,17 @@ Example: `kubectl.kubernetes.io/default-container: "front-end-app"`
 
 The value of the annotation is the container name that is default for this Pod. For example, `kubectl logs` or `kubectl exec` without `-c` or `--container` flag will use this default container.
 
+### kubectl.kubernetes.io/default-logs-container (deprecated)
+
+Example: `kubectl.kubernetes.io/default-logs-container: "front-end-app"`
+
+The value of the annotation is the container name that is the default logging container for this Pod. For example, `kubectl logs` without `-c` or `--container` flag will use this default container.
+
+{{< note >}}
+This annotation is deprecated. You should use the [`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container) annotation instead.
+Kubernetes versions 1.25 and newer ignore this annotation.
+{{< /note >}}
+
 ### endpoints.kubernetes.io/over-capacity
 
 Example: `endpoints.kubernetes.io/over-capacity:truncated`
@@ -645,7 +656,7 @@ The presence of this annotation on a Job indicates that the control plane is
 [tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
 The control plane uses this annotation to safely transition to tracking Jobs
 using finalizers, while the feature is in development.
-You should **not** manually add or remove this annotation.
+You should **not** manually add or remove this annotation.
 
 {{< note >}}
 Starting from Kubernetes 1.26, this annotation is deprecated.
@@ -727,7 +738,6 @@ Refer to
 for further details about when and how to use this taint.
 {{< /caution >}}
 
-
 ### node.cloudprovider.kubernetes.io/uninitialized
 
 Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"`
```

content/en/docs/tasks/manage-gpus/scheduling-gpus.md

Lines changed: 2 additions & 62 deletions
````diff
@@ -88,65 +88,5 @@ If you're using AMD GPU devices, you can deploy
 Node Labeller is a {{< glossary_tooltip text="controller" term_id="controller" >}} that automatically
 labels your nodes with GPU device properties.
 
-At the moment, that controller can add labels for:
-
-* Device ID (-device-id)
-* VRAM Size (-vram)
-* Number of SIMD (-simd-count)
-* Number of Compute Unit (-cu-count)
-* Firmware and Feature Versions (-firmware)
-* GPU Family, in two-letter acronym (-family)
-  * SI - Southern Islands
-  * CI - Sea Islands
-  * KV - Kaveri
-  * VI - Volcanic Islands
-  * CZ - Carrizo
-  * AI - Arctic Islands
-  * RV - Raven
-
-```shell
-kubectl describe node cluster-node-23
-```
-
-```
-Name:               cluster-node-23
-Roles:              <none>
-Labels:             beta.amd.com/gpu.cu-count.64=1
-                    beta.amd.com/gpu.device-id.6860=1
-                    beta.amd.com/gpu.family.AI=1
-                    beta.amd.com/gpu.simd-count.256=1
-                    beta.amd.com/gpu.vram.16G=1
-                    kubernetes.io/arch=amd64
-                    kubernetes.io/os=linux
-                    kubernetes.io/hostname=cluster-node-23
-Annotations:        node.alpha.kubernetes.io/ttl: 0
-
-```
-
-With the Node Labeller in use, you can specify the GPU type in the Pod spec:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cuda-vector-add
-spec:
-  restartPolicy: OnFailure
-  containers:
-    - name: cuda-vector-add
-      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
-      image: "registry.k8s.io/cuda-vector-add:v0.1"
-      resources:
-        limits:
-          nvidia.com/gpu: 1
-  affinity:
-    nodeAffinity:
-      requiredDuringSchedulingIgnoredDuringExecution:
-        nodeSelectorTerms:
-          - matchExpressions:
-              - key: beta.amd.com/gpu.family.AI # Arctic Islands GPU family
-                operator: Exists
-```
-
-This ensures that the Pod will be scheduled to a node that has the GPU type
-you specified.
+Similar functionality for NVIDIA is provided by
+[GPU feature discovery](https://github.com/NVIDIA/gpu-feature-discovery/blob/main/README.md).
````

content/it/docs/tutorials/hello-minikube.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -76,7 +76,7 @@ the recommended way to manage the creation and scaling of Pods.
 will run a Container based on the specified Docker image.
 
 ```shell
-kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4
 ```
 
 2. View the Deployment:
````

content/it/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -22,7 +22,7 @@ spec:
     - name: varlog
       mountPath: /var/log
   - name: count-agent
-    image: k8s.gcr.io/fluentd-gcp:1.30
+    image: registry.k8s.io/fluentd-gcp:1.30
     env:
     - name: FLUENTD_ARGS
       value: -c /etc/fluentd-config/fluentd.conf
```

content/pt-br/docs/concepts/configuration/secret.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -617,7 +617,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/sh", "-c", "env" ]
       envFrom:
         - secretRef:
@@ -855,7 +855,7 @@ spec:
       secretName: dotfile-secret
   containers:
     - name: dotfile-test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command:
         - ls
         - "-l"
```

content/pt-br/docs/concepts/storage/persistent-volumes.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -148,7 +148,7 @@ spec:
       path: /any/path/it/will/be/replaced
   containers:
   - name: pv-recycler
-    image: "k8s.gcr.io/busybox"
+    image: "registry.k8s.io/busybox"
     command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
     volumeMounts:
     - name: vol
```
