
Commit 42f001b

Merge pull request #47036 from chanieljdan/merged-main-dev-1.31

Merged main dev 1.31

2 parents: 780f166 + f9aaed4

File tree

33 files changed: +7193 −434 lines

assets/scss/_custom.scss

Lines changed: 6 additions & 0 deletions

@@ -341,6 +341,12 @@ footer {
 
 /* DOCS */
 
+table tr.cve-status-open, table tr.cve-status-unknown {
+  > td.cve-item-summary {
+    font-weight: bold;
+  }
+}
+
 .launch-cards {
   padding: 0;
   display: grid;

content/en/docs/concepts/configuration/secret.md

Lines changed: 0 additions & 1 deletion

@@ -335,7 +335,6 @@ You can create an `Opaque` type for credentials used for basic authentication.
 However, using the defined and public Secret type (`kubernetes.io/basic-auth`) helps other
 people to understand the purpose of your Secret, and sets a convention for what key names
 to expect.
-The Kubernetes API verifies that the required keys are set for a Secret of this type.
 
 ### SSH authentication Secrets
 
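The convention the retained text describes is easy to see in a concrete manifest. Below is a hedged sketch of a `kubernetes.io/basic-auth` Secret; the Secret name, the credential values, and the `/tmp` path are placeholders, not part of the commit above:

```shell
#!/bin/sh
# Sketch: a kubernetes.io/basic-auth Secret uses the conventional
# `username` and `password` keys. Name, values, and path are placeholders.
cat > /tmp/basic-auth-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: example-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: t0p-Secret
EOF
# You would apply it with: kubectl apply -f /tmp/basic-auth-secret.yaml
grep -c 'kubernetes.io/basic-auth' /tmp/basic-auth-secret.yaml
```

The `grep -c` at the end simply confirms the type field was written; it prints `1`.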
content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md

Lines changed: 2 additions & 2 deletions

@@ -103,9 +103,9 @@ percentageOfNodesToScore: 50
 
 `percentageOfNodesToScore` must be a value between 1 and 100 with the default
 value being calculated based on the cluster size. There is also a hardcoded
-minimum value of 50 nodes.
+minimum value of 100 nodes.
 
-{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
+{{< note >}}In clusters with less than 100 feasible nodes, the scheduler still
 checks all the nodes because there are not enough feasible nodes to stop
 the scheduler's search early.
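For context, `percentageOfNodesToScore` is a field of a KubeSchedulerConfiguration file that kube-scheduler reads via its `--config` flag. A hedged sketch follows; the apiVersion is assumed to be the current one and the file path is a placeholder:

```shell
#!/bin/sh
# Sketch: where percentageOfNodesToScore lives. kube-scheduler is started
# with --config pointing at a file like this; the value 50 matches the
# example in the doc being patched, and the path is illustrative.
cat > /tmp/scheduler-config.yaml <<'EOF'
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 50
EOF
grep 'percentageOfNodesToScore' /tmp/scheduler-config.yaml
```

The final `grep` echoes the configured line back, confirming the field was set.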
content/en/docs/concepts/services-networking/service.md

Lines changed: 7 additions & 7 deletions (whitespace/indentation changes; YAML indentation below is reconstructed)

@@ -725,16 +725,16 @@ Select one of the tabs.
 metadata:
   name: my-service
   annotations:
-    networking.gke.io/load-balancer-type: "Internal"
+    networking.gke.io/load-balancer-type: "Internal"
 ```
 {{% /tab %}}
 {{% tab name="AWS" %}}
 
 ```yaml
 metadata:
-  name: my-service
-  annotations:
-    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
+  name: my-service
+  annotations:
+    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
 ```
 
 {{% /tab %}}
@@ -744,7 +744,7 @@ metadata:
 metadata:
   name: my-service
   annotations:
-    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
 ```
 
 {{% /tab %}}
@@ -754,7 +754,7 @@ metadata:
 metadata:
   name: my-service
   annotations:
-    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
+    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
 ```
 
 {{% /tab %}}
@@ -802,7 +802,7 @@ metadata:
 metadata:
   name: my-service
   annotations:
-    service.beta.kubernetes.io/oci-load-balancer-internal: true
+    service.beta.kubernetes.io/oci-load-balancer-internal: true
 ```
 {{% /tab %}}
 {{< /tabs >}}
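Each of those annotations attaches to an otherwise ordinary `type: LoadBalancer` Service. A hedged sketch using the GKE variant follows; the selector, ports, and file path are placeholders I introduced, not values from the commit:

```shell
#!/bin/sh
# Sketch: a complete internal LoadBalancer Service for GKE. Only the
# annotation differs between cloud providers, as the diff shows.
# Selector, ports, and path are illustrative placeholders.
cat > /tmp/internal-lb-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
grep -c 'load-balancer-type' /tmp/internal-lb-service.yaml
```

On another provider, you would swap only the annotation line for the one in the matching tab above.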

content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md

Lines changed: 46 additions & 33 deletions

@@ -14,62 +14,68 @@ weight: 270
 
 ## {{% heading "prerequisites" %}}
 
-You need to have a Kubernetes cluster, and the kubectl command-line tool must
-be configured to communicate with your cluster. It is recommended to follow this
-guide on a cluster with at least two nodes that are not acting as control plane
-nodes. If you do not already have a cluster, you can create one by using
-[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/).
+Before you follow steps in this page to deploy, manage, back up or restore etcd,
+you need to understand the typical expectations for operating an etcd cluster.
+Refer to the [etcd documentation](https://etcd.io/docs/) for more context.
 
-### Understanding etcdctl and etcdutl
+Key details include:
 
-`etcdctl` and `etcdutl` are command-line tools used to interact with etcd clusters, but they serve different purposes:
-
-- `etcdctl`: This is the primary command-line client for interacting with etcd over a
-  network. It is used for day-to-day operations such as managing keys and values,
-  administering the cluster, checking health, and more.
-
-- `etcdutl`: This is an administration utility designed to operate directly on etcd data
-  files, including migrating data between etcd versions, defragmenting the database,
-  restoring snapshots, and validating data consistency. For network operations, `etcdctl`
-  should be used.
-
-For more information on `etcdutl`, you can refer to the [etcd recovery documentation](https://etcd.io/docs/v3.5/op-guide/recovery/).
-
-<!-- steps -->
-
-## Prerequisites
-
-* Run etcd as a cluster of odd members.
+* The minimum recommended etcd versions to run in production are `3.4.22+` and `3.5.6+`.
 
 * etcd is a leader-based distributed system. Ensure that the leader
   periodically send heartbeats on time to all followers to keep the cluster
   stable.
 
-* Ensure that no resource starvation occurs.
+* You should run etcd as a cluster with an odd number of members.
+
+* Aim to ensure that no resource starvation occurs.
 
   Performance and stability of the cluster is sensitive to network and disk
   I/O. Any resource starvation can lead to heartbeat timeout, causing instability
   of the cluster. An unstable etcd indicates that no leader is elected. Under
   such circumstances, a cluster cannot make any changes to its current state,
   which implies no new pods can be scheduled.
 
-* Keeping etcd clusters stable is critical to the stability of Kubernetes
-  clusters. Therefore, run etcd clusters on dedicated machines or isolated
-  environments for [guaranteed resource requirements](https://etcd.io/docs/current/op-guide/hardware/).
-
-* The minimum recommended etcd versions to run in production are `3.4.22+` and `3.5.6+`.
-
-## Resource requirements
+### Resource requirements for etcd
 
 Operating etcd with limited resources is suitable only for testing purposes.
 For deploying in production, advanced hardware configuration is required.
 Before deploying etcd in production, see
 [resource requirement reference](https://etcd.io/docs/current/op-guide/hardware/#example-hardware-configurations).
 
+Keeping etcd clusters stable is critical to the stability of Kubernetes
+clusters. Therefore, run etcd clusters on dedicated machines or isolated
+environments for [guaranteed resource requirements](https://etcd.io/docs/current/op-guide/hardware/).
+
+### Tools
+
+Depending on which specific outcome you're working on, you will need the `etcdctl` tool or the
+`etcdutl` tool (you may need both).
+
+<!-- steps -->
+
+## Understanding etcdctl and etcdutl
+
+`etcdctl` and `etcdutl` are command-line tools used to interact with etcd clusters, but they serve different purposes:
+
+- `etcdctl`: This is the primary command-line client for interacting with etcd over a
+  network. It is used for day-to-day operations such as managing keys and values,
+  administering the cluster, checking health, and more.
+
+- `etcdutl`: This is an administration utility designed to operate directly on etcd data
+  files, including migrating data between etcd versions, defragmenting the database,
+  restoring snapshots, and validating data consistency. For network operations, `etcdctl`
+  should be used.
+
+For more information on `etcdutl`, you can refer to the [etcd recovery documentation](https://etcd.io/docs/v3.5/op-guide/recovery/).
+
 ## Starting etcd clusters
 
 This section covers starting a single-node and multi-node etcd cluster.
 
+This guide assumes that `etcd` is already installed.
+
 ### Single-node etcd cluster
 
 Use a single-node etcd cluster only for testing purposes.
@@ -93,7 +99,14 @@ production and back it up periodically. A five-member cluster is recommended
 in production. For more information, see
 [FAQ documentation](https://etcd.io/docs/current/faq/#what-is-failure-tolerance).
 
-Configure an etcd cluster either by static member information or by dynamic
+As you're using Kubernetes, you have the option to run etcd as a container inside
+one or more Pods. The `kubeadm` tool sets up etcd
+{{< glossary_tooltip text="static pods" term_id="static-pod" >}} by default, or
+you can deploy a
+[separate cluster](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
+and instruct kubeadm to use that etcd cluster as the control plane's backing store.
+
+You configure an etcd cluster either by static member information or by dynamic
 discovery. For more information on clustering, see
 [etcd clustering documentation](https://etcd.io/docs/current/op-guide/clustering/).
 
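The etcdctl/etcdutl split described in the new "Understanding" section can be summarized as a cheat sheet. This is a hedged sketch only: the endpoint, snapshot path, and data directory below are placeholders, and both commands (`etcdctl snapshot save`, `etcdutl snapshot restore`) require a real etcd installation to actually run, so the script merely records them:

```shell
#!/bin/sh
# Sketch: which tool to reach for, per the distinction above.
# etcdctl = network operations against a live etcd member;
# etcdutl = offline operations directly on the data files.
# Endpoint and paths are illustrative placeholders.
SNAPSHOT=/tmp/etcd-snapshot.db
{
  echo "network -> etcdctl --endpoints=https://127.0.0.1:2379 snapshot save ${SNAPSHOT}"
  echo "offline -> etcdutl snapshot restore ${SNAPSHOT} --data-dir /tmp/etcd-restored"
} > /tmp/etcd-tool-cheatsheet.txt
cat /tmp/etcd-tool-cheatsheet.txt
```

In practice a production `etcdctl` invocation also needs `--cacert`, `--cert`, and `--key` for TLS-secured clusters.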
content/en/docs/tasks/configure-pod-container/configure-service-account.md

Lines changed: 1 addition & 1 deletion

@@ -263,7 +263,7 @@ token: ...
 ```
 
 {{< note >}}
-The content of `token` is elided here.
+The content of `token` is omitted here.
 
 Take care not to display the contents of a `kubernetes.io/service-account-token`
 Secret somewhere that your terminal / computer screen could be seen by an onlooker.

content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md

Lines changed: 49 additions & 52 deletions

@@ -29,13 +29,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re
 
 ```sh
 # Linux
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
 
 # macOS
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-darwin-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose
 
 # Windows
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-windows-amd64.exe -o kompose.exe
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe
 
 chmod +x kompose
 sudo mv ./kompose /usr/local/bin/kompose
@@ -93,26 +93,27 @@ you need is an existing `docker-compose.yml` file.
 1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
 
    ```yaml
-   version: "2"
 
    services:
 
-     redis-master:
-       image: registry.k8s.io/redis:e2e
+     redis-leader:
+       container_name: redis-leader
+       image: redis
       ports:
         - "6379"
 
-     redis-slave:
-       image: gcr.io/google_samples/gb-redisslave:v3
+     redis-replica:
+       container_name: redis-replica
+       image: redis
       ports:
         - "6379"
-      environment:
-        - GET_HOSTS_FROM=dns
+       command: redis-server --replicaof redis-leader 6379 --dir /tmp
 
-     frontend:
-       image: gcr.io/google-samples/gb-frontend:v4
+     web:
+       container_name: web
+       image: quay.io/kompose/web
       ports:
-        - "80:80"
+        - "8080:8080"
       environment:
         - GET_HOSTS_FROM=dns
       labels:
@@ -129,27 +130,27 @@ you need is an existing `docker-compose.yml` file.
    The output is similar to:
 
    ```none
-   INFO Kubernetes file "frontend-tcp-service.yaml" created
-   INFO Kubernetes file "redis-master-service.yaml" created
-   INFO Kubernetes file "redis-slave-service.yaml" created
-   INFO Kubernetes file "frontend-deployment.yaml" created
-   INFO Kubernetes file "redis-master-deployment.yaml" created
-   INFO Kubernetes file "redis-slave-deployment.yaml" created
+   INFO Kubernetes file "redis-leader-service.yaml" created
+   INFO Kubernetes file "redis-replica-service.yaml" created
+   INFO Kubernetes file "web-tcp-service.yaml" created
+   INFO Kubernetes file "redis-leader-deployment.yaml" created
+   INFO Kubernetes file "redis-replica-deployment.yaml" created
+   INFO Kubernetes file "web-deployment.yaml" created
   ```
 
   ```bash
-   kubectl apply -f frontend-tcp-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
+   kubectl apply -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml
   ```
 
   The output is similar to:
 
  ```none
-   service/frontend-tcp created
-   service/redis-master created
-   service/redis-slave created
-   deployment.apps/frontend created
-   deployment.apps/redis-master created
-   deployment.apps/redis-slave created
+   deployment.apps/redis-leader created
+   deployment.apps/redis-replica created
+   deployment.apps/web created
+   service/redis-leader created
+   service/redis-replica created
+   service/web-tcp created
  ```
 
  Your deployments are running in Kubernetes.
@@ -159,39 +160,35 @@ you need is an existing `docker-compose.yml` file.
   If you're already using `minikube` for your development process:
 
   ```bash
-   minikube service frontend
+   minikube service web-tcp
   ```
 
   Otherwise, let's look up what IP your service is using!
 
  ```sh
-   kubectl describe svc frontend
+   kubectl describe svc web-tcp
  ```
 
  ```none
-   Name:                     frontend-tcp
-   Namespace:                default
-   Labels:                   io.kompose.service=frontend-tcp
-   Annotations:              kompose.cmd: kompose convert
-                             kompose.service.type: LoadBalancer
-                             kompose.version: 1.26.0 (40646f47)
-   Selector:                 io.kompose.service=frontend
-   Type:                     LoadBalancer
-   IP Family Policy:         SingleStack
-   IP Families:              IPv4
-   IP:                       10.43.67.174
-   IPs:                      10.43.67.174
-   Port:                     80  80/TCP
-   TargetPort:               80/TCP
-   NodePort:                 80  31254/TCP
-   Endpoints:                10.42.0.25:80
-   Session Affinity:         None
-   External Traffic Policy:  Cluster
-   Events:
-     Type    Reason                Age  From                Message
-     ----    ------                ---- ----                -------
-     Normal  EnsuringLoadBalancer  62s  service-controller  Ensuring load balancer
-     Normal  AppliedDaemonSet      62s  service-controller  Applied LoadBalancer DaemonSet kube-system/svclb-frontend-tcp-9362d276
+   Name:                     web-tcp
+   Namespace:                default
+   Labels:                   io.kompose.service=web-tcp
+   Annotations:              kompose.cmd: kompose convert
+                             kompose.service.type: LoadBalancer
+                             kompose.version: 1.33.0 (3ce457399)
+   Selector:                 io.kompose.service=web
+   Type:                     LoadBalancer
+   IP Family Policy:         SingleStack
+   IP Families:              IPv4
+   IP:                       10.102.30.3
+   IPs:                      10.102.30.3
+   Port:                     8080  8080/TCP
+   TargetPort:               8080/TCP
+   NodePort:                 8080  31624/TCP
+   Endpoints:                10.244.0.5:8080
+   Session Affinity:         None
+   External Traffic Policy:  Cluster
+   Events:                   <none>
  ```
 
  If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
@@ -206,7 +203,7 @@ you need is an existing `docker-compose.yml` file.
   resources used.
 
   ```sh
-   kubectl delete -f frontend-tcp-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
+   kubectl delete -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml
  ```
 
 <!-- discussion -->
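One observation on the install snippet this diff updates: the kompose version appears three times, which is exactly why version bumps like this one touch three lines. A hedged sketch that parameterizes the version (the URL pattern comes from the commands above; `v1.34.0` is the version the diff installs, and the output file path is a placeholder):

```shell
#!/bin/sh
# Sketch: build the kompose download URL from one version variable so a
# future bump changes a single line. URL pattern follows the diff above.
KOMPOSE_VERSION=v1.34.0
OS=linux   # or: darwin, windows
echo "https://github.com/kubernetes/kompose/releases/download/${KOMPOSE_VERSION}/kompose-${OS}-amd64" \
  > /tmp/kompose-url.txt
cat /tmp/kompose-url.txt
```

You would then pass that URL to `curl -L ... -o kompose` as in the snippet above.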

content/en/docs/tasks/manage-kubernetes-objects/storage-version-migration.md

Lines changed: 9 additions & 0 deletions

@@ -26,6 +26,15 @@ Install [`kubectl`](/docs/tasks/tools/#kubectl).
 
 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
 
+Ensure that your cluster has the `StorageVersionMigrator` and `InformerResourceVersion`
+[feature gates](/docs/reference/command-line-tools-reference/feature-gates/)
+enabled. You will need control plane administrator access to make that change.
+
+Enable storage version migration REST api by setting runtime config
+`storagemigration.k8s.io/v1alpha1` to `true` for the API server. For more information on
+how to do that,
+read [enable or disable a Kubernetes API](/docs/tasks/administer-cluster/enable-disable-api/).
+
 <!-- steps -->
 
 ## Re-encrypt Kubernetes secrets using storage version migration
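The two prerequisites this diff adds translate into two standard kube-apiserver flags, `--feature-gates` and `--runtime-config`. A hedged sketch follows; only these two flags are shown, the rest of a real API server command line is elided, and the output file path is a placeholder:

```shell
#!/bin/sh
# Sketch: the kube-apiserver flags implied by the prerequisites above.
# A real invocation needs many more flags; "..." stands in for them.
FEATURE_GATES="StorageVersionMigrator=true,InformerResourceVersion=true"
RUNTIME_CONFIG="storagemigration.k8s.io/v1alpha1=true"
echo "kube-apiserver --feature-gates=${FEATURE_GATES} --runtime-config=${RUNTIME_CONFIG} ..." \
  > /tmp/apiserver-flags.txt
cat /tmp/apiserver-flags.txt
```

With kubeadm-managed control planes, these flags would typically go into the `apiServer.extraArgs` section of the kubeadm configuration rather than be typed by hand.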
