Commit c612009

Merge pull request #44909 from drewhagen/merged-main-dev-1.30
Merged main dev 1.30
2 parents: 0e7f074 + 02ab6ec

56 files changed: +733 −526 lines changed

OWNERS

Lines changed: 0 additions & 1 deletion

@@ -7,7 +7,6 @@ approvers:
 - sig-docs-en-owners # Defined in OWNERS_ALIASES

 emeritus_approvers:
-# - celestehorgan, commented out to disable PR assignments
 # - chenopis, commented out to disable PR assignments
 # - irvifa, commented out to disable PR assignments
 # - jaredbhatti, commented out to disable PR assignments

OWNERS_ALIASES

Lines changed: 2 additions & 1 deletion

@@ -26,9 +26,10 @@ aliases:
     - bene2k1
     - rlenferink
   sig-docs-en-owners: # Admins for English content
+    - celestehorgan
     - divya-mohan0209
-    - katcosgrove # RT 1.30 Lead
     - drewhagen # RT 1.30 Docs Lead
+    - katcosgrove # RT 1.30 Lead
     - natalisucks
     - nate-double-u
     - onlydole

content/en/docs/concepts/storage/persistent-volumes.md

Lines changed: 1 addition & 1 deletion

@@ -358,7 +358,7 @@ spec:
 ...
 ```

-This is useful if you want to consume PersistentVolumes that have their `claimPolicy` set
+This is useful if you want to consume PersistentVolumes that have their `persistentVolumeReclaimPolicy` set
 to `Retain`, including cases where you are reusing an existing PV.

 ### Expanding Persistent Volumes Claims
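For context on the corrected field name: `persistentVolumeReclaimPolicy` is a top-level field of a PersistentVolume `.spec`. A minimal sketch, not part of this commit; the volume name, capacity, and hostPath are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                       # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # the field the doc fix above refers to
  hostPath:
    path: /mnt/data                      # hypothetical backing store, for illustration only
```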

content/en/docs/concepts/workloads/controllers/cron-jobs.md

Lines changed: 24 additions & 22 deletions

@@ -19,7 +19,7 @@ A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repe

 CronJob is meant for performing regular scheduled actions such as backups, report generation,
 and so on. One CronJob object is like one line of a _crontab_ (cron table) file on a
-Unix system. It runs a job periodically on a given schedule, written in
+Unix system. It runs a Job periodically on a given schedule, written in
 [Cron](https://en.wikipedia.org/wiki/Cron) format.

 CronJobs have limitations and idiosyncrasies.

@@ -101,41 +101,43 @@ You can specify common metadata for the templated Jobs, such as
 {{< glossary_tooltip text="annotations" term_id="annotation" >}}.
 For information about writing a Job `.spec`, see [Writing a Job Spec](/docs/concepts/workloads/controllers/job/#writing-a-job-spec).

-### Deadline for delayed job start {#starting-deadline}
+### Deadline for delayed Job start {#starting-deadline}

 The `.spec.startingDeadlineSeconds` field is optional.
 This field defines a deadline (in whole seconds) for starting the Job, if that Job misses its scheduled time
 for any reason.

 After missing the deadline, the CronJob skips that instance of the Job (future occurrences are still scheduled).
-For example, if you have a backup job that runs twice a day, you might allow it to start up to 8 hours late,
+For example, if you have a backup Job that runs twice a day, you might allow it to start up to 8 hours late,
 but no later, because a backup taken any later wouldn't be useful: you would instead prefer to wait for
 the next scheduled run.

 For Jobs that miss their configured deadline, Kubernetes treats them as failed Jobs.
 If you don't specify `startingDeadlineSeconds` for a CronJob, the Job occurrences have no deadline.

 If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob
-controller measures the time between when a job is expected to be created and
+controller measures the time between when a Job is expected to be created and
 now. If the difference is higher than that limit, it will skip this execution.

-For example, if it is set to `200`, it allows a job to be created for up to 200
+For example, if it is set to `200`, it allows a Job to be created for up to 200
 seconds after the actual schedule.

 ### Concurrency policy

 The `.spec.concurrencyPolicy` field is also optional.
-It specifies how to treat concurrent executions of a job that is created by this CronJob.
+It specifies how to treat concurrent executions of a Job that is created by this CronJob.
 The spec may specify only one of the following concurrency policies:

-* `Allow` (default): The CronJob allows concurrently running jobs
-* `Forbid`: The CronJob does not allow concurrent runs; if it is time for a new job run and the
-  previous job run hasn't finished yet, the CronJob skips the new job run
-* `Replace`: If it is time for a new job run and the previous job run hasn't finished yet, the
-  CronJob replaces the currently running job run with a new job run
+* `Allow` (default): The CronJob allows concurrently running Jobs
+* `Forbid`: The CronJob does not allow concurrent runs; if it is time for a new Job run and the
+  previous Job run hasn't finished yet, the CronJob skips the new Job run. Also note that when the
+  previous Job run finishes, `.spec.startingDeadlineSeconds` is still taken into account and may
+  result in a new Job run.
+* `Replace`: If it is time for a new Job run and the previous Job run hasn't finished yet, the
+  CronJob replaces the currently running Job run with a new Job run

-Note that concurrency policy only applies to the jobs created by the same cron job.
-If there are multiple CronJobs, their respective jobs are always allowed to run concurrently.
+Note that concurrency policy only applies to the Jobs created by the same CronJob.
+If there are multiple CronJobs, their respective Jobs are always allowed to run concurrently.

 ### Schedule suspension

@@ -149,19 +151,19 @@ scheduled, but the CronJob controller does not start the Jobs to run the tasks)
 you unsuspend the CronJob.

 {{< caution >}}
-Executions that are suspended during their scheduled time count as missed jobs.
+Executions that are suspended during their scheduled time count as missed Jobs.
 When `.spec.suspend` changes from `true` to `false` on an existing CronJob without a
-[starting deadline](#starting-deadline), the missed jobs are scheduled immediately.
+[starting deadline](#starting-deadline), the missed Jobs are scheduled immediately.
 {{< /caution >}}

 ### Jobs history limits

 The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields are optional.
-These fields specify how many completed and failed jobs should be kept.
+These fields specify how many completed and failed Jobs should be kept.
 By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping
-none of the corresponding kind of jobs after they finish.
+none of the corresponding kind of Jobs after they finish.

-For another way to clean up jobs automatically, see [Clean up finished jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically).
+For another way to clean up Jobs automatically, see [Clean up finished Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically).

 ### Time zones

@@ -207,27 +209,27 @@ Kubernetes tries to avoid those situations, but does not completely prevent them
 the Jobs that you define should be _idempotent_.

 If `startingDeadlineSeconds` is set to a large value or left unset (the default)
-and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
+and if `concurrencyPolicy` is set to `Allow`, the Jobs will always run
 at least once.

 {{< caution >}}
 If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks things every 10 seconds.
 {{< /caution >}}

-For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error.
+For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the Job and logs the error.

 ```
 Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
 ```

-It is important to note that if the `startingDeadlineSeconds` field is set (not `nil`), the controller counts how many missed jobs occurred from the value of `startingDeadlineSeconds` until now rather than from the last scheduled time until now. For example, if `startingDeadlineSeconds` is `200`, the controller counts how many missed jobs occurred in the last 200 seconds.
+It is important to note that if the `startingDeadlineSeconds` field is set (not `nil`), the controller counts how many missed Jobs occurred from the value of `startingDeadlineSeconds` until now rather than from the last scheduled time until now. For example, if `startingDeadlineSeconds` is `200`, the controller counts how many missed Jobs occurred in the last 200 seconds.

 A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.

 For example, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its
 `startingDeadlineSeconds` field is not set. If the CronJob controller happens to
-be down from `08:29:00` to `10:21:00`, the job will not start as the number of missed jobs which missed their schedule is greater than 100.
+be down from `08:29:00` to `10:21:00`, the Job will not start as the number of missed Jobs which missed their schedule is greater than 100.

 To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its
 `startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to
content/en/docs/contribute/localization.md

Lines changed: 5 additions & 2 deletions

@@ -187,8 +187,11 @@ script and use it in the theme. Assign "language name in latin script" to
 `languageNameLatinScript`. For example, `languageNameLatinScript ="Korean"` or
 `languageNameLatinScript = "Deutsch"`.

-When assigning a `weight` parameter for your block, find the language block with
-the highest weight and add 1 to that value.
+The `weight` parameter determines the order of languages in the language selection bar.
+A lower weight takes precedence, resulting in the language appearing first.
+When assigning the `weight` parameter, it is important to examine the existing language
+blocks and adjust their weights so that they remain in sorted order relative to all languages,
+including any newly added language.

 For more information about Hugo's multilingual support, see
 "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)".

content/en/docs/contribute/style/style-guide.md

Lines changed: 1 addition & 1 deletion

@@ -156,7 +156,7 @@ Run the process as a DaemonSet in the `kube-system` namespace. | Run the process
 {{< table caption = "Do and Don't - Use code style for Kubernetes command tool and component names" >}}
 Do | Don't
 :--| :-----
-The kubelet preserves node stability. | The `kubelet` preserves node stability.
+The `kubelet` preserves node stability. | The kubelet preserves node stability.
 The `kubectl` handles locating and authenticating to the API server. | The kubectl handles locating and authenticating to the apiserver.
 Run the process with the certificate, `kube-apiserver --client-ca-file=FILENAME`. | Run the process with the certificate, kube-apiserver --client-ca-file=FILENAME. |
 {{< /table >}}

content/en/docs/reference/using-api/deprecation-guide.md

Lines changed: 3 additions & 3 deletions

@@ -35,11 +35,11 @@ The **flowcontrol.apiserver.k8s.io/v1beta3** API version of FlowSchema and Prior

 ### v1.29

-The **v1.29** release will stop serving the following deprecated API versions:
+The **v1.29** release stopped serving the following deprecated API versions:

 #### Flow control resources {#flowcontrol-resources-v129}

-The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.29.
+The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.29.

 * Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1** API version, available since v1.29, or the **flowcontrol.apiserver.k8s.io/v1beta3** API version, available since v1.26.
 * All existing persisted objects are accessible via the new API

@@ -54,7 +54,7 @@ The **v1.27** release stopped serving the following deprecated API versions:

 #### CSIStorageCapacity {#csistoragecapacity-v127}

-The **storage.k8s.io/v1beta1** API version of CSIStorageCapacity will no longer be served in v1.27.
+The **storage.k8s.io/v1beta1** API version of CSIStorageCapacity is no longer served as of v1.27.

 * Migrate manifests and API clients to use the **storage.k8s.io/v1** API version, available since v1.24.
 * All existing persisted objects are accessible via the new API
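As an illustration of the migration step, updating the `apiVersion` in a manifest is typically the only change needed when the schema is otherwise unchanged between versions. A hedged sketch for CSIStorageCapacity; the object name and values are hypothetical:

```yaml
# before: apiVersion: storage.k8s.io/v1beta1 (no longer served as of v1.27)
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity     # hypothetical
  namespace: default
storageClassName: standard   # hypothetical StorageClass
capacity: 10Gi
```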

content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md

Lines changed: 3 additions & 7 deletions

@@ -344,13 +344,7 @@ Before starting the restore operation, a snapshot file must be present. It can
 either be a snapshot file from a previous backup operation, or from a remaining
 [data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir).

-Here is an example:
-
-```shell
-ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshot.db
-```
-
-Another example for restoring using `etcdctl` options:
+When restoring the cluster, use the `--data-dir` option to specify to which folder the cluster should be restored:

 ```shell
 ETCDCTL_API=3 etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db

@@ -364,6 +358,8 @@ export ETCDCTL_API=3
 etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db
 ```

+If `<data-dir-location>` is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change the etcd configuration and restart the etcd process after restoration to make it use the new data directory.
+
 For more information and examples on restoring a cluster from a snapshot file, see
 [etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).
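Putting the added guidance together, a restore into a fresh data directory followed by re-pointing etcd might look like the following sketch; the systemd unit name and paths are assumptions, not from the commit:

```shell
# stop etcd before touching its data (unit name is an assumption)
sudo systemctl stop etcd

# restore the snapshot into a new, empty data directory
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-restored snapshot restore snapshot.db

# update the etcd configuration (--data-dir flag or config file, depending on
# your deployment) to point at /var/lib/etcd-restored, then restart etcd
sudo systemctl start etcd
```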

content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md

Lines changed: 20 additions & 20 deletions

@@ -33,12 +33,12 @@ kubectl get deployment --namespace=kube-system

 The output is similar to this:

-    NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
+    NAME                       READY  UP-TO-DATE  AVAILABLE  AGE
     ...
-    dns-autoscaler        1/1    1           1          ...
+    kube-dns-autoscaler        1/1    1           1          ...
     ...

-If you see "dns-autoscaler" in the output, DNS horizontal autoscaling is
+If you see "kube-dns-autoscaler" in the output, DNS horizontal autoscaling is
 already enabled, and you can skip to
 [Tuning autoscaling parameters](#tuning-autoscaling-parameters).

@@ -99,13 +99,13 @@ kubectl apply -f dns-horizontal-autoscaler.yaml

 The output of a successful command is:

-    deployment.apps/dns-autoscaler created
+    deployment.apps/kube-dns-autoscaler created

 DNS horizontal autoscaling is now enabled.

 ## Tune DNS autoscaling parameters {#tuning-autoscaling-parameters}

-Verify that the dns-autoscaler {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} exists:
+Verify that the kube-dns-autoscaler {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} exists:

 ```shell
 kubectl get configmap --namespace=kube-system

@@ -115,13 +115,13 @@ The output is similar to this:

     NAME                  DATA  AGE
     ...
-    dns-autoscaler        1     ...
+    kube-dns-autoscaler   1     ...
     ...

 Modify the data in the ConfigMap:

 ```shell
-kubectl edit configmap dns-autoscaler --namespace=kube-system
+kubectl edit configmap kube-dns-autoscaler --namespace=kube-system
 ```

 Look for this line:

@@ -151,17 +151,17 @@ There are other supported scaling patterns. For details, see
 There are a few options for tuning DNS horizontal autoscaling. Which option to
 use depends on different conditions.

-### Option 1: Scale down the dns-autoscaler deployment to 0 replicas
+### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas

 This option works for all situations. Enter this command:

 ```shell
-kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system
+kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
 ```

 The output is:

-    deployment.apps/dns-autoscaler scaled
+    deployment.apps/kube-dns-autoscaler scaled

 Verify that the replica count is zero:

@@ -171,37 +171,37 @@ kubectl get rs --namespace=kube-system

 The output displays 0 in the DESIRED and CURRENT columns:

-    NAME                            DESIRED  CURRENT  READY  AGE
+    NAME                                 DESIRED  CURRENT  READY  AGE
     ...
-    dns-autoscaler-6b59789fc8       0        0        0      ...
+    kube-dns-autoscaler-6b59789fc8       0        0        0      ...
     ...

-### Option 2: Delete the dns-autoscaler deployment
+### Option 2: Delete the kube-dns-autoscaler deployment

-This option works if dns-autoscaler is under your own control, which means
+This option works if kube-dns-autoscaler is under your own control, which means
 no one will re-create it:

 ```shell
-kubectl delete deployment dns-autoscaler --namespace=kube-system
+kubectl delete deployment kube-dns-autoscaler --namespace=kube-system
 ```

 The output is:

-    deployment.apps "dns-autoscaler" deleted
+    deployment.apps "kube-dns-autoscaler" deleted

-### Option 3: Delete the dns-autoscaler manifest file from the master node
+### Option 3: Delete the kube-dns-autoscaler manifest file from the master node

-This option works if dns-autoscaler is under control of the (deprecated)
+This option works if kube-dns-autoscaler is under control of the (deprecated)
 [Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md),
 and you have write access to the master node.

 Sign in to the master node and delete the corresponding manifest file.
-The common path for this dns-autoscaler is:
+The common path for this kube-dns-autoscaler is:

     /etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

 After the manifest file is deleted, the Addon Manager will delete the
-dns-autoscaler Deployment.
+kube-dns-autoscaler Deployment.
content/en/docs/tasks/administer-cluster/encrypt-data.md

Lines changed: 19 additions & 4 deletions

@@ -50,12 +50,27 @@ to either:

 <!-- steps -->

-## Configuration and determining whether encryption at rest is already enabled
+## Determine whether encryption at rest is already enabled {#determining-whether-encryption-at-rest-is-already-enabled}
+
+By default, the API server stores plain-text representations of resources into etcd, with
+no at-rest encryption.

 The `kube-apiserver` process accepts an argument `--encryption-provider-config`
-that controls how API data is encrypted in etcd.
-The configuration is provided as an API named
-[`EncryptionConfiguration`](/docs/reference/config-api/apiserver-encryption.v1/). An example configuration is provided below.
+that specifies a path to a configuration file. The contents of that file, if you specify one,
+control how Kubernetes API data is encrypted in etcd.
+If you are running the kube-apiserver without the `--encryption-provider-config` command line
+argument, you do not have encryption at rest enabled. If you are running the kube-apiserver
+with the `--encryption-provider-config` command line argument, and the file that it references
+specifies the `identity` provider as the first encryption provider in the list, then you
+do not have at-rest encryption enabled
+(**the default `identity` provider does not provide any confidentiality protection.**)
+
+If you are running the kube-apiserver
+with the `--encryption-provider-config` command line argument, and the file that it references
+specifies a provider other than `identity` as the first encryption provider in the list, then
+you already have at-rest encryption enabled. However, that check does not tell you whether
+a previous migration to encrypted storage has succeeded. If you are not sure, see
+[ensure all relevant data are encrypted](#ensure-all-secrets-are-encrypted).

 {{< caution >}}
 **IMPORTANT:** For high-availability configurations (with two or more control plane nodes), the
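To make the "first provider in the list" check concrete: at-rest encryption is effectively on when a provider such as `aescbc` precedes `identity`. A hedged sketch of an `EncryptionConfiguration`; the key name and placeholder secret are illustrative:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:                # first provider listed: encryption at rest is enabled
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>  # placeholder, not a real key
      - identity: {}           # fallback so previously unencrypted data stays readable
```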
