@@ -46,7 +46,7 @@ allows the clean up of resources like the following:
 <!--
 ## Owners and dependents {#owners-dependents}
 
-Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
+Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
 Owner references tell the control plane which objects are dependent on others.
 Kubernetes uses owner references to give the control plane, and other API
 clients, the opportunity to clean up related resources before deleting an
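As a sketch of how the owner references described in this hunk surface in practice, the command below reads the `ownerReferences` metadata that the control plane sets on a dependent object; the ReplicaSet name is a placeholder, not from the source:

```shell
# Read the ownerReferences set on a dependent object (here, a ReplicaSet
# owned by a Deployment). "example-rs" is a hypothetical name.
kubectl get replicaset example-rs -o jsonpath='{.metadata.ownerReferences}'
```

The returned entries carry the owner's `apiVersion`, `kind`, `name`, and `uid`, which is how the garbage collector resolves dependents back to their owners.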
@@ -98,7 +98,7 @@ it is treated as having an unresolvable owner reference, and is not able to be g
 
 <!--
 In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
-or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
+or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
 with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
 You can check for that kind of Event by running
 `kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
@@ -118,7 +118,7 @@ Kubernetes checks for and deletes objects that no longer have owner
 references, like the pods left behind when you delete a ReplicaSet. When you
 delete an object, you can control whether Kubernetes deletes the object's
 dependents automatically, in a process called *cascading deletion*. There are
-two types of cascading deletion, as follows:
+two types of cascading deletion, as follows:
 
 * Foreground cascading deletion
 * Background cascading deletion
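The two deletion types listed in this hunk can be selected with the `--cascade` flag of `kubectl delete` (a minimal sketch; the Deployment name is a placeholder):

```shell
# Foreground: dependents are deleted first, then the owner.
kubectl delete deployment example-deployment --cascade=foreground

# Background (the default): the owner is deleted immediately and the
# garbage collector removes the dependents afterwards.
kubectl delete deployment example-deployment --cascade=background
```

`--cascade` also accepts `orphan`, which deletes the owner while leaving dependents in place.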
@@ -135,7 +135,7 @@ Kubernetes 会检查并删除那些不再拥有属主引用的对象,例如在
 
 <!--
 You can also control how and when garbage collection deletes resources that have
-owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
+owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
 -->
 你也可以使用 Kubernetes {{<glossary_tooltip text="Finalizers" term_id="finalizer">}}
 来控制垃圾收集机制如何以及何时删除包含属主引用的资源。
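As an illustrative check of the finalizers mentioned in this hunk (the object name is hypothetical), a non-empty `metadata.finalizers` list defers garbage collection of an object until controllers clear each entry:

```shell
# If this prints any finalizer names, deletion of the object is held
# until every listed finalizer has been removed.
kubectl get pod example-pod -o jsonpath='{.metadata.finalizers}'
```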
@@ -145,7 +145,7 @@ owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id=
 
 In foreground cascading deletion, the owner object you're deleting first enters
 a *deletion in progress* state. In this state, the following happens to the
-owner object:
+owner object:
 -->
 ### 前台级联删除 {#foreground-deletion}
 
@@ -169,7 +169,7 @@ owner object:
 After the owner object enters the deletion in progress state, the controller
 deletes the dependents. After deleting all the dependent objects, the controller
 deletes the owner object. At this point, the object is no longer visible in the
-Kubernetes API.
+Kubernetes API.
 
 During foreground cascading deletion, the only dependents that block owner
 deletion are those that have the `ownerReference.blockOwnerDeletion=true` field.
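One way to observe the *deletion in progress* state this hunk describes (resource names are placeholders): during a foreground delete, the owner stays readable with a `deletionTimestamp` and the `foregroundDeletion` finalizer until blocking dependents are gone:

```shell
# Start a foreground cascading deletion without waiting for it to finish.
kubectl delete replicaset example-rs --cascade=foreground --wait=false

# While dependents remain, the owner still exists and carries a
# deletionTimestamp plus the foregroundDeletion finalizer.
kubectl get replicaset example-rs \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}'
```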
@@ -223,7 +223,7 @@ to override this behaviour, see [Delete owner objects and orphan dependents](/do
 The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
 collection on unused images every five minutes and on unused containers every
 minute. You should avoid using external garbage collection tools, as these can
-break the kubelet behavior and remove containers that should exist.
+break the kubelet behavior and remove containers that should exist.
 -->
 ## 未使用容器和镜像的垃圾收集 {#containers-images}
 
@@ -248,7 +248,7 @@ resource type.
 ### Container image lifecycle
 
 Kubernetes manages the lifecycle of all images through its *image manager*,
-which is part of the kubelet, with the cooperation of
+which is part of the kubelet, with the cooperation of
 {{< glossary_tooltip text="cadvisor" term_id="cadvisor" >}}. The kubelet
 considers the following disk usage limits when making garbage collection
 decisions:
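The disk usage limits this hunk refers to map onto KubeletConfiguration fields; a minimal sketch of such a fragment (85/80 are the documented defaults for the high/low thresholds; the output path is illustrative only, not a real kubelet drop-in location):

```shell
# Write a KubeletConfiguration fragment setting the image GC disk-usage
# limits. The field names are the real KubeletConfiguration fields;
# the file path is a placeholder for illustration.
cat <<'EOF' > /tmp/kubelet-image-gc.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
EOF
```

Image garbage collection starts once disk usage exceeds the high threshold and keeps deleting images until usage drops to the low threshold.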
@@ -277,7 +277,7 @@ kubelet 会持续删除镜像,直到磁盘用量到达 `LowThresholdPercent`
 ### Container garbage collection {#container-image-garbage-collection}
 
 The kubelet garbage collects unused containers based on the following variables,
-which you can define:
+which you can define:
 -->
 ### 容器垃圾收集 {#container-image-garbage-collection}
 
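Assuming the standard container GC tunables (`MinAge`, `MaxPerPodContainer`, `MaxContainers`), they correspond to deprecated kubelet flags; the values below are illustrative examples, not the source's defaults, and the block only assembles the argument list rather than launching a kubelet:

```shell
# Illustrative mapping of container GC tunables to kubelet flags.
# These flags are deprecated in favor of KubeletConfiguration and
# take effect only after a kubelet restart.
GC_ARGS="--minimum-container-ttl-duration=1m"                  # MinAge
GC_ARGS="$GC_ARGS --maximum-dead-containers-per-container=2"   # MaxPerPodContainer
GC_ARGS="$GC_ARGS --maximum-dead-containers=240"               # MaxContainers
echo "$GC_ARGS"
```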
@@ -300,7 +300,7 @@ kubelet 会基于如下变量对所有未使用的容器执行垃圾收集操作
 
 <!--
 In addition to these variables, the kubelet garbage collects unidentified and
-deleted containers, typically starting with the oldest first.
+deleted containers, typically starting with the oldest first.
 
 `MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other
 in situations where retaining the maximum number of containers per Pod
@@ -333,8 +333,8 @@ You can tune garbage collection of resources by configuring options specific to
 the controllers managing those resources. The following pages show you how to
 configure garbage collection:
 
-* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
-* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
+* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
+* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
 -->
 ## 配置垃圾收集 {#configuring-gc}
 