Commit 60d3721

Merge pull request #18252 from sftim/20191223_cleanup_pod_lifecycle
Clean up pod lifecycle page
2 parents 829b008 + 8a92e05 commit 60d3721

1 file changed: 46 additions, 37 deletions


content/en/docs/concepts/workloads/pods/pod-lifecycle.md

Lines changed: 46 additions & 37 deletions
@@ -181,7 +181,7 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers
    ...
       State:          Waiting
        Reason:       ErrImagePull
-      ...
+      ...
 ```
 
 * `Running`: Indicates that the container is executing without issues. The `postStart` hook (if any) is executed prior to the container entering a Running state. This state also displays the time when the container entered Running state.
@@ -205,31 +205,34 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers
 ...
 ```
 
-## Pod readiness gate
+## Pod readiness {#pod-readiness-gate}
 
 {{< feature-state for_k8s_version="v1.14" state="stable" >}}
 
-In order to add extensibility to Pod readiness by enabling the injection of
-extra feedback or signals into `PodStatus`, Kubernetes 1.11 introduced a
-feature named [Pod ready++](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md).
-You can use the new field `ReadinessGate` in the `PodSpec` to specify additional
-conditions to be evaluated for Pod readiness. If Kubernetes cannot find such a
+Your application can inject extra feedback or signals into PodStatus:
+_Pod readiness_. To use this, set `readinessGates` in the PodSpec to specify
+a list of additional conditions that the kubelet evaluates for Pod readiness.
+
+Readiness gates are determined by the current state of `status.condition`
+fields for the Pod. If Kubernetes cannot find such a
 condition in the `status.conditions` field of a Pod, the status of the condition
-is default to "`False`". Below is an example:
+is defaulted to "`False`".
+
+Here is an example:
 
 ```yaml
-Kind: Pod
+kind: Pod
 ...
 spec:
   readinessGates:
   - conditionType: "www.example.com/feature-1"
 status:
   conditions:
-  - type: Ready                              # this is a builtin PodCondition
+  - type: Ready                              # a built in PodCondition
     status: "False"
     lastProbeTime: null
     lastTransitionTime: 2018-01-01T00:00:00Z
-  - type: "www.example.com/feature-1"        # an extra PodCondition
+  - type: "www.example.com/feature-1"        # an extra PodCondition
     status: "False"
     lastProbeTime: null
     lastTransitionTime: 2018-01-01T00:00:00Z
@@ -239,19 +242,26 @@ status:
 ...
 ```
 
-The new Pod conditions must comply with Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
-Since the `kubectl patch` command still doesn't support patching object status,
-the new Pod conditions have to be injected through the `PATCH` action using
-one of the [KubeClient libraries](/docs/reference/using-api/client-libraries/).
+The Pod conditions you add must have names that meet the Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
+
+### Status for Pod readiness {#pod-readiness-status}
 
-With the introduction of new Pod conditions, a Pod is evaluated to be ready **only**
-when both the following statements are true:
+The `kubectl patch` command does not support patching object status.
+To set these `status.conditions` for the pod, applications and
+{{< glossary_tooltip term_id="operator-pattern" text="operators">}} should use
+the `PATCH` action.
+You can use a [Kubernetes client library](/docs/reference/using-api/client-libraries/) to
+write code that sets custom Pod conditions for Pod readiness.
+
+For a Pod that uses custom conditions, that Pod is evaluated to be ready **only**
+when both the following statements apply:
 
 * All containers in the Pod are ready.
-* All conditions specified in `ReadinessGates` are "`True`".
+* All conditions specified in `ReadinessGates` are `True`.
 
-To facilitate this change to Pod readiness evaluation, a new Pod condition
-`ContainersReady` is introduced to capture the old Pod `Ready` condition.
+When a Pod's containers are Ready but at least one custom condition is missing or
+`False`, the kubelet sets the Pod's condition to `ContainersReady`.
 
 ## Restart policy
 
@@ -268,32 +278,31 @@ once bound to a node, a Pod will never be rebound to another node.
 
 ## Pod lifetime
 
-In general, Pods remain until a human or controller process explicitly removes them.
-The control plane cleans up terminated Pods (with a phase of `Succeeded` or
+In general, Pods remain until a human or
+{{< glossary_tooltip term_id="controller" text="controller" >}} process
+explicitly removes them.
+The control plane cleans up terminated Pods (with a phase of `Succeeded` or
 `Failed`), when the number of Pods exceeds the configured threshold
 (determined by `terminated-pod-gc-threshold` in the kube-controller-manager).
 This avoids a resource leak as Pods are created and terminated over time.
 
-Three types of controllers are available:
+There are different kinds of resources for creating Pods:
+
+- Use a {{< glossary_tooltip term_id="deployment" >}},
+  {{< glossary_tooltip term_id="replica-set" >}} or {{< glossary_tooltip term_id="statefulset" >}}
+  for Pods that are not expected to terminate, for example, web servers.
 
-- Use a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) for Pods that are expected to terminate,
+- Use a {{< glossary_tooltip term_id="job" >}}
+  for Pods that are expected to terminate once their work is complete;
   for example, batch computations. Jobs are appropriate only for Pods with
   `restartPolicy` equal to OnFailure or Never.
 
-- Use a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/),
-  [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/), or
-  [Deployment](/docs/concepts/workloads/controllers/deployment/)
-  for Pods that are not expected to terminate, for example, web servers.
-  ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
-  Always.
-
-- Use a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for Pods that need to run one per
-  machine, because they provide a machine-specific system service.
+- Use a {{< glossary_tooltip term_id="daemonset" >}}
+  for Pods that need to run one per eligible node.
 
-All three types of controllers contain a PodTemplate. It
-is recommended to create the appropriate controller and let
-it create Pods, rather than directly create Pods yourself. That is because Pods
-alone are not resilient to machine failures, but controllers are.
+All workload resources contain a PodSpec. It is recommended to create the
+appropriate workload resource and let the resource's controller create Pods
+for you, rather than directly create Pods yourself.
 
 If a node dies or is disconnected from the rest of the cluster, Kubernetes
 applies a policy for setting the `phase` of all Pods on the lost node to Failed.
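The readiness section of this change says custom `status.conditions` must be set through the `PATCH` action using a client library, since `kubectl patch` cannot patch object status. Below is a minimal sketch of building such a status patch body, mirroring the condition fields from the YAML example in the diff. The `make_condition_patch` helper is invented for illustration; applying the patch to a live cluster (for example via the official Python client's `CoreV1Api.patch_namespaced_pod_status`) needs cluster access and is shown only as a comment, as an assumption about your client setup.

```python
# Sketch: build a status patch body that sets one custom Pod condition.
# The dict structure mirrors the status.conditions entries in the YAML
# example above; the helper name is hypothetical, not a Kubernetes API.
from datetime import datetime, timezone

def make_condition_patch(cond_type: str, status: str) -> dict:
    """Return a merge-patch-style body setting a single custom condition."""
    return {
        "status": {
            "conditions": [
                {
                    "type": cond_type,            # e.g. "www.example.com/feature-1"
                    "status": status,             # "True" or "False"
                    "lastProbeTime": None,
                    "lastTransitionTime": datetime.now(timezone.utc)
                        .strftime("%Y-%m-%dT%H:%M:%SZ"),
                }
            ]
        }
    }

patch = make_condition_patch("www.example.com/feature-1", "True")
# With the official Python client (assumed installed and configured):
# kubernetes.client.CoreV1Api().patch_namespaced_pod_status(
#     name="my-pod", namespace="default", body=patch)
```

Remember that the condition type must follow the Kubernetes label key format, as the changed text above notes.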
