Commit 8633dda

Author: Tim Bannister

Improve explanation of Pod lifetime

- Explain that Pods do restart containers, but that's the only kind of self-healing inherent in Pods.
- Reword the page introduction for clarity

1 parent: 32be09b

File tree

1 file changed: +41 −18 lines

content/en/docs/concepts/workloads/pods/pod-lifecycle.md

Lines changed: 41 additions & 18 deletions
@@ -11,6 +11,19 @@ in the `Pending` [phase](#pod-phase), moving through `Running` if at least one
 of its primary containers starts OK, and then through either the `Succeeded` or
 `Failed` phases depending on whether any container in the Pod terminated in failure.
 
+Like individual application containers, Pods are considered to be relatively
+ephemeral (rather than durable) entities. Pods are created, assigned a unique
+ID ([UID](/docs/concepts/overview/working-with-objects/names/#uids)), and scheduled
+to run on nodes where they remain until termination (according to restart policy) or
+deletion.
+If a {{< glossary_tooltip term_id="node" >}} dies, the Pods running on (or scheduled
+to run on) that node are [marked for deletion](#pod-garbage-collection). The control
+plane marks the Pods for removal after a timeout period.
+
+<!-- body -->
+
+## Pod lifetime
+
 Whilst a Pod is running, the kubelet is able to restart containers to handle some
 kind of faults. Within a Pod, Kubernetes tracks different container
 [states](#container-states) and determines what action to take to make the Pod
@@ -21,32 +34,42 @@ status for a Pod object consists of a set of [Pod conditions](#pod-conditions).
 You can also inject [custom readiness information](#pod-readiness-gate) into the
 condition data for a Pod, if that is useful to your application.
 
-Pods are only [scheduled](/docs/concepts/scheduling-eviction/) once in their lifetime.
-Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops
-or is [terminated](#pod-termination).
+Pods are only [scheduled](/docs/concepts/scheduling-eviction/) once in their lifetime;
+assigning a Pod to a specific node is called _binding_, and the process of selecting
+which node to use is called _scheduling_.
+Once a Pod has been scheduled and is bound to a node, Kubernetes tries
+to run that Pod on the node. The Pod runs on that node until it stops, or until the Pod
+is [terminated](#pod-termination); if Kubernetes isn't able to start the Pod on the selected
+node (for example, if the node crashes before the Pod starts), then that particular Pod
+never starts.
 
-<!-- body -->
 
-## Pod lifetime
+### Pods and fault recovery {#pod-fault-recovery}
 
-Like individual application containers, Pods are considered to be relatively
-ephemeral (rather than durable) entities. Pods are created, assigned a unique
-ID ([UID](/docs/concepts/overview/working-with-objects/names/#uids)), and scheduled
-to nodes where they remain until termination (according to restart policy) or
-deletion.
-If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node
-are [scheduled for deletion](#pod-garbage-collection) after a timeout period.
+If one of the containers in the Pod fails, then Kubernetes may try to restart that
+specific container.
+Read [How Pods handle problems with containers](#container-restarts) to learn more.
+
+Pods can however fail in a way that the cluster cannot recover from, and in that case
+Kubernetes does not attempt to heal the Pod further; instead, Kubernetes deletes the
+Pod and relies on other components to provide automatic healing.
 
-Pods do not, by themselves, self-heal. If a Pod is scheduled to a
-{{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise,
-a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
-higher-level abstraction, called a
+If a Pod is scheduled to a {{< glossary_tooltip text="node" term_id="node" >}} and that
+node then fails, the Pod is treated as unhealthy and Kubernetes eventually deletes the Pod.
+A Pod won't survive an {{< glossary_tooltip text="eviction" term_id="eviction" >}} due to
+a lack of resources or Node maintenance.
+
+Kubernetes uses a higher-level abstraction, called a
 {{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of
 managing the relatively disposable Pod instances.
 
 A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
-that Pod can be replaced by a new, near-identical Pod, with even the same name if
-desired, but with a different UID.
+that Pod can be replaced by a new, near-identical Pod. If you make a replacement Pod, it can
+even have the same name (as in `.metadata.name`) that the old Pod had, but the replacement
+would have a different `.metadata.uid` from the old Pod.
+
+Kubernetes does not guarantee that a replacement for an existing Pod would be scheduled to
+the same node as the old Pod that was being replaced.
 
 ### Associated lifetimes
 
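The in-place container restarts that the new text describes ("the kubelet is able to restart containers") are governed by the Pod-level `spec.restartPolicy` field (`Always`, `OnFailure`, or `Never`). A minimal sketch of a manifest; the Pod name and image are placeholders, not from this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder name
spec:
  restartPolicy: Always    # kubelet restarts failed containers in place
  containers:
  - name: app
    image: nginx:1.25      # placeholder image
```

As the commit text notes, this restart behavior applies only to containers within the Pod; it never causes the Pod itself to be rescheduled to a different node.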
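The phase progression named in the first hunk's context (`Pending`, then `Running` once a primary container starts OK, then `Succeeded` or `Failed` depending on whether any container terminated in failure) can be modeled as a toy function. This is an illustrative sketch, not Kubernetes code:

```python
def pod_phase(primary_started: bool, all_exited: bool, any_failed: bool) -> str:
    """Toy model of the Pod phase progression described in the diff:
    Pending until a primary container starts OK, Running while containers
    run, then Succeeded or Failed once all containers have terminated."""
    if not primary_started:
        return "Pending"
    if not all_exited:
        return "Running"
    return "Failed" if any_failed else "Succeeded"

print(pod_phase(primary_started=False, all_exited=False, any_failed=False))  # Pending
print(pod_phase(primary_started=True, all_exited=False, any_failed=False))   # Running
print(pod_phase(primary_started=True, all_exited=True, any_failed=False))    # Succeeded
print(pod_phase(primary_started=True, all_exited=True, any_failed=True))     # Failed
```

The real phase logic lives in the kubelet and control plane and covers more cases (for example, unknown node state); the sketch only mirrors the transitions this page's introduction spells out.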