
Commit eed5171

wrap lines in configure-liveness-readiness-startup-probes.md
1 parent 66c0e22 commit eed5171

1 file changed: +34 additions, −23 deletions

content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md

Lines changed: 34 additions & 23 deletions
@@ -117,7 +117,9 @@ Wait another 30 seconds, and verify that the container has been restarted:
 kubectl get pod liveness-exec
 ```
 
-The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container comes back to the running state:
+The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS`
+counter increments as soon as a failed container comes back to the running
+state:
 
 ```
 NAME            READY     STATUS    RESTARTS   AGE
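
For context on the hunk above: the `liveness-exec` walkthrough that this prose belongs to configures an exec liveness probe roughly as follows (a sketch consistent with the example the page references; the image, timings, and command may differ from the current doc):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    # Create the probe file, then remove it after 30s so the probe starts failing
    # and the kubelet restarts the container (incrementing RESTARTS).
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```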
@@ -258,17 +260,22 @@ After 15 seconds, view Pod events to verify that the liveness check has not failed:
 kubectl describe pod etcd-with-grpc
 ```
 
-Before Kubernetes 1.23, gRPC health probes were often implemented using [grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/),
-as described in the blog post [Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/).
-The built-in gRPC probes behavior is similar to one implemented by grpc-health-probe.
-When migrating from grpc-health-probe to built-in probes, remember the following differences:
-
-- Built-in probes run against the pod IP address, unlike grpc-health-probe that often runs against `127.0.0.1`.
-  Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
+Before Kubernetes 1.23, gRPC health probes were often implemented using
+[grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/), as
+described in the blog post [Health checking gRPC servers on
+Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/). The
+built-in gRPC probes behavior is similar to one implemented by
+grpc-health-probe. When migrating from grpc-health-probe to built-in probes,
+remember the following differences:
+
+- Built-in probes run against the pod IP address, unlike grpc-health-probe that
+  often runs against `127.0.0.1`. Be sure to configure your gRPC endpoint to
+  listen on the Pod's IP address.
 - Built-in probes do not support any authentication parameters (like `-tls`).
 - There are no error codes for built-in probes. All errors are considered as probe failures.
-- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not** respect the `timeoutSeconds` setting (which defaults to 1s),
-  while built-in probe would fail on timeout.
+- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does
+  **not** respect the `timeoutSeconds` setting (which defaults to 1s), while
+  built-in probe would fail on timeout.
 
 ## Use a named port
 
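The migration notes in the hunk above translate into a container snippet along these lines (hypothetical values; the server is assumed to implement the gRPC Health Checking Protocol):

```yaml
# Built-in gRPC probe, available since Kubernetes 1.23/1.24.
# Because the kubelet dials the Pod IP, the gRPC server must bind a
# Pod-reachable address (e.g. 0.0.0.0), not only 127.0.0.1.
livenessProbe:
  grpc:
    port: 2379
  initialDelaySeconds: 10
```

Unlike a grpc-health-probe exec wrapper, no extra binary is baked into the image, and any probe error simply counts as a failure.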
@@ -346,7 +353,9 @@ Readiness probes runs on the container during its whole lifecycle.
 {{< /note >}}
 
 {{< caution >}}
-Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
+Liveness probes *do not* wait for readiness probes to succeed. If you want to
+wait before executing a liveness probe you should use initialDelaySeconds or a
+startupProbe.
 {{< /caution >}}
 
 Readiness probes are configured similarly to liveness probes. The only difference
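
The caution in the hunk above can be sketched concretely: a startup probe gates the liveness probe until it first succeeds, avoiding a guessed `initialDelaySeconds` (hypothetical container snippet; path and port are illustrative):

```yaml
# The liveness probe does not start until the startup probe succeeds once.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # tolerate up to 30 * 10s = 300s of slow startup
  periodSeconds: 10
```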
@@ -380,15 +389,16 @@ you can use to more precisely control the behavior of startup, liveness and readiness
 checks:
 
 * `initialDelaySeconds`: Number of seconds after the container has started
-  before startup, liveness or readiness probes are initiated. If a startup probe is defined, liveness and readiness probe delays do not begin until the startup probe has succeeded.
-  Defaults to 0 seconds. Minimum value is 0.
+  before startup, liveness or readiness probes are initiated. If a startup
+  probe is defined, liveness and readiness probe delays do not begin until the
+  startup probe has succeeded. Defaults to 0 seconds. Minimum value is 0.
 * `periodSeconds`: How often (in seconds) to perform the probe. Default to 10
-seconds. Minimum value is 1.
+  seconds. Minimum value is 1.
 * `timeoutSeconds`: Number of seconds after which the probe times out. Defaults
-to 1 second. Minimum value is 1.
+  to 1 second. Minimum value is 1.
 * `successThreshold`: Minimum consecutive successes for the probe to be
-considered successful after having failed. Defaults to 1. Must be 1 for liveness
-and startup Probes. Minimum value is 1.
+  considered successful after having failed. Defaults to 1. Must be 1 for liveness
+  and startup Probes. Minimum value is 1.
 * `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes
   considers that the overall check has failed: the container is _not_ ready / healthy /
   live.
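
The tuning fields listed in the hunk above fit together in one probe definition like this (hypothetical readiness probe; the path, port, and values are illustrative, the field names and defaults are from the doc):

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5   # wait 5s after container start (default 0, min 0)
  periodSeconds: 10        # probe every 10s (default 10, min 1)
  timeoutSeconds: 2        # a single attempt fails after 2s (default 1, min 1)
  successThreshold: 1      # consecutive successes needed; must be 1 for liveness/startup
  failureThreshold: 3      # 3 consecutive failures mark the container not ready
```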
@@ -415,12 +425,13 @@ until a result was returned.
 
 This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior,
 even without realizing it, as the default timeout is 1 second.
-As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`)
-on each kubelet to restore the behavior from older versions, then remove that override
-once all the exec probes in the cluster have a `timeoutSeconds` value set.
-If you have pods that are impacted from the default 1 second timeout,
-you should update their probe timeout so that you're ready for the
-eventual removal of that feature gate.
+As a cluster administrator, you can disable the
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+`ExecProbeTimeout` (set it to `false`) on each kubelet to restore the behavior
+from older versions, then remove that override once all the exec probes in the
+cluster have a `timeoutSeconds` value set. If you have pods that are impacted
+from the default 1 second timeout, you should update their probe timeout so
+that you're ready for the eventual removal of that feature gate.
 
 With the fix of the defect, for exec probes, on Kubernetes `1.20+` with the `dockershim` container runtime,
 the process inside the container may keep running even after probe returned failure because of the timeout.
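
Per the advice in the hunk above, the forward-compatible fix is to set `timeoutSeconds` explicitly on each exec probe rather than relying on the `ExecProbeTimeout` override (hypothetical snippet; command and value are illustrative):

```yaml
# An explicit timeoutSeconds makes this exec probe behave the same whether
# the ExecProbeTimeout feature gate is enabled, disabled, or removed.
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  timeoutSeconds: 5
```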
