`content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md` (34 additions, 23 deletions)
````diff
@@ -117,7 +117,9 @@ Wait another 30 seconds, and verify that the container has been restarted:
 kubectl get pod liveness-exec
 ```
 
-The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container comes back to the running state:
+The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS`
+counter increments as soon as a failed container comes back to the running
+state:
 
 ```
 NAME            READY     STATUS    RESTARTS   AGE
````
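For anyone verifying the reworded behavior, the `RESTARTS` counter can also be read straight from the Pod status. A minimal check, assuming the single-container `liveness-exec` Pod from this page:

```shell
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'
```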
````diff
@@ -258,17 +260,22 @@ After 15 seconds, view Pod events to verify that the liveness check has not fail
 kubectl describe pod etcd-with-grpc
 ```
 
-Before Kubernetes 1.23, gRPC health probes were often implemented using [grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/),
-as described in the blog post [Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/).
-The built-in gRPC probes behavior is similar to one implemented by grpc-health-probe.
-When migrating from grpc-health-probe to built-in probes, remember the following differences:
-
-- Built-in probes run against the pod IP address, unlike grpc-health-probe that often runs against `127.0.0.1`.
-  Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
+Before Kubernetes 1.23, gRPC health probes were often implemented using
+[grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/), as
+described in the blog post [Health checking gRPC servers on
+Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/). The
+built-in gRPC probes behavior is similar to one implemented by
+grpc-health-probe. When migrating from grpc-health-probe to built-in probes,
+remember the following differences:
+
+- Built-in probes run against the pod IP address, unlike grpc-health-probe that
+  often runs against `127.0.0.1`. Be sure to configure your gRPC endpoint to
+  listen on the Pod's IP address.
 - Built-in probes do not support any authentication parameters (like `-tls`).
 - There are no error codes for built-in probes. All errors are considered as probe failures.
-- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not** respect the `timeoutSeconds` setting (which defaults to 1s),
-  while built-in probe would fail on timeout.
+- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does
+  **not** respect the `timeoutSeconds` setting (which defaults to 1s), while
+  built-in probe would fail on timeout.
 
 ## Use a named port
````
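To make the first bullet concrete, here is a minimal sketch of a built-in gRPC liveness probe. The etcd image and flags mirror the example this page already uses; treat the exact values as illustrative assumptions rather than part of this change:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd-with-grpc-sketch   # hypothetical name
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.1-0   # assumed image; any server implementing grpc.health.v1.Health works
    command: ["/usr/local/bin/etcd", "--listen-client-urls", "http://0.0.0.0:2379", "--advertise-client-urls", "http://127.0.0.1:2379"]
    ports:
    - containerPort: 2379
    livenessProbe:
      grpc:
        port: 2379              # the kubelet dials the Pod IP on this port, not 127.0.0.1
      initialDelaySeconds: 10
```

Note that `0.0.0.0` in `--listen-client-urls` is what makes the server reachable on the Pod's IP address, per the first bullet.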
````diff
@@ -346,7 +353,9 @@ Readiness probes runs on the container during its whole lifecycle.
 {{< /note >}}
 
 {{< caution >}}
-Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
+Liveness probes *do not* wait for readiness probes to succeed. If you want to
+wait before executing a liveness probe you should use initialDelaySeconds or a
+startupProbe.
 {{< /caution >}}
 
 Readiness probes are configured similarly to liveness probes. The only difference
````
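As a concrete illustration of the caution's advice, a sketch of the `startupProbe` option; the endpoint and port are assumptions for the example, not values from this page:

```yaml
# Container-level fragment. The liveness probe does not start
# running until the startup probe has succeeded.
livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # tolerates up to 30 × 10s = 300s of startup time
  periodSeconds: 10
```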
````diff
@@ -380,15 +389,16 @@ you can use to more precisely control the behavior of startup, liveness and read
 checks:
 
 * `initialDelaySeconds`: Number of seconds after the container has started
-  before startup, liveness or readiness probes are initiated. If a startup probe is defined, liveness and readiness probe delays do not begin until the startup probe has succeeded.
-  Defaults to 0 seconds. Minimum value is 0.
+  before startup, liveness or readiness probes are initiated. If a startup
+  probe is defined, liveness and readiness probe delays do not begin until the
+  startup probe has succeeded. Defaults to 0 seconds. Minimum value is 0.
 * `periodSeconds`: How often (in seconds) to perform the probe. Default to 10
-  seconds. Minimum value is 1.
+  seconds. Minimum value is 1.
 * `timeoutSeconds`: Number of seconds after which the probe times out. Defaults
-  to 1 second. Minimum value is 1.
+  to 1 second. Minimum value is 1.
 * `successThreshold`: Minimum consecutive successes for the probe to be
-  considered successful after having failed. Defaults to 1. Must be 1 for liveness
-  and startup Probes. Minimum value is 1.
+  considered successful after having failed. Defaults to 1. Must be 1 for liveness
+  and startup Probes. Minimum value is 1.
 * `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes
   considers that the overall check has failed: the container is _not_ ready / healthy /
   live.
````
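For reference, the five fields above combined in a single probe; the endpoint is an assumption, and the numbers are simply the documented defaults (with `failureThreshold` left at 3):

```yaml
readinessProbe:
  httpGet:
    path: /ready             # assumed endpoint
    port: 8080
  initialDelaySeconds: 0     # default; with a startup probe defined, the delay starts after it succeeds
  periodSeconds: 10          # default
  timeoutSeconds: 1          # default
  successThreshold: 1        # must be 1 for liveness and startup probes
  failureThreshold: 3        # default
```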
````diff
@@ -415,12 +425,13 @@ until a result was returned.
 
 This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior,
 even without realizing it, as the default timeout is 1 second.
-As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`)
-on each kubelet to restore the behavior from older versions, then remove that override
-once all the exec probes in the cluster have a `timeoutSeconds` value set.
-If you have pods that are impacted from the default 1 second timeout,
-you should update their probe timeout so that you're ready for the
````
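For administrators applying the override described in this paragraph, one way to set the gate, assuming the kubelet is configured through a `KubeletConfiguration` file, is a sketch like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ExecProbeTimeout: false   # restores pre-1.20 behavior; remove once all exec probes set timeoutSeconds
```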
0 commit comments