Commit 477630e

Update k8s-best-practices-platform-upgrade.adoc
1 parent f4e4fe1 commit 477630e

File tree

1 file changed: +3 -2 lines changed


modules/k8s-best-practices-platform-upgrade.adoc

Lines changed: 3 additions & 2 deletions
@@ -30,9 +30,10 @@ In clusters larger than the example cluster, the `maxUnavailable` for the worker
For an application to stay healthy during this process, a stateful application should be deployed as a StatefulSet or ReplicaSet; Kubernetes will by default attempt to schedule the set members across multiple nodes for additional resiliency. To prevent Kubernetes from draining too many nodes out from under an application, an application that needs a minimum number of pods running must specify a pod disruption budget. A pod disruption budget lets an application tell Kubernetes that it needs N pods of a given microservice alive at any given time. For example, a small stateful database may need two out of three pods available at any given time, so that application should set a pod disruption budget with `minAvailable` set to 2. This tells the scheduler that it must not take a second pod of the set of three down at any point during the series of node reboots.
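For reference, a pod disruption budget for the three-replica database in the example above might look like the following sketch; the `example-db` name and label are hypothetical.

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-db-pdb        # hypothetical name
spec:
  # Keep at least two of the three database pods running at all times,
  # so a node drain may evict only one pod of the set at a time.
  minAvailable: 2
  selector:
    matchLabels:
      app: example-db         # hypothetical label on the database pods
----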

-[NOTE]
+.Workload requirement
+[IMPORTANT]
 ====
-Do NOT set your pod disruption budget to `maxUnavailable` <number of pods in replica> or minUnavailable zero, operations will change your pod disruption budget to proceed with an upgrade at the risk of your application.
+Applications must not set the pod disruption budget to `minAvailable` equal to the number of pods in the Deployment/ReplicaSet, or `maxUnavailable` to zero; otherwise operations will change the pod disruption budget to proceed with an upgrade, at the risk of the application.
 ====
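As an illustration of the configuration this note forbids, a budget like the following sketch (hypothetical names) would block every voluntary eviction and stall node drains during the upgrade:

[source,yaml]
----
# Do NOT do this: maxUnavailable: 0 (or minAvailable equal to the replica count)
# blocks every voluntary eviction, so nodes cannot be drained for the upgrade.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-db-pdb-too-strict   # hypothetical name
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: example-db                # hypothetical label
----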
A corollary to the pod disruption budget is a strong readiness and health check. A well-implemented readiness check is key to surviving these upgrades: a pod should not report itself ready to Kubernetes until it is actually ready to take over the load from another pod in the set. A poor implementation would be a pod that reports itself ready while it is not yet in sync with the other DB pods in the example above. Kubernetes could see that all three pods are "ready", destroy a second pod, and disrupt the DB, leading to failure of the application served by that DB.
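A minimal sketch of what such a readiness probe could look like on the hypothetical three-replica database StatefulSet follows; the image, port, and `/health/ready` endpoint are assumptions, with the endpoint expected to succeed only once the replica is in sync with its peers.

[source,yaml]
----
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db                 # hypothetical name
spec:
  serviceName: example-db
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: registry.example.com/example-db:1.0   # hypothetical image
        readinessProbe:
          # Report ready only once this replica is in sync with its peers,
          # not merely once the process has started.
          httpGet:
            path: /health/ready    # hypothetical endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
          failureThreshold: 3
----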
