
Commit b54719f

Update k8s-best-practices-platform-upgrade.adoc
1 parent 8a67623 commit b54719f

1 file changed: +2 -2 lines changed

modules/k8s-best-practices-platform-upgrade.adoc

@@ -37,7 +37,7 @@ Do NOT set your pod disruption budget to `maxUnavailable` <number of pods in rep

A corollary to the pod disruption budget is a strong readiness and health check. A well-implemented readiness check is key to surviving these upgrades: a pod should not report itself ready to Kubernetes until it is actually ready to take over the load from another pod in the set. A poor implementation would be a pod that reports itself ready while it is not yet in sync with the other DB pods in the example above. Kubernetes could then see three of the pods as "ready", destroy a second pod, and disrupt the DB, leading to failure of the application served by that DB.

-See link:https://kubernetes.io/docs/tasks/run-application/configure-pdb/[pod disruption budget reference].
+See link:https://kubernetes.io/docs/tasks/run-application/configure-pdb/[pod disruption budget reference], link:https://docs.openshift.com/container-platform/latest/rest_api/policy_apis/poddisruptionbudget-policy-v1.html[pod disruption budget policy & API].

[source,yaml]
----
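As an aside on the readiness check described in the hunk above, a minimal sketch of what a sync-aware readiness probe might look like. The pod name, image, and the `check-replication-sync` script are illustrative assumptions, not from the source; the idea is only that the probe command fails until the replica is actually in sync with its peers.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: db-0                           # hypothetical pod name
  labels:
    app: db
spec:
  containers:
  - name: db
    image: example.com/db:latest       # placeholder image
    readinessProbe:
      exec:
        # hypothetical script that exits non-zero until replication
        # has caught up, keeping the pod out of the "ready" count
        command: ["/usr/local/bin/check-replication-sync"]
      periodSeconds: 5
      failureThreshold: 3
----

With a probe like this, the disruption scenario in the paragraph above cannot occur: an out-of-sync pod never counts toward the pod disruption budget's healthy set.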
@@ -52,7 +52,7 @@ spec:
app: db
----

-See link:https://docs.openshift.com/container-platform/latest/scalability_and_performance/recommended-host-practices.html[Recommended performance and scalability practices].
+See link:https://docs.openshift.com/container-platform/latest/scalability_and_performance/index.html[Recommended performance and scalability practices].

By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process.
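The adjustment described above is done on the `MachineConfigPool`; a minimal sketch, assuming a standard `worker` pool and an illustrative value of `2` (neither is specified in the source):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker          # standard worker pool; adjust for custom pools
spec:
  # allow two nodes to be unavailable at once while the
  # kubelet-related configuration rolls out (default is 1)
  maxUnavailable: 2
----

The same change can be applied in place, for example with `oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'`. Setting this too high reduces the capacity available to reschedule drained workloads, so it should be weighed against the pod disruption budgets above.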