articles/load-balancer/load-balancer-ipv6-for-linux.md (2 additions & 2 deletions)
@@ -51,7 +51,7 @@ For RHEL and Oracle Linux versions 7.4 or higher, follow these steps:
Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have been preconfigured with DHCPv6. No other changes are required when you use these images. If you have a VM that's based on an older or custom SUSE image, use one of the following procedures to configure DHCPv6.
-## OpenSuSE 13 and SLES 11
+## openSUSE 13 and SLES 11
1. Install the `dhcp-client` package, if needed:
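The install command itself falls outside the lines shown in this hunk. A minimal sketch of that step, assuming the standard `zypper` package manager on openSUSE 13 and SLES 11:

```bash
# Install the DHCPv6-capable client if it isn't already present
sudo zypper install dhcp-client
```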
@@ -70,7 +70,7 @@ Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have bee
```bash
sudo ifdown eth0 && sudo ifup eth0
```
-## OpenSUSE Leap and SLES 12
+## openSUSE Leap and SLES 12
For openSUSE Leap and SLES 12, follow these steps:
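The steps themselves aren't included in this diff excerpt. As a rough sketch only, assuming the standard `wicked` and sysconfig network layout and the `DHCLIENT6_MODE` variable, enabling DHCPv6 on `eth0` could look like this:

```bash
# Enable stateful DHCPv6 on eth0 (path and variable assume the standard SUSE sysconfig layout)
echo "DHCLIENT6_MODE='managed'" | sudo tee -a /etc/sysconfig/network/ifcfg-eth0

# Apply the change with wicked, the default network manager on SLES 12 / openSUSE Leap
sudo wicked ifreload eth0
```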
articles/operator-nexus/concepts-cluster-upgrade-overview.md (3 additions & 3 deletions)
@@ -63,9 +63,9 @@ Details on how to run an upgrade with rack pause are located [here](./howto-clus
During a runtime upgrade, impacted Nexus Kubernetes Cluster nodes are cordoned and drained before the servers are upgraded. Cordoning the Kubernetes Cluster node prevents new pods from being scheduled on it. Draining the Kubernetes Cluster node gives pods that are running tenant workloads a chance to shift to another available Kubernetes Cluster node, which helps reduce disruption to services. The draining mechanism's effectiveness is contingent on the available capacity within the Nexus Kubernetes Cluster. If the Kubernetes Cluster is nearing full capacity and lacks space for the pods to relocate, they transition into a Pending state following the draining process.
-Once the cordon and drain process of the tenant cluster node is completed, the upgrade of the server proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the server upgrade begins. This process guarantees the server upgrade makes progress. Servers are upgraded one rack at a time, and upgrades are performed in parallel within the same rack. The server upgrade doesn't wait for tenant resources to come online before continuing with the runtime upgrade of servers in the rack being upgraded. The benefit of this is that the maximum overall wait time for a rack upgrade is kept at 10 minutes regardless of how many nodes are available. This maximum wait time is specific to the cordon and drain procedure and isn't applied to the overall upgrade procedure. Upon completion of each server upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on the node once again.
+Once the cordon and drain process of the tenant cluster node is completed, the upgrade of the server proceeds. Each tenant cluster node is allowed up to 20 minutes for the draining process to complete, after which the server upgrade begins. This process guarantees the server upgrade makes progress. Servers are upgraded one rack at a time, and upgrades are performed in parallel within the same rack. The server upgrade doesn't wait for tenant resources to come online before continuing with the runtime upgrade of servers in the rack being upgraded. The benefit of this is that the maximum overall wait time for a rack upgrade is kept at 20 minutes regardless of how many nodes are available. This maximum wait time is specific to the cordon and drain procedure and isn't applied to the overall upgrade procedure. Upon completion of each server upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on the node once again.
-It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The server is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, after 10 minutes if the drain process isn't completed. Additionally, the cordon and drain isn't initiated for power-off or restart actions of the server; it exclusively activates only during a runtime upgrade.
+It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The server is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, or after 20 minutes if the drain process isn't completed. Additionally, the cordon and drain isn't initiated for power-off or restart actions of the server; it activates only during a runtime upgrade.
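For context, the cordon and drain behavior described above maps onto standard Kubernetes operations. The platform performs these steps automatically during a runtime upgrade; the following `kubectl` commands are only an illustration of the equivalent manual steps, with the node name as a placeholder:

```bash
# Mark the node unschedulable so no new pods land on it (cordon)
kubectl cordon <node-name>

# Evict running pods so they can reschedule elsewhere; the upgrade tooling
# enforces its own 20-minute limit, mirrored here with --timeout
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --timeout=20m
```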
It's important to note that following the runtime upgrade, there could be instances where a Nexus Kubernetes Cluster node remains cordoned. In such a scenario, you can locate and uncordon the nodes by executing the following command.
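The exact command referenced above isn't visible in this diff excerpt. A sketch of how cordoned nodes can be found and released with standard `kubectl`, assuming access to the Nexus Kubernetes Cluster's kubeconfig:

```bash
# List nodes that are still marked unschedulable (cordoned)
kubectl get nodes --field-selector spec.unschedulable=true

# Allow scheduling on an affected node again
kubectl uncordon <node-name>
```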
@@ -80,4 +80,4 @@ When a server is upgraded to utilize a new OS, the BMM keysets have to be re-est
## Servers not upgraded successfully
-A server remains unavailable if they fail upgrade or provisioning from possible hardware issue during reboot or issue with cloud-init (networking, chronyd, etc.). The underlying condition needs to be resolved and either baremetalmachine replace/reimage would need to be executed. Uncordoning the server manually won't resolve the issues.
+A server remains unavailable if it fails upgrade or provisioning because of a possible hardware issue during reboot or an issue with cloud-init (networking, chronyd, and so on). The underlying condition needs to be resolved, and then either a baremetalmachine replace or reimage action needs to be executed. Uncordoning the server manually won't resolve the issues.
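As an illustration of the remediation mentioned above, assuming the `networkcloud` Azure CLI extension is installed, the reimage action could be invoked as follows; the resource names are placeholders:

```bash
# Reimage the failed bare metal machine after the underlying issue is fixed
az networkcloud baremetalmachine reimage \
  --name "<bare-metal-machine-name>" \
  --resource-group "<resource-group>"

# If the hardware itself was swapped out, use 'az networkcloud baremetalmachine replace'
# instead; it takes the new machine's BMC and MAC details as additional parameters.
```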
articles/service-bus-messaging/automate-update-messaging-units.md (1 addition & 1 deletion)
@@ -146,7 +146,7 @@ The previous section shows you how to add a default condition for the autoscale
> [!NOTE]
> - The metrics you review to make decisions on autoscaling may be 5-10 minutes old. When you are dealing with spiky workloads, we recommend that you have shorter durations for scaling up and longer durations for scaling down. As Service Bus Premium is charged per hour, scaling down quickly will not reduce the costs for that hour. Instead, it is recommended to give enough time to ensure the reduced workload is stable before scaling down, so that there are enough messaging units to process spiky workloads.
>
-> When scaling down, set the threshold to less than half of the scale-up threshold. For instance, if the scale-up threshold is 80%, set the scale-down threshold to 30-35% (something below 40%) to prevent continuous scaling up and down.This will prevent autoscale to switch between scaling up and down continously.
+> When scaling down, set the threshold to less than half of the scale-up threshold. For instance, if the scale-up threshold is 80%, set the scale-down threshold to 30-35% (something below 40%). This prevents autoscale from continuously switching between scaling up and down.
>
> - If you see failures due to lack of capacity (no messaging units available), raise a support ticket with us. Capacity fulfillment is subject to the constraints of the environment and is carried out on a best-effort basis.
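A sketch of how the 80% scale-up and below-40% scale-down guidance could be expressed as Azure Monitor autoscale rules through the CLI; the metric name `NamespaceCpuUsage`, the autoscale setting name, and the resource group are assumptions for illustration:

```bash
# Scale out by one messaging unit when average CPU stays above 80% for 15 minutes
az monitor autoscale rule create \
  --resource-group "<resource-group>" \
  --autoscale-name "<autoscale-setting-name>" \
  --scale out 1 \
  --condition "NamespaceCpuUsage > 80 avg 15m"

# Scale in by one messaging unit only when usage stays below 30% for 30 minutes,
# keeping the scale-down threshold well under half the scale-up threshold
az monitor autoscale rule create \
  --resource-group "<resource-group>" \
  --autoscale-name "<autoscale-setting-name>" \
  --scale in 1 \
  --condition "NamespaceCpuUsage < 30 avg 30m"
```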