
Commit e4150a9

Apply suggestions from code review
Co-authored-by: Jake Smith <[email protected]>
1 parent 88373ac commit e4150a9

File tree

3 files changed: +7 -7 lines changed


articles/operator-nexus/howto-baremetal-functions.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -85,7 +85,7 @@ az networkcloud baremetalmachine show -n <nodeName> /
 --subscription <subscriptionID> | jq '.virtualMachinesAssociatedIds'
 ```

-***For NAKS nodes: (requires logging into the NAKS cluster)***
+***For Nexus Kubernetes cluster nodes: (requires logging into the Nexus Kubernetes cluster)***

 ```
 kubectl get nodes <resourceName> -ojson |jq '.metadata.labels."topology.kubernetes.io/baremetalmachine"'
````
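The `kubectl ... | jq` line in this hunk extracts a single node label. As a quick illustration of what that filter returns, here is a hedged Python sketch over a made-up sample of the node JSON (the node and machine names are placeholders, not real resources):

```python
import json

# Hypothetical excerpt of what `kubectl get nodes <resourceName> -ojson`
# might return; only the fields relevant to the label lookup are shown.
node_json = """
{
  "metadata": {
    "name": "example-naks-node-1",
    "labels": {
      "kubernetes.io/hostname": "example-naks-node-1",
      "topology.kubernetes.io/baremetalmachine": "example-bmm-rack1-01"
    }
  }
}
"""

node = json.loads(node_json)
# Equivalent of the jq filter
# '.metadata.labels."topology.kubernetes.io/baremetalmachine"'
machine = node["metadata"]["labels"]["topology.kubernetes.io/baremetalmachine"]
print(machine)  # -> example-bmm-rack1-01
```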

articles/operator-nexus/howto-cluster-runtime-upgrade.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -107,7 +107,7 @@ az networkcloud cluster show --resource-group "<resourceGroup>" /
 "waitTimeMinutes": 1
 ```

-In this example, if less than 60% of the compute nodes being provisioned in a rack fail to provision (on a rack by rack basis), the cluster deployment fails. If 60% or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes.
+In this example, if less than 60% of the compute nodes being provisioned in a rack fail to provision (on a Rack by Rack basis), the cluster deployment fails. If 60% or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes.

 The following example is for a customer using Rack by Rack strategy with a threshold type CountSuccess of 10 nodes per rack and a 1-minute pause.
````
````diff
@@ -132,7 +132,7 @@ az networkcloud cluster show --resource-group "<resourceGroup>" /
 "waitTimeMinutes": 1
 ```

-In this example, if less than 10 compute nodes being provisioned in a rack fail to provision (on a rack by rack basis), the cluster deployment fails. If 10 or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes.
+In this example, if less than 10 compute nodes being provisioned in a rack fail to provision (on a Rack by Rack basis), the cluster deployment fails. If 10 or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes.

 > [!NOTE]
 > ***`update-strategy` cannot be changed after the cluster runtime upgrade has started.***
````
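The two threshold examples in the article share one rule: a rack passes when the number of successfully provisioned nodes meets the configured threshold (as a percentage or an absolute count), and deployment then moves on to the next rack. A minimal Python sketch of that decision, assuming illustrative function and parameter names that are not part of the az CLI:

```python
def rack_passes(succeeded: int, total: int,
                threshold_type: str, threshold_value: float) -> bool:
    """Illustrative pass/fail check for one rack under a Rack by Rack strategy.

    threshold_type mirrors the thresholdType values shown in the article's
    examples: "PercentSuccess" or "CountSuccess". Names are hypothetical.
    """
    if threshold_type == "PercentSuccess":
        return succeeded / total * 100 >= threshold_value
    if threshold_type == "CountSuccess":
        return succeeded >= threshold_value
    raise ValueError(f"unknown threshold type: {threshold_type}")

# 60% threshold: 7 of 10 nodes provisioned -> deployment moves to the next rack.
print(rack_passes(7, 10, "PercentSuccess", 60))   # True
# 10-node count threshold: only 9 of 12 provisioned -> deployment fails.
print(rack_passes(9, 12, "CountSuccess", 10))     # False
```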
````diff
@@ -149,13 +149,13 @@ az networkcloud cluster update-version --cluster-name "<clusterName>" /
 --subscription <subscriptionID>
 ```

-The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then sequentially rack by rack for the worker nodes.
+The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then sequentially Rack by Rack for the worker nodes.
 The upgrade is considered to be finished when 80% of worker nodes per rack and 100% of management nodes are successfully upgraded.
 Workloads might be impacted while the worker nodes in a rack are in the process of being upgraded, however workloads in all other racks are not impacted. Consideration of workload placement in light of this implementation design is encouraged.

-Upgrading all the nodes takes multiple hours but can take more if other processes, like firmware updates, are also part of the upgrade.
+Upgrading all the nodes takes multiple hours, depending upon how many racks exist for the Cluster.
 Due to the length of the upgrade process, the Cluster's detail status should be checked periodically for the current state of the upgrade.
-To check on the status of the upgrade observe the detailed status of the cluster. This check can be done via the portal or az CLI.
+To check on the status of the upgrade observe the detailed status of the Cluster. This check can be done via the portal or az CLI.

 To view the upgrade status through the Azure portal, navigate to the targeted cluster resource. In the cluster's *Overview* screen, the detailed status is provided along with a detailed status message.
````
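The status check this hunk describes can be scripted against the JSON that `az networkcloud cluster show -o json` emits. A hedged Python sketch, assuming the detailed status and message the paragraph mentions surface as `detailedStatus` and `detailedStatusMessage` fields (the field names and the sample payload are assumptions, not taken from the article):

```python
import json

# Made-up sample of the relevant fields from `az networkcloud cluster show -o json`.
# Field names are assumed; verify against your cluster's actual output.
cluster_json = """
{
  "name": "exampleCluster",
  "detailedStatus": "Updating",
  "detailedStatusMessage": "Cluster is in the process of being updated."
}
"""

cluster = json.loads(cluster_json)
status = cluster["detailedStatus"]
message = cluster["detailedStatusMessage"]

# Treat the upgrade as still in flight while the detailed status reports Updating.
in_progress = status == "Updating"
print(f"{cluster['name']}: {status} - {message}")
```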

articles/operator-nexus/troubleshoot-reboot-reimage-replace.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -97,7 +97,7 @@ az networkcloud baremetalmachine show -n <nodeName> /
 --subscription <subscriptionID> | jq '.virtualMachinesAssociatedIds'
 ```

-***For NAKS nodes: (requires logging into the NAKS cluster)***
+***For Nexus Kubernetes cluster nodes: (requires logging into the Nexus Kubernetes cluster)***

 ```
 kubectl get nodes <resourceName> -ojson |jq '.metadata.labels."topology.kubernetes.io/baremetalmachine"'
````
