Given an empty Operator Nexus environment with the given capacity, we create
three differently sized Nexus Kubernetes Clusters.

Cluster C Agent Pool #1 has 12 VMs restricted to AvailabilityZones [1, 4], so it
has 12 VMs on 12 bare metal servers, six in each of racks 1 and 4.
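
For intuition, here's a toy sketch (my own illustration, not the Nexus scheduler) of how a Count of 12 restricted to two AvailabilityZones ends up as six VMs per rack, using the article's mapping of zones 1 and 4 to racks 1 and 4:

```python
from itertools import cycle

def spread_across_zones(vm_count: int, allowed_zones: list[int]) -> dict[int, int]:
    """Toy model: assign an Agent Pool's VMs round-robin over its allowed
    AvailabilityZones (each zone corresponds to one rack in this example)."""
    placement = {zone: 0 for zone in allowed_zones}
    for _, zone in zip(range(vm_count), cycle(allowed_zones)):
        placement[zone] += 1
    return placement

# Cluster C Agent Pool #1: 12 VMs restricted to AvailabilityZones [1, 4]
print(spread_across_zones(12, [1, 4]))  # {1: 6, 4: 6} -> six VMs in each of racks 1 and 4
```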

Extra-large VMs (the `NC_P46_224_v1` SKU) from different clusters are placed
on the same bare metal servers (see rule #3 in [How the Nexus platform schedules a Nexus Kubernetes Cluster VM](#how-the-nexus-platform-schedules-a-nexus-kubernetes-cluster-vm)).
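
To make rule #3 concrete, here's a small toy model (again my own sketch, not the actual placement code) in which an extra-large VM prefers a bare metal server that already hosts another cluster's extra-large VM, and a new server is used only when no such server is available:

```python
def place_extra_large(servers: dict[str, set[str]], cluster: str, vm_count: int) -> None:
    """Toy model of rule #3: each extra-large VM lands on a server that already
    hosts extra-large VMs from *other* clusters (never its own), and new servers
    are added only when no such server is free."""
    for _ in range(vm_count):
        # Prefer a server that has other clusters' extra-large VMs but none of ours.
        target = next(
            (name for name, owners in servers.items() if owners and cluster not in owners),
            None,
        )
        if target is None:
            target = f"server-{len(servers) + 1}"
            servers.setdefault(target, set())
        servers[target].add(cluster)

layout: dict[str, set[str]] = {}
place_extra_large(layout, "cluster-a", 3)  # occupies 3 new servers
place_extra_large(layout, "cluster-b", 2)  # collocates on 2 of those servers
print(layout)  # e.g. {'server-1': {'cluster-a', 'cluster-b'}, 'server-2': {'cluster-a', 'cluster-b'}, 'server-3': {'cluster-a'}}
```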

Here's a visualization of a layout the user might see after deploying Clusters
A, B, and C into an empty environment.

:::image type="content" source="media/nexus-kubernetes/after-first-deployment.png" lightbox="media/nexus-kubernetes/after-first-deployment.png" alt-text="Diagram showing possible layout of VMs after first deployment.":::

### Half-full environment

We now run through an example of launching another NKS Cluster when the target
environment is half-full. The target environment is half-full after Clusters A,
B, and C are deployed.

Here's a visualization of a layout the user might see after deploying Cluster
D into the target environment.

:::image type="content" source="media/nexus-kubernetes/after-second-deployment.png" lightbox="media/nexus-kubernetes/after-second-deployment.png" alt-text="Diagram showing possible layout of VMs after second deployment.":::

### Nearly full environment

In our example target environment, four of the eight racks are
close to capacity. Let's try to launch another NKS Cluster.

Here's a visualization of a layout the user might see after deploying Cluster
E into the target environment.

:::image type="content" source="media/nexus-kubernetes/after-third-deployment.png" lightbox="media/nexus-kubernetes/after-third-deployment.png" alt-text="Diagram showing possible layout of VMs after third deployment.":::

## Placement during a runtime upgrade

As of April 2024 (Network Cloud 2304.1 release), runtime upgrades are performed
using a rack-by-rack strategy. Bare metal servers in rack 1 are reimaged all at
once.

The reimaging disrupts StatefulSets that had Pods on NKS VMs that were on the
bare metal server.

> The NKS VM was launched on the newly reimaged bare metal server that retained
> the same bare metal server name as before reimaging.
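
A minimal sketch of the rack-by-rack idea follows; the rack data, ordering, and output are assumptions of the sketch, not the platform's actual upgrade logic:

```python
def upgrade_runtime(racks: dict[int, list[str]]) -> None:
    """Toy walk-through of a rack-by-rack runtime upgrade: every bare metal
    server in a rack is reimaged at the same time, then the next rack follows."""
    for rack_number in sorted(racks):
        servers = racks[rack_number]
        print(f"Reimaging rack {rack_number} ({', '.join(servers)}) all at once")
        # NKS VMs on these servers are recreated; because each server keeps its
        # name, a replacement VM can come back on the same reimaged server.
        print(f"Rack {rack_number} finished; moving on")

upgrade_runtime({
    1: ["rack1-server1", "rack1-server2"],
    2: ["rack2-server1", "rack2-server2"],
})
```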

## Best practices

When working with Operator Nexus, keep the following best practices in mind.

Deploying the NKS Cluster with the greatest count of these extra-large SKU VMs
first creates a larger set of bare metal servers upon which other extra-large
SKU VMs from Agent Pools in other NKS Clusters can collocate.

### Reduce the Agent Pool's count before reducing the VM SKU size

If you run into capacity constraints when launching a NKS Cluster or Agent
Pool, reduce the Count of the Agent Pool before adjusting the VM SKU size.
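
As a rough illustration of why reducing the Count is usually the gentler lever, consider this sketch; the per-server free vCPU numbers and the smaller SKU name are assumptions for the example, not Operator Nexus values:

```python
# Hypothetical numbers for illustration: vCPUs required per VM SKU and the
# free vCPUs left on each bare metal server. Not actual Operator Nexus data.
SKU_VCPUS = {"NC_P46_224_v1": 46, "hypothetical-smaller-sku": 36}

def pool_fits(free_vcpus_per_server: list[int], sku: str, count: int) -> bool:
    """Greedy check: can `count` VMs of `sku` be placed one per server on
    servers that still have enough free vCPUs?"""
    need = SKU_VCPUS[sku]
    return sum(1 for free in free_vcpus_per_server if free >= need) >= count

free = [46, 46, 46, 8, 8, 8, 8, 8]  # only three servers can still take an extra-large VM

print(pool_fits(free, "NC_P46_224_v1", 4))             # False: the requested pool doesn't fit
print(pool_fits(free, "NC_P46_224_v1", 3))             # True: keeping the SKU and reducing Count works
print(pool_fits(free, "hypothetical-smaller-sku", 4))  # False: shrinking the SKU doesn't help here
```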