
Commit b13be78

Fix broken link

1 parent aa7a26b commit b13be78


articles/operator-nexus/concepts-nexus-kubernetes-placement.md

Lines changed: 10 additions & 11 deletions
@@ -9,7 +9,7 @@ ms.date: 04/19/2024
ms.custom: template-concept
---

-# Resource Placement in Azure Operator Nexus Kubernetes
+# Resource placement in Azure Operator Nexus Kubernetes

Operator Nexus instances are deployed at the customer premises. Each instance
comprises one or more racks of bare metal servers.
@@ -23,7 +23,7 @@ containerized network functions run.
The Nexus platform is responsible for deciding the bare metal server on which
each NKS VM launches.

-## How the Nexus Platform Schedules a NKS VM
+## How the Nexus platform schedules a Nexus Kubernetes Cluster VM

Nexus first identifies the set of potential bare metal servers that meet all of
the resource requirements of the NKS VM SKU. For example, if the user
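The context lines above describe the first scheduling step: keep only the bare metal servers whose free resources can satisfy the requested NKS VM SKU. A minimal sketch of that filtering idea, assuming hypothetical server and SKU records (the field names `cpu_cores`, `memory_gib`, and `hugepages_gib` are illustrative, not the platform's actual schema):

```python
from dataclasses import dataclass

@dataclass
class SkuRequirements:
    # Illustrative fields only; not the actual NKS VM SKU schema.
    cpu_cores: int
    memory_gib: int
    hugepages_gib: int

@dataclass
class BareMetalServer:
    name: str
    rack: int
    free_cpu_cores: int
    free_memory_gib: int
    free_hugepages_gib: int

def candidate_servers(servers: list[BareMetalServer], sku: SkuRequirements) -> list[BareMetalServer]:
    """Keep only servers whose free resources satisfy every requirement of the SKU."""
    return [
        s for s in servers
        if s.free_cpu_cores >= sku.cpu_cores
        and s.free_memory_gib >= sku.memory_gib
        and s.free_hugepages_gib >= sku.hugepages_gib
    ]
```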
@@ -57,7 +57,7 @@ following sorting rules:
"bin packs" the extra-large VMs in order to reduce fragmentation of the
available compute resources.

-## Example Placement Scenarios
+## Example placement scenarios

The following sections highlight behavior that Nexus users should expect
when creating NKS Clusters against an Operator Nexus environment.
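The sorting rule quoted in this hunk bin-packs extra-large VMs so they concentrate on the fewest servers rather than fragmenting capacity. A hedged sketch of that ordering step, reusing the hypothetical `BareMetalServer` records from the previous snippet and an illustrative `extra_large_vm_count` mapping that is not a platform API:

```python
def bin_pack_order(servers, extra_large_vm_count):
    """Sort candidates so servers already hosting the most extra-large VMs come first.

    `extra_large_vm_count` maps server name -> number of extra-large VMs already
    placed there (illustrative bookkeeping only).
    """
    return sorted(
        servers,
        key=lambda s: extra_large_vm_count.get(s.name, 0),
        reverse=True,  # prefer the fullest servers first, i.e. bin packing
    )
```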
@@ -77,7 +77,7 @@ The example Operator Nexus environment has these specifications:

[numa]: https://en.wikipedia.org/wiki/Non-uniform_memory_access

-### Empty Environment
+### Empty environment

Given an empty Operator Nexus environment with the given capacity, we create
three differently sized Nexus Kubernetes Clusters.
@@ -124,15 +124,14 @@ Cluster C Agent Pool #1 has 12 VMs restricted to AvailabilityZones [1, 4] so it
has 12 VMs on 12 bare metal servers, six in each of racks 1 and 4.

Extra-large VMs (the `NC_P46_224_v1` SKU) from different clusters are placed
-on the same bare metal servers (see rule #3 in
-[How the Nexus Platform Schedules a VM][#how-the-nexus-platform-schedule-a-vm]).
+on the same bare metal servers (see rule #3 in [How the Nexus platform schedules a Nexus Kubernetes Cluster VM](#how-the-nexus-platform-schedules-a-nexus-kubernetes-cluster-vm)).

Here's a visualization of a layout the user might see after deploying Clusters
A, B, and C into an empty environment.

:::image type="content" source="media/nexus-kubernetes/after-first-deployment.png" lightbox="media/nexus-kubernetes/after-first-deployment.png" alt-text="Diagram showing possible layout of VMs after first deployment.":::

-### Half-full Environment
+### Half-full environment

We now run through an example of launching another NKS Cluster when the target
environment is half-full. The target environment is half-full after Clusters A,
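The Cluster C arithmetic in this hunk (12 VMs restricted to AvailabilityZones [1, 4] landing six per rack) is an even spread across the permitted zones. A small sketch of that spreading step, again with hypothetical names rather than platform APIs:

```python
from itertools import cycle

def spread_across_zones(vm_names, allowed_zones):
    """Round-robin VMs across the permitted availability zones (racks)."""
    placement = {zone: [] for zone in allowed_zones}
    for vm, zone in zip(vm_names, cycle(allowed_zones)):
        placement[zone].append(vm)
    return placement

# 12 VMs over zones [1, 4] -> six per zone, matching the Cluster C example.
layout = spread_across_zones([f"vm-{i}" for i in range(12)], [1, 4])
assert all(len(vms) == 6 for vms in layout.values())
```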
@@ -178,7 +177,7 @@ D into the target environment.

:::image type="content" source="media/nexus-kubernetes/after-second-deployment.png" lightbox="media/nexus-kubernetes/after-second-deployment.png" alt-text="Diagram showing possible layout of VMs after second deployment.":::

-### Nearly full Environment
+### Nearly full environment

In our example target environment, four of the eight racks are
close to capacity. Let's try to launch another NKS Cluster.
@@ -209,7 +208,7 @@ E into the target environment.

:::image type="content" source="media/nexus-kubernetes/after-third-deployment.png" lightbox="media/nexus-kubernetes/after-third-deployment.png" alt-text="Diagram showing possible layout of VMs after third deployment.":::

-## Placement during a Runtime Upgrade
+## Placement during a runtime upgrade

As of April 2024 (Network Cloud 2304.1 release), runtime upgrades are performed
using a rack-by-rack strategy. Bare metal servers in rack 1 are reimaged all at
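The rack-by-rack upgrade strategy in this hunk reimages every bare metal server in a rack at once before moving to the next rack. A schematic sketch of that ordering, where `reimage_server` is a placeholder callable standing in for work the platform performs, not a real API:

```python
from concurrent.futures import ThreadPoolExecutor

def upgrade_rack_by_rack(racks, reimage_server):
    """Reimage all servers in one rack concurrently, then move to the next rack.

    `racks` maps rack number -> list of server names (illustrative structure).
    """
    for rack_number in sorted(racks):
        servers = racks[rack_number]
        if not servers:
            continue
        # All servers in the current rack are reimaged at the same time.
        with ThreadPoolExecutor(max_workers=len(servers)) as pool:
            list(pool.map(reimage_server, servers))
        # Only after the whole rack finishes does the next rack start.
```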
@@ -248,7 +247,7 @@ StatefulSets that had Pods on NKS VMs that were on the bare metal server.
> NKS VM was launched on the newly reimaged bare metal server that retained the
> same bare metal server name as before reimaging.

-## Best Practices
+## Best practices

When working with Operator Nexus, keep the following best practices in mind.

@@ -290,7 +289,7 @@ greatest count of these extra-large SKU VMs creates a larger set of bare metal
servers upon which other extra-large SKU VMs from Agent Pools in other NKS
Clusters can collocate.

-### Reduce the Agent Pool's Count before reducing the VM SKU size
+### Reduce the Agent Pool's count before reducing the VM SKU size

If you run into capacity constraints when launching a NKS Cluster or Agent
Pool, reduce the Count of the Agent Pool before adjusting the VM SKU size. For
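The best practice in this hunk (shrink the Agent Pool's Count before stepping down the VM SKU size) can be read as a retry order when a requested pool doesn't fit. A sketch under that reading, where `fits` is a placeholder capacity check rather than anything the platform exposes:

```python
def plan_agent_pool(fits, sku, count, min_count=1):
    """Try smaller counts of the requested SKU before falling back to a smaller SKU.

    `fits(sku, count)` is a hypothetical capacity check. Returns the (sku, count)
    to request, or None if even `min_count` VMs of this SKU won't fit.
    """
    for candidate_count in range(count, min_count - 1, -1):
        if fits(sku, candidate_count):
            return sku, candidate_count
    return None  # only now consider adjusting the VM SKU size
```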
