In this article, you learn how to work with agent pools in a Nexus Kubernetes cluster. Agent pools serve as groups of nodes with the same configuration and play a key role in managing your applications.
Nexus Kubernetes clusters offer two types of agent pools.
* System agent pools are designed for hosting critical system pods like CoreDNS and metrics-server.
* User agent pools are designed for hosting your application pods.
Application pods can be scheduled on system agent pools if you want to have only one pool in your Kubernetes cluster. A Nexus Kubernetes cluster must have an initial agent pool that includes at least one system agent pool with at least one node.
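To see which agent pools an existing cluster already has, and whether each one is a system or user pool, you can list them with the Azure CLI `networkcloud` extension. The following is a minimal sketch; the placeholder names and the `--query` field names are assumptions to verify with `az networkcloud kubernetescluster agentpool list --help`.

```azurecli-interactive
# Sketch: list the agent pools of an existing Nexus Kubernetes cluster.
# Placeholder names and the query fields are assumptions; verify with --help.
az networkcloud kubernetescluster agentpool list \
  --kubernetes-cluster-name "<YourNexusK8sClusterName>" \
  --resource-group "<YourResourceGroupName>" \
  --query "[].{name:name, mode:mode, count:count}" \
  --output table
```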
## Prerequisites
Before proceeding with this how-to guide, it's recommended that you:
* Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) for a comprehensive overview and steps involved.
* Ensure that you meet the outlined prerequisites for a smooth implementation of the guide.
## Limitations
* You can delete system agent pools, provided you have another system agent pool to take its place in the Nexus Kubernetes cluster.
* System pools must contain at least one node.
* You can't change the VM size of an agent pool after you create it.
* Each Nexus Kubernetes cluster requires at least one system agent pool.
* Don't run application workloads on Kubernetes control plane nodes, as they're designed only for managing the cluster, and doing so can harm its performance and stability.
## System pool
For a system agent pool, Nexus Kubernetes automatically assigns the label `kubernetes.azure.com/mode: system` to its nodes. This label causes Nexus Kubernetes to prefer scheduling system pods on agent pools that contain this label. This label doesn't prevent you from scheduling application pods on system agent pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
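You can confirm which nodes carry this label with `kubectl`. This is a small illustrative check, assuming you already have access to the cluster configured:

```bash
# List only the nodes that belong to system agent pools, based on the mode label.
kubectl get nodes -l kubernetes.azure.com/mode=system

# Optionally show each node's labels to inspect the mode value directly.
kubectl get nodes --show-labels
```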
You can enforce this behavior by creating a dedicated system agent pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system agent pools. If you intend to use the system pool for application pods (not dedicated), don't apply any application-specific taints to the pool, as applying such taints can lead to cluster creation failures.
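As an illustration of creating a dedicated system agent pool with that taint, a hedged Azure CLI sketch follows. The parameter names (for example `--taints` and `--vm-sku-name`), the taint string format, and any omitted required arguments are assumptions to confirm with `az networkcloud kubernetescluster agentpool create --help`.

```azurecli-interactive
# Sketch only: create a dedicated system agent pool that repels application pods.
# Parameter names, the taint format, and omitted required arguments (such as the
# extended location) are assumptions; confirm with --help before running.
az networkcloud kubernetescluster agentpool create \
  --name "<YourSystemPoolName>" \
  --kubernetes-cluster-name "<YourNexusK8sClusterName>" \
  --resource-group "<YourResourceGroupName>" \
  --mode "System" \
  --count 3 \
  --vm-sku-name "<YourVmSkuName>" \
  --taints "CriticalAddonsOnly=true:NoSchedule"
```

Application pods without a matching toleration then can't be scheduled onto those nodes.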
> [!IMPORTANT]
> If you run a single system agent pool for your Nexus Kubernetes cluster in a production environment, we recommend you use at least three nodes for the agent pool.
## User pool
The user pool, on the other hand, is designed for your applications.
Choosing how to utilize your system pool and user pool depends largely on your specific requirements and use case. Both dedicated and shared methods offer unique advantages. Dedicated pools can isolate workloads and provide guaranteed resources, while shared pools can optimize resource usage across the cluster.
Always consider your cluster's resource capacity, the nature of your workloads, and the required level of resiliency when making your decision. By managing and understanding these agent pools effectively, you can optimize your Nexus Kubernetes cluster to best fit your operational needs.
Refer to the [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md#add-an-agent-pool) to add new agent pools and experiment with configurations in your Nexus Kubernetes cluster.
`articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md`
You need to create various networks based on your workload needs.
- Determine the BGP peering info for each network, and whether the networks need to talk to each other. You should group networks that need to talk to each other into the same L3 isolation domain, because each L3 isolation domain can support multiple L3 networks.
- The platform provides a proxy to allow your VM to reach other external endpoints. Creating a `cloudservicesnetwork` instance requires the endpoints to be proxied, so gather the list of endpoints. You can modify the list of endpoints after the network creation.
Isolation domains enable communication between workloads hosted in the same rack (intra-rack communication) or in different racks (inter-rack communication). You can find more details about creating isolation domains [here](./howto-configure-isolation-domain.md).
## Create networks for tenant workloads
The following sections describe how to create these networks:
- Layer 2 network
- Layer 3 network
- Trunked network
### Create an L2 network
Create an L2 network, if necessary, for your workloads. You can repeat the instructions for each required L2 network.
Gather the resource ID of the L2 isolation domain that you created to configure the VLAN for this network.
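If you need to look that ID up, one generic way is `az resource show`. This is only a sketch: the resource group, resource name, and the `Microsoft.ManagedNetworkFabric/l2IsolationDomains` resource type string are assumptions to adjust for your environment.

```azurecli-interactive
# Sketch: capture the L2 isolation domain resource ID in a variable for reuse.
# The resource type string and placeholder names are assumptions; adjust as needed.
L2_ISOLATION_DOMAIN_ID=$(az resource show \
  --resource-group "<YourFabricResourceGroupName>" \
  --name "<YourL2IsolationDomainName>" \
  --resource-type "Microsoft.ManagedNetworkFabric/l2IsolationDomains" \
  --query id --output tsv)
echo "$L2_ISOLATION_DOMAIN_ID"
```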
#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive
az networkcloud l2network create --name "<YourL2NetworkName>" \
  ...
```

### Create a trunked network
Create a trunked network, if necessary, for your VM. Repeat the instructions for each required trunked network.
Gather the `resourceId` values of the L2 and L3 isolation domains that you created earlier to configure the VLANs for this network. You can include as many L2 and L3 isolation domains as needed.
#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive
az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
  ...
```

### Create a cloud services network
To create an Operator Nexus virtual machine (VM) or Operator Nexus Kubernetes cluster, you must have a cloud services network. Without this network, you can't create a VM or cluster.
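As a hedged sketch of what creating a cloud services network can look like with the Azure CLI: the parameter names and the egress endpoint JSON shape below are assumptions to confirm with `az networkcloud cloudservicesnetwork create --help`, and every value is a placeholder.

```azurecli-interactive
# Sketch only: create a cloud services network that allows one extra egress endpoint.
# Parameter names and the egress endpoint JSON shape are assumptions; verify with --help.
az networkcloud cloudservicesnetwork create \
  --name "<YourCloudServicesNetworkName>" \
  --resource-group "<YourResourceGroupName>" \
  --location "<ClusterAzureRegion>" \
  --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
  --additional-egress-endpoints '[{
      "category": "azure-resource-management",
      "endpoints": [{"domainName": "<allowed-endpoint.example.com>", "port": 443}]
    }]'
```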
After setting up the cloud services network, you can use it to create a VM or cluster.
>
> In addition, if your ACR has dedicated data endpoints enabled, you will need to add all the new data endpoints to the egress allow list. To find all the possible endpoints for your ACR, follow the instructions [here](../container-registry/container-registry-dedicated-data-endpoints.md#dedicated-data-endpoints).
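If you use Azure Container Registry, one way to enumerate those endpoints is with `az acr show-endpoints`; the registry name below is a placeholder.

```azurecli-interactive
# Sketch: list the registry's login server and data endpoints so they can be added
# to the cloud services network egress allow list.
az acr show-endpoints --name "<YourRegistryName>" --output json
```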
### Use the proxy to reach outside of the virtual machine
After creating your Operator Nexus VM or Operator Nexus Kubernetes cluster with this cloud services network, you additionally need to set the appropriate environment variables within the VM to use the tenant proxy and reach outside of the virtual machine. This tenant proxy is useful if you need to access resources outside of the virtual machine, such as managing packages or installing software.
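For example, inside the VM you might export variables along these lines. The proxy address and port are placeholders, not values from this article, so substitute the tenant proxy endpoint documented for your environment.

```bash
# Sketch: set proxy environment variables inside the VM. The address and port are
# placeholders (assumptions); use the tenant proxy values for your deployment.
export HTTP_PROXY="http://<tenant-proxy-address>:<port>"
export HTTPS_PROXY="http://<tenant-proxy-address>:<port>"
export NO_PROXY="localhost,127.0.0.1"

# Lowercase variants as well, since some tools only honor the lowercase names.
export http_proxy="$HTTP_PROXY"
export https_proxy="$HTTPS_PROXY"
export no_proxy="$NO_PROXY"
```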