articles/aks/egress-outboundtype.md (+6 -6)
@@ -23,7 +23,7 @@ This article covers the various types of outbound connectivity that are availabl
 ## Overview of outbound types in AKS

-An AKS cluster can be configured with 3 different categories of outbound type: load balancer, NAT gateway, or user-defined routing.
+An AKS cluster can be configured with three different categories of outbound type: load balancer, NAT gateway, or user-defined routing.

 > [!IMPORTANT]
 > Outbound type impacts only the egress traffic of your cluster. For more information, see [setting up ingress controllers](ingress-basic.md).
@@ -44,18 +44,18 @@ Below is a network topology deployed in AKS clusters by default, which use an `o
 

-Refer to the documentation on [using a standard load balancer in AKS](load-balancer-standard.md) for more information.
+For more information, see [using a standard load balancer in AKS](load-balancer-standard.md).
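For orientation, a minimal sketch of creating a cluster with the default `loadBalancer` outbound type; the resource group and cluster names are placeholders:

```azurecli-interactive
# Sketch: create a cluster that egresses through a Standard Load Balancer.
# loadBalancer is the default, so --outbound-type could be omitted here.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type loadBalancer
```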
 ### Outbound type of `managedNatGateway` or `userAssignedNatGateway`

 If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway) for cluster egress.

 - `managedNatGateway` is used when using managed virtual networks, and tells AKS to provision a NAT gateway and attach it to the cluster subnet.
-- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway be provisioned before cluster creation.
+- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway has been provisioned before cluster creation.

 NAT gateway has significantly improved handling of SNAT ports when compared to Standard Load Balancer.

-Refer to the documentation on [using NAT Gateway with AKS](nat-gateway.md) for more information.
+For more information, see [using NAT Gateway with AKS](nat-gateway.md).
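A hedged sketch of both variants, assuming current CLI casing for the `--outbound-type` values; the subnet resource ID below is a placeholder for a subnet that already has a NAT gateway attached:

```azurecli-interactive
# Sketch: managed virtual network, where AKS provisions the NAT gateway itself.
az aks create \
    --resource-group myResourceGroup \
    --name myManagedNatCluster \
    --outbound-type managedNATGateway

# Sketch: bring-your-own virtual network; the NAT gateway must already be
# attached to the subnet. The subnet resource ID is a placeholder.
az aks create \
    --resource-group myResourceGroup \
    --name myByoNatCluster \
    --outbound-type userAssignedNATGateway \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"
```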

 ### Outbound type of userDefinedRouting

@@ -66,15 +66,15 @@ If `userDefinedRouting` is set, AKS won't automatically configure egress paths.
 The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because, when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, or allowing the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.

-Refer to the documentation on [configuring cluster egress via user-defined routing](egress-udr.md) for more information.
+For more information, see [configuring cluster egress via user-defined routing](egress-udr.md).
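A minimal sketch, assuming the cluster subnet already has a route table that sends egress to your appliance; the subnet resource ID is a placeholder:

```azurecli-interactive
# Sketch: the subnet must already have a route table that routes 0.0.0.0/0
# to your firewall or appliance. The subnet resource ID is a placeholder.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type userDefinedRouting \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"
```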

 ## Updating `outboundType` after cluster creation (PREVIEW)

 Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration.

 > [!WARNING]
 > - Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, they will need to be updated to match the new egress IP address.
-> - Changing to the `userDefinedRouting` egress type may require changing how incoming load balancer traffic flows, as egress traffic will now flow via the user-defined route instead of back via the load balancer, leading to asymmetric routing and dropped traffic. See the [UDR documentation](egress-udr.md) for more details.
+> - Changing to the `userDefinedRouting` egress type may require changing how incoming load balancer traffic flows, as egress traffic will now flow via the user-defined route instead of back via the load balancer, leading to asymmetric routing and dropped traffic. For more information, see the [user-defined routing documentation](egress-udr.md).

 [!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
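For orientation, a hedged sketch of the preview operation, assuming `az aks update` accepts `--outbound-type` (this may require the aks-preview CLI extension); resource names are placeholders:

```azurecli-interactive
# Sketch: migrate an existing cluster's egress to a managed NAT gateway.
# Preview operation; may require the aks-preview CLI extension.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type managedNATGateway
```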

articles/aks/use-multiple-node-pools.md (+9 -9)
@@ -28,7 +28,7 @@ The following limitations apply when you create and manage AKS clusters that sup
 * You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster.
 * System pools must contain at least one node, and user node pools may contain zero or more nodes.
 * The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature isn't supported with Basic SKU load balancers.
-* The AKS cluster must use virtual machine scale sets for the nodes.
+* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
 * You can't change the VM size of a node pool after you create it.
 * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, the length must be between 1 and 6 characters.
 * All node pools must reside in the same virtual network.
@@ -210,7 +210,7 @@ The commands in this section explain how to upgrade a single specific node pool.
 > [!NOTE]
 > The node pool OS image version is tied to the Kubernetes version of the cluster. You will only get OS image upgrades following a cluster upgrade.

-Since there are two node pools in this example, we must use [az aks nodepool upgrade][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades use [az aks get-upgrades][az-aks-get-upgrades]
+Since there are two node pools in this example, we must use [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades, use [`az aks get-upgrades`][az-aks-get-upgrades].

 ```azurecli-interactive
 az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
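# A hedged sketch of the follow-up step: upgrade one node pool to a version
# reported by the command above. Pool name and version value are placeholders.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubernetes-version KUBERNETES_VERSION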
@@ -498,9 +498,9 @@ When creating a node pool, you can add taints, labels, or tags to that node pool
 > [!IMPORTANT]
 > Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.

-### Setting nodepool taints
+### Setting node pool taints

-To create a node pool with a taint, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
+To create a node pool with a taint, use [`az aks nodepool add`][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.

 ```azurecli-interactive
 az aks nodepool add \
@@ -601,11 +601,11 @@ Events:
 Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources.

-### Setting nodepool labels
+### Setting node pool labels

 For more information on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].

-### Setting nodepool Azure tags
+### Setting node pool Azure tags

 For more information on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
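For orientation, a hedged sketch of supplying labels and Azure tags when the pool is created; the pool name and key/value pairs are placeholders:

```azurecli-interactive
# Sketch: create a node pool with a Kubernetes label and an Azure tag.
# Pool name, label, and tag values are illustrative placeholders.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name labelnp \
    --node-count 1 \
    --labels dept=IT costcenter=9999 \
    --tags dept=IT costcenter=9999
```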
@@ -692,7 +692,7 @@ Edit these values as needed to update, add, or delete node pools:
 }
 ```

-Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
+Deploy this template using the [`az deployment group create`][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:

 ```azurecli-interactive
 az deployment group create \
@@ -725,13 +725,13 @@ It may take a few minutes to update your AKS cluster depending on the node pool
 In this article, you created an AKS cluster that includes GPU-based nodes. To reduce unnecessary cost, you may want to delete the *gpunodepool*, or the whole AKS cluster.

-To delete the GPU-based node pool, use the [az aks nodepool delete][az-aks-nodepool-delete] command as shown in following example:
+To delete the GPU-based node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command, as shown in the following example:

 ```azurecli-interactive
 az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name gpunodepool
 ```

-To delete the cluster itself, use the [az group delete][az-group-delete] command to delete the AKS resource group:
+To delete the cluster itself, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:

 ```azurecli-interactive
 az group delete --name myResourceGroup --yes --no-wait

articles/aks/use-node-public-ips.md (+4 -4)
@@ -60,7 +60,7 @@ az aks create -g MyResourceGroup3 -n MyManagedCluster -l eastus --enable-node-pu
 You can locate the public IPs for your nodes in various ways:

-* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips].
+* Use the Azure CLI command [`az vmss list-instance-public-ips`][az-list-ips], as shown in the sketch after this list.
 * Use [PowerShell or Bash commands][vmss-commands].
 * You can also view the public IPs in the Azure portal by viewing the instances in the Virtual Machine Scale Set.
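A minimal sketch of the CLI route, assuming the default naming of the node resource group and scale set; both values below are illustrative placeholders:

```azurecli-interactive
# List the instance-level public IPs of a node pool's scale set.
# The node resource group (MC_*) and VMSS name are placeholders; look them
# up with `az aks show` and `az vmss list` for your cluster.
az vmss list-instance-public-ips \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name aks-nodepool1-12345678-vmss \
    --output table
```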
@@ -136,7 +136,7 @@ az aks nodepool add --cluster-name <clusterName> -n <nodepoolName> -l <location>
 AKS nodes utilizing node public IPs that host services on their host address need to have an NSG rule added to allow the traffic. Adding the desired ports in the node pool configuration will create the appropriate allow rules in the cluster network security group.

-If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the NSG(s) in the nodepool parameters.
+If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the ASG(s) in the node pool parameters.
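A hedged sketch of wiring this up at node pool creation, assuming the `--allowed-host-ports` and `--asg-ids` parameters; the pool name and ASG resource ID are placeholders:

```azurecli-interactive
# Sketch: create a node pool that allows inbound traffic on specific host
# ports and joins the nodes to a custom application security group.
# The pool name and ASG resource ID are illustrative placeholders.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name hostportnp \
    --allowed-host-ports 80/tcp,443/tcp,53/udp \
    --asg-ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationSecurityGroups/myASG"
```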
 ### Host port specification format
@@ -225,7 +225,7 @@ az aks nodepool update \
 ## Automatically assign host ports for pod workloads (PREVIEW)

-When using public IPs on nodes, host ports can be utilized to allow pods to directly receive traffic without having to configure a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port is not a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can be listening on the same port, using applications with host ports can lead to problems with scheduling. To avoid this issue, AKS provides the ability to have the system dynamically assign an available port at scheduling time, preventing conflicts.
+When public IPs are configured on nodes, host ports can be utilized to allow pods to directly receive traffic without having to configure a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port is not a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can be listening on the same port, using applications with host ports can lead to problems with scheduling. To avoid this issue, AKS provides the ability to have the system dynamically assign an available port at scheduling time, preventing conflicts.

 > [!WARNING]
 > Pod host port traffic will be blocked by the default NSG rules in place on the cluster. This feature should be combined with allowing host ports on the node pool to allow traffic to flow.
@@ -262,7 +262,7 @@ Triggering host port auto assignment is done by deploying a workload without any
 Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned.

-Here is an example echoserver deployment, showing the mapping of host ports for ports 8080 and 8443:
+Here is an example `echoserver` deployment, showing the mapping of host ports for ports 8080 and 8443: