Commit c30ac91

Fix consistency items for Acrolinx.
1 parent 3991de6

File tree

3 files changed

+19
-19
lines changed


articles/aks/egress-outboundtype.md

Lines changed: 6 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -23,7 +23,7 @@ This article covers the various types of outbound connectivity that are availabl
2323

2424
## Overview of outbound types in AKS
2525

26-
An AKS cluster can be configured with 3 different categories of outbound type: load balancer, NAT gateway, or user-defined routing.
26+
An AKS cluster can be configured with three different categories of outbound type: load balancer, NAT gateway, or user-defined routing.
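As an illustrative sketch (resource names are placeholders, not from this commit), the outbound type is selected at cluster creation with the `--outbound-type` parameter:

```azurecli-interactive
# Create a cluster that uses a standard load balancer for egress (the default).
# Resource group and cluster names are illustrative.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type loadBalancer
```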
2727

2828
> [!IMPORTANT]
2929
> Outbound type impacts only the egress traffic of your cluster. For more information, see [setting up ingress controllers](ingress-basic.md).
@@ -44,18 +44,18 @@ Below is a network topology deployed in AKS clusters by default, which use an `o
4444

4545
![Diagram shows ingress I P and egress I P, where the ingress I P directs traffic to a load balancer, which directs traffic to and from an internal cluster and other traffic to the egress I P, which directs traffic to the Internet, M C R, Azure required services, and the A K S Control Plane.](media/egress-outboundtype/outboundtype-lb.png)
4646

47-
Refer to the documentation on [using a standard load balancer in AKS](load-balancer-standard.md) for more information.
47+
For more information, see [using a standard load balancer in AKS](load-balancer-standard.md).
4848

4949
### Outbound type of `managedNatGateway` or `userAssignedNatGateway`
5050

5151
If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway) for cluster egress.
5252

5353
- `managedNatGateway` is used when using managed virtual networks, and tells AKS to provision a NAT gateway and attach it to the cluster subnet.
54-
- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway be provisioned before cluster creation.
54+
- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway has been provisioned before cluster creation.
5555

5656
NAT gateway has significantly improved handling of SNAT ports when compared to Standard Load Balancer.
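A minimal sketch of creating a cluster with a managed NAT gateway (names and the outbound IP count are illustrative, not taken from this commit):

```azurecli-interactive
# Create a cluster where AKS provisions a NAT gateway for egress.
# Two managed outbound IPs are requested here purely as an example.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --nat-gateway-managed-outbound-ip-count 2 \
    --outbound-type managedNatGateway
```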
5757

58-
Refer to the documentation on [using NAT Gateway with AKS](nat-gateway.md) for more information.
58+
For more information, see [using NAT Gateway with AKS](nat-gateway.md).
5959

6060
### Outbound type of userDefinedRouting
6161

@@ -66,15 +66,15 @@ If `userDefinedRouting` is set, AKS won't automatically configure egress paths.
6666

6767
The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
6868

69-
Refer to the documentation on [configuring cluster egress via user-defined routing](egress-udr.md) for more information.
69+
For more information, see [configuring cluster egress via user-defined routing](egress-udr.md).
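Because `userDefinedRouting` requires bring-your-own networking, a creation command would reference an existing subnet; this is a hedged sketch where `<subnet-resource-id>` is a placeholder for a subnet that already has a route table attached:

```azurecli-interactive
# userDefinedRouting assumes the subnet's route table already provides an egress path
# (for example, via a firewall or network virtual appliance).
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --vnet-subnet-id <subnet-resource-id> \
    --outbound-type userDefinedRouting
```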
7070

7171
## Updating `outboundType` after cluster creation (PREVIEW)
7272

7373
Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration.
7474

7575
> [!WARNING]
7676
> - Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, they will need to be updated to match the new egress IP address.
77-
> - Changing to the `userDefinedRouting` egress type may require changing how incoming load balancer traffic flows, as egress traffic will now flow via the user-defined route instead of back via the load balancer, leading to asymmetric routing and dropped traffic. See the [UDR documentation](egress-udr.md) for more details.
77+
> - Changing to the `userDefinedRouting` egress type may require changing how incoming load balancer traffic flows, as egress traffic will now flow via the user-defined route instead of back via the load balancer, leading to asymmetric routing and dropped traffic. For more information, see the [user-defined routing documentation](egress-udr.md).
7878
7979
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
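As a sketch of the update flow (cluster names are illustrative), switching an existing cluster's egress to a managed NAT gateway might look like:

```azurecli-interactive
# Change the outbound type on an existing cluster (preview).
# This is disruptive: the cluster's egress IP address changes.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --outbound-type managedNatGateway
```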
8080

articles/aks/use-multiple-node-pools.md

Lines changed: 9 additions & 9 deletions
Original file line numberDiff line numberDiff line change
@@ -28,7 +28,7 @@ The following limitations apply when you create and manage AKS clusters that sup
2828
* You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster.
2929
* System pools must contain at least one node, and user node pools may contain zero or more nodes.
3030
* The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature isn't supported with Basic SKU load balancers.
31-
* The AKS cluster must use virtual machine scale sets for the nodes.
31+
* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
3232
* You can't change the VM size of a node pool after you create it.
3333
* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools the length must be between 1 and 12 characters, for Windows node pools the length must be between 1 and 6 characters.
3434
* All node pools must reside in the same virtual network.
@@ -210,7 +210,7 @@ The commands in this section explain how to upgrade a single specific node pool.
210210
> [!NOTE]
211211
> The node pool OS image version is tied to the Kubernetes version of the cluster. You will only get OS image upgrades, following a cluster upgrade.
212212
213-
Since there are two node pools in this example, we must use [az aks nodepool upgrade][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades use [az aks get-upgrades][az-aks-get-upgrades]
213+
Since there are two node pools in this example, we must use [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades, use [`az aks get-upgrades`][az-aks-get-upgrades].
214214

215215
```azurecli-interactive
216216
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
@@ -498,9 +498,9 @@ When creating a node pool, you can add taints, labels, or tags to that node pool
498498
> [!IMPORTANT]
499499
> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
500500
501-
### Setting nodepool taints
501+
### Setting node pool taints
502502

503-
To create a node pool with a taint, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
503+
To create a node pool with a taint, use [`az aks nodepool add`][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
504504

505505
```azurecli-interactive
506506
az aks nodepool add \
@@ -601,11 +601,11 @@ Events:
601601

602602
Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources.
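For context, a pod that can land on the *taintnp* nodes would declare a matching toleration; this is a minimal illustrative spec (pod and image names are placeholders):

```yml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod   # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
  tolerations:
  # Matches the sku=gpu:NoSchedule taint applied to the taintnp node pool
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```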
603603

604-
### Setting nodepool labels
604+
### Setting node pool labels
605605

606606
For more information on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
607607

608-
### Setting nodepool Azure tags
608+
### Setting node pool Azure tags
609609

610610
For more information on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
611611

@@ -692,7 +692,7 @@ Edit these values as need to update, add, or delete node pools as needed:
692692
}
693693
```
694694

695-
Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
695+
Deploy this template using the [`az deployment group create`][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
696696

697697
```azurecli-interactive
698698
az deployment group create \
@@ -725,13 +725,13 @@ It may take a few minutes to update your AKS cluster depending on the node pool
725725

726726
In this article, you created an AKS cluster that includes GPU-based nodes. To reduce unnecessary cost, you may want to delete the *gpunodepool*, or the whole AKS cluster.
727727

728-
To delete the GPU-based node pool, use the [az aks nodepool delete][az-aks-nodepool-delete] command as shown in following example:
728+
To delete the GPU-based node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command as shown in the following example:
729729

730730
```azurecli-interactive
731731
az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name gpunodepool
732732
```
733733

734-
To delete the cluster itself, use the [az group delete][az-group-delete] command to delete the AKS resource group:
734+
To delete the cluster itself, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
735735

736736
```azurecli-interactive
737737
az group delete --name myResourceGroup --yes --no-wait

articles/aks/use-node-public-ips.md

Lines changed: 4 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -60,7 +60,7 @@ az aks create -g MyResourceGroup3 -n MyManagedCluster -l eastus --enable-node-pu
6060

6161
You can locate the public IPs for your nodes in various ways:
6262

63-
* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips].
63+
* Use the Azure CLI command [`az vmss list-instance-public-ips`][az-list-ips].
6464
* Use [PowerShell or Bash commands][vmss-commands].
6565
* You can also view the public IPs in the Azure portal by viewing the instances in the Virtual Machine Scale Set.
6666

@@ -136,7 +136,7 @@ az aks nodepool add --cluster-name <clusterName> -n <nodepoolName> -l <location>
136136

137137
AKS nodes utilizing node public IPs that host services on their host address need to have an NSG rule added to allow the traffic. Adding the desired ports in the node pool configuration will create the appropriate allow rules in the cluster network security group.
138138

139-
If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the NSG(s) in the nodepool parameters.
139+
If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the NSG(s) in the node pool parameters.
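A hedged sketch of the node pool configuration described above, assuming the `--allowed-host-ports` parameter of `az aks nodepool add` (names and port values are illustrative):

```azurecli-interactive
# Add a node pool with node public IPs that allows inbound traffic on host port 8080/TCP.
# AKS creates the corresponding allow rule in the cluster network security group.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name hostportnp \
    --enable-node-public-ip \
    --allowed-host-ports 8080/tcp
```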
140140

141141
### Host port specification format
142142

@@ -225,7 +225,7 @@ az aks nodepool update \
225225

226226
## Automatically assign host ports for pod workloads (PREVIEW)
227227

228-
When using public IPs on nodes, host ports can be utilized to allow pods to directly receive traffic without having to configure a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port is not a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can be listening on the same port, using applications with host ports can lead to problems with scheduling. To avoid this issue, AKS provides the ability to have the system dynamically assign an available port at scheduling time, preventing conflicts.
228+
When public IPs are configured on nodes, host ports can be utilized to allow pods to directly receive traffic without having to configure a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port is not a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can be listening on the same port, using applications with host ports can lead to problems with scheduling. To avoid this issue, AKS provides the ability to have the system dynamically assign an available port at scheduling time, preventing conflicts.
229229

230230
> [!WARNING]
231231
> Pod host port traffic will be blocked by the default NSG rules in place on the cluster. This feature should be combined with allowing host ports on the node pool to allow traffic to flow.
@@ -262,7 +262,7 @@ Triggering host port auto assignment is done by deploying a workload without any
262262

263263
Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned.
264264

265-
Here is an example echoserver deployment, showing the mapping of host ports for ports 8080 and 8443:
265+
Here is an example `echoserver` deployment, showing the mapping of host ports for ports 8080 and 8443:
266266

267267
```yml
268268
apiVersion: apps/v1
