
Commit e893421

Acrolinx fixes
1 parent 2264693 commit e893421

2 files changed: +11 -11 lines changed

articles/aks/cluster-configuration.md

Lines changed: 4 additions & 4 deletions
@@ -56,13 +56,13 @@ If you want to create a regular Ubuntu 16.04 cluster, you can do so by omitting
 
 ### Existing clusters
 
-Configure a new nodepool to use Ubuntu 18.04. Use the `--aks-custom-headers` flag to set the Ubuntu 18.04 as the default OS for that nodepool.
+Configure a new node pool to use Ubuntu 18.04. Use the `--aks-custom-headers` flag to set Ubuntu 18.04 as the default OS for that node pool.
 
 ```azure-cli
 az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
 ```
 
-If you want to create a regular Ubuntu 16.04 nodepools, you can do so by omitting the custom `--aks-custom-headers` tag.
+If you want to create a regular Ubuntu 16.04 node pool, you can do so by omitting the custom `--aks-custom-headers` tag.
 
 
 ## Custom resource group name
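
As a minimal sketch of the default case described in this hunk, omitting `--aks-custom-headers` gives the node pool the standard Ubuntu 16.04 image (the pool name `ubuntu1604` is a hypothetical placeholder; cluster and resource group names reuse the example above):

```azure-cli
# Hypothetical example: without --aks-custom-headers, the node pool uses the default Ubuntu 16.04 OS.
az aks nodepool add --name ubuntu1604 --cluster-name myAKSCluster --resource-group myResourceGroup
```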
@@ -75,9 +75,9 @@ To specify your own resource group name, install the aks-preview Azure CLI exten
 az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
 ```
 
-The secondary resource group is automatically created by the Azure resource provider in your own subscription. Note that you can only specify the custom resource group name when the cluster is created.
+The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
 
-As you work with the node resource group, keep in mind that you cannot:
+As you work with the node resource group, keep in mind that you can't:
 
 - Specify an existing resource group for the node resource group.
 - Specify a different subscription for the node resource group.
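
As a quick follow-up sketch, you can confirm which node resource group a cluster uses with `az aks show` (the `nodeResourceGroup` query path is assumed from the standard managed cluster output):

```azure-cli
# Print the name of the cluster's node resource group.
az aks show --name myAKSCluster --resource-group myResourceGroup --query nodeResourceGroup -o tsv
```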

articles/aks/troubleshooting.md

Lines changed: 7 additions & 7 deletions
@@ -8,7 +8,7 @@ ms.date: 05/16/2020
 
 # AKS troubleshooting
 
-When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally encounter problems. This article details some common problems and troubleshooting steps.
+When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally come across problems. This article details some common problems and troubleshooting steps.
 
 ## In general, where do I find information about debugging Kubernetes problems?

@@ -74,7 +74,7 @@ This error occurs when clusters enter a failed state for multiple reasons. Follo
 * Scaling a cluster with advanced networking and **insufficient subnet (networking) resources**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow [these steps to request a resource quota increase](../azure-resource-manager/templates/error-resource-quota.md#solution) before trying to scale up again beyond initial quota limits.
 2. Once the underlying cause for upgrade failure is resolved, your cluster should be in a succeeded state. Once a succeeded state is verified, retry the original operation.
 
-## I'm receiving errors when trying to upgrade or scale that state my cluster is being currently being upgraded or has failed upgrade
+## I'm receiving errors when trying to upgrade or scale that state my cluster is being upgraded or has failed upgrade
 
 *This troubleshooting assistance is directed from https://aka.ms/aks-pending-upgrade*
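
As a sketch of the verification step described in this hunk, you can check that the cluster has returned to a succeeded state before retrying (the `provisioningState` query path is assumed from the standard `az aks show` output):

```azure-cli
# Retry the upgrade or scale operation only once this reports "Succeeded".
az aks show --name myAKSCluster --resource-group myResourceGroup --query provisioningState -o tsv
```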

@@ -95,7 +95,7 @@ If you've moved your AKS cluster to a different subscription or the cluster's su
 
 You may receive errors that indicate your AKS cluster isn't on a virtual machine scale set, such as the following example:
 
-**AgentPool '<agentpoolname>' has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
+**AgentPool `<agentpoolname>` has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
 
 Features such as the cluster autoscaler or multiple node pools require virtual machine scale sets as the `vm-set-type`.
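
As a hedged sketch of the requirement above, a cluster intended for the autoscaler or multiple node pools would be created with the scale set `vm-set-type` (names and counts below are illustrative placeholders):

```azure-cli
# Create a cluster backed by virtual machine scale sets so the cluster autoscaler can be enabled.
az aks create --name myAKSCluster --resource-group myResourceGroup \
  --vm-set-type VirtualMachineScaleSets \
  --node-count 1 \
  --enable-cluster-autoscaler --min-count 1 --max-count 3
```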

@@ -132,9 +132,9 @@ Based on the output of the cluster status:
 When creating an AKS cluster, it requires a service principal or managed identity to create resources on your behalf. AKS can automatically create a new service principal at cluster creation time or receive an existing one. When using an automatically created one, Azure Active Directory needs to propagate it to every region so the creation succeeds. When the propagation takes too long, the cluster will fail validation to create as it can't find an available service principal to do so.
 
 Use the following workarounds for this issue:
-1. Use an existing service principal, which has already propagated across regions and exists to pass into AKS at cluster create time.
-2. If using automation scripts, add time delays between service principal creation and AKS cluster creation.
-3. If using Azure portal, return to the cluster settings during create and retry the validation page after a few minutes.
+* Use an existing service principal, which has already propagated across regions and exists to pass into AKS at cluster create time.
+* If using automation scripts, add time delays between service principal creation and AKS cluster creation.
+* If using Azure portal, return to the cluster settings during create and retry the validation page after a few minutes.
 
 
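
As a minimal sketch of the first workaround in this hunk, create the service principal ahead of time and pass its credentials in at cluster create time (the `create-for-rbac` step, the principal name, and the placeholder values are illustrative assumptions):

```azure-cli
# Create a service principal in advance so it has time to propagate across regions.
az ad sp create-for-rbac --name myAKSClusterSP

# Pass the existing service principal's credentials to AKS (replace the placeholders).
az aks create --name myAKSCluster --resource-group myResourceGroup \
  --service-principal <appId> --client-secret <password>
```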

@@ -243,7 +243,7 @@ If you're using a version of Kubernetes that doesn't have the fix for this issue
 
 ### Large number of Azure Disks causes slow attach/detach
 
-When the number of Azure Disks attached to a node VM is larger than 10, attach and detach operations may be slow. This issue is a known issue and there are no workarounds at this time.
+When the number of Azure Disk attach/detach operations targeting a single node VM is larger than 10, or larger than 3 when targeting a single virtual machine scale set pool, the operations may be slower than expected because they're performed sequentially. This issue is a known limitation and there are no workarounds at this time. See the [User Voice item to support parallel attach/detach](https://feedback.azure.com/forums/216843-virtual-machines/suggestions/40444528-vmss-support-for-parallel-disk-attach-detach-for).
 
 ### Azure Disk detach failure leading to potential node VM in failed state
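
As a rough sketch, assuming you want to see how close a node VM is to the threshold, you can count its attached data disks (the resource group and VM names below are hypothetical placeholders; scale set nodes would use the `az vmss` equivalents):

```azure-cli
# Count the data disks currently attached to a single node VM (names are placeholders).
az vm show --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-12345678-0 \
  --query "length(storageProfile.dataDisks)" -o tsv
```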
