articles/aks/cluster-configuration.md (+4 -4)
@@ -56,13 +56,13 @@ If you want to create a regular Ubuntu 16.04 cluster, you can do so by omitting
### Existing clusters
-Configure a new nodepool to use Ubuntu 18.04. Use the `--aks-custom-headers` flag to set the Ubuntu 18.04 as the default OS for that nodepool.
+Configure a new node pool to use Ubuntu 18.04. Use the `--aks-custom-headers` flag to set Ubuntu 18.04 as the default OS for that node pool.
```azure-cli
az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
```
-If you want to create a regular Ubuntu 16.04 nodepools, you can do so by omitting the custom `--aks-custom-headers` tag.
+If you want to create regular Ubuntu 16.04 node pools, you can do so by omitting the custom `--aks-custom-headers` tag.
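For illustration (not part of this diff), a minimal sketch of the omission the `+` line describes, reusing the `myAKSCluster` and `myResourceGroup` names from the hunk above; the `ubuntu1604` pool name is an assumption:

```azure-cli
# Hypothetical sketch: with no --aks-custom-headers flag, the node pool
# falls back to the default Ubuntu 16.04 image.
az aks nodepool add --name ubuntu1604 --cluster-name myAKSCluster --resource-group myResourceGroup
```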
## Custom resource group name
@@ -75,9 +75,9 @@ To specify your own resource group name, install the aks-preview Azure CLI exten
az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
```
-The secondary resource group is automatically created by the Azure resource provider in your own subscription. Note that you can only specify the custom resource group name when the cluster is created.
+The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
-As you work with the node resource group, keep in mind that you cannot:
+As you work with the node resource group, keep in mind that you can't:
- Specify an existing resource group for the node resource group.
- Specify a different subscription for the node resource group.
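As an illustrative aside (an assumption, not part of this diff), one way to confirm which node resource group a cluster received is to query it after creation:

```azure-cli
# Print the name of the automatically created (or custom) node resource group.
az aks show --name myAKSCluster --resource-group myResourceGroup --query nodeResourceGroup -o tsv
```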
articles/aks/troubleshooting.md (+7 -7)
@@ -8,7 +8,7 @@ ms.date: 05/16/2020
# AKS troubleshooting
-When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally encounter problems. This article details some common problems and troubleshooting steps.
+When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally come across problems. This article details some common problems and troubleshooting steps.
## In general, where do I find information about debugging Kubernetes problems?
@@ -74,7 +74,7 @@ This error occurs when clusters enter a failed state for multiple reasons. Follo
* Scaling a cluster with advanced networking and **insufficient subnet (networking) resources**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow [these steps to request a resource quota increase](../azure-resource-manager/templates/error-resource-quota.md#solution) before trying to scale up again beyond initial quota limits.
2. Once the underlying cause of the upgrade failure is resolved, your cluster should return to a succeeded state. After verifying the succeeded state, retry the original operation.
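A minimal sketch of the scale-back-then-retry flow described above (the node count of 3 is an assumed illustrative value):

```azure-cli
# Scale back to a stable node count within quota, verify the cluster
# reaches a succeeded state, then retry the original operation.
az aks scale --name myAKSCluster --resource-group myResourceGroup --node-count 3
```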
-## I'm receiving errors when trying to upgrade or scale that state my cluster is being currently being upgraded or has failed upgrade
+## I'm receiving errors when trying to upgrade or scale that state my cluster is currently being upgraded or has a failed upgrade
*This troubleshooting assistance is directed from https://aka.ms/aks-pending-upgrade*
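As an illustrative aside (an assumption, not part of this diff), checking the cluster's provisioning state is a quick way to see whether an upgrade is still in flight or has failed:

```azure-cli
# Reports states such as Succeeded, Upgrading, or Failed.
az aks show --name myAKSCluster --resource-group myResourceGroup --query provisioningState -o tsv
```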
@@ -95,7 +95,7 @@ If you've moved your AKS cluster to a different subscription or the cluster's su
You may receive errors that indicate your AKS cluster isn't on a virtual machine scale set, such as the following example:
-**AgentPool '<agentpoolname>' has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
+**AgentPool `<agentpoolname>` has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
Features such as the cluster autoscaler or multiple node pools require virtual machine scale sets as the `vm-set-type`.
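A minimal sketch of creating a cluster with the required `vm-set-type` (the autoscaler bounds are assumed illustrative values):

```azure-cli
# Back the cluster with Virtual Machine Scale Sets so that features like
# the cluster autoscaler and multiple node pools are available.
az aks create --name myAKSCluster --resource-group myResourceGroup \
  --vm-set-type VirtualMachineScaleSets \
  --enable-cluster-autoscaler --min-count 1 --max-count 3
```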
@@ -132,9 +132,9 @@ Based on the output of the cluster status:
When you create an AKS cluster, it requires a service principal or managed identity to create resources on your behalf. AKS can automatically create a new service principal at cluster creation time or receive an existing one. When using an automatically created one, Azure Active Directory needs to propagate it to every region so the creation succeeds. When propagation takes too long, cluster creation fails validation because it can't find an available service principal.
Use the following workarounds for this issue:
-1. Use an existing service principal, which has already propagated across regions and exists to pass into AKS at cluster create time.
-2. If using automation scripts, add time delays between service principal creation and AKS cluster creation.
-3. If using Azure portal, return to the cluster settings during create and retry the validation page after a few minutes.
+* Use an existing service principal that has already propagated across regions and pass it into AKS at cluster create time (see the sketch after this list).
+* If using automation scripts, add time delays between service principal creation and AKS cluster creation.
+* If using the Azure portal, return to the cluster settings during create and retry the validation page after a few minutes.
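A minimal sketch of the first workaround; the service principal name is an assumption, and `<appId>` and `<password>` stand in for the values returned when it was created:

```azure-cli
# Create the service principal ahead of time so it can propagate
# across regions before the cluster is created.
az ad sp create-for-rbac --skip-assignment --name myAKSClusterSP

# Later, pass the existing service principal in at cluster create time.
az aks create --name myAKSCluster --resource-group myResourceGroup \
  --service-principal <appId> --client-secret <password>
```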
@@ -243,7 +243,7 @@ If you're using a version of Kubernetes that doesn't have the fix for this issue
### Large number of Azure Disks causes slow attach/detach
-When the number of Azure Disks attached to a node VM is larger than 10, attach and detach operations may be slow. This issue is a known issue and there are no workarounds at this time.
+When the number of Azure Disk attach/detach operations targeting a single node VM is larger than 10, or larger than 3 when targeting a single virtual machine scale set pool, they may be slower than expected because they are done sequentially. This issue is a known limitation and there are no workarounds at this time. [User voice item to support parallel attach/detach beyond this number](https://feedback.azure.com/forums/216843-virtual-machines/suggestions/40444528-vmss-support-for-parallel-disk-attach-detach-for).
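As an illustrative aside (the VM and resource group names are assumptions), one way to check how many data disks are attached to a given node VM:

```azure-cli
# Count the data disks currently attached to a node VM.
az vm show --name <nodeVmName> --resource-group <nodeResourceGroup> --query "length(storageProfile.dataDisks)"
```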
### Azure Disk detach failure leading to potential node VM in failed state