
Commit dbeb5ee

Authored by Simonx Xu
Merge pull request #8652 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/SupportArticles-docs (branch main)
2 parents (4ae6508 + e3c97e8), commit dbeb5ee

File tree: 1 file changed (+47, -21 lines)


support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md

Lines changed: 47 additions & 21 deletions
@@ -1,7 +1,7 @@
 ---
-title: Unable to mount Azure disk volumes
+title: Unable to Mount Azure Disk Volumes
 description: Describes errors that occur when mounting Azure disk volumes fails, and provides solutions.
-ms.date: 09/06/2024
+ms.date: 03/22/2025
 author: genlin
 ms.author: genli
 ms.reviewer: chiragpa, akscsscic, v-weizhu
@@ -14,17 +14,18 @@ This article provides solutions for errors that cause the mounting of Azure disk
 
 ## Symptoms
 
-You're trying to deploy a Kubernetes resource such as a Deployment or a StatefulSet, in an Azure Kubernetes Service (AKS) environment. The deployment will create a pod that should mount a PersistentVolumeClaim (PVC) referencing an Azure disk.
+You're trying to deploy a Kubernetes resource, such as a Deployment or a StatefulSet, in an Azure Kubernetes Service (AKS) environment. The deployment creates a pod that should mount a PersistentVolumeClaim (PVC) that references an Azure disk.
 
-However, the pod stays in the **ContainerCreating** status. When you run the `kubectl describe pods` command, you may see one of the following errors, which causes the mounting operation to fail:
+However, the pod stays in the **ContainerCreating** status. When you run the `kubectl describe pods` command, you may see one of the following errors that cause the mounting operation to fail:
 
 - [Disk cannot be attached to the VM because it is not in the same zone as the VM](#error1)
 - [Client '\<client-ID>' with object id '\<object-ID>' doesn't have authorization to perform action over scope '\<disk name>' or scope is invalid](#error2)
 - [Volume is already used by pod](#error3)
 - [StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set](#error4)
 - [ApplyFSGroup failed for vol](#error5)
+- [Node(s) exceed max volume count](#error6)
 
-See the following sections for error details, possible causes and solutions.
+See the following sections for error details, possible causes, and solutions.
 
 ## <a id="error1"></a>Disk cannot be attached to the VM because it is not in the same zone as the VM
 
@@ -47,13 +48,13 @@ RawError:
 
 ### Cause: Disk and node hosting pod are in different zones
 
-In AKS, the default and other built-in StorageClasses for Azure disks use [locally redundant storage (LRS)](/azure/storage/common/storage-redundancy#locally-redundant-storage). These disks are deployed in [availability zones](/azure/aks/availability-zones). If you use the node pool in AKS with availability zones, and the pod is scheduled on a node that's in another availability zone different from the disk, you may get this error.
+In AKS, the default and other built-in storage classes for Azure disks use [locally redundant storage (LRS)](/azure/storage/common/storage-redundancy#locally-redundant-storage). These disks are deployed in [availability zones](/azure/aks/availability-zones). If you use the node pool in AKS together with availability zones, and the pod is scheduled on a node that's in another availability zone that's different from the disk, you might experience this error.
 
-To resolve this error, use one of the following solutions:
+To resolve this error, use one of the following solutions.
 
 ### Solution 1: Ensure disk and node hosting the pod are in the same zone
 
-To make sure the disk and node that hosts the pod are in the same availability zone, use [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
+To make sure that the disk and node that host the pod are in the same availability zone, use [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
 
 Refer to the following script as an example:
 
@@ -69,19 +70,19 @@ affinity:
 - <region>-Y
 ```
 
-\<region> is the region of the AKS cluster. `Y` represents the availability zone of the disk, for example, westeurope-3.
+\<region> is the region of the AKS cluster. `Y` represents the availability zone of the disk (for example, westeurope-3).
 
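In general, the node-affinity constraint that Solution 1 describes pins the pod to the disk's zone through the standard `topology.kubernetes.io/zone` node label. Here's a minimal, self-contained sketch of that shape; the pod name, image, PVC name, and zone value are placeholders rather than values taken from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-zonal-disk                        # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone   # zone label that AKS sets on each node
                operator: In
                values:
                  - <region>-3                     # the availability zone that holds the disk
  containers:
    - name: app
      image: <your-image>                          # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: <your-azure-disk-pvc>           # placeholder PVC backed by the Azure disk
```

With this constraint in place, the scheduler only considers nodes in the same zone as the disk, so the attach operation can succeed.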
 ### Solution 2: Use zone-redundant storage (ZRS) disks
 
 [ZRS](/azure/storage/common/storage-redundancy#zone-redundant-storage) disk volumes can be scheduled on all zone and non-zone agent nodes. For more information, see [Azure disk availability zone support](/azure/aks/availability-zones#azure-disk-availability-zone-support).
 
-To use a ZRS disk, create a new storage class with `Premium_ZRS` or `StandardSSD_ZRS`, and then deploy the PersistentVolumeClaim (PVC) referencing the storage.
+To use a ZRS disk, create a storage class by using `Premium_ZRS` or `StandardSSD_ZRS`, and then deploy the PersistentVolumeClaim (PVC) that references the storage.
 
 For more information about parameters, see [Driver Parameters](/azure/aks/azure-csi-files-storage-provision#storage-class-parameters-for-dynamic-persistentvolumes)
 
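For Solution 2, a storage class that uses one of the ZRS SKUs plus a PVC that references it can look like the following sketch. It assumes the Azure Disk CSI driver (`disk.csi.azure.com`); the class name, claim name, and size are illustrative, not values from the article:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-zrs                   # illustrative name
provisioner: disk.csi.azure.com           # Azure Disk CSI driver
parameters:
  skuName: Premium_ZRS                    # or StandardSSD_ZRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zrs-disk-pvc                      # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi-zrs
  resources:
    requests:
      storage: 128Gi
```

Because the disk itself is zone redundant, the volume can follow the pod to whichever zone the scheduler picks.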
 ### Solution 3: Use Azure Files
 
-[Azure Files](/azure/storage/files/storage-files-introduction) is mounted by using NFS or SMB throughout network and it's not associated with availability zones.
+[Azure Files](/azure/storage/files/storage-files-introduction) is mounted by using NFS or SMB throughout network. It's not associated with availability zones.
 
 For more information, see the following articles:
 
@@ -107,11 +108,11 @@ RawError:
 
 ### Cause: AKS identity doesn't have required authorization over disk
 
-AKS cluster's identity doesn't have the required authorization over the Azure disk. This issue occurs when the disk is created in another resource group other than the infrastructure resource group of the AKS cluster.
+AKS cluster's identity doesn't have the required authorization over the Azure disk. This issue occurs if the disk is created in a resource group other than the infrastructure resource group of the AKS cluster.
 
 ### Solution: Create role assignment that includes required authorization
 
-Create a role assignment that includes the authorization required as per the error. We recommend that you use a [Contributor](/azure/role-based-access-control/built-in-roles/general#contributor) role. If you want to use another built-in role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+Create a role assignment that includes the authorization required per the error. We recommend that you use a [Contributor](/azure/role-based-access-control/built-in-roles/general#contributor) role. If you want to use another built-in role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
 
 To assign a Contributor role, use one of the following methods:
 
@@ -135,9 +136,9 @@ Here are details of this error:
 
 ### Cause: Disk is mounted to multiple pods hosted on different nodes
 
-An Azure disk can be mounted only as [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), which makes it available to one node in AKS. That means it can be attached to only one node and mounted only to a pod hosted by that node. If you mount the same disk to a pod on another node, you'll get this error because the disk is already attached to a node.
+An Azure disk can be mounted only as [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). This makes it available to one node in AKS. That means that it can be attached to only one node and mounted to only a pod that's hosted by that node. If you mount the same disk to a pod on another node, you experience this error because the disk is already attached to a node.
 
-### Solution: Ensure disk isn't mounted by multiple pods hosted on different nodes
+### Solution: Make sure disk isn't mounted by multiple pods hosted on different nodes
 
 To resolve this error, refer to [Multi-Attach error](https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md#25-multi-attach-error).
 
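To illustrate the `ReadWriteOnce` constraint described in the cause above, here's a sketch of a typical Azure disk claim; the claim name and size are placeholders, and `managed-csi` is the built-in AKS storage class for Azure disks:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-node-data        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce             # the backing Azure disk attaches to one node at a time
  storageClassName: managed-csi # built-in AKS storage class for Azure disks
  resources:
    requests:
      storage: 64Gi
```

If two pods that reference this claim land on different nodes, the second pod stays in **ContainerCreating** with the multi-attach error.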
@@ -163,11 +164,11 @@ desc = Attach volume "/subscriptions/<subscription-ID>/resourceGroups/<disk-reso
 
 ### Cause: Ultra disk is attached to node pool with ultra disks disabled
 
-This error indicates that an [ultra disk](/azure/virtual-machines/disks-enable-ultra-ssd) is trying to be attached to a node pool with ultra disks disabled. By default, an ultra disk is disabled on AKS node pools.
+This error indicates that an [ultra disk](/azure/virtual-machines/disks-enable-ultra-ssd) is trying to be attached to a node pool by having ultra disks disabled. By default, an ultra disk is disabled on AKS node pools.
 
 ### Solution: Create a node pool that can use ultra disks
 
-To use ultra disks on AKS, create a node pool with ultra disks support by using the `--enable-ultra-ssd` flag. For more information, see [Use Azure ultra disks on Azure Kubernetes Service](/azure/aks/use-ultra-disks).
+To use ultra disks on AKS, create a node pool that has ultra disks support by using the `--enable-ultra-ssd` flag. For more information, see [Use Azure ultra disks on Azure Kubernetes Service](/azure/aks/use-ultra-disks).
 
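After a node pool exists with `--enable-ultra-ssd`, the ultra disk itself is typically requested through a storage class that uses the `UltraSSD_LRS` SKU. A minimal sketch, assuming the Azure Disk CSI driver; the class name is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc              # placeholder name
provisioner: disk.csi.azure.com    # Azure Disk CSI driver
parameters:
  skuName: UltraSSD_LRS            # the SKU named in the error message
  cachingMode: None                # ultra disks don't support host caching
volumeBindingMode: WaitForFirstConsumer
```

`WaitForFirstConsumer` delays disk creation until the pod is scheduled, which helps keep the disk on a node pool that actually supports ultra disks.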
 ## <a id="error5"></a>ApplyFSGroup failed for vol
 
@@ -177,20 +178,45 @@ Here are details of this error:
 
 ### Cause: Changing ownership and permissions for large volume takes much time
 
-When there's a large number of files already present in the volume, if a `securityContext` with `fsGroup` is in place, this error may occur. When there are lots of files and directories under one volume, changing the group ID would consume much time. It's also mentioned in the Kubernetes official documentation [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods):
+If there are many files already present in the volume, and if a `securityContext` that uses `fsGroup` exists, this error might occur. If there are lots of files and directories in one volume, changing the group ID would consume excessive time. Additionally, the Kubernetes official documentation [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) mentions this situation:
 
 "By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the `fsGroup` specified in a Pod's `securityContext` when that volume is mounted. For large volumes, checking and changing ownership and permissions can take much time, slowing Pod startup. You can use the `fsGroupChangePolicy` field inside a `securityContext` to control the way that Kubernetes checks and manages ownership and permissions for a volume."
 
 ### Solution: Set fsGroupChangePolicy field to OnRootMismatch
 
-To resolve this error, we recommend that you set `fsGroupChangePolicy: "OnRootMismatch"` in the `securityContext` of a Deployment, a StatefulSet or a pod.
+To resolve this error, we recommend that you set `fsGroupChangePolicy: "OnRootMismatch"` in the `securityContext` of a Deployment, a StatefulSet, or a pod.
 
-OnRootMismatch: Only change permissions and ownership if permission and ownership of root directory doesn't match with expected permissions of the volume. This setting could help shorten the time it takes to change ownership and permission of a volume.
+OnRootMismatch: Change permissions and ownership only if permission and ownership of the root directory doesn't match the expected permissions of the volume. This setting could help shorten the time that it takes to change ownership and permission of a volume.
 
 For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
 
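As a concrete illustration of that setting, the `securityContext` of a pod (or of a Deployment's or StatefulSet's pod template) can look like the following sketch; the names, group ID, image, and mount path are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fsgroup                   # placeholder name
spec:
  securityContext:
    fsGroup: 2000                          # placeholder group ID applied to the volume
    fsGroupChangePolicy: "OnRootMismatch"  # skip the recursive ownership change when the root already matches
  containers:
    - name: app
      image: <your-image>                  # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: <your-azure-disk-pvc>   # placeholder PVC backed by the Azure disk
```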
-## More information 
+## <a id="error6"></a>Node(s) exceed max volume count
+
+Here are details of this error:
+
+```output
+Events:
+Type Reason Age From Message
+---- ------ ---- ---- -------
+Warning FailedScheduling 25s default-scheduler 0/8 nodes are available: 8 node(s) exceed max volume count. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod..
+```
+### Cause: Maximum disk limit is reached
+
+The node has reached its maximum disk capacity. In AKS, the number of disks per node depends on the VM size that's configured for the node pool.
+
+### Solution
+
+To resolve the issue, use one of the following methods:
+
+- Add a new node pool with a VM size that supports more disk limit.
+- Scale the node pool.
+- Delete existing disks from the node.
+
+Additionally, make sure that the number of disks per node does not exceed the [Kubernetes default limits](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
+
+## More information
 
 For more Azure Disk known issues, see [Azure disk plugin known issues](https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md).
 
 [!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
+