**articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md** (1 addition, 1 deletion)
@@ -18,7 +18,7 @@ For information about monitoring a volume’s capacity, see [Monitor the capacit
## Considerations
* Resize operations on Azure NetApp Files volumes don't result in data loss.
- * Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-).
+ * Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](maxfiles-concept.md).
* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md).
* Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation.
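Because the updated size can lag in the portal, it helps to confirm what the client actually sees before retrying. A minimal sketch, assuming a Linux client with the volume NFS-mounted (`/mnt/anf-volume` is a placeholder path, not from the original article):

```shell
# Check the size the client currently sees for the mounted volume.
# /mnt/anf-volume is a placeholder for an NFS-mounted Azure NetApp Files volume.
df -h /mnt/anf-volume

# For an exact byte count of the file system size at that mount point:
df --block-size=1 --output=size /mnt/anf-volume
```

If the host already reports the new size, the resize has taken effect even if the portal hasn't caught up yet.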
**articles/azure-netapp-files/azure-netapp-files-resource-limits.md** (5 additions, 64 deletions)
@@ -5,7 +5,7 @@ services: azure-netapp-files
author: b-hchen
ms.service: azure-netapp-files
ms.topic: conceptual
- ms.date: 07/23/2024
+ ms.date: 08/09/2024
ms.author: anfdocs
---
# Resource limits for Azure NetApp Files
@@ -35,8 +35,8 @@ The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single large volume on dedicated capacity (preview) | 2,048 TiB | No |
| Maximum size of a single file | 16 TiB | No |
| Maximum size of directory metadata in a single directory | 320 MB | No |
- | Maximum number of files in a single directory |*Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
- | Maximum number of files `maxfiles` per volume | See [`maxfiles`](#maxfiles)| Yes |
+ | Maximum number of files in a single directory |*Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](directory-sizes-concept.md#directory-limit). | No |
+ | Maximum number of files `maxfiles` per volume | See [`maxfiles`](maxfiles-concept.md)| Yes |
| Maximum number of export policy rules per volume | 5 | No |
| Maximum number of quota rules per volume | 100 | No |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No |
@@ -56,67 +56,6 @@ For more information, see [Capacity management FAQs](faq-capacity-management.md)
For limits and constraints related to Azure NetApp Files network features, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
- ## Determine if a directory is approaching the limit size <a name="directory-limit"></a>
-
- You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB). If you reach the maximum size limit for a single directory for Azure NetApp Files, the error `No space left on device` occurs.
-
- For a 320-MB directory, the number of blocks is 655,360, with each block size being 512 bytes (that is, 320x1024x1024/512). This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual maximum number of files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, use the `stat` command as follows to determine whether your directory is approaching its limit.
-
- Examples:
-
- ```console
- [makam@cycrh6rtp07 ~]$ stat bin
- File: 'bin'
- Size: 4096 Blocks: 8 IO Block: 65536 directory
-
- [makam@cycrh6rtp07 ~]$ stat tmp
- File: 'tmp'
- Size: 12288 Blocks: 24 IO Block: 65536 directory
-
- [makam@cycrh6rtp07 ~]$ stat tmp1
- File: 'tmp1'
- Size: 4096 Blocks: 8 IO Block: 65536 directory
- ```
-
- ## `Maxfiles` limits <a name="maxfiles"></a>
-
- Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. If you experience this issue, contact Microsoft technical support.
-
- The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume. The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size, using the following guidelines:
-
- - For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.
- - For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity, up to a maximum of 2,147,483,632.
- - For [large volumes](large-volumes-requirements-considerations.md), the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity, up to a default maximum of 15,938,355,048.
- - Each inode uses roughly 288 bytes of capacity in the volume. Having many inodes in a volume can consume a non-trivial amount of physical space on top of the capacity of the actual data.
- - If a file is less than 64 bytes in size, it's stored in the inode itself and doesn't use additional capacity. This capacity is only used when files are actually allocated to the volume.
- - Files larger than 64 bytes do consume additional capacity on the volume. For instance, if there are one million files greater than 64 bytes in an Azure NetApp Files volume, then approximately 274 MiB of capacity would belong to the inodes.
-
- The following table shows examples of the relationship between `maxfiles` values and volume sizes for regular volumes.
-
- | Volume size | Estimated `maxfiles` limit |
- | - | - |
- | 0 – 683 GiB | 21,251,126 |
- | 1 TiB (1,073,741,824 KiB) | 31,876,709 |
- | 10 TiB (10,737,418,240 KiB) | 318,767,099 |
- | 50 TiB (53,687,091,200 KiB) | 1,593,835,519 |
- | 100 TiB (107,374,182,400 KiB) | 2,147,483,632 |
-
- The following table shows examples of the relationship between `maxfiles` values and volume sizes for large volumes.
- To see the `maxfiles` allocation for a specific volume size, check the **Maximum number of files** field in the volume's overview pane.
-
- :::image type="content" source="./media/azure-netapp-files-resource-limits/maximum-number-files.png" alt-text="Screenshot of volume overview menu." lightbox="./media/azure-netapp-files-resource-limits/maximum-number-files.png":::
-
- You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume.
## Request limit increase
You can create an Azure support request to increase the adjustable limits from the [Resource Limits](#resource-limits) table.
@@ -155,6 +94,8 @@ You can create an Azure support request to increase the adjustable limits from t

**articles/azure-netapp-files/directory-sizes-concept.md** (new file)
title: Understand directory sizes in Azure NetApp Files
description: Learn how metadata impacts directory sizes in Azure NetApp Files.
services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: conceptual
ms.date: 07/23/2024
ms.author: anfdocs
---

# Understand directory sizes in Azure NetApp Files
When a file is created in a directory, an entry is added to a hidden index file within the Azure NetApp Files volume. This index file helps keep track of the existing inodes in a directory and helps expedite lookup requests for directories with a high number of files. As entries are added to this file, the file size increases (but never decreases) at a rate of approximately 512 bytes per entry, depending on the length of the file name. Longer file names add more size to the file. Symbolic links also add entries to this file. This concept is known as the directory size, which is a common element in all Linux-based file systems. The directory size isn't the maximum total number of files in a single Azure NetApp Files volume; that value is determined by the [`maxfiles` value](maxfiles-concept.md).

By default, when a new directory is created, it consumes 4 KiB (4,096 bytes), or eight 512-byte blocks. You can view the size of a newly created directory from a Linux client using the `stat` command.

```
# mkdir dirsize
# stat dirsize
File: ‘dirsize’
Size: 4096 Blocks: 8 IO Block: 32768 directory
```

Directory sizes are specific to a single directory and don't combine across directories. For example, if there are 10 directories in a volume, each can approach the 320-MiB directory size limit in a single volume.
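The growth behavior described above can be observed with standard tools. A sketch, run against any Linux file system, so the exact byte counts vary by file system (only on an Azure NetApp Files volume is the growth approximately 512 bytes per entry); the `/tmp/dirsize-demo` path is illustrative:

```shell
# Create a directory, record its metadata size, add entries, and compare.
mkdir -p /tmp/dirsize-demo
stat --format='before: %s bytes, %b blocks' /tmp/dirsize-demo

# Add 1,000 entries; longer file names consume more index space per entry.
for i in $(seq 1 1000); do
  touch "/tmp/dirsize-demo/file-with-a-deliberately-long-name-$i"
done

stat --format='after: %s bytes, %b blocks' /tmp/dirsize-demo
rm -rf /tmp/dirsize-demo
```

Note that deleting the files afterward doesn't shrink the directory's metadata size on most file systems, which mirrors the "never decreases" behavior noted above.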
## Determine if a directory is approaching the limit size <a name="directory-limit"></a>

You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MiB). If you reach the maximum size limit for a single directory in Azure NetApp Files, the error `No space left on device` occurs.

For a 320-MiB directory, the number of blocks is 655,360, with each block size being 512 bytes (that is, 320x1024x1024/512). This number translates to approximately 4-5 million files maximum for a 320-MiB directory. However, the actual maximum number of files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. For information on how to monitor directory sizes, see [Monitor `maxdirsize`](#monitor-maxdirsize).
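The block arithmetic above can be checked directly in a shell:

```shell
# 320 MiB of directory metadata divided into 512-byte blocks:
echo $(( 320 * 1024 * 1024 / 512 ))
# prints 655360
```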
## Directory size considerations

When dealing with a high file count environment, consider the following recommendations:

- Azure NetApp Files volumes support directory sizes of up to 320 MiB. This value can't be increased.
- Once a volume's directory size limit has been exceeded, clients display an out-of-space error even if there's available free space in the volume.
- For regular volumes, a 320-MiB directory size equates to roughly 4-5 million files in a single directory, depending on file name lengths.
- [Large volumes](large-volumes-requirements-considerations.md) have a different architecture than regular volumes.
- High file counts in a single directory can present performance problems when searching. When possible, limit the total size of a single directory to 2 MiB (roughly 27,000 files) when frequent searches are needed.
- If more files are needed in a single directory, adjust search performance expectations accordingly. While Azure NetApp Files indexes the directory file listings for performance, searches can take some time with high file counts.
- When designing your file system, avoid flat directory layouts. For information about different approaches to directory layouts, see [About directory layouts](#about-directory-layouts).
- To resolve issues where the directory size has been exceeded and new files can't be created, delete or move files out of the relevant directory.
## About directory layouts

The `maxdirsize` value can create concerns when you're using flat directory structures, where a single folder contains millions of files at a single level. Folder structures where files, folders, and subfolders are interspersed have a low impact on `maxdirsize`. There are several directory structure methodologies.

A **flat directory structure** is a single directory with many files below the same directory.

:::image type="content" source="./media/directory-sizes-concept/flat-structure.png" alt-text="Diagram of a flat directory structure.":::

A **wide directory structure** contains many top-level directories with files spread across all directories.

:::image type="content" source="./media/directory-sizes-concept/wide-structure.png" alt-text="Diagram of a wide directory structure.":::

A **deep directory structure** contains fewer top-level directories with many subdirectories. Although this structure provides fewer files per folder, file path lengths can become an issue if directory layouts are too deep and file paths become too long. For details on file path lengths, see [Understand file path lengths in Azure NetApp Files](understand-path-lengths.md).

:::image type="content" source="./media/directory-sizes-concept/deep-structure.png" alt-text="Diagram of a deep directory structure.":::
### Impact of flat directory structures in Azure NetApp Files

Flat directory structures (many files in a single directory or in a few directories) have a negative effect on a wide array of file systems, Azure NetApp Files volumes included. Potential issues include:

- Memory pressure
- CPU utilization
- Network performance/latency (especially during mass queries of files, `GETATTR` operations, and `READDIR` operations)

Azure NetApp Files large volumes are impacted by `maxdirsize` uniquely due to their design. Unlike a regular volume, a large volume uses remote hard links inside Azure NetApp Files to help redirect traffic across different storage devices, providing more scale and performance. When using flat directories, there's a higher ratio of internal remote hard links to local files. These remote hard links count against the total `maxdirsize` value, so a large volume might approach its `maxdirsize` limit faster than a regular volume.

For example, if a single directory has millions of files and generates roughly 85% remote hard links for the file system, you can expect `maxdirsize` to be exhausted at nearly twice the rate of a regular volume.

For best results with directory sizes in Azure NetApp Files:

- **Avoid flat directory structures in Azure NetApp Files.** Wide or deep directory structures work best, provided the [path length](understand-path-lengths.md) of the file or folder doesn't exceed NAS protocol standards.
- If flat directory structures are unavoidable, monitor the `maxdirsize` for the directories.
## Monitor `maxdirsize`

For a single directory, use the `stat` command to find the directory size.

```
# stat /mnt/dir_11/c5
```

Although the `stat` command can be used to check the directory size of a specific directory, it's not efficient to run it individually against every directory. To see a list of directory sizes sorted from largest to smallest, use a command that walks the file system while omitting snapshot directories from the query.

In the previous example, the directory size of `/mnt/dir_11/c5` is 316,084 KiB (308.6 MiB), which approaches the 320-MiB limit. That equates to around 4.1 million files.

```
# ls /mnt/dir_11/c5 | wc -l
4171624
```

In this case, consider corrective actions such as moving or deleting files.
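The exact walk command isn't shown in this excerpt. As one hedged sketch of the idea, assuming a Linux client with GNU `find` and `stat`, and `/mnt` as a placeholder mount point: prune `.snapshot` directories and sort the remaining directories by metadata size.

```shell
# List directories under the mount by metadata size (bytes), largest first,
# pruning .snapshot directories. /mnt is a placeholder for your NFS mount point.
find /mnt -name .snapshot -prune -o -type d -print0 \
  | xargs -0 stat --format='%s %n' \
  | sort -rn \
  | head -10
```

Directories whose sizes approach 335,544,320 bytes (320 MiB) are candidates for the corrective actions described above.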
**articles/azure-netapp-files/faq-capacity-management.md** (5 additions, 4 deletions)
@@ -18,7 +18,8 @@ Azure NetApp Files provides capacity pool and volume usage metrics. You can also
## How do I determine if a directory is approaching the limit size?
You can use the `stat` command from a client to see whether a directory is approaching the [maximum size limit](azure-netapp-files-resource-limits.md#resource-limits) for directory metadata (320 MB).
- See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#directory-limit) for the limit and calculation.
+
+ See [Understand directory sizes in Azure NetApp Files](directory-sizes-concept.md) for the limit and calculation.
## Does snapshot space count towards the usable / provisioned capacity of a volume?
@@ -29,17 +30,17 @@ Yes, the [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-
## Does Azure NetApp Files support auto-grow for volumes or capacity pools?
- No, Azure NetApp Files volumes and capacity pool do not auto-grow upon filling up. See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md).
+ No, Azure NetApp Files volumes and capacity pools don't auto-grow upon filling up. See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md).
You can use the community supported [Logic Apps ANFCapacityManager tool](https://github.com/ANFTechTeam/ANFCapacityManager) to manage capacity-based alert rules. The tool can automatically increase volume sizes to prevent your volumes from running out of space.
## Does the destination volume of a replication count towards hard volume quota?
- No, the destination volume of a replication does not count towards hard volume quota.
+ No, the destination volume of a replication doesn't count towards hard volume quota.
## Can I manage Azure NetApp Files through Azure Storage Explorer?
- No. Azure NetApp Files is not supported by Azure Storage Explorer.
+ No. Azure NetApp Files isn't supported by Azure Storage Explorer.
## Why is volume space not freed up immediately after deleting a large amount of data in a volume?