
Commit 8b70b55

Merge pull request #296046 from khdownie/kendownie031025-2
metadata limits
2 parents: 2378d8d + 5b85739

3 files changed: +55 -20 lines


articles/storage/files/analyze-files-metrics.md

Lines changed: 25 additions & 1 deletion
@@ -5,7 +5,7 @@ author: khdownie
 services: storage
 ms.service: azure-file-storage
 ms.topic: how-to
-ms.date: 08/19/2024
+ms.date: 03/10/2025
 ms.author: kendownie
 ms.custom: monitoring, devx-track-azurepowershell
 ---
@@ -324,6 +324,30 @@ Compared against the **Bandwidth by Max MiB/s**, we achieved 123 MiB/s at peak.

 :::image type="content" source="media/analyze-files-metrics/bandwidth-by-max-mibs.png" alt-text="Screenshot showing bandwidth by max MIBS." lightbox="media/analyze-files-metrics/bandwidth-by-max-mibs.png" border="false":::

+### Monitor utilization by metadata IOPS
+
+On Premium SSD and Standard HDD file shares, metadata IOPS currently scale up to 12,000. This means that running a metadata-heavy workload with a high volume of open, close, or delete operations increases the likelihood of metadata IOPS throttling. This limit is independent of the file share's overall IOPS capacity on Standard or its provisioned IOPS on Premium.
+
+Because no two metadata-heavy workloads follow the same usage pattern, it can be challenging to proactively monitor your workload and set accurate alerts.
+
+To address this, we've introduced two metadata-specific metrics for Azure file shares:
+
+- **Success with Metadata Warning:** Indicates that metadata IOPS are approaching their limit and might be throttled if they remain high or continue increasing. A rise in the volume or frequency of these warnings suggests an increasing risk of metadata throttling.
+
+- **Success with Metadata Throttling:** Indicates that metadata IOPS have exceeded the file share's capacity, resulting in throttling. Throttled operations don't fail and eventually succeed after retries, but latency is affected while throttling is in effect.
+
+To view these metrics in Azure Monitor, select the **Transactions** metric and use **Apply splitting** on the response type dimension. The metadata response types appear in the drop-down only if that activity occurred within the selected time range.
+
+The following chart illustrates a workload that experienced a sudden increase in metadata IOPS (transactions), triggering Success with Metadata Warnings, which indicates a risk of metadata throttling. In this example, the workload subsequently reduced its transaction volume, preventing metadata throttling from occurring.
+
+:::image type="content" source="media/analyze-files-metrics/metadata-warnings.png" alt-text="Screenshot showing Metadata Warnings by response type." lightbox="media/analyze-files-metrics/metadata-warnings.png" border="false":::
+
+If your workload encounters **Success with Metadata Warning** or **Success with Metadata Throttling** response types, consider implementing one or more of the following recommendations:
+
+- For Premium SMB file shares, enable [Metadata Caching](smb-performance.md#metadata-caching-for-premium-smb-file-shares).
+- Distribute (shard) your workload across multiple file shares.
+- Reduce the volume of metadata IOPS.
+
 ## Related content

 - [Monitor Azure Files](storage-files-monitoring.md)
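
The new section above describes splitting the **Transactions** metric by response type in the portal. As an illustrative sketch rather than anything taken from this change, the same check can be scripted with the Az.Monitor PowerShell module; the placeholder resource values, the exact metadata response-type strings, and the output property names are assumptions to verify against your environment.

```powershell
# Illustrative sketch (not part of this change): list Transactions by response
# type for a storage account's file service so that metadata warning/throttling
# response types can be spotted. Requires the Az.Accounts and Az.Monitor
# modules and a signed-in session (Connect-AzAccount).

# Placeholder resource ID for the file service of a storage account.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default"

# Request the last 24 hours of the Transactions metric at a 1-hour grain and
# split the results by the ResponseType dimension ('*' returns every value
# as its own time series).
$metrics = Get-AzMetric -ResourceId $resourceId `
    -MetricName "Transactions" `
    -StartTime (Get-Date).AddDays(-1) `
    -EndTime (Get-Date) `
    -TimeGrain ([TimeSpan]::FromHours(1)) `
    -AggregationType Total `
    -MetricFilter "ResponseType eq '*'"

# Report the total transactions for each metadata-related response type.
# The exact dimension values (for example, SuccessWithMetadataWarning) are an
# assumption here; confirm them against the drop-down shown in the portal.
foreach ($series in $metrics.Timeseries) {
    $responseType = $series.Metadatavalues[0].Value
    if ($responseType -like "*Metadata*") {
        $total = ($series.Data | Measure-Object -Property Total -Sum).Sum
        Write-Output ("{0}: {1} transactions in the last 24 hours" -f $responseType, $total)
    }
}
```

Splitting by the dimension in a script mirrors the **Apply splitting** step in metrics explorer and makes it straightforward to feed the same query into an alert or a scheduled check.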
Binary image file changed (130 KB); content not shown.

articles/storage/files/storage-files-scale-targets.md

Lines changed: 30 additions & 19 deletions
@@ -4,15 +4,16 @@ description: Learn about the scalability and performance targets for Azure Files
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: conceptual
-ms.date: 03/04/2025
+ms.date: 03/11/2025
 ms.author: kendownie
 ms.custom: references_regions
 ---

 # Scalability and performance targets for Azure Files and Azure File Sync
+
 [Azure Files](storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) and Network File System (NFS) file system protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.

-The targets listed here might be affected by other variables in your deployment. For example, the performance of I/O for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements.
+Other variables in your deployment can affect the targets listed in this article. For example, your SMB client's behavior and your available network bandwidth might impact I/O performance. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements.

 ## Applies to
 | Management model | Billing model | Media tier | Redundancy | SMB | NFS |
@@ -44,7 +45,7 @@ Storage account scale targets apply at the storage account level. There are two
 | SKUs | <ul><li>Premium_LRS</li><li>Premium_ZRS</li></ul> | <ul><li>StandardV2_LRS</li><li>StandardV2_ZRS</li><li>StandardV2_GRS</li><li>StandardV2_GZRS</li></ul> | <ul><li>Standard_LRS</li><li>Standard_ZRS</li><li>Standard_GRS</li><li>Standard_GZRS</li></ul> |
 | Number of storage accounts per region per subscription | 250 | 250 | 250 |
 | Maximum storage capacity | 100 TiB | 4 PiB | 5 PiB |
-| Maximum number of file shares | 1024 (recommended to use 50 or fewer) | 50 | Unlimited (recommended to use 50 or fewer) |
+| Maximum number of file shares | 1024 (recommended using 50 or fewer) | 50 | Unlimited (recommended using 50 or fewer) |
 | Maximum IOPS | 102,400 IOPS | 50,000 IOPS | 20,000 IOPS |
 | Maximum throughput | 10,340 MiB / sec | 5,120 MiB / sec | <ul><li>Select regions:<ul><li>Ingress: 7,680 MiB / sec</li><li>Egress: 25,600 MiB / sec</li></ul></li><li>Default:<ul><li>Ingress: 3,200 MiB / sec</li><li>Egress: 6,400 MiB / sec</li></ul></li></ul> |
 | Maximum number of virtual network rules | 200 | 200 | 200 |
@@ -98,16 +99,18 @@ Azure file share scale targets apply at the file share level.
 | Minimum storage size | 100 GiB (provisioned) | 32 GiB (provisioned) | 0 bytes |
 | Maximum storage size | 100 TiB | 256 TiB | 100 TiB |
 | Maximum number of files | Unlimited | Unlimited | Unlimited |
-| Maximum IOPS | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
+| Maximum IOPS (Data) | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
+| Maximum IOPS (Metadata<sup>1</sup>) | Up to 12,000 IOPS | Up to 12,000 IOPS | Up to 12,000 IOPS |
 | Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | Up to storage account limits |
 | Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
-| Maximum filename length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
-| Maximum length of individual pathname component<sup>2</sup> (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
+| Maximum filename length<sup>2</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
+| Maximum length of individual pathname component (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
 | Hard link limit (NFS only) | 178 | N/A | N/A |
 | Maximum number of SMB Multichannel channels | 4 | N/A | N/A |
 | Maximum number of stored access policies per file share | 5 | 5 | 5 |

-<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
+<sup>1</sup> Metadata IOPS (open/close/delete). See [Monitor Metadata IOPS](analyze-files-metrics.md#monitor-utilization-by-metadata-iops) for guidance.<br>
+<sup>2</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.

 ### File scale targets
 File scale targets apply to individual files stored in Azure file shares.
@@ -122,7 +125,7 @@ File scale targets apply to individual files stored in Azure file shares.

 ### Azure Files sizing guidance for Azure Virtual Desktop

-A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how handles are consumed by various types of disk images, and provides sizing guidance depending on the technology you're using.
+A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how various types of disk images consume handles. It also provides sizing guidance based on the technology you're using.

 #### FSLogix

@@ -131,9 +134,9 @@ If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslog
 If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.

 > [!WARNING]
-> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you've created. Your requirements might vary based on users, profile size, and workload.
+> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you're using. Your requirements might vary based on users, profile size, and workload.

-For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
+For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one more handle for the ODFC file.

 #### App attach with CimFS

@@ -145,11 +148,11 @@ You might run out of file handles if the number of VMs per app exceeds 2,000. In

 #### App attach with VHD/VHDX

-If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they are shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. So the constraint is whichever you hit first.
+If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they're shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. So the constraint is whichever you hit first.

 In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.

-In another example, 100 VMs accessing 20 apps require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+In another example, 100 VMs accessing 20 apps require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.

 If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
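
The handle arithmetic above lends itself to a quick script-based sanity check. The following sketch is illustrative only: the VM and app counts are example values, and the limits are the approximate figures quoted in this article.

```powershell
# Illustrative sketch (not part of this change): check an App attach VHD/VHDX
# layout against the documented handle limits: ~10,000 concurrent handles on
# the share's root directory and ~2,000 concurrent handles per file/directory.

$vmCount  = 100   # session host VMs mounting images from this share (example)
$appCount = 20    # shared VHD/VHDX app images on this share (example)

# Each VM opens a root directory handle for every image it mounts.
$rootDirectoryHandles = $vmCount * $appCount

# Each VM also holds one handle on the image file itself (plus one on its
# parent directory), so per-file pressure scales with the VM count.
$handlesPerImageFile = $vmCount

"Root directory handles : $rootDirectoryHandles (limit ~10,000)"
"Handles per image file : $handlesPerImageFile (limit ~2,000)"

if ($rootDirectoryHandles -ge 10000 -or $handlesPerImageFile -ge 2000) {
    "Over a limit - consider distributing the images across more file shares."
} else {
    "Within the published handle limits for a single file share."
}
```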

@@ -164,7 +167,7 @@ The following table indicates which targets are soft, representing the Microsoft
 | Sync groups per Storage Sync Service | 200 sync groups | Yes |
 | Registered servers per Storage Sync Service | 100 servers | Yes |
 | Private endpoints per Storage Sync Service | 100 private endpoints | Yes |
-| Cloud endpoints per sync group | 1 cloud endpoint | Yes |
+| Cloud endpoints per sync group | One cloud endpoint | Yes |
 | Server endpoints per sync group | 100 server endpoints | Yes |
 | Server endpoints per server | 30 server endpoints | Yes |
 | File system objects (directories and files) per sync group | 100 million objects | No |
@@ -178,7 +181,15 @@ The following table indicates which targets are soft, representing the Microsoft

 ## Azure File Sync performance metrics

-Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon many factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second.
+Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon many factors in your infrastructure, including:
+
+- Windows Server and the underlying disk configuration
+- Network bandwidth between the server and the Azure storage
+- File size
+- Total dataset size
+- Activity on the dataset
+
+Because Azure File Sync works on the file level, you should measure the performance characteristics of an Azure File Sync-based solution by the number of objects (files and directories) processed per second.

 The following table indicates the Azure File Sync performance targets:

@@ -189,15 +200,15 @@ The following table indicates the Azure File Sync performance targets:
 | Namespace Download Throughput | 400 objects per second per server endpoint |
 | Full Download Throughput | 60 objects per second per server endpoint |

-> [!Note]
+> [!NOTE]
 > The actual performance will depend on multiple factors as outlined in the beginning of this section.

 As a general guide for your deployment, you should keep a few things in mind:

-- The object throughput approximately scales in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the server and network.
-- The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
-- When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service resources. As a result, upload performance is impacted. In extreme cases, some sync sessions fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced.
-- If cloud tiering is enabled, you are likely to observe better download performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.
+- Object throughput approximately scales in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the server and network.
+- Object throughput is inversely proportional to the MiB per second throughput. For smaller files, you experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
+- When many server endpoints in the same sync group are syncing at the same time, they're contending for cloud service resources, so upload performance is impacted. In extreme cases, some sync sessions can't access the resources and fail. However, those sync sessions resume shortly and eventually succeed once the congestion is reduced.
+- If cloud tiering is enabled, you're likely to observe better download performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they're changed on any of the endpoints. For any tiered or newly created files, the agent doesn't download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they're accessed by the user.
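
As a rough illustration of how the objects-per-second targets above translate into elapsed time, the following sketch estimates initial download times for an example dataset; the object count is invented, and actual throughput depends on the factors listed earlier in this section.

```powershell
# Illustrative sketch (not part of this change): back-of-the-envelope estimate
# of initial download time for a new server endpoint, using the published
# per-server-endpoint targets from the table above. Treat the results as a
# rough ceiling, not a prediction.

$totalObjects = 5000000        # files + directories in the sync group (example)

$namespaceDownloadRate = 400   # objects per second (namespace download target)
$fullDownloadRate      = 60    # objects per second (full download target)

$namespaceHours = [math]::Round($totalObjects / $namespaceDownloadRate / 3600, 1)
$fullHours      = [math]::Round($totalObjects / $fullDownloadRate / 3600, 1)

"Namespace download of $totalObjects objects : ~$namespaceHours hours"
"Full download of $totalObjects objects      : ~$fullHours hours"
```

With cloud tiering enabled, the namespace figure is the more relevant estimate, because the data of tiered files isn't downloaded up front.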

 ## See also

