articles/storage/files/smb-performance.md (29 additions & 8 deletions)
@@ -4,7 +4,7 @@ description: Learn about ways to improve performance and throughput for SSD (pre
author: khdownie
ms.service: azure-file-storage
ms.topic: concept-article
-ms.date: 01/22/2025
+ms.date: 05/29/2025
ms.author: kendownie
ms.custom:
  - references_regions
@@ -165,9 +165,9 @@ This feature improves the following metadata APIs and can be used from both Wind
- Close
- Delete

-Currently this feature is only available for SSD file shares. There are no extra costs associated with using this feature.
+Currently this feature is only available for SSD file shares. There are no extra costs associated with using this feature. You can also register to increase file handle limits for SSD file shares (preview).

-### Register for the feature
+### Register for the metadata caching feature

To get started, register for the feature using the Azure portal or Azure PowerShell.

> - Although listed under Preview Features, we honor GA SLAs and will soon make this the default for all accounts, removing the need for registration.
-> - Allow 1-2 days for accounts to be onboarded once registration is complete.
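If you prefer to script the registration rather than use the portal, the flow with Azure PowerShell would look roughly like the following sketch. It assumes the preview is exposed as a Microsoft.Storage provider feature; the feature name shown is a placeholder, not a confirmed value, so use the name listed under **Preview features** in the portal.

```powershell
# Select the subscription that contains your SSD (premium) file share storage account.
Set-AzContext -Subscription "<your-subscription-id>"

# Register the metadata caching preview feature with the Microsoft.Storage resource provider.
# NOTE: the feature name below is a placeholder; use the exact name shown in the portal.
Register-AzProviderFeature -FeatureName "<metadata-caching-feature-name>" -ProviderNamespace "Microsoft.Storage"

# Check the registration state; it can take time to move from "Registering" to "Registered".
Get-AzProviderFeature -FeatureName "<metadata-caching-feature-name>" -ProviderNamespace "Microsoft.Storage"
```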
-
-### Regional availability
+### Regional availability for metadata caching

Supported regions:
@@ -268,6 +264,31 @@ Metadata caching can increase network throughput by more than 60% for metadata-h
:::image type="content" source="media/smb-performance/metadata-caching-throughput.jpg" alt-text="Chart showing network throughput with and without metadata caching." border="false":::

+## Register for increased file handle limits (preview)
+
+To increase the maximum number of concurrent handles per file and directory for SSD SMB file shares from 2,000 to 10,000, register for the preview feature using the Azure portal or Azure PowerShell. If you have questions, email [email protected].
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
+2. Search for and select **Preview features**.
+3. Select the **Type** filter and select **Microsoft.Storage**.
+4. Select **Azure Premium Files Increased Maximum Opened Handles Count** and then select **Register**.
+
+# [Azure PowerShell](#tab/powershell)
+
+To register your subscription using Azure PowerShell, run the following commands. Replace `<your-subscription-id>` and `<your-tenant-id>` with your own values.
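The commands themselves aren't included in this excerpt. As a rough sketch, assuming the preview is gated behind a Microsoft.Storage provider feature (the feature name below is a placeholder rather than a confirmed value), the registration would follow the standard pattern:

```powershell
# Sign in to the tenant and subscription you want to register.
Connect-AzAccount -Tenant "<your-tenant-id>" -Subscription "<your-subscription-id>"

# Register the increased-handle-limits preview feature with the Microsoft.Storage resource provider.
# NOTE: "<increased-handles-feature-name>" is a placeholder; use the feature name shown in the portal.
Register-AzProviderFeature -FeatureName "<increased-handles-feature-name>" -ProviderNamespace "Microsoft.Storage"

# Confirm the registration state.
Get-AzProviderFeature -FeatureName "<increased-handles-feature-name>" -ProviderNamespace "Microsoft.Storage"
```

Provider feature registration isn't instantaneous; re-run `Get-AzProviderFeature` until `RegistrationState` reports `Registered`.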
| Maximum number of files | Unlimited | Unlimited | Unlimited |
| Maximum IOPS (Data) | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
-| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS | Up to 12,000 IOPS | Up to 12,000 IOPS |
+| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS<sup>2</sup> | Up to 12,000 IOPS | Up to 12,000 IOPS |
| Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | Up to storage account limits |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
-| Maximum filename length<sup>2</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
+| Maximum filename length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
| Maximum length of individual pathname component (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
| Hard link limit (NFS only) | 178 | N/A | N/A |
| Maximum number of SMB Multichannel channels | 4 | N/A | N/A |
| Maximum number of stored access policies per file share | 5 | 5 | 5 |

<sup>1</sup> Metadata IOPS (open/close/delete). See [Monitor Metadata IOPS](analyze-files-metrics.md#monitor-utilization-by-metadata-iops) for guidance.<br>
-<sup>2</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
+<sup>2</sup> Scaling to 35,000 IOPS for SSD file shares requires [registering for the metadata caching feature](smb-performance.md#register-for-the-metadata-caching-feature).<br>
+<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.

### File scale targets

File scale targets apply to individual files stored in Azure file shares.
@@ -121,40 +122,13 @@ File scale targets apply to individual files stored in Azure file shares.
| Maximum data IOPS per file | 8,000 IOPS | 1,000 IOPS | 1,000 IOPS |
| Maximum concurrent handles for root directory | 10,000 handles | 10,000 handles | 10,000 handles |
-| Maximum concurrent handles per file and directory | 2,000 handles | 2,000 handles | 2,000 handles |
+| Maximum concurrent handles per file and directory | 2,000 handles\* | 2,000 handles | 2,000 handles |

-### Azure Files sizing guidance for Azure Virtual Desktop
-
-A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how various types of disk images consume handles. It also provides sizing guidance based on the technology you're using.
-
-#### FSLogix
-
-If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle, which should be to the file share. Azure Files can support a maximum of 10,000 users assuming you have the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + profile container (`profile_%username.vhd(x)`).
-
-If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.
-
-> [!WARNING]
-> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you're using. Your requirements might vary based on users, profile size, and workload.
-
-For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one more handle for the ODFC file.
-
-#### App attach with CimFS
+
+\* The maximum number of concurrent handles per file and directory is a soft limit for SSD SMB file shares. If you need to scale beyond this limit, you can [enable metadata caching](smb-performance.md#register-for-the-metadata-caching-feature) and register for [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview).

-If you're using [MSIX App attach or App attach](../../virtual-desktop/app-attach-overview.md) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](../../virtual-desktop/app-attach-overview.md#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.
-
-If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you need one handle each for three files in the directory. So if you have 100 VMs, you need 300 file handles.
-
-You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
-
-#### App attach with VHD/VHDX
-
-If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they're shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. So the constraint is whichever you hit first.
-
-In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.
-
-In another example, 100 VMs accessing 20 apps require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+
+### Azure Files sizing guidance for Azure Virtual Desktop

-If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
+
+A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop. See [Azure Files guidance for virtual desktop workloads](virtual-desktop-workloads.md) for more information.