
Commit 7d226e0

Merge pull request #300421 from khdownie/kendownie-workloads-vdi
VDI workload Azure Files
2 parents b45a419 + e7f897a commit 7d226e0


4 files changed: +185 -45 lines changed


articles/storage/files/TOC.yml

Lines changed: 6 additions & 2 deletions
@@ -233,8 +233,6 @@
 href: ../common/storage-initiate-account-failover.md?toc=/azure/storage/files/toc.json
 - name: SSD file shares redundancy support
 href: redundancy-premium-file-shares.md
-- name: Deploy SQL Server Failover Cluster with SSD file shares
-href: /azure/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure?toc=/azure/storage/files/toc.json
 - name: Performance, scale, and cost
 items:
 - name: Understanding performance
@@ -259,6 +257,12 @@
 href: analyze-files-metrics.md
 - name: Create alerts
 href: files-monitoring-alerts.md
+- name: Workloads
+items:
+- name: Virtual desktops
+href: virtual-desktop-workloads.md
+- name: SQL Server
+href: /azure/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure?toc=/azure/storage/files/toc.json
 - name: Application development
 items:
 - name: Overview

articles/storage/files/smb-performance.md

Lines changed: 29 additions & 8 deletions
@@ -4,7 +4,7 @@ description: Learn about ways to improve performance and throughput for SSD (pre
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: concept-article
-ms.date: 01/22/2025
+ms.date: 05/29/2025
 ms.author: kendownie
 ms.custom:
 - references_regions
@@ -165,9 +165,9 @@ This feature improves the following metadata APIs and can be used from both Wind
 - Close
 - Delete

-Currently this feature is only available for SSD file shares. There are no extra costs associated with using this feature.
+Currently this feature is only available for SSD file shares. There are no extra costs associated with using this feature. You can also register to increase file handle limits for SSD file shares (preview).

-### Register for the feature
+### Register for the metadata caching feature

 To get started, register for the feature using the Azure portal or Azure PowerShell.

@@ -188,11 +188,7 @@ Register-AzProviderFeature -FeatureName AzurePremiumFilesMetadataCacheFeature -P
 ```
 ---

-> [!IMPORTANT]
-> - Although listed under Preview Features, we honor GA SLAs and will soon make this the default for all accounts, removing the need for registration.
-> - Allow 1-2 days for accounts to be onboarded once registration is complete.
-
-### Regional availability
+### Regional availability for metadata caching

 Supported regions:
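
The hunks above rename the registration heading and keep the existing `Register-AzProviderFeature -FeatureName AzurePremiumFilesMetadataCacheFeature` call as context. Registration only submits the request, so a quick status check can be useful; the following sketch is illustrative only (not part of this change) and assumes the Az.Resources PowerShell module is installed:

```azurepowershell-interactive
# Illustrative only, not part of this change: the metadata caching feature is
# active once RegistrationState reports "Registered" for the subscription.
Get-AzProviderFeature -FeatureName AzurePremiumFilesMetadataCacheFeature -ProviderNamespace Microsoft.Storage
```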

@@ -268,6 +264,31 @@ Metadata caching can increase network throughput by more than 60% for metadata-h

 :::image type="content" source="media/smb-performance/metadata-caching-throughput.jpg" alt-text="Chart showing network throughput with and without metadata caching." border="false":::

+## Register for increased file handle limits (preview)
+
+To increase the maximum number of concurrent handles per file and directory for SSD SMB file shares from 2,000 to 10,000, register for the preview feature using the Azure portal or Azure PowerShell. If you have questions, email [email protected].
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
+2. Search for and select **Preview features**.
+3. Select the **Type** filter and select **Microsoft.Storage**.
+4. Select **Azure Premium Files Increased Maximum Opened Handles Count** and then select **Register**.
+
+# [Azure PowerShell](#tab/powershell)
+
+To register your subscription using Azure PowerShell, run the following commands. Replace `<your-subscription-id>` and `<your-tenant-id>` with your own values.
+
+```azurepowershell-interactive
+Connect-AzAccount -SubscriptionId <your-subscription-id> -TenantId <your-tenant-id>
+Register-AzProviderFeature -FeatureName HigherHandlesCountOnSmb -ProviderNamespace Microsoft.Storage
+```
+---
+
+> [!IMPORTANT]
+> - Although listed under Preview Features, we honor GA SLAs and will soon make this the default for all accounts, removing the need for registration.
+> - Allow 2-6 hours for accounts to be onboarded once registration is complete.
+
 ## Next steps

 - [Check SMB Multichannel status](files-smb-protocol.md#smb-multichannel)
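
The section added above registers the `HigherHandlesCountOnSmb` feature and notes a 2-6 hour onboarding window. A minimal way to check whether onboarding has finished, again illustrative only (not part of this change) and assuming the Az.Resources PowerShell module, might look like this:

```azurepowershell-interactive
# Illustrative only, not part of this change: report whether the higher handle
# count preview feature has finished onboarding for the current subscription.
$feature = Get-AzProviderFeature -FeatureName HigherHandlesCountOnSmb -ProviderNamespace Microsoft.Storage
if ($feature.RegistrationState -eq 'Registered') {
    Write-Output 'Increased file handle limits are active for this subscription.'
}
else {
    Write-Output "Still onboarding (current state: $($feature.RegistrationState))."
}
```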

articles/storage/files/storage-files-scale-targets.md

Lines changed: 9 additions & 35 deletions
@@ -4,7 +4,7 @@ description: Learn about the scalability and performance targets for Azure Files
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: concept-article
-ms.date: 03/11/2025
+ms.date: 05/30/2025
 ms.author: kendownie
 ms.custom: references_regions
 ---
@@ -100,17 +100,18 @@ Azure file share scale targets apply at the file share level.
 | Maximum storage size | 100 TiB | 256 TiB | 100 TiB |
 | Maximum number of files | Unlimited | Unlimited | Unlimited |
 | Maximum IOPS (Data) | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
-| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS | Up to 12,000 IOPS | Up to 12,000 IOPS |
+| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS<sup>2</sup> | Up to 12,000 IOPS | Up to 12,000 IOPS |
 | Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | Up to storage account limits |
 | Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
-| Maximum filename length<sup>2</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
+| Maximum filename length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
 | Maximum length of individual pathname component (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
 | Hard link limit (NFS only) | 178 | N/A | N/A |
 | Maximum number of SMB Multichannel channels | 4 | N/A | N/A |
 | Maximum number of stored access policies per file share | 5 | 5 | 5 |

 <sup>1</sup> Metadata IOPS (open/close/delete). See [Monitor Metadata IOPS](analyze-files-metrics.md#monitor-utilization-by-metadata-iops) for guidance.<br>
-<sup>2</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
+<sup>2</sup> Scaling to 35,000 IOPS for SSD file shares requires [registering for the metadata caching feature](smb-performance.md#register-for-the-metadata-caching-feature).<br>
+<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.

 ### File scale targets
 File scale targets apply to individual files stored in Azure file shares.
@@ -121,40 +122,13 @@ File scale targets apply to individual files stored in Azure file shares.
 | Maximum data IOPS per file | 8,000 IOPS | 1,000 IOPS | 1,000 IOPS |
 | Maximum throughput per file | 1,024 MiB / sec | 60 MiB / sec | 60 MiB / sec |
 | Maximum concurrent handles for root directory | 10,000 handles | 10,000 handles | 10,000 handles |
-| Maximum concurrent handles per file and directory | 2,000 handles | 2,000 handles | 2,000 handles |
+| Maximum concurrent handles per file and directory | 2,000 handles\* | 2,000 handles | 2,000 handles |

-### Azure Files sizing guidance for Azure Virtual Desktop
-
-A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how various types of disk images consume handles. It also provides sizing guidance based on the technology you're using.
-
-#### FSLogix
-
-If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle, which should be to the file share. Azure Files can support a maximum of 10,000 users assuming you have the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + profile container (`profile_%username.vhd(x)`).
-
-If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.
-
-> [!WARNING]
-> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you're using. Your requirements might vary based on users, profile size, and workload.
-
-For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one more handle for the ODFC file.
-
-#### App attach with CimFS
+\* The maximum number of concurrent handles per file and directory is a soft limit for SSD SMB file shares. If you need to scale beyond this limit, you can [enable metadata caching](smb-performance.md#register-for-the-metadata-caching-feature), and register for [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview).

-If you're using [MSIX App attach or App attach](../../virtual-desktop/app-attach-overview.md) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](../../virtual-desktop/app-attach-overview.md#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.
-
-If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you need one handle each for three files in the directory. So if you have 100 VMs, you need 300 file handles.
-
-You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
-
-#### App attach with VHD/VHDX
-
-If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they're shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. So the constraint is whichever you hit first.
-
-In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.
-
-In another example, 100 VMs accessing 20 apps require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+### Azure Files sizing guidance for Azure Virtual Desktop

-If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
+A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop. See [Azure Files guidance for virtual desktop workloads](virtual-desktop-workloads.md) for more information.

 ## Azure File Sync scale targets
