Commit cc60c7a

Merge pull request #263087 from khdownie/kendownie011124
AVD sizing guidance on handles
2 parents 2f4a169 + daa011b

1 file changed: +37 -4 lines changed

articles/storage/files/storage-files-scale-targets.md

Lines changed: 37 additions & 4 deletions
@@ -4,7 +4,7 @@ description: Learn about the capacity, IOPS, and throughput rates for Azure file
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: conceptual
-ms.date: 01/11/2024
+ms.date: 01/25/2024
 ms.author: kendownie
 ---
 
@@ -94,7 +94,40 @@ File scale targets apply to individual files stored in Azure file shares.
 
 <sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./smb-performance.md).</sup>
 
-<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use two Azure file shares. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share.</sup>
+<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share. However, if you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use multiple Azure file shares. For more information, see [Azure Files sizing guidance for Azure Virtual Desktop](#azure-files-sizing-guidance-for-azure-virtual-desktop).</sup>
+
+### Azure Files sizing guidance for Azure Virtual Desktop
+
+A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large-scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how handles are consumed by various types of disk images and provides sizing guidance depending on the technology you're using.
+
+#### FSLogix
+
+If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle on the file share. Azure Files can support a maximum of 10,000 users, assuming you have the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + the profile container (`profile_%username%.vhd(x)`).
+
+If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.
+
+> [!WARNING]
+> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you've created. Your requirements might vary based on users, profile size, and workload.
+
+For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is extremely unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
+
+#### App attach with CimFS
+
+If you're using [MSIX App attach or App attach](../../virtual-desktop/app-attach-overview.md) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](../../virtual-desktop/app-attach-overview.md#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.
+
+If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you'll need one handle each for three files in the directory. So if you have 100 VMs, you'll need 300 file handles.
+
+You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
+
+#### App attach with VHD/VHDX
+
+If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they are shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. The binding constraint is whichever limit you hit first.
+
+In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.
+
+In another example, 100 VMs accessing 20 apps will require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+
+If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
 
 ## Azure File Sync scale targets
 
@@ -114,8 +147,8 @@ The following table indicates which targets are soft, representing the Microsoft
 | File size | 100 GiB | No |
 | Minimum file size for a file to be tiered | Based on file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size will be 8 KiB. | Yes |
 
-> [!Note]
-> An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate.
+> [!NOTE]
+> An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync won't be able to operate.
 
 ## Azure File Sync performance metrics
 
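The FSLogix guidance added above comes down to simple handle arithmetic: one root-directory handle per concurrent user against the 10,000-handle limit, with the two or three per-user file handles staying far below the 2,000 per file/directory limit. Here's a minimal Python sketch of that math, assuming the limits stated in the article; the function and variable names are illustrative, not taken from the diff.

```python
# Minimal sketch of the FSLogix handle math (limits from the article; names illustrative).

ROOT_DIR_HANDLE_LIMIT = 10_000  # open handles supported on the share's root directory


def fslogix_shares_needed(concurrent_users: int) -> int:
    """Estimate how many Azure file shares FSLogix profile containers need.

    Each user opens one handle on the root directory, so the root-directory
    limit is the binding constraint; the handles on each user's profile
    directory and container file stay far below the 2,000 per-file limit.
    """
    root_handles = concurrent_users                   # one root-directory handle per user
    return -(-root_handles // ROOT_DIR_HANDLE_LIMIT)  # ceiling division


print(fslogix_shares_needed(2_400))   # 1 share: the article's 2,400-user example fits easily
print(fslogix_shares_needed(24_000))  # 3 shares: distribute containers across the shares
```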

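For App attach with CimFS, the section above counts handles only on the image's files: one handle per file, per mounting VM, with the .cim file plus at least two companion files per image. Here's a small sketch of that counting under the same assumptions; names are again illustrative.

```python
# Sketch of the CimFS App attach handle counting (limits from the article; names illustrative).

PER_FILE_HANDLE_LIMIT = 2_000   # open handles supported per individual file
FILES_PER_CIMFS_IMAGE = 3       # the .cim file plus at least two companion files


def cimfs_handles(vms_mounting_image: int) -> dict:
    """Handle usage for a single CimFS app image mounted by N VMs."""
    return {
        "total_file_handles": vms_mounting_image * FILES_PER_CIMFS_IMAGE,  # article: 100 VMs -> 300 handles
        "handles_per_file": vms_mounting_image,                            # each file is opened once per VM
        "fits_one_share": vms_mounting_image <= PER_FILE_HANDLE_LIMIT,
    }


print(cimfs_handles(100))    # 300 total handles; fits a single share
print(cimfs_handles(2_500))  # more than 2,000 handles on each file; use an additional share
```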

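The VHD/VHDX App attach guidance above combines two constraints: VMs multiplied by apps must stay under the 10,000 root-directory handle limit, and each image file (and its folder) can be mounted by at most 2,000 VMs. A sketch of that check follows, using the article's examples as test cases; the function name and structure are assumptions for illustration.

```python
# Sketch of the VHD/VHDX App attach limit check (limits and formula from the article; names illustrative).

ROOT_DIR_HANDLE_LIMIT = 10_000      # open handles supported on the share's root directory
PER_FILE_DIR_HANDLE_LIMIT = 2_000   # open handles supported per file or directory


def vhdx_app_attach_fits(vms: int, apps: int) -> bool:
    """True if VMs x apps VHD(X) mounts stay within a single share's limits."""
    root_dir_handles = vms * apps  # article: number of VMs multiplied by number of apps
    handles_per_image = vms        # each image file (and its folder) is opened by every mounting VM
    return (root_dir_handles < ROOT_DIR_HANDLE_LIMIT             # "must be less than 10,000"
            and handles_per_image <= PER_FILE_DIR_HANDLE_LIMIT)  # "can't exceed 2,000"


print(vhdx_app_attach_fits(vms=100, apps=20))   # True: 2,000 root handles, 100 handles per image
print(vhdx_app_attach_fits(vms=100, apps=100))  # False: hits the 10,000 root-directory handle limit
print(vhdx_app_attach_fits(vms=2_500, apps=1))  # False: exceeds 2,000 mounts of a single image
```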