---
title: Use Azure Files for virtual desktop workloads
-description: Learn how to use SMB Azure file shares for virtual desktop workloads, including profile containers and disk images for Azure Virtual Desktop, and how to optimize scale and performance.
+description: Learn how to use SMB Azure file shares for virtual desktop workloads, including FSLogix profile containers for Azure Virtual Desktop, and how to optimize scale and performance.
author: khdownie
ms.service: azure-file-storage
ms.topic: concept-article
-ms.date: 05/30/2025
+ms.date: 06/02/2025
ms.author: kendownie
---

@@ -48,39 +48,68 @@ While Azure Files can support thousands of concurrent virtual desktop users from

Virtual desktops with home directories can benefit from [metadata caching](smb-performance.md#metadata-caching-for-ssd-file-shares) on SSD file shares.

-The following table lists our general recommendations based on the number of concurrent users. This table is based on the assumption that the user profile has a capacity of 5 GiB and a performance of 50 IOPS during sign in and 20 IOPS during steady state.
-
-| **Number of virtual desktop users** | **Recommended file storage** |
-|------------------------------------------------|------------------------------|
-| Less than 400 concurrent users | HDD pay-as-you-go file shares |
-| 400-1,000 concurrent users | HDD provisioned v2 file shares or multiple HDD pay-as-you-go file shares |
-| 1,000-2,000 concurrent users | SSD or multiple HDD file shares |
-| More than 2,000 concurrent users | Multiple SSD file shares |
-
## Azure Files sizing guidance for Azure Virtual Desktop

In large-scale VDI environments, tens of thousands of users might need to access the same file simultaneously, especially during application launches and session setups. In these situations, you might run out of handles, particularly if you're using a single Azure file share. This section describes how profile containers and disk images consume handles and provides sizing guidance based on the technology you're using.

+Azure Files supports both **FSLogix** and **non-FSLogix** profile storage scenarios. This guidance provides recommended file share configurations based on the number of concurrent virtual desktop users, expected IOPS per user, and storage type (HDD or SSD). In general, FSLogix uses handles more efficiently than non-FSLogix profile solutions.
+
> [!TIP]
> Azure Files currently has a 2,000 concurrent handle limit per file and directory, and this article is written with that limit in mind. However, for SSD file shares, this is a soft limit. If you need to scale beyond it, you can [enable metadata caching](smb-performance.md#register-for-the-metadata-caching-feature) and register for [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview).

-### FSLogix
+### FSLogix profile containers

If you're using [FSLogix with Azure Virtual Desktop](/azure/virtual-desktop/fslogix-containers-azure-files), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle, which should be to the file share. Given the path structure of file share (`\\storageaccount.file.core.windows.net\sharename`) + profile directory (`%sid%_%username%`) + profile container (`profile_%username%.vhd(x)`), Azure Files can support a maximum of 10,000 concurrent users per share.

If you're hitting the limit of 10,000 concurrent handles for the root directory, or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.

For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit on open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you consume only two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you need one more handle for the ODFC file.

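To sanity-check a deployment against these numbers, the handle arithmetic is easy to script. Here's a minimal sketch in Python, assuming the limits described in this article (10,000 root directory handles per share) and hypothetical user counts; it's an illustration, not an official sizing tool.

```python
import math

ROOT_HANDLE_LIMIT = 10_000  # concurrent root directory handles per share

def fslogix_share_estimate(concurrent_users: int, containers_per_user: int = 2) -> dict:
    """Estimate FSLogix handle consumption against a single Azure file share.

    Each user opens one root directory handle, plus one handle for the
    profile directory and one per container file (profile and optional ODFC).
    """
    root_handles = concurrent_users  # one root directory handle per user
    handles_per_user = 1 + containers_per_user  # profile dir + container files
    return {
        "root_handles": root_handles,
        "file_dir_handles_per_user": handles_per_user,
        "within_single_share": root_handles <= ROOT_HANDLE_LIMIT,
        "min_shares_needed": math.ceil(concurrent_users / ROOT_HANDLE_LIMIT),
    }

# Example from this article: 2,400 concurrent users fit comfortably in one share.
print(fslogix_share_estimate(2_400))   # within_single_share: True
print(fslogix_share_estimate(24_000))  # min_shares_needed: 3
```
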
-### App attach with CimFS
+The following table lists our general recommendations for **FSLogix profile containers** based on the number of concurrent users, under these conditions:
+
+- Each user has 1-2 containers (profile plus optional Office container)
+- Handle usage is roughly 2-3 per user (root directory, profile container, and possibly an ODFC container)
+
+| **Number of concurrent FSLogix users** | **Recommended file storage** | **Notes** |
+|------------------------------------------------|------------------------------|--------------|
+| Less than 2,000 users | HDD pay-as-you-go or provisioned v2 file shares | Acceptable for light workloads or low concurrency |
+| 2,000-5,000 users | 1-2 SSD file shares with [metadata caching](smb-performance.md#register-for-the-metadata-caching-feature) | SSD improves sign-in performance; root directory handles remain well below 10,000 per share |
+| 5,000-10,000 users | 2-4 SSD file shares, distributed evenly | Distribute users so that no single share exceeds 10,000 root directory handles |
+| More than 10,000 users | Multiple SSD file shares with [metadata caching](smb-performance.md#register-for-the-metadata-caching-feature) and [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview) | Register for increased handle limits and enable metadata caching for large-scale environments |
+
+### Non-FSLogix profile storage
+
+If you're not using FSLogix, you might be using roaming user profiles or folder redirection in Windows.
+
+> [!NOTE]
+> Non-FSLogix profiles are more likely to hit the per-file and per-directory limit of 2,000 concurrent handles. If you need to scale beyond this limit, use SSD file shares, [enable metadata caching](smb-performance.md#register-for-the-metadata-caching-feature), and register for [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview).
+
+The following table lists our general recommendations for **non-FSLogix** profile storage based on the number of concurrent users, under these conditions:
+
+- Profile data is stored as many small files and folders
+- Metadata IOPS per user are higher than with FSLogix
+- Handle usage is relatively high (each file or folder accessed consumes a handle)
+- Profile size is ~5 GiB
+- Peak IOPS are 50 IOPS per user during sign-in and 20 IOPS per user during steady state
+
+| **Number of concurrent users** | **Recommended file storage** | **Notes** |
+|------------------------------------------------|------------------------------|--------------|
+| Less than 400 users | HDD pay-as-you-go file shares | Suitable for low-concurrency workloads with minimal IOPS demands |
+| 400-1,000 users | HDD provisioned v2 file shares or multiple HDD pay-as-you-go file shares | Might require tuning for peak sign-in bursts |
+| 1,000-2,000 users | SSD or multiple HDD file shares | SSD is recommended due to better metadata latency |
+| More than 2,000 users | Multiple SSD file shares with [metadata caching](smb-performance.md#register-for-the-metadata-caching-feature) and [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview) | Critical to avoid handle limits and achieve consistent sign-in performance |
+
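For rough capacity planning under these assumptions, here's a minimal sketch in Python using the per-user figures above (5 GiB, 50 IOPS at sign-in, 20 IOPS at steady state); the sign-in concurrency fraction is a hypothetical input, not a documented value.

```python
PROFILE_GIB = 5    # assumed profile size per user
SIGNIN_IOPS = 50   # per-user IOPS during sign-in
STEADY_IOPS = 20   # per-user IOPS at steady state

def profile_share_demand(concurrent_users: int, signin_fraction: float = 0.1) -> dict:
    """Rough IOPS and capacity demand for a non-FSLogix profile share.

    signin_fraction is the share of users signing in at the same moment
    (for example, 0.1 during a morning ramp-up); it's an assumption you
    should replace with your own measurements.
    """
    signing_in = int(concurrent_users * signin_fraction)
    steady = concurrent_users - signing_in
    return {
        "capacity_gib": concurrent_users * PROFILE_GIB,
        "peak_iops": signing_in * SIGNIN_IOPS + steady * STEADY_IOPS,
    }

# 1,500 concurrent users with a 10% simultaneous sign-in burst:
print(profile_share_demand(1_500))  # {'capacity_gib': 7500, 'peak_iops': 34500}
```
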
+### App attach

If you're using [MSIX App attach or App attach](/azure/virtual-desktop/app-attach-overview) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](/azure/virtual-desktop/app-attach-overview#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.

+#### App attach with CimFS
+
If you're using App attach with CimFS, the disk images consume handles only on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, every VM mounting the disk image needs one handle for each of three files in the directory. So if you have 100 VMs, you need 300 file handles.

You might run out of file handles if the number of VMs per app exceeds 2,000, because each of the image's files would then reach the 2,000 concurrent handle limit. In this case, use an additional Azure file share.

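As a quick illustration of this arithmetic, here's a minimal sketch in Python that checks a CimFS deployment against the per-file handle limit. The three-files-per-image figure comes from this article; the deployment numbers are hypothetical.

```python
PER_FILE_HANDLE_LIMIT = 2_000  # concurrent handles per file (soft limit on SSD shares)
FILES_PER_CIM_IMAGE = 3        # .cim file plus at least two companion files

def cimfs_handle_check(vms_mounting_image: int) -> dict:
    """Check one CimFS disk image against Azure Files handle limits.

    Every VM that mounts the image holds one handle on each of the
    image's files, so the per-file handle count equals the VM count.
    """
    return {
        "handles_per_file": vms_mounting_image,
        "total_file_handles": vms_mounting_image * FILES_PER_CIM_IMAGE,
        "fits_on_one_share": vms_mounting_image <= PER_FILE_HANDLE_LIMIT,
    }

# Example from this article: 100 VMs consume 300 file handles in total.
print(cimfs_handle_check(100))    # fits_on_one_share: True
print(cimfs_handle_check(2_500))  # fits_on_one_share: False -> add another share
```
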
-### App attach with VHD/VHDX
+#### App attach with VHD/VHDX

If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they're shared and read-only. A connecting system can consume more than one handle on the VHDX file. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. The binding constraint is whichever limit you hit first.

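To make the two limits concrete, here's a minimal sketch in Python; the 10,000 and 2,000 figures come from this article, and the example deployments are hypothetical.

```python
MAX_VM_APP_PRODUCT = 10_000  # VMs x apps must stay below this per share
MAX_VMS_PER_APP = 2_000      # per-app VM limit (per-file handle limit)

def vhdx_app_attach_fits(vm_count: int, app_count: int) -> bool:
    """Return True if a VHD/VHDX App attach layout fits one Azure file share.

    Both limits apply; the binding constraint is whichever is hit first.
    """
    within_product = vm_count * app_count < MAX_VM_APP_PRODUCT
    within_per_app = vm_count <= MAX_VMS_PER_APP
    return within_product and within_per_app

print(vhdx_app_attach_fits(500, 10))    # True: 5,000 < 10,000 and 500 <= 2,000
print(vhdx_app_attach_fits(2_500, 2))   # False: exceeds 2,000 VMs per app
print(vhdx_app_attach_fits(1_500, 8))   # False: 12,000 exceeds the product limit
```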