Commit aebb5e8

Merge pull request #295749 from jeffpatt24/patch-11
Update storage-files-scale-targets.md
2 parents 71c86af + b059363 commit aebb5e8

1 file changed: +19 -69 lines changed

articles/storage/files/storage-files-scale-targets.md

Lines changed: 19 additions & 69 deletions
@@ -4,7 +4,7 @@ description: Learn about the scalability and performance targets for Azure Files
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: conceptual
-ms.date: 08/12/2024
+ms.date: 03/04/2025
 ms.author: kendownie
 ms.custom: references_regions
 ---
@@ -126,20 +126,20 @@ A popular use case for Azure Files is storing user profile containers and disk i
 
 #### FSLogix
 
-If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user will open a single root directory handle, which should be to the file share. Azure Files can support a maximum of 10,000 users assuming you have the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + profile container (`profile_%username.vhd(x)`).
+If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle, which should be to the file share. Azure Files can support a maximum of 10,000 users assuming you have the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + profile container (`profile_%username.vhd(x)`).
 
 If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.
 
 > [!WARNING]
 > While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you've created. Your requirements might vary based on users, profile size, and workload.
 
-For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is extremely unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
+For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
 
 #### App attach with CimFS
 
 If you're using [MSIX App attach or App attach](../../virtual-desktop/app-attach-overview.md) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](../../virtual-desktop/app-attach-overview.md#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.
 
-If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you'll need one handle each for three files in the directory. So if you have 100 VMs, you'll need 300 file handles.
+If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you need one handle each for three files in the directory. So if you have 100 VMs, you need 300 file handles.
 
 You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
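To make the handle arithmetic in the FSLogix paragraphs above concrete, here's a minimal Python sketch. The constants restate the documented limits (10,000 concurrent handles per root directory, 2,000 per file or directory); the function name and structure are illustrative only and not part of any Azure tool or SDK.

```python
# Back-of-the-envelope estimate of FSLogix handle consumption on one file share.
# The limits are the published Azure Files targets; everything else is illustrative.

ROOT_DIR_HANDLE_LIMIT = 10_000   # documented cap on concurrent handles per root directory
PER_OBJECT_HANDLE_LIMIT = 2_000  # documented cap on concurrent handles per file or directory

def fslogix_handle_estimate(concurrent_users: int, containers_per_user: int = 1) -> dict:
    """Rough estimate of handle usage for FSLogix profile containers on one share."""
    root_handles = concurrent_users  # each user opens one handle on the share root
    # Each user also touches their own profile directory plus one handle per
    # container file (profile, and optionally ODFC), so per-object counts stay
    # far below the 2,000-handle limit.
    per_user_object_handles = 1 + containers_per_user
    return {
        "root_handles": root_handles,
        "root_within_limit": root_handles <= ROOT_DIR_HANDLE_LIMIT,
        "per_user_object_handles": per_user_object_handles,
        "per_object_within_limit": per_user_object_handles <= PER_OBJECT_HANDLE_LIMIT,
    }

# Example from the text: 2,400 concurrent users with a profile container only.
print(fslogix_handle_estimate(2_400))
# -> {'root_handles': 2400, 'root_within_limit': True,
#     'per_user_object_handles': 2, 'per_object_within_limit': True}
```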
@@ -149,7 +149,7 @@ If you're using App attach with VHD/VHDX files, the files are mounted in a syste
 
 In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.
 
-In another example, 100 VMs accessing 20 apps will require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+In another example, 100 VMs accessing 20 apps require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
 
 If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
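The same back-of-the-envelope math applies to the App attach scenarios above. The sketch below is illustrative only (the helper functions are made up for this example); it simply restates the per-VM handle counts described in the CimFS and VHD/VHDX paragraphs.

```python
# Illustrative handle math for App attach disk images, which are mounted per VM
# (not per user). Limits are the published Azure Files targets.

ROOT_DIR_HANDLE_LIMIT = 10_000   # documented cap on concurrent handles per root directory
PER_OBJECT_HANDLE_LIMIT = 2_000  # documented cap on concurrent handles per file or directory

def cimfs_file_handles(vm_count: int, files_per_image: int = 3) -> int:
    """CimFS images consume handles only on the image files: one handle per file
    (the .cim file plus at least two companion files) for every VM mounting the image."""
    return vm_count * files_per_image

def vhdx_handle_estimate(vm_count: int, image_count: int) -> dict:
    """VHD/VHDX images mounted in a system context consume one root directory handle
    per VM per image, plus one file handle and one directory handle per VM per image."""
    return {
        "root_handles": vm_count * image_count,
        "handles_per_image": 2 * vm_count,  # 1 file handle + 1 directory handle per VM
    }

# Examples from the text above:
print(cimfs_file_handles(100))        # 300 file handles for 100 VMs
print(vhdx_handle_estimate(100, 20))  # {'root_handles': 2000, 'handles_per_image': 200}
```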
@@ -171,83 +171,33 @@ The following table indicates which targets are soft, representing the Microsoft
 | Maximum number of file system objects (directories and files) in a directory **(not recursive)** | 5 million objects | Yes |
 | Maximum object (directories and files) security descriptor size | 64 KiB | Yes |
 | File size | 100 GiB | No |
-| Minimum file size for a file to be tiered | Based on file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size will be 8 KiB. | Yes |
+| Minimum file size for a file to be tiered | Based on file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. | Yes |
 
 > [!NOTE]
 > An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync won't be able to operate.
 
 ## Azure File Sync performance metrics
 
-Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second.
+Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon many factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second.
 
-For Azure File Sync, performance is critical in two stages:
+The following table indicates the Azure File Sync performance targets:
 
-1. **Initial one-time provisioning**: To optimize performance on initial provisioning, refer to [Onboarding with Azure File Sync](../file-sync/file-sync-deployment-guide.md#onboarding-with-azure-file-sync) for the optimal deployment details.
-2. **Ongoing sync**: After the data is initially seeded in the Azure file shares, Azure File Sync keeps multiple endpoints in sync.
-
-> [!NOTE]
-> When many server endpoints in the same sync group are syncing at the same time, they're contending for cloud service resources. As a result, upload performance is impacted. In extreme cases, some sync sessions will fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced.
-
-## Internal test results
-
-To help you plan your deployment for each of the stages (initial one-time provisioning and ongoing sync), here are the results we observed during internal testing on a system with the following configuration:
-
-| System configuration | Details |
+| Scenario | Performance |
 |-|-|
-| CPU | 64 Virtual Cores with 64 MiB L3 cache |
-| Memory | 128 GiB |
-| Disk | SAS disks with RAID 10 with battery backed cache |
-| Network | 1 Gbps Network |
-| Workload | General Purpose File Server|
-
-### Initial one-time provisioning
-
-| Initial one-time provisioning | Details |
-|-|-|
-| Number of objects | 25 million objects |
-| Dataset Size | ~4.7 TiB |
-| Average File Size | ~200 KiB (Largest File: 100 GiB) |
-| Initial cloud change enumeration | 80 objects per second |
-| Upload Throughput | 20 objects per second per sync group |
-| Namespace Download Throughput | 400 objects per second |
-
-**Initial cloud change enumeration**: When a new sync group is created, initial cloud change enumeration is the first step that executes. In this process, the system will enumerate all the items in the Azure file share. During this process, there will be no sync activity. No items will be downloaded from cloud endpoint to server endpoint, and no items will be uploaded from server endpoint to cloud endpoint. Sync activity will resume once initial cloud change enumeration completes.
-
-The rate of performance is 80 objects per second. You can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formulae to get the time in days.
-
-**Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(80 \* 60 \* 60 \* 24)**
+| Initial cloud change enumeration | 150 objects per second per sync group |
+| Upload Throughput | 200 objects per second per sync group |
+| Namespace Download Throughput | 400 objects per second per server endpoint |
+| Full Download Throughput | 60 objects per second per server endpoint |
 
-**Initial sync of data from Windows Server to Azure File share:** Many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast, and the majority of time is spent syncing changes from the Windows Server into the Azure file share(s).
-
-While sync uploads data to the Azure file share, there's no downtime on the local file server, and administrators can [setup network limits](../file-sync/file-sync-server-registration.md#set-azure-file-sync-network-limits) to restrict the amount of bandwidth used for background data upload.
-
-Initial sync is typically limited by the initial upload rate of 20 files per second per sync group. Customers can estimate the time to upload all their data to Azure using the following formulae to get time in days:
-
-**Time (in days) for uploading files to a sync group = (Number of objects in server endpoint)/(20 \* 60 \* 60 \* 24)**
-
-Splitting your data into multiple server endpoints and sync groups can speed up this initial data upload, because the upload can be done in parallel for multiple sync groups at a rate of 20 items per second each. So, two sync groups would be running at a combined rate of 40 items per second. The total time to complete would be the time estimate for the sync group with the most files to sync.
-
-**Namespace download throughput:** When a new server endpoint is added to an existing sync group, the Azure File Sync agent doesn't download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, to the cloud tiering policy set on the server endpoint.
-
-### Ongoing sync
-
-| Ongoing sync | Details |
-|-|--|
-| Number of objects synced | 125,000 objects (~1% churn) |
-| Dataset Size | 50 GiB |
-| Average File Size | ~500 KiB |
-| Upload Throughput | 20 objects per second per sync group |
-| Full Download Throughput\* | 60 objects per second |
-
-\*If cloud tiering is enabled, you're likely to observe better performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they're changed on any of the endpoints. For any tiered or newly created files, the agent doesn't download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they're accessed by the user.
-
-> [!NOTE]
-> These numbers aren't an indication of the performance that you'll experience. The actual performance depends on multiple factors as outlined in the beginning of this section.
+> [!Note]
+> The actual performance will depend on multiple factors as outlined in the beginning of this section.
 
-As a general guide for your deployment, keep a few things in mind:
+As a general guide for your deployment, you should keep a few things in mind:
 
 - The object throughput approximately scales in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the server and network.
-- The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you'll experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you'll get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
+- The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
+- When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service resources. As a result, upload performance is impacted. In extreme cases, some sync sessions fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced.
+- If cloud tiering is enabled, you are likely to observe better download performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.
 
 ## See also
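For planning purposes, the per-sync-group and per-server-endpoint targets in the updated table above can be turned into rough time estimates. The sketch below is an illustrative calculation under the assumption that enumeration and upload proceed steadily at the stated target rates; as the note in the diff says, actual performance depends on the server, disk, network, and dataset.

```python
# Rough planning estimate built from the performance targets in the updated table
# (150 objects/s initial cloud change enumeration and 200 objects/s upload, per sync
# group). Illustrative only; real throughput varies with infrastructure and dataset.

SECONDS_PER_DAY = 60 * 60 * 24

ENUMERATION_RATE_PER_SYNC_GROUP = 150  # objects per second
UPLOAD_RATE_PER_SYNC_GROUP = 200       # objects per second

def days_at_rate(object_count: int, objects_per_second: float) -> float:
    """Days to process object_count items at a steady per-second rate."""
    return object_count / (objects_per_second * SECONDS_PER_DAY)

objects = 25_000_000  # example namespace size (files and directories)

print(f"Initial cloud change enumeration: ~{days_at_rate(objects, ENUMERATION_RATE_PER_SYNC_GROUP):.1f} days")
print(f"Initial upload (1 sync group):    ~{days_at_rate(objects, UPLOAD_RATE_PER_SYNC_GROUP):.1f} days")

# Object throughput scales roughly with the number of sync groups, so splitting the
# namespace across two sync groups of equal size roughly halves the wall-clock upload time.
print(f"Initial upload (2 sync groups):   ~{days_at_rate(objects // 2, UPLOAD_RATE_PER_SYNC_GROUP):.1f} days")
```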