articles/storage/files/smb-performance.md
27 additions & 27 deletions
@@ -1,10 +1,10 @@
---
title: Improve SMB Azure File Share Performance
description: Learn about ways to improve performance and throughput for SSD (premium) SMB Azure file shares, including SMB Multichannel and metadata caching.
author: khdownie
ms.service: azure-file-storage
ms.topic: concept-article
ms.date: 07/21/2025
ms.author: kendownie
ms.custom:
  - build-2025
@@ -34,23 +34,23 @@ This article explains how you can improve performance for SSD (premium) SMB Azur

The following tips might help you optimize performance:

- Ensure that your storage account and your client are in the same Azure region to reduce network latency.
- Use multi-threaded applications and spread the load across multiple files.
- Performance benefits of SMB Multichannel increase with the number of files distributing the load.
- SSD share performance is bound by provisioned share size, including IOPS and throughput, and single file limits. For details, see [understanding the provisioning v1 model](understanding-billing.md#provisioned-v1-model).
- Maximum performance of a single VM client is still bound to VM limits. For example, [Standard_D32s_v3](/azure/virtual-machines/dv3-dsv3-series) supports a maximum bandwidth of approximately 1.86 GiB/sec. Egress from the VM (writes to storage) is metered, but ingress (reads from storage) isn't. File share performance is subject to machine network limits, CPUs, internal storage, available network bandwidth, IO sizes, parallelism, and other factors.
- The initial test is usually a warm-up. Discard the results and repeat the test.
- If performance is limited by a single client and the workload is still below provisioned share limits, you can achieve higher performance by spreading the load over multiple clients.

### The relationship between IOPS, throughput, and I/O sizes

**Throughput = IO size * IOPS**

Higher I/O sizes drive higher throughput and have higher latencies, resulting in a lower number of net IOPS. Smaller I/O sizes drive higher IOPS but result in lower net throughput and latencies. To learn more, see [Understand Azure Files performance](understand-performance.md).
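
As a quick illustration of the formula (the numbers below are hypothetical and aren't limits of any particular share or VM), the same throughput can be reached with many small I/Os or far fewer large ones:

```powershell
# Illustrative only: relate average I/O size and IOPS to approximate throughput.
$ioSizeKiB = 8         # hypothetical average I/O size in KiB
$iops      = 10000     # hypothetical sustained IOPS

$throughputMiBps = ($ioSizeKiB * $iops) / 1024
"{0} KiB I/Os at {1} IOPS ~= {2:N0} MiB/sec" -f $ioSizeKiB, $iops, $throughputMiBps
# 8 KiB * 10,000 IOPS ~= 78 MiB/sec; reaching the same throughput with 512 KiB I/Os needs only ~156 IOPS.
```
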
## SMB Multichannel

SMB Multichannel enables an SMB client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on SSD file shares for Windows clients. On the service side, SMB Multichannel is now enabled by default for all newly created storage accounts in all Azure regions. There's no extra cost for enabling SMB Multichannel.

### Benefits

@@ -67,7 +67,7 @@ SMB Multichannel enables clients to use multiple network connections that provid
- **Cost optimization**: Workloads can achieve higher scale from a single VM, or a small set of VMs, while connecting to SSD file shares. This could reduce the total cost of ownership by reducing the number of VMs necessary to run and manage a workload.

For more information about SMB Multichannel, see the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel).

This feature provides greater performance benefits to multi-threaded applications but typically doesn't help single-threaded applications. See the [Performance comparison](#performance-comparison) section for more details.

@@ -77,7 +77,7 @@ SMB Multichannel for Azure file shares currently has the following restrictions:

- Only available for SSD file shares. Not available for HDD Azure file shares.
- Only supported on clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels.
- Maximum number of channels is four. For details, see [Cause 4: Number of SMB channels exceeds four](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four).

### Configuration

@@ -93,7 +93,7 @@ If SMB Multichannel isn't enabled on your Azure storage account, see [SMB Multic

### Disable SMB Multichannel

In most scenarios, particularly multi-threaded workloads, clients see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) and [SMB Multichannel status](files-smb-protocol.md#smb-multichannel) for more details.
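
The linked articles have the authoritative steps, but as a rough sketch, disabling the feature typically looks like this (the resource group and storage account names are placeholders; confirm the parameter names against your installed module versions):

```powershell
# Client side (Windows): turn off SMB Multichannel for this client.
Set-SmbClientConfiguration -EnableMultiChannel $false

# Service side: disable SMB Multichannel on the storage account's file service.
# Assumes the Az.Storage module is installed and you're signed in with Connect-AzAccount.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -EnableSmbMultichannel $false
```

Setting the same parameters back to `$true` re-enables the feature; existing SMB sessions generally need to be remounted before the change takes effect.
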
### Verify SMB Multichannel is configured correctly
@@ -113,9 +113,9 @@ There are two categories of read/write workload patterns: single-threaded and mu
- **Multi-threaded/multiple files**: Depending on the workload pattern, you should see significant performance improvement in read and write I/Os over multiple channels. The performance gains vary from anywhere between 2x to 4x in terms of IOPS, throughput, and latency. For this category, SMB Multichannel should be enabled for the best performance.

- **Multi-threaded/single file**: For most use cases in this category, workloads benefit from having SMB Multichannel enabled, especially if the workload has an average I/O size greater than 16 KiB. A few example scenarios that benefit from SMB Multichannel are backup or recovery of a single large file. An exception where you might want to disable SMB Multichannel is if your workload is heavy on small I/Os. In that case, you might observe a slight performance loss of approximately 10%. Depending on the use case, consider spreading the load across multiple files, or disable the feature. See the [Configuration](#configuration) section for details.

- **Single-threaded/multiple files or single file**: For most single-threaded workloads, there are minimal performance benefits due to the lack of parallelism. Usually there's a slight performance degradation of approximately 10% if SMB Multichannel is enabled. In this case, it's ideal to disable SMB Multichannel, with one exception: if the single-threaded workload can distribute the load across multiple files and uses a larger average I/O size (greater than 16 KiB), it should see slight performance benefits from SMB Multichannel.

### Performance test configuration

@@ -131,7 +131,7 @@ Load was generated against 10 files with various IO sizes. The scale up test res

- On a single NIC, for reads, a performance increase of 2x-3x was observed, and for writes, gains of 3x-4x in terms of both IOPS and throughput.
- SMB Multichannel allowed IOPS and throughput to reach VM limits even with a single NIC and the four-channel limit.
- Because ingress to the VM (reads from storage) isn't metered, read throughput was able to exceed the published VM limit of approximately 1.86 GiB/sec. The test achieved greater than 2.7 GiB/sec. Egress from the VM (writes to storage) is still subject to VM limits.
- Spreading the load over multiple files allowed for substantial improvements.

An example command used in this testing is:
@@ -146,15 +146,15 @@ The load was generated against a single 128 GiB file. With SMB Multichannel enab

:::image type="content" source="media/smb-performance/diagram-smb-multi-channel-single-file-compared-to-single-channel-throughput-performance.png" alt-text="Diagram of single file throughput performance." lightbox="media/smb-performance/diagram-smb-multi-channel-single-file-compared-to-single-channel-throughput-performance.png":::

- On a single NIC with a larger average I/O size (greater than 16 KiB), there were significant improvements in both reads and writes.
- For smaller I/O sizes, there was a slight impact of approximately 10% on performance with SMB Multichannel enabled. This could be mitigated by spreading the load over multiple files or disabling the feature.
- Performance is still bound by [single file limits](storage-files-scale-targets.md#file-scale-targets).

## Metadata caching for SSD file shares

Metadata caching is an enhancement for SSD Azure file shares that reduces metadata latency and raises metadata scale limits. The feature increases latency consistency and available IOPS, and it boosts network throughput.

This feature improves the performance of the following metadata APIs. Both Windows and Linux clients can use it:

@@ -165,7 +165,7 @@ This feature improves the following metadata APIs and can be used from both Wind
- Close
- Delete

Currently, the feature is only available for SSD file shares. There are no extra costs associated with using this feature. You can also [register to increase file handle limits for SSD file shares (preview)](#register-for-increased-file-handle-limits-preview).

### Register for the metadata caching feature

@@ -174,9 +174,9 @@ To get started, register for the feature using the Azure portal or Azure PowerSh

# [Azure portal](#tab/portal)

1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
1. Search for and select **Preview features**.
1. Select the **Type** filter and select **Microsoft.Storage**.
1. Select **Azure Premium Files Metadata Cache** and then select **Register**.

# [Azure PowerShell](#tab/powershell)
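
The exact cmdlet sequence isn't shown here, but registration through Azure PowerShell generally follows the standard preview-feature pattern. The feature name below is a placeholder; list the available Microsoft.Storage preview features first and use the name that corresponds to **Azure Premium Files Metadata Cache**:

```powershell
# Find the preview feature name under the Microsoft.Storage provider.
Get-AzProviderFeature -ProviderNamespace "Microsoft.Storage" -ListAvailable |
    Where-Object FeatureName -like "*Metadata*"

# Register the feature (replace the placeholder with the name returned above).
Register-AzProviderFeature -ProviderNamespace "Microsoft.Storage" -FeatureName "<metadata-cache-feature-name>"

# Confirm the registration state; it can take a few minutes to show Registered.
Get-AzProviderFeature -ProviderNamespace "Microsoft.Storage" -FeatureName "<metadata-cache-feature-name>"
```
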
@@ -230,9 +230,9 @@ To increase the maximum number of concurrent handles per file and directory for
# [Azure portal](#tab/portal)

1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
1. Search for and select **Preview features**.
1. Select the **Type** filter and select **Microsoft.Storage**.
1. Select **Azure Premium Files Increased Maximum Opened Handles Count** and then select **Register**.