articles/storage/files/nfs-performance.md
11 additions & 11 deletions
@@ -1,10 +1,10 @@
 ---
 title: Improve NFS Azure file share performance
-description: Learn how to improve the performance of NFS Azure file shares at scale using the nconnect mount option for Linux clients.
+description: Learn ways to improve the performance of NFS Azure file shares at scale, including the nconnect mount option for Linux clients.
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: conceptual
-ms.date: 08/31/2023
+ms.date: 09/21/2023
 ms.author: kendownie
 ---

@@ -22,7 +22,7 @@ This article explains how you can improve performance for NFS Azure file shares.

 `Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1, while maintaining the resiliency of platform as a service (PaaS).

-## Benefits of `nconnect`
+### Benefits of `nconnect`

 With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That's almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).

@@ -33,30 +33,30 @@ With `nconnect`, you can increase performance at scale using fewer client machin
 | Throughput (write) | 64K, 1024K | 3x |
 | Throughput (read) | All I/O sizes | 2-4x |

-## Prerequisites
+### Prerequisites

 - The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
 - Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.

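As a quick aid for the kernel prerequisite above, a minimal sketch of a version check. `supports_nconnect` is a hypothetical helper name, not an Azure or distribution tool; the version strings are examples:

```shell
# Hypothetical helper: nconnect requires Linux kernel 5.3 or later
# on older distributions. Parses "major.minor" from a kernel release string.
supports_nconnect() {
  major="${1%%.*}"
  rest="${1#*.}"
  minor="${rest%%[.-]*}"
  [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 3 ]; }
}

# Check the running kernel.
if supports_nconnect "$(uname -r)"; then
  echo "kernel $(uname -r): nconnect should be available"
else
  echo "kernel $(uname -r): upgrade to 5.3 or later to use nconnect"
fi
```

The latest distributions need no such check; this only matters on older kernels.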
-## Performance impact of `nconnect`
+### Performance impact of `nconnect`

 We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).

 :::image type="content" source="media/nfs-performance/nconnect-iops-improvement.png" alt-text="Screenshot showing average improvement in IOPS when using nconnect with NFS Azure file shares." border="false":::

 :::image type="content" source="media/nfs-performance/nconnect-throughput-improvement.png" alt-text="Screenshot showing average improvement in throughput when using nconnect with NFS Azure file shares." border="false":::

-## Recommendations
+### Recommendations for `nconnect`

 Follow these recommendations to get the best results from `nconnect`.

-### Set `nconnect=4`
+#### Set `nconnect=4`
 While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.

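A minimal sketch of a mount using the recommended setting. `mystorageaccount`, `myshare`, and the mount point are placeholder names; substitute your own. The sketch prints the command rather than running it, since mounting requires root access and a reachable share:

```shell
# Placeholder names: replace with your storage account and share.
STORAGE_ACCOUNT="mystorageaccount"
SHARE="myshare"
MOUNT_POINT="/mnt/${SHARE}"

# NFSv4.1 with the recommended four TCP channels (nconnect=4).
MOUNT_OPTS="vers=4,minorversion=1,sec=sys,nconnect=4"

# Print the mount command instead of executing it.
echo "sudo mount -t nfs -o ${MOUNT_OPTS} ${STORAGE_ACCOUNT}.file.core.windows.net:/${STORAGE_ACCOUNT}/${SHARE} ${MOUNT_POINT}"
```

Setting `nconnect=4` here gives the full benefit; raising it toward 16 yields no further gain and can hurt performance, per the paragraph above.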
-### Size virtual machines carefully
+#### Size virtual machines carefully
 Depending on your workload requirements, it's important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).

-### Keep queue depth less than or equal to 64
+#### Keep queue depth less than or equal to 64
 Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).

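One way to reason about the 64 ceiling when benchmarking with a tool like fio: the outstanding I/O per client is roughly `iodepth` times `numjobs`, so choose the two so their product stays at or below 64. A sketch, with placeholder job names and mount path, that prints the invocation rather than running it:

```shell
# Effective queue depth per client is approximately iodepth * numjobs;
# 16 * 4 = 64 sits exactly at the recommended ceiling.
IODEPTH=16
NUMJOBS=4
EFFECTIVE_QD=$((IODEPTH * NUMJOBS))

# Placeholder job name and directory; print instead of executing.
FIO_CMD="fio --name=qd-test --directory=/mnt/myshare --rw=randread --bs=4k --direct=1 --iodepth=${IODEPTH} --numjobs=${NUMJOBS} --size=1G --runtime=60 --time_based"
echo "$FIO_CMD"
echo "effective queue depth: ${EFFECTIVE_QD}"
```

Raising either knob beyond this product buys no additional performance on the share, per the recommendation above.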
 ### `Nconnect` per-mount configuration
@@ -87,7 +87,7 @@ If a workload requires mounting multiple shares with one or more storage account