
Commit 5797567

Clean up nfs-performance
1 parent 5e4e634 commit 5797567

1 file changed: +27 −23 lines changed

articles/storage/files/nfs-performance.md

Lines changed: 27 additions & 23 deletions
@@ -14,12 +14,18 @@ ms.author: kendownie
 This article explains how you can improve performance for network file system (NFS) Azure file shares.

 ## Applies to
-
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![No, this article doesn't apply to standard SMB Azure file shares LRS/ZRS.](../media/icons/no-icon.png) | ![NFS shares are only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![No, this article doesn't apply to standard SMB Azure file shares GRS/GZRS.](../media/icons/no-icon.png) | ![NFS is only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![No, this article doesn't apply to premium SMB Azure file shares.](../media/icons/no-icon.png) | ![Yes, this article applies to premium NFS Azure file shares.](../media/icons/yes-icon.png) |
+| Management model | Billing model | Media tier | Redundancy | SMB | NFS |
+|-|-|-|-|:-:|:-:|
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Geo (GRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | GeoZone (GZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Geo (GRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | GeoZone (GZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |

 ## Increase read-ahead size to improve read throughput

@@ -46,13 +52,12 @@ To change this value, set the read-ahead size by adding a rule in udev, a Linux
 sudo udevadm control --reload
 ```

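For reference, the udev rule this hunk refers to sets the `read_ahead_kb` attribute on the backing device info (bdi) entries that the kernel creates for NFS mounts. The sketch below is illustrative only: the file name and the 15360 KiB (15 MiB) value are assumptions, not taken from this diff.

```
# Hypothetical /etc/udev/rules.d/99-nfs.rules
# Raise read-ahead to 15 MiB (15360 KiB) for backing devices as they're added.
# A production rule would usually restrict the match to NFS mounts (for
# example, by checking /proc/fs/nfsfs/volumes) rather than every bdi device.
SUBSYSTEM=="bdi", ACTION=="add", ATTR{read_ahead_kb}="15360"
```

After saving a rule like this, reload udev with the `sudo udevadm control --reload` command shown in the hunk above.
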
-## `Nconnect`
-
-`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more Transmission Control Protocol (TCP) connections between the client and the Azure Premium Files service for NFSv4.1.
+## NFS nconnect
+NFS nconnect is a client-side mount option for NFS file shares that allows you to use multiple TCP connections between the client and your NFS file share.

-### Benefits of `nconnect`
+### Benefits

-With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.
+With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). The nconnect feature increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without nconnect, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest SSD file share provisioning size. With nconnect, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.

 | **Metric (operation)** | **I/O size** | **Performance improvement** |
 |------------------------|---------------|-----------------------------|
@@ -63,24 +68,23 @@ With `nconnect`, you can increase performance at scale using fewer client machin

 ### Prerequisites

-- The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
+- The latest Linux distributions fully support nconnect. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
 - Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.

-### Performance impact of `nconnect`
+### Performance impact

-We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
+We achieved the following performance results when using the nconnect mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).

 :::image type="content" source="media/nfs-performance/nconnect-iops-improvement.png" alt-text="Screenshot showing average improvement in IOPS when using nconnect with NFS Azure file shares." border="false":::

 :::image type="content" source="media/nfs-performance/nconnect-throughput-improvement.png" alt-text="Screenshot showing average improvement in throughput when using nconnect with NFS Azure file shares." border="false":::

-### Recommendations for `nconnect`
-
+### Recommendations
 Follow these recommendations to get the best results from `nconnect`.

 #### Set `nconnect=4`

-While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
+While Azure Files supports setting nconnect up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of nconnect=4. Currently, there are no gains beyond four channels for the Azure Files implementation of nconnect. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.

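As an illustrative aside rather than part of the diff, the recommended value is applied through the mount options. The storage account, share, and mount point below are placeholders; the `vers=4,minorversion=1,sec=sys` options reflect a typical NFSv4.1 mount of an Azure file share.

```bash
# Hypothetical account, share, and mount point; nconnect=4 follows the
# recommendation above.
sudo mkdir -p /mnt/FileShare1
sudo mount -t nfs storageaccount.file.core.windows.net:/storageaccount/FileShare1 /mnt/FileShare1 \
  -o vers=4,minorversion=1,sec=sys,nconnect=4
```

Because the Linux NFS client shares TCP connections to the same server across mounts, the nconnect value supplied on the first mount is generally the one that takes effect, which is why the per-mount scenarios later in this diff matter.
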
 #### Size virtual machines carefully

@@ -90,18 +94,18 @@ Depending on your workload requirements, it's important to correctly size the cl

 Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64 because you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).

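To make the 64 figure concrete with a common benchmarking tool: in fio, the outstanding I/O against the share is roughly `iodepth × numjobs`. The mount path and job parameters below are assumptions for illustration, not values from this diff.

```bash
# Hypothetical mount point; 4 jobs x iodepth 16 keeps about 64 I/Os in flight.
fio --name=randread-qd64 --directory=/mnt/FileShare1 \
  --rw=randread --bs=8k --size=4G --numjobs=4 --iodepth=16 \
  --ioengine=libaio --direct=1 --time_based --runtime=60 --group_reporting
```
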
-### `Nconnect` per-mount configuration
+### Per-mount configuration

-If a workload requires mounting multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
+If a workload requires mounting multiple shares with one or more storage accounts with different nconnect settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.

-#### Scenario 1: `nconnect` per-mount configuration over private endpoint with multiple storage accounts (supported)
+#### Scenario 1: per-mount configuration over private endpoint with multiple storage accounts (supported)

 - StorageAccount.file.core.windows.net = 10.10.10.10
 - StorageAccount2.file.core.windows.net = 10.10.10.11
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
 - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`

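Written out as full mount commands (the mount points are hypothetical and the option string mirrors the earlier sketch), Scenario 1 would look something like this:

```bash
# Each storage account resolves to its own private endpoint IP, so each mount
# gets its own set of TCP connections and can carry its own nconnect setting.
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 /mnt/share1
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys \
  StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1 /mnt/share2
```
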
-#### Scenario 2: `nconnect` per-mount configuration over public endpoint (not supported)
+#### Scenario 2: per-mount configuration over public endpoint (not supported)

 - StorageAccount.file.core.windows.net = 52.239.238.8
 - StorageAccount2.file.core.windows.net = 52.239.238.7
@@ -112,7 +116,7 @@ If a workload requires mounting multiple shares with one or more storage account
 > [!NOTE]
 > Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.

-#### Scenario 3: `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account (not supported)
+#### Scenario 3: per-mount configuration over private endpoint with multiple shares on single storage account (not supported)

 - StorageAccount.file.core.windows.net = 10.10.10.10
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
@@ -125,7 +129,7 @@ We used the following resources and benchmarking tools to achieve and measure th

 - **Single client:** Azure VM ([DSv4-Series](/azure/virtual-machines/dv4-dsv4-series#dsv4-series)) with single NIC
 - **OS:** Linux (Ubuntu 20.04)
-- **NFS storage:** Azure Files premium file share (provisioned 30 TiB, set `nconnect=4`)
+- **NFS storage:** SSD file share (provisioned 30 TiB, set `nconnect=4`)

 | **Size** | **vCPU** | **Memory** | **Temp storage (SSD)** | **Max data disks** | **Max NICs** | **Expected network bandwidth** |
 |-----------------|-----------|------------|------------------------|--------------------|--------------|--------------------------------|
