
Commit 0d62afc

Merge pull request #297352 from wmgries/consistent-media-tiers-1
Clean up nfs-performance
2 parents 2d63899 + a675497 commit 0d62afc


5 files changed: +43 / -39 lines changed

articles/storage/files/nfs-large-directories.md

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ The following graph compares the total time it takes to finish different operati
 :::image type="content" source="media/nfs-large-directories/default-mount-versus-actimeo.png" alt-text="Graph comparing the time to finish different operations with default mount versus setting an actimeo value of 30 for a workload with 1 million files." border="false":::

 ### NFS nconnect
-NFS nconnect is a client-side mount option for NFS file shares that allows you to use multiple TCP connections between the client and your NFS file share. We recommend the optimal setting of `nconnect=4` to reduce latency and improve performance. The nconnect feature can be especially useful for workloads that use asynchronous or synchronous I/O from multiple threads. [Learn more](nfs-performance.md#nconnect).
+NFS nconnect is a client-side mount option for NFS file shares that allows you to use multiple TCP connections between the client and your NFS file share. We recommend the optimal setting of `nconnect=4` to reduce latency and improve performance. The nconnect feature can be especially useful for workloads that use asynchronous or synchronous I/O from multiple threads. [Learn more](nfs-performance.md#nfs-nconnect).

 ## Commands and operations

articles/storage/files/nfs-performance.md

Lines changed: 39 additions & 35 deletions
@@ -14,12 +14,18 @@ ms.author: kendownie
 This article explains how you can improve performance for network file system (NFS) Azure file shares.

 ## Applies to
-
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![No, this article doesn't apply to standard SMB Azure file shares LRS/ZRS.](../media/icons/no-icon.png) | ![NFS shares are only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![No, this article doesn't apply to standard SMB Azure file shares GRS/GZRS.](../media/icons/no-icon.png) | ![NFS is only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![No, this article doesn't apply to premium SMB Azure file shares.](../media/icons/no-icon.png) | ![Yes, this article applies to premium NFS Azure file shares.](../media/icons/yes-icon.png) |
+| Management model | Billing model | Media tier | Redundancy | SMB | NFS |
+|-|-|-|-|:-:|:-:|
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Geo (GRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | GeoZone (GZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Local (LRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Zone (ZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Geo (GRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | GeoZone (GZRS) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |

 ## Increase read-ahead size to improve read throughput

@@ -46,41 +52,39 @@ To change this value, set the read-ahead size by adding a rule in udev, a Linux
 sudo udevadm control --reload
 ```
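For context on the hunk above, the rule that `udevadm control --reload` picks up could look like this minimal sketch. The rule file name and the 15,360 KiB value are assumptions for illustration, not taken from this commit; consult the full article for the exact rule.

```shell
# Hypothetical udev rule raising read_ahead_kb on NFS backing devices
# (bdi) as they appear. File name and value are illustrative only.
sudo tee /etc/udev/rules.d/99-nfs-read-ahead.rules <<'EOF'
SUBSYSTEM=="bdi", ACTION=="add", ATTR{read_ahead_kb}="15360"
EOF
sudo udevadm control --reload
```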

-## `Nconnect`
-
-`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more Transmission Control Protocol (TCP) connections between the client and the Azure Premium Files service for NFSv4.1.
+## NFS nconnect
+NFS nconnect is a client-side mount option for NFS file shares that allows you to use multiple TCP connections between the client and your NFS file share.
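To make the renamed option concrete, an NFSv4.1 mount using four TCP channels might be sketched as below. The account, share, and mount point are placeholders, and the command is not part of this commit.

```shell
# Illustrative only: mount an NFS Azure file share with nconnect=4.
# Replace <account> and <share> with real names before use.
sudo mkdir -p /mnt/<share>
sudo mount -t nfs -o vers=4,minorversion=1,proto=tcp,sec=sys,nconnect=4 \
  <account>.file.core.windows.net:/<account>/<share> /mnt/<share>
```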

-### Benefits of `nconnect`
+### Benefits

-With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.
+With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). The nconnect feature increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without nconnect, you'd need roughly 20 client machines to achieve the bandwidth scale limit (10 GiB/sec) offered by the largest SSD file share provisioning size. With nconnect, you can achieve that limit using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.

 | **Metric (operation)** | **I/O size** | **Performance improvement** |
-|------------------------|---------------|-----------------------------|
-| IOPS (write) | 64K, 1024K | 3x |
-| IOPS (read) | All I/O sizes | 2-4x |
-| Throughput (write) | 64K, 1024K | 3x |
-| Throughput (read) | All I/O sizes | 2-4x |
+|-|-|-|
+| IOPS (write) | 64 KiB, 1,024 KiB | 3x |
+| IOPS (read) | All I/O sizes | 2-4x |
+| Throughput (write) | 64 KiB, 1,024 KiB | 3x |
+| Throughput (read) | All I/O sizes | 2-4x |

 ### Prerequisites

-- The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
+- The latest Linux distributions fully support nconnect. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
 - Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.
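The kernel prerequisite in this hunk can be verified with a small version comparison. This helper is a sketch, not part of the commit; it treats any kernel at or above 5.3 as meeting the nconnect minimum.

```shell
# Sketch: succeed if the given kernel version string is >= 5.3,
# the minimum noted above for nconnect on older distributions.
nconnect_supported() {
  # sort -V orders version strings; if 5.3 sorts first (or ties),
  # the candidate version is at least 5.3.
  [ "$(printf '5.3\n%s\n' "$1" | sort -V | head -n1)" = "5.3" ]
}

if nconnect_supported "$(uname -r)"; then
  echo "kernel supports nconnect"
else
  echo "kernel older than 5.3; upgrade before using nconnect"
fi
```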

-### Performance impact of `nconnect`
+### Performance impact

-We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
+We achieved the following performance results when using the nconnect mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).

 :::image type="content" source="media/nfs-performance/nconnect-iops-improvement.png" alt-text="Screenshot showing average improvement in IOPS when using nconnect with NFS Azure file shares." border="false":::

 :::image type="content" source="media/nfs-performance/nconnect-throughput-improvement.png" alt-text="Screenshot showing average improvement in throughput when using nconnect with NFS Azure file shares." border="false":::

-### Recommendations for `nconnect`
-
+### Recommendations
 Follow these recommendations to get the best results from `nconnect`.

 #### Set `nconnect=4`

-While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
+While Azure Files supports setting nconnect up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of nconnect. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.

 #### Size virtual machines carefully

@@ -90,18 +94,18 @@ Depending on your workload requirements, it's important to correctly size the cl
 Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64 because you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).

-### `Nconnect` per-mount configuration
+### Per-mount configuration

-If a workload requires mounting multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
+If a workload requires mounting multiple shares with one or more storage accounts with different nconnect settings from a single client, we can't guarantee that those settings persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint, as described in Scenario 1.

-#### Scenario 1: `nconnect` per-mount configuration over private endpoint with multiple storage accounts (supported)
+#### Scenario 1: Per-mount configuration over private endpoint with multiple storage accounts (supported)

 - StorageAccount.file.core.windows.net = 10.10.10.10
 - StorageAccount2.file.core.windows.net = 10.10.10.11
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
 - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
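Rendered as `/etc/fstab` entries, the supported Scenario 1 layout above might look like the following sketch. The mount points and option sets are assumptions for illustration, not part of this commit.

```
# /etc/fstab sketch of Scenario 1: one share per storage account over
# private endpoints, each mount carrying its own nconnect setting.
StorageAccount.file.core.windows.net:/StorageAccount/FileShare1    /mnt/share1  nfs  vers=4,minorversion=1,sec=sys,nconnect=4  0 0
StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1  /mnt/share2  nfs  vers=4,minorversion=1,sec=sys             0 0
```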

-#### Scenario 2: `nconnect` per-mount configuration over public endpoint (not supported)
+#### Scenario 2: Per-mount configuration over public endpoint (not supported)

 - StorageAccount.file.core.windows.net = 52.239.238.8
 - StorageAccount2.file.core.windows.net = 52.239.238.7
@@ -110,9 +114,9 @@ If a workload requires mounting multiple shares with one or more storage account
 - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`

 > [!NOTE]
-> Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
+> Even if the storage account resolves to a different IP address, we can't guarantee that the address will persist because public endpoints aren't static addresses.

-#### Scenario 3: `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account (not supported)
+#### Scenario 3: Per-mount configuration over private endpoint with multiple shares on a single storage account (not supported)

 - StorageAccount.file.core.windows.net = 10.10.10.10
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
@@ -125,7 +129,7 @@ We used the following resources and benchmarking tools to achieve and measure th
 - **Single client:** Azure VM ([DSv4-Series](/azure/virtual-machines/dv4-dsv4-series#dsv4-series)) with single NIC
 - **OS:** Linux (Ubuntu 20.04)
-- **NFS storage:** Azure Files premium file share (provisioned 30 TiB, set `nconnect=4`)
+- **NFS storage:** SSD file share (provisioned 30 TiB, set `nconnect=4`)

 | **Size** | **vCPU** | **Memory** | **Temp storage (SSD)** | **Max data disks** | **Max NICs** | **Expected network bandwidth** |
 |-----------------|-----------|------------|------------------------|--------------------|--------------|--------------------------------|
@@ -153,41 +157,41 @@ fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_b
 #### High throughput: 100% reads

-**64k I/O size - random read - 64 queue depth**
+**64 KiB I/O size - random read - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
 ```

-**1024k I/O size - 100% random read - 64 queue depth**
+**1,024 KiB I/O size - 100% random read - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
 ```

 #### High IOPS: 100% writes

-**4k I/O size - 100% random write - 64 queue depth**
+**4 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

-**8k I/O size - 100% random write - 64 queue depth**
+**8 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

 #### High throughput: 100% writes

-**64k I/O size - 100% random write - 64 queue depth**
+**64 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

-**1024k I/O size - 100% random write - 64 queue depth**
+**1,024 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300

articles/storage/files/storage-files-how-to-mount-nfs-shares.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ You can mount the share using the Azure portal. You can also create a record in
 ### Mount an NFS share using the Azure portal

-You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md#nconnect).
+You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md#nfs-nconnect).

 1. Once the file share is created, select the share and select **Connect from Linux**.
 1. Enter the mount path you'd like to use, then copy the script.

articles/storage/files/storage-files-migration-nfs.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ This article covers the basic aspects of migrating from Linux file servers to NF
 ## Prerequisites

-You'll need at least one NFS Azure file share mounted to a Linux virtual machine (VM). To create one, see [Create an NFS Azure file share and mount it on a Linux VM](storage-files-quick-create-use-linux.md). We recommend mounting the share with nconnect to use multiple TCP connections. For more information, see [Improve NFS Azure file share performance](nfs-performance.md#nconnect).
+You'll need at least one NFS Azure file share mounted to a Linux virtual machine (VM). To create one, see [Create an NFS Azure file share and mount it on a Linux VM](storage-files-quick-create-use-linux.md). We recommend mounting the share with nconnect to use multiple TCP connections. For more information, see [Improve NFS Azure file share performance](nfs-performance.md#nfs-nconnect).

 ## Migration tools

articles/storage/files/storage-snapshots-files.md

Lines changed: 1 addition & 1 deletion
@@ -368,7 +368,7 @@ az storage share list --account-name <storage-account-name> --include-snapshots
 To mount an NFS Azure file share snapshot to a Linux VM (NFS client) and restore files, follow these steps.

-1. Run the following command in a console. See [Mount options](storage-files-how-to-mount-nfs-shares.md#mount-options) for other recommended mount options. To improve copy performance, mount the snapshot with [nconnect](nfs-performance.md#nconnect) to use multiple TCP channels.
+1. Run the following command in a console. See [Mount options](storage-files-how-to-mount-nfs-shares.md#mount-options) for other recommended mount options. To improve copy performance, mount the snapshot with [nconnect](nfs-performance.md#nfs-nconnect) to use multiple TCP channels.

 ```bash
 sudo mount -o vers=4,minorversion=1,proto=tcp,sec=sys $server:/nfs4account/share /media/nfs
