
Commit 17f9d7a

acrolinx

1 parent f222f27

File tree

1 file changed: +14 −14 lines


articles/storage/files/nfs-performance.md

```diff
@@ -11,7 +11,7 @@ ms.author: kendownie
 
 # Improve performance for NFS Azure file shares
 
-This article explains how you can improve performance for NFS Azure file shares.
+This article explains how you can improve performance for network file system (NFS) Azure file shares.
 
 ## Applies to
 
```
```diff
@@ -23,9 +23,9 @@ This article explains how you can improve performance for NFS Azure file shares.
 
 ## Increase read-ahead size to improve read throughput
 
-The `read_ahead_kb` kernel parameter in Linux represents the amount of data that should be "read ahead" or prefetched during a sequential read operation. Linux kernel versions prior to 5.4 set the read-ahead value to the equivalent of 15 times the mounted file system's `rsize`, which represents the client-side mount option for read buffer size. This sets the read-ahead value high enough to improve client sequential read throughput in most cases.
+The `read_ahead_kb` kernel parameter in Linux represents the amount of data that should be "read ahead" or prefetched during a sequential read operation. Linux kernel versions before 5.4 set the read-ahead value to the equivalent of 15 times the mounted file system's `rsize`, which represents the client-side mount option for read buffer size. This sets the read-ahead value high enough to improve client sequential read throughput in most cases.
 
-However, beginning with Linux kernel version 5.4, the Linux NFS client uses a default `read_ahead_kb` value of 128 KiB. This small value might reduce the amount of read throughput for large files. Customers upgrading from Linux releases with the larger read-ahead value to those with the 128 KiB default might experience a decrease in sequential read performance.
+However, beginning with Linux kernel version 5.4, the Linux NFS client uses a default `read_ahead_kb` value of 128 KiB. This small value might reduce the amount of read throughput for large files. Customers upgrading from Linux releases with the larger read-ahead value to releases with the 128 KiB default might experience a decrease in sequential read performance.
 
 For Linux kernels 5.4 or later, we recommend persistently setting the `read_ahead_kb` to 15 MiB for improved performance.
 
```
```diff
@@ -48,11 +48,11 @@ To change this value, set the read-ahead size by adding a rule in udev, a Linux 
 
 ## `Nconnect`
 
-`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1.
+`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more Transmission Control Protocol (TCP) connections between the client and the Azure Premium Files service for NFSv4.1.
 
 ### Benefits of `nconnect`
 
-With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That’s almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale. See the following table.
+With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.
 
 | **Metric (operation)** | **I/O size** | **Performance improvement** |
 |------------------------|---------------|-----------------------------|
```
```diff
@@ -84,24 +84,24 @@ While Azure Files supports setting `nconnect` up to the maximum setting of 16, w
 
 #### Size virtual machines carefully
 
-Depending on your workload requirements, it’s important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
+Depending on your workload requirements, it’s important to correctly size the client virtual machines (VMs) to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple network interface controllers (NICs) in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
 
 #### Keep queue depth less than or equal to 64
 
-Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
+Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64 because you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
 
 ### `Nconnect` per-mount configuration
 
 If a workload requires mounting multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
 
-#### Scenario 1: (supported) `nconnect` per-mount configuration over private endpoint with multiple storage accounts
+#### Scenario 1: `nconnect` per-mount configuration over private endpoint with multiple storage accounts (supported)
 
 - StorageAccount.file.core.windows.net = 10.10.10.10
 - StorageAccount2.file.core.windows.net = 10.10.10.11
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
 - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
 
-#### Scenario 2: (not supported) `nconnect` per-mount configuration over public endpoint
+#### Scenario 2: `nconnect` per-mount configuration over public endpoint (not supported)
 
 - StorageAccount.file.core.windows.net = 52.239.238.8
 - StorageAccount2.file.core.windows.net = 52.239.238.7
```
```diff
@@ -112,7 +112,7 @@ If a workload requires mounting multiple shares with one or more storage account
 > [!NOTE]
 > Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
 
-#### Scenario 3: (not supported) `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account
+#### Scenario 3: `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account (not supported)
 
 - StorageAccount.file.core.windows.net = 10.10.10.10
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
```
```diff
@@ -200,12 +200,12 @@ When using the `nconnect` mount option, you should closely evaluate workloads th
 - Latency sensitive write workloads that are single threaded and/or use a low queue depth (less than 16)
 - Latency sensitive read workloads that are single threaded and/or use a low queue depth in combination with smaller I/O sizes
 
-Not all workloads require high-scale IOPS or throughout performance. For smaller scale workloads, `nconnect` might not make sense. Use the following table to decide whether `nconnect` will be advantageous for your workload. Scenarios highlighted in green are recommended, while those highlighted in red are not. Those highlighted in yellow are neutral.
+Not all workloads require high-scale IOPS or throughput performance. For smaller scale workloads, `nconnect` might not make sense. Use the following table to decide whether `nconnect` is advantageous for your workload. Scenarios highlighted in green are recommended, while scenarios highlighted in red aren't. Scenarios highlighted in yellow are neutral.
 
 :::image type="content" source="media/nfs-performance/nconnect-latency-comparison.png" alt-text="Screenshot showing various read and write I O scenarios with corresponding latency to indicate when nconnect is advisable." border="false":::
 
 ## See also
 
-- For mounting instructions, see [Mount NFS file Share to Linux](storage-files-how-to-mount-nfs-shares.md).
-- For a comprehensive list of mount options, see [Linux NFS man page](https://linux.die.net/man/5/nfs).
-- For information on latency, IOPS, throughput, and other performance concepts, see [Understand Azure Files performance](understand-performance.md).
+- [Mount NFS file Share to Linux](storage-files-how-to-mount-nfs-shares.md)
+- [List of mount options](https://linux.die.net/man/5/nfs)
+- [Understand Azure Files performance](understand-performance.md)
```

0 commit comments
