
Commit 7b5eaf2

Merge pull request #252487 from khdownie/kendownie-readahead-0923
adjusting headers
2 parents fc74f18 + 49b6e00

File tree

1 file changed: +11 −11 lines changed


articles/storage/files/nfs-performance.md

Lines changed: 11 additions & 11 deletions
@@ -1,10 +1,10 @@
 ---
 title: Improve NFS Azure file share performance
-description: Learn how to improve the performance of NFS Azure file shares at scale using the nconnect mount option for Linux clients.
+description: Learn ways to improve the performance of NFS Azure file shares at scale, including the nconnect mount option for Linux clients.
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: conceptual
-ms.date: 08/31/2023
+ms.date: 09/21/2023
 ms.author: kendownie
 ---

@@ -22,7 +22,7 @@ This article explains how you can improve performance for NFS Azure file shares.
 
 `Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1, while maintaining the resiliency of platform as a service (PaaS).
 
-## Benefits of `nconnect`
+### Benefits of `nconnect`
 
 With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That’s almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).

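For readers skimming the diff: `nconnect` is passed as an ordinary client-side NFS mount option. A minimal sketch, assuming placeholder storage account and share names (angle brackets mark values you substitute; the option string follows the standard Azure Files NFSv4.1 mount syntax):

```bash
# Hypothetical example: mount an NFS Azure file share with four TCP channels.
# Replace <storage-account> and <share-name> with your own values.
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  <storage-account>.file.core.windows.net:/<storage-account>/<share-name> \
  /mnt/<share-name>
```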
@@ -33,30 +33,30 @@ With `nconnect`, you can increase performance at scale using fewer client machin
 | Throughput (write) | 64K, 1024K | 3x |
 | Throughput (read) | All I/O sizes | 2-4x |
 
-## Prerequisites
+### Prerequisites
 
 - The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
 - Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.
 
-## Performance impact of `nconnect`
+### Performance impact of `nconnect`
 
 We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
 
 :::image type="content" source="media/nfs-performance/nconnect-iops-improvement.png" alt-text="Screenshot showing average improvement in IOPS when using nconnect with NFS Azure file shares." border="false":::
 
 :::image type="content" source="media/nfs-performance/nconnect-throughput-improvement.png" alt-text="Screenshot showing average improvement in throughput when using nconnect with NFS Azure file shares." border="false":::
 
-## Recommendations
+### Recommendations for `nconnect`
 
 Follow these recommendations to get the best results from `nconnect`.
 
-### Set `nconnect=4`
+#### Set `nconnect=4`
 While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
 
-### Size virtual machines carefully
+#### Size virtual machines carefully
 Depending on your workload requirements, it’s important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
 
-### Keep queue depth less than or equal to 64
+#### Keep queue depth less than or equal to 64
 Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
 
 ### `Nconnect` per-mount configuration
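As an aside on the queue-depth guidance in this hunk: it maps directly onto benchmark parameters. A minimal sketch using fio, the tool the article's own test configuration uses below; the flag values here are illustrative, not taken from the source:

```bash
# Illustrative only: fio's --iodepth flag sets the queue depth per job.
# Keeping it at or below 64 follows the recommendation above.
fio --name=queue_depth_check --ioengine=libaio --direct=1 --bs=64k \
    --iodepth=64 --filesize=4G --rw=randread --numjobs=1 \
    --runtime=300 --time_based --group_reporting
```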
@@ -87,7 +87,7 @@ If a workload requires mounting multiple shares with one or more storage account
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare2`
 - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare3`
 
-## Performance test configuration
+### Performance test configuration
 
 We used the following resources and benchmarking tools to achieve and measure the results outlined in this article.

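The `Mount ...` items in the hunk above are shorthand. As a hedged sketch only (the option string is assumed from the standard single-share mount command, and the per-mount configuration rules in the truncated section still apply), the equivalent client commands might look like:

```bash
# Hypothetical expansion of the shorthand above. Assumption: when several
# shares are mounted against the same storage-account endpoint, the TCP
# connection options from the first mount apply to subsequent mounts too.
sudo mkdir -p /mnt/FileShare2 /mnt/FileShare3
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  StorageAccount.file.core.windows.net:/StorageAccount/FileShare2 /mnt/FileShare2
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  StorageAccount.file.core.windows.net:/StorageAccount/FileShare3 /mnt/FileShare3
```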
@@ -161,7 +161,7 @@ fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_b
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```
 
-## Performance considerations
+### Performance considerations for `nconnect`
 
 When using the `nconnect` mount option, you should closely evaluate workloads that have the following characteristics:

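Before evaluating such workloads, it can help to confirm the option actually took effect. A hedged sketch, assuming `nfs-utils` is installed and the kernel (5.3+) exposes the option in its mount table:

```bash
# Hypothetical check: confirm nconnect is active on a mounted share.
nfsstat -m | grep -i nconnect
# Or read the options directly from the kernel's mount table:
grep nconnect /proc/mounts
```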