
Commit f912555 (1 parent: f8858fb)

Commit message: added performance screen shots

File tree: 3 files changed (+21 / -24 lines)

Two binary image files added (22.2 KB and 20.5 KB).

articles/storage/files/nfs-nconnect-performance.md

Lines changed: 21 additions & 24 deletions
@@ -22,7 +22,7 @@ ms.subservice: files
 
 ## Benefits of `nconnect`
 
-With `nconnect` you can increase performance at scale using fewer client machines, lowering total cost of ownership (TCO). `Nconnect` accomplishes this by leveraging multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That’s almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).
+With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` accomplishes this by using multiple TCP channels on one or more NICs, from one or more clients. Without `nconnect`, you'd need roughly 20 client machines to reach the bandwidth scale limit (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can reach that limit using only 6-7 clients. That's almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).
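The ~70% figure follows from the client counts in the paragraph above. As a back-of-envelope check (a sketch, not from the article; 6 clients is assumed from the stated 6-7 range):

```shell
# Reduction in client machines when nconnect is enabled.
# 20 clients without nconnect vs. ~6 with it (assumed from the 6-7 range).
CLIENTS_WITHOUT=20
CLIENTS_WITH=6
REDUCTION=$(( (CLIENTS_WITHOUT - CLIENTS_WITH) * 100 / CLIENTS_WITHOUT ))
echo "${REDUCTION}% reduction in client machines"
```

With 6 clients this prints `70% reduction in client machines`; with 7 the figure drops to 65%, which is why the article says "almost" 70%.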
 
 | **Metric (operation)** | **I/O size** | **Performance improvement** |
 |------------------------|---------------|-----------------------------|
@@ -39,7 +39,11 @@ With `nconnect` you can increase performance at scale using fewer client machine
 
 ## Performance impact of `nconnect`
 
-We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information, see [performance test configuration](#performance-test-configuration).
+We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
+
+:::image type="content" source="media/nfs-nconnect-performance/nconnect-iops-improvement.png" alt-text="Screenshot showing average improvement in IOPS when using nconnect with NFS Azure file shares." lightbox="media/nfs-nconnect-performance/nconnect-iops-improvement.png" border="false":::
+
+:::image type="content" source="media/nfs-nconnect-performance/nconnect-throughput-improvement.png" alt-text="Screenshot showing average improvement in throughput when using nconnect with NFS Azure file shares." lightbox="media/nfs-nconnect-performance/nconnect-throughput-improvement.png" border="false":::
 
 ## Recommendations
 
@@ -49,18 +53,18 @@ Following these recommendations will help you get the best results from `nconnec
 While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond 4 channels for the Azure Files implementation of `nconnect`. In fact, exceeding 4 channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
 
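For illustration, a mount command using the recommended setting might look like the following sketch. The storage account, export path, and mount point are placeholders; `vers=4,minorversion=1,sec=sys` are assumed here as the usual NFS Azure file share mount options, with `nconnect=4` applying the recommendation above:

```shell
# Illustrative only: substitute your own storage account, export path, and mount point.
# nconnect=4 is the recommended channel count; values above 4 can hurt performance.
OPTS="vers=4,minorversion=1,sec=sys,nconnect=4"
echo "sudo mount -t nfs storageaccount.file.core.windows.net:/storageaccount/fileshare1 /mnt/fileshare1 -o ${OPTS}"
```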
 ### Size virtual machines carefully
-Depending on your workload requirements, it’s important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). Note that you don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, a variety of VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
+Depending on your workload requirements, it’s important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
 
 ### Keep queue depth less than or equal to 64
 Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any additional performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
 
 ### `Nconnect` per-mount configuration
-If a workload requires you to mount multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 2.
+If a workload requires mounting multiple shares from one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint, as described in Scenario 2.
 
 #### Scenario 1: (not supported) `nconnect` per-mount configuration over public endpoint
 
-- StorageAccount.file.core.windows.net = 52.239.238.8
-- StorageAccount2.file.core.windows.net = 52.239.238.7
+- `StorageAccount.file.core.windows.net = 52.239.238.8`
+- `StorageAccount2.file.core.windows.net = 52.239.238.7`
 - `Mount StorageAccount.file.core.windows.net:/FileShare1 nconnect=4`
 - `Mount StorageAccount.file.core.windows.net:/FileShare2`
 - `Mount StorageAccount2.file.core.windows.net:/FileShare1`
@@ -70,8 +74,8 @@ If a workload requires you to mount multiple shares with one or more storage acc
 
 #### Scenario 2: (supported) `nconnect` per-mount configuration over private endpoint with multiple storage accounts
 
-- StorageAccount.file.core.windows.net = 10.10.10.10
-- StorageAccount2.file.core.windows.net = 10.10.10.11
+- `StorageAccount.file.core.windows.net = 10.10.10.10`
+- `StorageAccount2.file.core.windows.net = 10.10.10.11`
 - `Mount StorageAccount.file.core.windows.net:/FileShare1 nconnect=4`
 - `Mount StorageAccount2.file.core.windows.net:/FileShare1`
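A sketch of what the Scenario 2 mounts might look like as concrete commands. The mount points and the `/StorageAccount/FileShare1` export paths are illustrative assumptions, as are the `vers`/`sec` options; the point is that each storage account resolves to its own private-endpoint IP, so the `nconnect` settings stay independent per mount:

```shell
# Scenario 2 sketch: one share per storage account over private endpoints,
# so the nconnect=4 setting on the first mount cannot leak into the second.
MOUNT1="sudo mount -t nfs StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 /mnt/share1 -o vers=4,minorversion=1,sec=sys,nconnect=4"
MOUNT2="sudo mount -t nfs StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1 /mnt/share2 -o vers=4,minorversion=1,sec=sys"
printf '%s\n%s\n' "$MOUNT1" "$MOUNT2"
```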

@@ -103,34 +107,34 @@ While these tests focus on random I/O access patterns, you'll get similar result
 #### High IOPS: 100% reads
 
 - 4k I/O size - random read - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
 
 - 8k I/O size - random read - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
 
 #### High throughput: 100% reads
 
 - 64k I/O size - random read - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
 
 - 1024k I/O size - 100% random read - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300`
 
 #### High IOPS: 100% writes
 
 - 4k I/O size - 100% random write - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
 
 - 8k I/O size - 100% random write - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
 
 #### High throughput: 100% writes
 
 - 64k I/O size - 100% random write - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
 
 - 1024k I/O size - 100% random write - 64 queue depth
-`fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
+- `fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300`
 
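The eight fio commands above differ only in block size and read/write mode, so they can be generated with a loop. This is a convenience sketch, not part of the original test configuration:

```shell
# Generate the eight fio invocations used in the tests above:
# four block sizes for random read, then the same four for random write.
COUNT=0
for rw in randread randwrite; do
  for bs in 4k 8k 64k 1024k; do
    echo "fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=${bs} --iodepth=64 --filesize=4G --rw=${rw} --group_reporting --ramp_time=300"
    COUNT=$((COUNT + 1))
  done
done
```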
 ## Performance considerations
 
@@ -141,14 +145,7 @@ When using the `nconnect` mount option, you should closely evaluate workloads th
 
 Not all workloads require high-scale IOPS or throughput performance. If you're running smaller scale workloads, `nconnect` might not make sense for your workload.
 
-Use the following tables for reference.
-
-
-
-
-
-
 ## See also
-- For mounting instructions, see [Mount NFS file Share to Linux](storage-files-how-to-mount-nfs-shares).
+- For mounting instructions, see [Mount NFS file share to Linux](storage-files-how-to-mount-nfs-shares.md).
 - For additional mount options, see [Linux NFS man page](https://linux.die.net/man/5/nfs).
 - For information on latency, IOPS, throughput, and other performance concepts, see [Understand Azure Files performance](understand-performance.md).
