
Commit 80bb4b1

Acrolinx fixes.
1 parent 5797567 commit 80bb4b1

File tree

1 file changed (+14 -14 lines)


articles/storage/files/nfs-performance.md

Lines changed: 14 additions & 14 deletions
@@ -57,14 +57,14 @@ NFS nconnect is a client-side mount option for NFS file shares that allows you t

 ### Benefits

-With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). The nconnect feature increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without nconnect, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest SSD file share provisioning size. With nconnect, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.
+With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). The nconnect feature increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without nconnect, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB / sec) offered by the largest SSD file share provisioning size. With nconnect, you can achieve those limits using only 6-7 clients, reducing compute costs by nearly 70% while providing significant improvements in I/O operations per second (IOPS) and throughput at scale. See the following table.

 | **Metric (operation)** | **I/O size** | **Performance improvement** |
-|------------------------|---------------|-----------------------------|
-| IOPS (write) | 64K, 1024K | 3x |
-| IOPS (read) | All I/O sizes | 2-4x |
-| Throughput (write) | 64K, 1024K | 3x |
-| Throughput (read) | All I/O sizes | 2-4x |
+|-|-|-|
+| IOPS (write) | 64 KiB, 1024 KiB | 3x |
+| IOPS (read) | All I/O sizes | 2-4x |
+| Throughput (write) | 64 KiB, 1024 KiB | 3x |
+| Throughput (read) | All I/O sizes | 2-4x |

 ### Prerequisites

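For reference alongside the hunk above, a minimal sketch of the nconnect mount the article describes. The storage account name, share name, and mount path are placeholders (not from this commit), and `nconnect=4` is only an example channel count.

```bash
# Sketch only: mount an NFS Azure file share with nconnect (placeholder names).
# nconnect opens multiple TCP channels from this client to the file share,
# which is what produces the IOPS and throughput gains listed in the table above.
sudo mkdir -p /mount/StorageAccount1/FileShare1
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  StorageAccount1.file.core.windows.net:/StorageAccount1/FileShare1 \
  /mount/StorageAccount1/FileShare1
```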
@@ -96,7 +96,7 @@ Queue depth is the number of pending I/O requests that a storage resource can se

 ### Per mount configuration

-If a workload requires mounting multiple shares with one or more storage accounts with different nconnect settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
+If a workload requires mounting multiple shares with one or more storage accounts with different nconnect settings from a single client, we can't guarantee that those settings persist when mounting over the public endpoint. Per mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.

 #### Scenario 1: per mount configuration over private endpoint with multiple storage accounts (supported)

@@ -114,7 +114,7 @@ If a workload requires mounting multiple shares with one or more storage account
 - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`

 > [!NOTE]
-> Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
+> Even if the storage account resolves to a different IP address, we can't guarantee that address persist because public endpoints aren't static addresses.

 #### Scenario 3: per mount configuration over private endpoint with multiple shares on single storage account (not supported)

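The supported Scenario 1 layout referenced in the hunks above corresponds to mounts like the following sketch: one share per storage account, each reached over its private endpoint, with a separate nconnect value per mount. The account names, share names, paths, and nconnect values are placeholders chosen to match the scenario listings.

```bash
# Sketch of Scenario 1 (supported): one Azure file share per storage account,
# each mounted over its private endpoint with its own nconnect setting.
# All names and values below are placeholders.
sudo mkdir -p /mount/StorageAccount1/FileShare1 /mount/StorageAccount2/FileShare1
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  StorageAccount1.file.core.windows.net:/StorageAccount1/FileShare1 \
  /mount/StorageAccount1/FileShare1
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=8 \
  StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1 \
  /mount/StorageAccount2/FileShare1
```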
@@ -157,41 +157,41 @@ fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_b

 #### High throughput: 100% reads

-**64k I/O size - random read - 64 queue depth**
+**64 KiB I/O size - random read - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
 ```

-**1024k I/O size - 100% random read - 64 queue depth**
+**1024 KiB I/O size - 100% random read - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
 ```

 #### High IOPS: 100% writes

-**4k I/O size - 100% random write - 64 queue depth**
+**4 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

-**8k I/O size - 100% random write - 64 queue depth**
+**8 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

 #### High throughput: 100% writes

-**64k I/O size - 100% random write - 64 queue depth**
+**64 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
 ```

-**1024k I/O size - 100% random write - 64 queue depth**
+**1024 KiB I/O size - 100% random write - 64 queue depth**

 ```bash
 fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
