
Commit 7062e7b

Author: julie.chen
Commit message: add article Performance Benchmarks for Linux
1 parent f651d06 commit 7062e7b

File tree

1 file changed: +7 additions, -8 deletions


articles/azure-netapp-files/performance-benchmarks-linux.md

Lines changed: 7 additions & 8 deletions
@@ -26,35 +26,35 @@ This section describes performance benchmarks of Linux workload throughput and w
 ### Linux workload throughput

-The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume is capable of handling between ~1,600MiB/s pure sequential writes and ~4,500MiB/s pure sequential reads.
+The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
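[Editor's note] The 64-KiB sequential benchmark described above could be approximated with an fio job along these lines. This is only a sketch: the article does not publish its exact fio parameters, and the mount point, queue depth, and runtime below are assumptions.

```ini
; Hypothetical fio job approximating the 64-KiB sequential test.
; directory, iodepth, and runtime are assumed, not from the article.
[global]
directory=/mnt/anf-volume   ; assumed mount point of the ANF volume
bs=64k                      ; 64-KiB block size, per the article
size=1t                     ; 1-TiB working set, per the article
ioengine=libaio
iodepth=32
direct=1
runtime=300
time_based

[seq-read]
rw=read

[seq-write]
stonewall                   ; run writes only after reads finish
rw=write
```

Mixed read/write ratios (90%:10%, 80%:20%, and so on) can be modeled with `rw=rw` plus `rwmixread=90`, etc.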
-The graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can anticipate when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
+The graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).

 ![Linux workload throughput](../media/azure-netapp-files/performance-benchmarks-linux-workload-throughput.png)

 ### Linux workload IOPS

-The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. The graph shows that an Azure NetApp Files volume is capable of handling between ~130,000 pure random writes and ~460,000 pure random reads.
+The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
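[Editor's note] The 4-KiB random IOPS test could similarly be sketched as an fio job; block size and working-set size come from the article, everything else here is an assumption.

```ini
; Hypothetical fio job approximating the 4-KiB random IOPS test.
; directory, iodepth, and runtime are assumed, not from the article.
[global]
directory=/mnt/anf-volume   ; assumed mount point of the ANF volume
bs=4k                       ; 4-KiB block size, per the article
size=1t                     ; 1-TiB working set, per the article
ioengine=libaio
iodepth=64
direct=1
runtime=300
time_based

[rand-read]
rw=randread

[rand-write]
stonewall                   ; run writes only after reads finish
rw=randwrite
```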
-This graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can anticipate when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
+This graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).

 ![Linux workload IOPS](../media/azure-netapp-files/performance-benchmarks-linux-workload-iops.png)

 ## Linux scale-up

-Linux 5.3 kernel enables single-client scale-out networking for NFS `nconnect`. This feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It is similar in concept to both SMB multichannel and Oracle Direct NFS.
+Linux 5.3 kernel enables single-client scale-out networking for NFS `nconnect`. This feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It's similar in concept to both SMB multichannel and Oracle Direct NFS.
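[Editor's note] `nconnect` is a client-side NFS mount option that opens multiple TCP connections to the same server. A hypothetical `/etc/fstab` entry for an ANF NFSv3 volume using it might look as follows; the server IP, export path, mount point, and connection count are placeholders, not values from the article.

```
# Hypothetical /etc/fstab entry (server IP, export path, mount point,
# and nconnect count are placeholders):
10.0.0.4:/anf-volume  /mnt/anf-volume  nfs  rw,hard,vers=3,nconnect=8,rsize=65536,wsize=65536  0  0
```

The same options can be passed directly with `mount -t nfs -o ...` on kernels that support `nconnect` (5.3 and later, as noted above).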
 The graphs in this section show the results of validation testing for the client-side mount option with NFSv3. The graphs compare `nconnect` to a non-connected mounted volume. In the graphs, FIO generated the workload from a single D32s_v3 instance in the us-west2 Azure region.

 ### Linux read throughput

-The following graphs compare sequential reads of ~3,500MiB/s of reads with `nconnect`, which is roughly 2.3X non-`nconnect`.
+The following graphs compare sequential reads of ~3,500 MiB/s of reads with `nconnect`, which is roughly 2.3X non-`nconnect`.

 ![Linux read throughput](../media/azure-netapp-files/performance-benchmarks-linux-read-throughput.png)

 ### Linux write throughput

-The following graphs show a comparison of sequential writes. They indicate that nconnect has no noticeable benefit for sequential writes. 1,500MiB/s is roughly both the upper limit for the sequential write and the egress limit for D32s_v3 instance.
+The following graphs show a comparison of sequential writes. They indicate that nconnect has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the upper limit for the sequential write and the egress limit for D32s_v3 instance.

 ![Linux write throughput](../media/azure-netapp-files/performance-benchmarks-linux-write-throughput.png)

@@ -70,7 +70,6 @@ The following graphs show random writes of ~135,000 write IOPS with `nconnect`,
 ![Linux write IOPS](../media/azure-netapp-files/performance-benchmarks-linux-write-iops.png)

-
 ## Next steps

 - [Azure NetApp Files: Getting the Most Out of Your Cloud Storage](https://cloud.netapp.com/hubfs/Resources/ANF%20PERFORMANCE%20TESTING%20IN%20TEMPLATE.pdf?hsCtaTracking=f2f560e9-9d13-4814-852d-cfc9bf736c6a%7C764e9d9c-9e6b-4549-97ec-af930247f22f)
