articles/azure-netapp-files/performance-benchmarks-linux.md

This section describes performance benchmarks of Linux workload throughput and workload IOPS.
### Linux workload throughput
The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
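Benchmarks like the one above are typically generated with a synthetic load tool such as FIO. The job file below is a minimal sketch approximating the 64-KiB sequential workload, not the exact benchmark configuration; the `/mnt/anf` mount point, job names, queue depth, and runtime are assumptions.

```ini
; Hypothetical FIO job file approximating the 64 KiB sequential workload.
; /mnt/anf is an assumed mount point for the Azure NetApp Files volume.
[global]
directory=/mnt/anf
blocksize=64k          ; 64 KiB I/O size, matching the benchmark
size=1t                ; 1 TiB working set
direct=1               ; bypass the client page cache
ioengine=libaio
iodepth=32
time_based
runtime=300

[sequential-read]
rw=read

[sequential-write]
rw=write
stonewall              ; run after the read job completes, not concurrently
```

Run it with `fio seq-64k.fio`; the `stonewall` flag keeps the pure-read and pure-write phases from overlapping so each result reflects a single workload type.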
The graph illustrates decreases of 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
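To sweep the read/write ratio in 10% steps as the graph does, FIO's `rwmixread` parameter can be varied across runs. This loop is a sketch under the same assumptions as above (an fio installation and an Azure NetApp Files volume mounted at `/mnt/anf`); it is not the exact benchmark harness.

```bash
# Sketch: step the read/write mix down 10% at a time, as in the graph.
# /mnt/anf is an assumed Azure NetApp Files mount point.
for mix in 100 90 80 70 60 50 40 30 20 10 0; do
  fio --name="seq-mix-${mix}r" --directory=/mnt/anf \
      --rw=rw --rwmixread="${mix}" --blocksize=64k \
      --size=1t --direct=1 --ioengine=libaio --iodepth=32 \
      --time_based --runtime=120
done
```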
The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random write IOPS and ~460,000 pure random read IOPS.
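The random-IOPS counterpart to the sequential job above can be sketched the same way; again, the mount point, queue depth, and runtime are assumptions rather than the benchmark's exact settings.

```ini
; Hypothetical FIO job file approximating the 4 KiB random workload.
; /mnt/anf is an assumed Azure NetApp Files mount point.
[global]
directory=/mnt/anf
blocksize=4k           ; 4 KiB I/O size, matching the benchmark
size=1t                ; 1 TiB working set
direct=1
ioengine=libaio
iodepth=64             ; random IOPS tests benefit from deeper queues
time_based
runtime=300

[random-read]
rw=randread

[random-write]
rw=randwrite
stonewall
```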
This graph illustrates decreases of 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
The Linux 5.3 kernel enables single-client scale-out networking for NFS via the `nconnect` mount option. This feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It's similar in concept to both SMB multichannel and Oracle Direct NFS.
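As a sketch, mounting an NFSv3 volume with `nconnect` looks like the following. The server IP, export name, and mount point are placeholders, and the option requires a kernel with `nconnect` support (5.3+, or a distribution backport).

```bash
# Hypothetical mount of an Azure NetApp Files NFSv3 export with nconnect.
# 10.0.0.4:/myvolume and /mnt/anf are placeholder values.
sudo mkdir -p /mnt/anf
sudo mount -t nfs \
    -o rw,hard,vers=3,rsize=262144,wsize=262144,nconnect=8 \
    10.0.0.4:/myvolume /mnt/anf

# Confirm the option took effect on the mounted volume.
mount | grep nconnect
```

`nconnect=8` asks the client to open eight TCP connections to the server instead of one; the kernel caps the value at 16.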
The graphs in this section show the results of validation testing for the client-side mount option with NFSv3. The graphs compare `nconnect` to a volume mounted without `nconnect`. In the graphs, FIO generated the workload from a single D32s_v3 instance in the us-west2 Azure region.
### Linux read throughput
The following graphs compare sequential reads with and without `nconnect`. With `nconnect`, reads reach ~3,500 MiB/s, roughly 2.3x the non-`nconnect` throughput.
### Linux write throughput
The following graphs show a comparison of sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. ~1,500 MiB/s is roughly both the upper limit for sequential writes and the egress limit for a D32s_v3 instance.