As the read-write I/OP mix increases towards write-heavy, the total I/OPS decrease.

:::image type="content" source="./media/performance-benchmarks-linux/8K-random-iops-no-cache.png" alt-text="Diagram of benchmark tests with 8 KiB, random, client caching excluded." lightbox="./media/performance-benchmarks-linux/8K-random-iops-no-cache.png":::
<!--
## Side-by-side comparisons

To illustrate how caching can influence the performance benchmark tests, the following graph shows total I/OPS for 4-KiB tests with and without caching mechanisms in place. As shown, caching provides a slight performance boost with fairly consistent I/OPS trending.
-->

## Results: 256 KiB sequential I/O, baseline without caching

In the following two baseline benchmarks, FIO was used to measure the amount of sequential I/O (read and write) that a single regular volume in Azure NetApp Files can deliver. To produce a baseline that reflects the true bandwidth a fully uncached read workload can achieve, FIO was configured to run with the parameter `randrepeat=0` for dataset generation. In addition, each test iteration was offset by reading a completely separate large dataset that was not part of the benchmark, in order to clear any caching that might have occurred with the benchmark dataset.
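
To make the setup concrete, the following is a minimal sketch of an uncached sequential baseline of this kind. The mount path, file size, queue depth, job count, and cache-clearing file are illustrative assumptions, not the exact parameters used in these tests.

```bash
# Hypothetical 256-KiB sequential read baseline; paths, sizes, and job counts are placeholders.
fio --name=seq-read-baseline \
    --directory=/mnt/anf-vol1 \
    --rw=read --bs=256k \
    --ioengine=libaio --iodepth=64 --numjobs=4 \
    --size=100g --randrepeat=0 \
    --time_based --runtime=300 --group_reporting

# Between iterations, read a separate large dataset that is not part of the benchmark
# so any cached benchmark data is evicted before the next run.
dd if=/mnt/anf-vol1/cache-clear/large-unrelated-file.bin of=/dev/null bs=1M
```
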
In the following graph, testing shows that an Azure NetApp Files regular volume can handle approximately 3,500 MiB/s of pure sequential 256-KiB reads and approximately 2,500 MiB/s of pure sequential 256-KiB writes. During the tests, a 50/50 mix showed total throughput peaking higher than a pure sequential read workload.
[keep graph]
## Results: 64 KiB sequential I/O, reads vs. writes, baseline without caching

In this next baseline benchmark, testing demonstrates that an Azure NetApp Files regular volume can handle approximately 3,600 MiB/s of pure sequential 64-KiB reads and approximately 2,400 MiB/s of pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.
With respect to pure reads, the 64-KiB baseline performed slightly better than the 256-KiB baseline. However, for pure writes and all mixed read/write workloads, the 256-KiB baseline outperformed the 64-KiB baseline, indicating that the larger 256-KiB block size is more effective overall for high-throughput workloads.
The read-write mix for the workload was adjusted in 25% increments for each run.
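
As a rough illustration, a sweep of this kind could be scripted as shown in the following sketch; the mount path, block size, and runtime are assumptions for illustration, not the exact test parameters.

```bash
# Hypothetical sweep of the read/write mix in 25% steps, from 100% read to 100% write.
for mix in 100 75 50 25 0; do
  fio --name="seq-mix-${mix}-read" \
      --directory=/mnt/anf-vol1 \
      --rw=rw --rwmixread="${mix}" \
      --bs=64k --ioengine=libaio --iodepth=64 --numjobs=4 \
      --size=100g --randrepeat=0 \
      --time_based --runtime=300 --group_reporting
done
```
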
To demonstrate how caching influences performance results, FIO was used in the following microbenchmark comparison to measure the amount of sequential I/O (read and write) that a single regular volume in Azure NetApp Files can deliver. This is contrasted below with the benefits that a partially cacheable workload can provide.
On the right, testing was designed to mitigate any caching from taking place, as described in the baseline benchmarks above.

On the left, FIO was used against Azure NetApp Files regular volumes without the `randrepeat=0` parameter, using looping test iteration logic that slowly populated the cache over time. This combination produced an indeterminate amount of caching that boosted overall throughput, resulting in slightly better overall read performance than tests run without caching.
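
A minimal sketch of such a cache-influenced variant, under the same placeholder paths and sizes as the earlier sketches, might look like this: `randrepeat` is left at its default and the same job is simply rerun so the cache warms across passes.

```bash
# Hypothetical cache-influenced variant: randrepeat stays at its default, and looping the
# same job lets repeated passes over the same data gradually populate the cache.
for pass in 1 2 3; do
  fio --name="seq-read-cached-pass${pass}" \
      --directory=/mnt/anf-vol1 \
      --rw=read --bs=256k \
      --ioengine=libaio --iodepth=64 --numjobs=4 \
      --size=100g \
      --time_based --runtime=300 --group_reporting
done
```
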
The following graph displays a side-by-side comparison of read performance with and without the caching influence. Caching produced up to approximately 4,500 MiB/s of read throughput, while the uncached tests achieved approximately 3,600 MiB/s.
## Specific offset, streaming random read/write workloads: scale-up tests using parallel network connections (`nconnect`)
The following tests show a high I/OP benchmark using a single client with 4-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost performance for a single-client workload, the [`nconnect` mount option](performance-linux-mount-options.md#nconnect) was used to improve parallelism compared with client mounts without the `nconnect` option.
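
For context, a mount that uses `nconnect` might look like the following sketch; the server IP address, export path, NFS version, and `nconnect` value are placeholders rather than the exact settings used in these tests.

```bash
# Hypothetical NFS mount using nconnect to open multiple TCP connections for a single
# mount, which increases parallelism for a single-client workload.
sudo mkdir -p /mnt/anf-vol1
sudo mount -t nfs -o rw,hard,tcp,vers=3,nconnect=8,rsize=262144,wsize=262144 \
    10.0.0.4:/anf-vol1 /mnt/anf-vol1
```
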