articles/azure-netapp-files/performance-benchmarks-linux.md (11 additions, 9 deletions)
@@ -6,7 +6,7 @@ author: b-hchen
ms.service: azure-netapp-files
ms.custom: linux-related-content
ms.topic: conceptual
-ms.date: 01/24/2025
+ms.date: 01/27/2025
ms.author: anfdocs
---
# Azure NetApp Files regular volume performance benchmarks for Linux
@@ -143,13 +143,13 @@ In the graph below, testing shows that an Azure NetApp Files regular volume can
:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-write.png" alt-text="Diagram of 64-KiB benchmark tests with sequential I/O and caching included." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-write.png":::

-### Results: 64 KiB sequential I/O without caching
+### Results: 64 KiB sequential I/O, reads vs. write, baseline without caching

-In this benchmark, FIO ran using looping logic that less aggressively populated the cache. Client caching didn't influence the results. This configuration results in slightly better write performance numbers, but lower read numbers than tests without caching.
+In this baseline benchmark, testing demonstrates that an Azure NetApp Files regular volume can handle approximately 3,600 MiB/s of pure sequential 64-KiB reads and approximately 2,400 MiB/s of pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.

-In the following graph, testing demonstrates that an Azure NetApp Files regular volume can handle between approximately 3,600 MiB/s pure sequential 64-KiB reads and approximately 2,400 MiB/s pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.
+For pure reads, the 64-KiB baseline performed slightly better than the 256-KiB baseline. For pure writes and all mixed read/write workloads, however, the 256-KiB baseline outperformed 64 KiB, indicating that the larger 256-KiB block size is more effective overall for high-throughput workloads.

-The read-write mix for the workload was adjusted by 25% for each run.
+The read/write mix for the workload was adjusted in 25% increments for each run.

:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-write-no-cache.png" alt-text="Diagram of 64-KiB benchmark tests with sequential I/O, caching excluded." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-write-no-cache.png":::
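The FIO job below isn't part of the diff; it's a minimal sketch of the kind of 64-KiB sequential read/write run the baseline above describes, using direct I/O to keep the client cache out of the measurement. The mount path, dataset size, job count, queue depth, and runtime are illustrative assumptions, not the documented benchmark configuration.

```bash
# Illustrative sketch only: 64-KiB sequential mixed read/write against an
# Azure NetApp Files volume mounted at /mnt/anf-vol (hypothetical path).
# --direct=1 bypasses the client page cache; --rwmixread is varied per run
# (100/75/50/25/0) to sweep the read/write mix in 25% increments.
fio --name=seq-64k-baseline \
    --directory=/mnt/anf-vol \
    --rw=rw --rwmixread=50 \
    --bs=64k --direct=1 \
    --ioengine=libaio --iodepth=16 --numjobs=8 \
    --size=8g --time_based --runtime=300 \
    --group_reporting
```

The per-mix throughput (MiB/s) comes from the FIO summary of each run.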
@@ -165,14 +165,16 @@ In this graph, testing shows that an Azure NetApp Files regular volume can handl
The following tests show a high I/OP benchmark using a single client with 64-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost performance for a single-client workload, the `nconnect` mount option was used for better parallelism compared with client mounts that didn't use the `nconnect` mount option. These tests were run only with caching excluded.

-### Results: 64 KiB, sequential, caching excluded, with and without `nconnect`
+To demonstrate how caching influences performance results, FIO was used in the following micro benchmark comparison to measure the amount of sequential I/O (read and write) a single regular volume in Azure NetApp Files can deliver. This test is contrasted with the benefits a partially cacheable workload may provide.

-The following results show a scale-up test's results when reading and writing in 4-KiB chunks on a NFSv3 mount on a single client with and without parallelization of operations (`nconnect`). The graphs show that as the I/O depth grows, the I/OPS also increase. But when using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections per mount point. In addition, the total latency for the operations is generally lower when using `nconnect`.
+In the result without caching, testing was designed to mitigate any caching taking place, as described in the baseline benchmarks above.
+In the other result, FIO was used against Azure NetApp Files regular volumes without the `randrepeat=0` parameter and with a looping test iteration logic that slowly populated the cache over time. The combination of these factors produced an indeterminate amount of caching, boosting the overall throughput. This configuration resulted in slightly better overall read performance numbers than tests run without caching.

-:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-no-cache-no-nconnect.png" alt-text="Diagram comparing 64-KiB tests without nconnect or caching." lightbox="./media/performance-benchmarks-linux/64K-sequential-no-cache-no-nconnect.png":::
+The test results displayed in the graph show a side-by-side comparison of read performance with and without the caching influence: caching produced up to approximately 4,500 MiB/s of read throughput, while no caching achieved around 3,600 MiB/s.

-:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-no-cache-nconnect.png" alt-text="Diagram of 64-KiB tests with nconnect but no caching." lightbox="./media/performance-benchmarks-linux/64K-sequential-no-cache-nconnect.png":::
+:::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-nconnect-compare.png" alt-text="Diagram comparing 64-KiB sequential reads and writes." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-nconnect-compare.png":::

### Side-by-side comparison (with and without `nconnect`)
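Not part of the diff: as context for the `nconnect` comparison and the cached vs. uncached results above, the sketch below shows how an NFSv3 mount with `nconnect` and a cache-avoiding FIO read job might look. The server IP, export name, mount option values, and FIO parameters are placeholder assumptions rather than the documented test setup.

```bash
# Illustrative sketch only: mount an NFSv3 volume with nconnect to open
# multiple TCP connections per mount point (all values are placeholders).
sudo mkdir -p /mnt/anf-vol
sudo mount -t nfs -o rw,hard,vers=3,rsize=262144,wsize=262144,nconnect=8 \
    10.0.0.4:/anf-vol /mnt/anf-vol

# A cache-avoiding sequential 64-KiB read job. --direct=1 bypasses the client
# page cache; --randrepeat=0 varies the random seed between runs (it mainly
# matters for random workloads). The cached comparison in the diff omits
# randrepeat=0 and loops over the same data, allowing part of the workload
# to be served from cache.
fio --name=seq-64k-read \
    --directory=/mnt/anf-vol \
    --rw=read --bs=64k --direct=1 --randrepeat=0 \
    --ioengine=libaio --iodepth=16 --numjobs=8 \
    --size=8g --time_based --runtime=300 \
    --group_reporting
```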