This article describes performance benchmarks Azure NetApp Files delivers for Linux.
The intent of a scale-out test is to show the performance of an Azure NetApp Files volume when scaling out (or increasing) the number of clients generating simultaneous workload to the same volume. These tests are generally able to push a volume to the edge of its performance limits and are indicative of workloads such as media rendering, AI/ML, and other workloads that utilize large compute farms to perform work.
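As a rough sketch of how this kind of multi-client load generation can be orchestrated (the exact harness used for these benchmarks isn't described here), FIO's client/server mode can drive several machines against the same volume from a single controller. The hostnames, mount path, and job parameters below are placeholders.

```bash
# Hypothetical scale-out driver: each client VM has the same Azure NetApp Files
# volume mounted at /mnt/anf and runs "fio --server" to accept remote jobs.
cat > scaleout-randread.fio <<'EOF'
[scaleout-randread]
directory=/mnt/anf
rw=randread
bs=4k
size=64g
numjobs=4
time_based
runtime=300
group_reporting
EOF

# From a controller node, fan the same job out to every client (placeholder names).
fio --client=client1 --client=client2 --client=client3 \
    --client=client4 --client=client5 --client=client6 \
    scaleout-randread.fio
```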
## High IOP scale-out benchmark configuration
These benchmarks used the following:
- A single Azure NetApp Files 100-TiB regular volume with a 1-TiB dataset using the Ultra performance tier
- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
- 4-KiB and 8-KiB block sizes
- 6 D32s_v5 virtual machines running RHEL 9.3
## High throughput scale-out benchmark configuration
These benchmarks used the following:
- A single Azure NetApp Files regular volume with a 1-TiB dataset using the Ultra performance tier
- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
- 64-KiB and 256-KiB block sizes
- [Manual QoS](manage-manual-qos-capacity-pool.md)
- Mount options: `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg`
## Scale-up benchmark configuration

These benchmarks used the following:
- A single Azure NetApp Files regular volume with a 1-TiB dataset using the Ultra performance tier
- FIO (with and without setting randrepeat=0)
- 4-KiB and 64-KiB wsize/rsize
- A single D32s_v4 virtual machine running RHEL 9.3
- NFSv3 with and without `nconnect`
- Mount options: `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg`
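For reference, the mount options listed in these configurations correspond to a mount command along the following lines. The mount target IP address, export path, and mount point are placeholders, not values from the benchmark environment.

```bash
# Hypothetical example: substitute your volume's mount target IP, export path,
# and local mount point. nconnect requires a kernel with NFS nconnect support
# (Linux 5.3 or later, or a distribution kernel that backports it).
sudo mkdir -p /mnt/anf
sudo mount -t nfs \
  -o rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
  10.0.0.4:/benchmark-vol /mnt/anf
```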
## Scale-up benchmark tests
The scale-up test’s intent is to show the performance of an Azure NetApp Files volume when scaling up (or increasing) the number of jobs generating simultaneous workload across multiple TCP connections on a single client to the same volume (such as with [`nconnect`](performance-linux-mount-options.md#nconnect)).
Without `nconnect`, these workloads can't push the limits of a volume’s maximum performance, since the client can't generate enough IO or network throughput. These tests are generally indicative of what a single user’s experience might be in workloads such as media rendering, databases, AI/ML, and general file shares.
## High IOP scale-out benchmarks
For more information, see [Testing methodology](testing-methodology.md).
In this benchmark, FIO ran without the `randrepeat` option to randomize data. Thus, an indeterminate amount of caching came into play. This configuration results in slightly better overall performance numbers than tests run without caching, where the entire I/O stack is utilized.
In the following graph, testing shows an Azure NetApp Files regular volume can handle between approximately 130,000 pure random 4-KiB writes and approximately 460,000 pure random 4-KiB reads during this benchmark. The read-write mix for the workload was adjusted by 10% for each run.
As the read-write IOP mix increases towards write-heavy, the total IOPS decrease.
In this benchmark, FIO was run with the setting `randrepeat=0` to randomize data, reducing the caching influence on performance. This resulted in an approximately 8% reduction in write IOPS and an approximately 17% reduction in read IOPS, but displays performance numbers more representative of what the storage can actually do.
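As a sketch of how such a run can be expressed with FIO (the path, size, job count, and runtime below are placeholders rather than the benchmark's exact job definitions), a 4-KiB random job with a 90% read and 10% write mix and `randrepeat=0` looks roughly like this:

```bash
# Hypothetical 4-KiB random workload with a 90% read / 10% write mix.
# randrepeat=0 varies the random pattern between runs, which reduces how much
# the client cache can help; adjust rwmixread per run to step through the mix.
fio --name=randrw-4k --directory=/mnt/anf \
    --rw=randrw --rwmixread=90 --bs=4k --size=64g \
    --randrepeat=0 --numjobs=8 \
    --time_based --runtime=300 --group_reporting
```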
In the following graph, testing shows an Azure NetApp Files regular volume can handle between approximately 120,000 pure random 4-KiB writes and approximately 388,000 pure random 4-KiB reads. The read-write mix for the workload was adjusted by 25% for each run.
As the read-write IOP mix increases towards write-heavy, the total IOPS decrease.
Larger read and write sizes will result in fewer total IOPS, as more data can be sent with each operation. An 8-KiB read and write size was used to more accurately simulate what most modern applications use. For instance, many EDA applications utilize 8-KiB reads and writes.
In this benchmark, FIO ran with `randrepeat=0` to randomize data so the client caching impact was reduced. In the following graph, testing shows that an Azure NetApp Files regular volume can handle between approximately 111,000 pure random 8-KiB writes and approximately 293,000 pure random 8-KiB reads. The read-write mix for the workload was adjusted by 25% for each run.
As the read-write IOP mix increases towards write-heavy, the total IOPS decrease.
## Side-by-side comparisons
To illustrate how caching can influence the performance benchmark tests, the following graph shows total I/OPS for 4-KiB tests with and without caching mechanisms in place. As shown, caching provides a slight performance boost, with fairly consistent I/OPS trending.
## Specific offset, streaming random read/write workloads: scale-up tests using parallel network connections (`nconnect`)
The following tests show a high IOP benchmark using a single client with 4-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost the performance for a single client workload, the [`nconnect` mount option](performance-linux-mount-options.md#nconnect) was used to improve parallelism in comparison to client mounts without the `nconnect` mount option.
When using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections (such as with `nconnect`) per mount point. When using `nconnect`, the total latency for the operations is generally lower. These tests are also run with `randrepeat=0` to intentionally avoid caching. For more information on this option, see [Testing methodology](testing-methodology.md).
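A minimal sketch of one way to structure that comparison: run the same 4-KiB random job against the volume mounted first without and then with `nconnect`, sweeping the queue depth. The IP address, export path, depths, and runtimes are illustrative assumptions, not the benchmark's exact parameters.

```bash
# Hypothetical queue-depth sweep (direct I/O keeps the client cache out of play
# so iodepth is exercised asynchronously via libaio).
run_sweep () {
  local label=$1
  for depth in 1 4 8 16 32 64; do
    fio --name=${label}-qd${depth} --directory=/mnt/anf \
        --rw=randrw --rwmixread=50 --bs=4k --size=64g \
        --ioengine=libaio --direct=1 --iodepth=${depth} --numjobs=4 \
        --randrepeat=0 --time_based --runtime=120 --group_reporting
  done
}

# Baseline: a single TCP connection per mount.
sudo mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
  10.0.0.4:/benchmark-vol /mnt/anf
run_sweep single-conn
sudo umount /mnt/anf

# Repeat with nconnect=8 to spread the same workload across parallel TCP connections.
sudo mount -t nfs -o rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
  10.0.0.4:/benchmark-vol /mnt/anf
run_sweep nconnect8
```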
### Results: 4-KiB, random, with and without `nconnect`, caching excluded
The following graphs show a side-by-side comparison of 4-KiB reads and writes with and without `nconnect` to highlight the performance improvements seen when using `nconnect`: higher overall IOPS, lower latency.
## High throughput benchmarks
The following benchmarks show the performance achieved for Azure NetApp Files with a high throughput workload.
High throughput workloads are more sequential in nature and often are read/write heavy with low metadata. Throughput is generally more important than I/OPS. These workloads typically leverage larger read/write sizes (64K to 256K), which generate higher latencies than smaller read/write sizes, since larger payloads will naturally take longer to be processed.
Examples of high throughput workloads include:
- Media repositories
- High performance compute
- AI/ML/LLP
The following tests show a high throughput benchmark using both 64-KiB and 256-KiB sequential workloads and a 1-TiB dataset. The workload mix generated decreases a set percentage at a time and demonstrates what you can expect when using varying read/write ratios (for instance, 100%:0%, 90%:10%, 80%:20%, and so on).
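One way to script that kind of ratio sweep is sketched below; the path, file size, queue depth, and runtime are placeholder assumptions, and the same loop applies to 256-KiB workloads by changing `bs`.

```bash
# Hypothetical 64-KiB sequential throughput sweep, stepping the read share down
# from 100% to 0% (100%:0%, 90%:10%, 80%:20%, and so on).
for readpct in 100 90 80 70 60 50 40 30 20 10 0; do
  fio --name=seq-64k-read${readpct} --directory=/mnt/anf \
      --rw=rw --rwmixread=${readpct} --bs=64k --size=64g \
      --ioengine=libaio --direct=1 --iodepth=16 --numjobs=4 \
      --time_based --runtime=120 --group_reporting
done
```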
### Results: 64-KiB sequential I/O, caching included
In this benchmark, FIO ran using looping logic that more aggressively populated the cache, so an indeterminate amount of caching influenced the results. This results in slightly better overall performance numbers than tests run without caching.
In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 4,500 MiB/s pure sequential 64-KiB reads and approximately 1,600 MiB/s pure sequential 64-KiB writes. The read-write mix for the workload was adjusted by 10% for each run.
### Results: 64-KiB sequential I/O, caching excluded

In this benchmark, FIO ran using looping logic that less aggressively populated the cache, so caching didn't influence the results.
In the following graph, testing demonstrates that an Azure NetApp Files regular volume can handle between approximately 3,600 MiB/s pure sequential 64-KiB reads and approximately 2,400 MiB/s pure sequential 64-KiB writes. During the tests, a 50/50 mix showed total throughput on par with a pure sequential read workload.
The read-write mix for the workload was adjusted by 25% for each run.
### Results: 256-KiB sequential I/O, caching excluded

In this benchmark, FIO ran using looping logic that less aggressively populated the cache, so caching didn't influence the results. This configuration results in slightly lower write performance numbers than 64-KiB tests, but higher read numbers than the same 64-KiB tests run without caching.
In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 3,500 MiB/s pure sequential 256-KiB reads and approximately 2,500 MiB/s pure sequential 256-KiB writes. During the tests, a 50/50 mix showed total throughput peaking higher than a pure sequential read workload.
The read-write mix for the workload was adjusted in 25% increments for each run.
### Side-by-side comparison
To better show how caching can influence the performance benchmark tests, the following graph shows total MiB/s for 64-KiB tests with and without caching mechanisms in place. Caching provides an initial slight performance boost for total MiB/s because caching generally improves reads more so than writes. As the read/write mix changes, the total throughput without caching exceeds the results that utilize client caching.
## Parallel network connections (`nconnect`)
The following tests show a high IOP benchmark using a single client with 64-KiB random workloads and a 1-TiB dataset. The workload mix generated uses a different I/O depth each time. To boost the performance for a single client workload, the `nconnect` mount option was leveraged for better parallelism in comparison to client mounts that didn't use the `nconnect` mount option. These tests were run only with caching excluded.
### Results: 64-KiB, sequential, caching excluded, with and without `nconnect`
The following results show a scale-up test reading and writing in 4-KiB chunks on an NFSv3 mount on a single client, with and without parallelization of operations (`nconnect`). The graphs show that as the I/O depth grows, the I/OPS also increase. But when using a standard TCP connection that provides only a single path to the storage, fewer total operations are sent per second than when a mount is able to leverage more TCP connections per mount point. In addition, the total latency for the operations is generally lower when using `nconnect`.
### Side-by-side comparison (with and without `nconnect`)
The following graphs show a side-by-side comparison of 64-KiB sequential reads and writes with and without `nconnect` to highlight the performance improvements seen when using `nconnect`: higher overall throughput, lower latency.