articles/storage/blobs/network-file-system-protocol-performance-benchmark.md
This article provides benchmark testing recommendations and results for the NFS 3.0 endpoint on Azure Blob Storage.
Storage performance testing is done to evaluate and compare different storage services. There are many ways to perform it, but the three most common ones are:
- Using standard Linux commands, typically `cp` or `dd`
- Using performance benchmark tools like `fio`, `vdbench`, or `ior`
- Using a real-world application that is used in production
No matter which method is used, it's always important to understand other potential bottlenecks in the environment and make sure they aren't affecting the results. For example, when measuring write performance, we need to make sure that the source disk can read data at least as fast as the expected write rate. The same principle applies to read performance. Ideally, these tests use a RAM disk as the source or destination. Similar considerations apply to network throughput, CPU utilization, and so on.
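The bottleneck check described above can be sketched with `dd` itself: reading from `/dev/zero` and writing to `/dev/null` takes both the source and destination disks out of the path, giving an upper bound on what the client alone can move. This is a hypothetical sanity check, not part of the original article's test setup:

```shell
# Baseline with no storage involved: /dev/zero is served from memory and
# /dev/null discards writes, so the reported rate is the client-side
# ceiling for a single-threaded copy. A storage test result above this
# number would indicate a measurement error.
dd if=/dev/zero of=/dev/null bs=1M count=1024
```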
**Using standard Linux commands** is the simplest method for performance benchmark testing, but also the least recommended. The method is simple because the tools exist in every Linux environment and users are familiar with them, but the results must be carefully analyzed, since many aspects affect them, not only storage performance. Two commands are typically used:
- Testing with the `cp` command copies one or more files from the source to the destination storage service and measures the time it takes to fully finish the operation. This command performs buffered, not direct, IO and depends on buffer sizes, the operating system, the threading model, and so on. On the other hand, some real-world applications behave in a similar way, so this sometimes represents a good use case.

- The second often used command is `dd`. The command is single threaded, so in large-scale bandwidth testing, results are limited by the speed of a single CPU core. It's possible to run multiple commands at the same time and assign them to different cores, but that complicates the testing and the aggregation of results. It's still much simpler to run than some of the performance benchmarking tools.
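Under these assumptions, a minimal `dd` sequential-write sketch could look like the following; the target directory and sizes are illustrative, not the article's actual test parameters:

```shell
# Hypothetical dd sequential-write test: 256 MiB in 1 MiB blocks.
# TARGET would normally be the NFS 3.0 mount of the blob container;
# a temporary directory stands in here so the sketch is self-contained.
TARGET="${TARGET:-$(mktemp -d)}"
# conv=fsync flushes data before dd reports the elapsed time, so the
# measured rate reflects real writes rather than the page cache.
dd if=/dev/zero of="$TARGET/dd-testfile" bs=1M count=256 conv=fsync
rm -f "$TARGET/dd-testfile"
```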
**Using performance benchmark tools** represents synthetic performance testing that is common when comparing different storage services. The tools are designed to utilize available client resources to maximize storage throughput. Most of the tools are configurable and allow mimicking real-world applications, at least the simpler ones. Mimicking real-world applications requires detailed information on application behavior and an understanding of their storage patterns.
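For instance, `fio` is driven by a small job file. The following is an illustrative sequential-read job; the mount point, size, and job count are assumptions rather than the article's actual parameters:

```ini
; seqread.fio - hypothetical sequential-read job against an NFS 3.0 mount
[global]
; assumed mount point of the blob container
directory=/nfsdata
; bypass the client page cache
direct=1
; 1 MiB blocks, suited to bandwidth testing
bs=1M
size=1G
group_reporting

[seqread]
rw=read
; four parallel readers, to use more than one CPU core
numjobs=4
```

Running `fio seqread.fio` reports aggregate bandwidth across the four jobs; the same file with `rw=randwrite` and a small block size would approximate a random IOPS test instead.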
Our testing setup was done in US East region with client virtual machine type [D
#### Results
> [!div class="mx-imgBorder"]
> ![Chart that shows performance benchmark test results for throughput.](./media/network-file-system-protocol-performance-benchmark/throughput.png)
### Measuring sequential IOPS
#### Results
> [!div class="mx-imgBorder"]
> ![Chart that shows performance benchmark test results for input and output operations per second.](./media/network-file-system-protocol-performance-benchmark/io-operations.png)
> [!NOTE]
> Results for sequential IOPS tests show values larger than the [Storage Account limits](../common/scalability-targets-standard-account.md) for requests per second. IOPS are measured on the client side, and the larger values are due to service optimizations and the sequential nature of the test.
#### Results
> [!div class="mx-imgBorder"]
> ![Chart that shows performance benchmark test results for random operations.](./media/network-file-system-protocol-performance-benchmark/random.png)
> [!NOTE]
> Results from random tests are added for completeness. The NFS 3.0 endpoint on Azure Blob Storage is not a recommended storage service for random write workloads.