Commit 6391539

Merge pull request #174049 from b-juche/20210929-metricsperformance-chad

Performance articles: update per Chad

2 parents: c0e37c9 + db14731

File tree

2 files changed: +15 −13 lines

articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md

Lines changed: 6 additions & 4 deletions
@@ -6,7 +6,7 @@ ms.author: b-juche
 ms.service: azure-netapp-files
 ms.workload: storage
 ms.topic: conceptual
-ms.date: 08/07/2019
+ms.date: 09/29/2021
 
 ---
 # Performance benchmark test recommendations for Azure NetApp Files
@@ -15,15 +15,17 @@ This article provides benchmark testing recommendations for volume performance a
 
 ## Overview
 
-To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate a variety of workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.
+To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate various workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.
+
+Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. Instead, use an actual application workload, a workload simulation, or benchmarking and analysis tools (for example, Oracle AWR with Oracle, or the IBM equivalent for DB2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their place in determining virtual machine-to-storage limits; for the most useful results, match the test parameters to the actual application workload mixture. However, it is always best to test with the real-world application.
 
 ### VM instance sizing
 
 For best results, ensure that you are using a virtual machine (VM) instance that is appropriately sized to perform the tests. The following examples use a Standard_D32s_v3 instance. For more information about VM instance sizes, see [Sizes for Windows virtual machines in Azure](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for Windows-based VMs, and [Sizes for Linux virtual machines in Azure](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) for Linux-based VMs.
 
 ### Azure NetApp Files volume sizing
 
-Ensure that you choose the correct service level and volume quota size for the expected performance level. See [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) for more information.
+Ensure that you choose the correct service level and volume quota size for the expected performance level. For more information, see [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md).
 
 ### Virtual network (VNet) recommendations
 
@@ -119,4 +121,4 @@ The following example shows a GET URL for viewing logical volume size:
 ## Next steps
 
 - [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md)
-- [Performance benchmarks for Linux](performance-benchmarks-linux.md)
+- [Performance benchmarks for Linux](performance-benchmarks-linux.md)
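
The benchmarks these articles describe are driven by FIO. As an illustration only, a job file along the following lines could run the 64-KiB sequential test; the mount path, job count, queue depth, and file sizes are assumptions for this sketch, not values taken from the articles.

```ini
; Hypothetical FIO job file; all values are illustrative.
; Targets an Azure NetApp Files volume assumed to be mounted at /mnt/anf.
[global]
directory=/mnt/anf
ioengine=libaio
; direct=1 bypasses the client page cache so results reflect the volume
direct=1
time_based=1
runtime=60
numjobs=4
iodepth=16
size=4G

[seq-read-64k]
rw=read
bs=64k

; For the 4-KiB random IOPS test, change to bs=4k and rw=randread,
; or use rw=randrw with rwmixread=90, 80, ... to sweep the read/write
; ratios (100%:0%, 90%:10%, and so on) described in these articles.
```

Run with `fio jobfile.fio`; FIO reports both IOPS and throughput for each job, which maps directly to the graphs in the benchmarks article.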

articles/azure-netapp-files/performance-benchmarks-linux.md

Lines changed: 9 additions & 9 deletions
@@ -13,7 +13,7 @@ ms.workload: storage
 ms.tgt_pltfrm: na
 ms.devlang: na
 ms.topic: conceptual
-ms.date: 04/29/2020
+ms.date: 09/29/2021
 ms.author: b-juche
 ---
 # Azure NetApp Files performance benchmarks for Linux
@@ -26,47 +26,47 @@ This section describes performance benchmarks of Linux workload throughput and w
 
 ### Linux workload throughput
 
-The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
+The graph below represents a 64-kibibyte (KiB) sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
 
 The graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
 
 ![Linux workload throughput](../media/azure-netapp-files/performance-benchmarks-linux-workload-throughput.png)
 
 ### Linux workload IOPS
 
-The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
+The following graph represents a 4-kibibyte (KiB) random workload and a 1 TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
 
 This graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
 
 ![Linux workload IOPS](../media/azure-netapp-files/performance-benchmarks-linux-workload-iops.png)
 
 ## Linux scale-up
 
-Linux 5.3 kernel enables single-client scale-out networking for NFS-`nconnect`. The graphs in this section show the validation testing results for the client-side mount option with NFSv3. The feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It's similar in concept to both SMB multichannel and Oracle Direct NFS.
+The graphs in this section show the validation testing results for the client-side mount option with NFSv3. For more information, see the [`nconnect` section of Linux mount options](performance-linux-mount-options.md#nconnect).
 
-The graphs compare the advantages of `nconnect` to a non-connected mounted volume. In the graphs, FIO generated the workload from a single D32s_v3 instance in the us-west2 Azure region.
+The graphs compare the advantages of `nconnect` to a non-`nconnect`ed mounted volume. In the graphs, FIO generated the workload from a single D32s_v4 instance in the us-west2 Azure region using a 64-KiB sequential workload, the largest I/O size supported by Azure NetApp Files at the time of the testing represented here. Azure NetApp Files now supports larger I/O sizes. For more details, see the [`rsize` and `wsize` section of Linux mount options](performance-linux-mount-options.md#rsize-and-wsize).
 
 ### Linux read throughput
 
-The following graphs show sequential reads of ~3,500 MiB/s reads with `nconnect`, roughly 2.3X non-`nconnect`.
+The following graphs show 64-KiB sequential reads of ~3,500 MiB/s with `nconnect`, roughly 2.3X non-`nconnect`.
 
 ![Linux read throughput](../media/azure-netapp-files/performance-benchmarks-linux-read-throughput.png)
 
 ### Linux write throughput
 
-The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v3 instance egress limit.
+The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v4 instance egress limit.
 
 ![Linux write throughput](../media/azure-netapp-files/performance-benchmarks-linux-write-throughput.png)
 
 ### Linux read IOPS
 
-The following graphs show random reads of ~200,000 read IOPS with `nconnect`, roughly 3X non-`nconnect`.
+The following graphs show 4-KiB random reads of ~200,000 read IOPS with `nconnect`, roughly 3X non-`nconnect`.
 
 ![Linux read IOPS](../media/azure-netapp-files/performance-benchmarks-linux-read-iops.png)
 
 ### Linux write IOPS
 
-The following graphs show random writes of ~135,000 write IOPS with `nconnect`, roughly 3X non-`nconnect`.
+The following graphs show 4-KiB random writes of ~135,000 write IOPS with `nconnect`, roughly 3X non-`nconnect`.
 
 ![Linux write IOPS](../media/azure-netapp-files/performance-benchmarks-linux-write-iops.png)
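
The `nconnect` comparison above can be reproduced with mounts along the following lines. This is a sketch only: the server address, export path, mount point, and the connection count of 8 are placeholder assumptions, not values from the tested configuration.

```
# Illustrative comparison of a plain NFSv3 mount vs. one using nconnect.
# Server address, export path, and option values are placeholders.
sudo mkdir -p /mnt/anf

# Baseline: a single TCP connection carries all traffic for the mount
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=65536,wsize=65536 \
  10.0.0.4:/myvolume /mnt/anf

# Scale-up: nconnect opens multiple TCP connections (here, 8) for the
# same mount, the mechanism behind the read gains shown above
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=65536,wsize=65536,nconnect=8 \
  10.0.0.4:/myvolume /mnt/anf
```

The `nconnect` option requires a kernel and distribution that support it on the NFS client; see the linked mount-options article for supported versions and recommended values.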