articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md
+7 −47 (7 additions, 47 deletions)
@@ -5,7 +5,7 @@ author: b-hchen
 ms.author: anfdocs
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 05/08/2023
+ms.date: 10/31/2024
 ---
 # Performance benchmark test recommendations for Azure NetApp Files

@@ -16,9 +16,9 @@ This article provides benchmark testing recommendations for volume performance a
 To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate various workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.

 > [!IMPORTANT]
-> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for DB2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machines to storage limits, matching the parameters of the test to the actual application workload mixtures for most useful results. However, it is always best to test with the real-world application.
+> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for Db2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machine-to-storage limits, matching the parameters of the test to the actual application workload mixture for the most useful results. However, it is always best to test with the real-world application.

-### VM instance sizing
+### Virtual machine (VM) instance sizing

 For best results, ensure that you are using a virtual machine (VM) instance that is appropriately sized to perform the tests. The following examples use a Standard_D32s_v3 instance. For more information about VM instance sizes, see [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-network%2ftoc.json) for Windows-based VMs, and [Sizes for Linux virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) for Linux-based VMs.

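As a quick illustration of the kind of FIO run described above, here is a minimal sketch of a 4 KiB random-read IOPS test. The mount path `/mnt/anfvol`, file sizes, and job counts are assumptions for illustration only, not values taken from the article.

```bash
# Minimal sketch (assumed values): 4 KiB random reads against an Azure NetApp Files
# volume mounted at /mnt/anfvol. FIO lays out its own test files in this directory.
# --direct=1 bypasses the client page cache so the result reflects the volume, not memory.
fio --name=randread-iops \
    --directory=/mnt/anfvol \
    --rw=randread \
    --bs=4k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=16 \
    --numjobs=16 \
    --size=4g \
    --runtime=60 \
    --time_based \
    --group_reporting
```

The `IOPS=` figure in the job summary gives the quick snapshot the article refers to.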
@@ -48,50 +48,9 @@ Follow the Getting started section in the SSB README file to install for the pla
 
 ### FIO
 
-Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification.
+Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification. FIO is available in binary format for both Linux and Windows.
 
-FIO is available in binary format for both Linux and Windows.
-
-#### Installation of FIO
-
-Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
-
-#### FIO examples for IOPS
-
-The FIO examples in this section use the following setup:
-* VM instance size: D32s_v3
-* Capacity pool service level and size: Premium / 50 TiB
-* Volume quota size: 48 TiB
-
-The following examples show the FIO random reads and writes.
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
-
-#### FIO examples for bandwidth
-
-The examples in this section show the FIO sequential reads and writes.
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+For more information, see [Understand Azure NetApp Files testing methodology](testing-methodology.md).
 
 ## Volume metrics
 
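For the sequential-throughput case that the removed bandwidth section covered, a comparable sketch swaps in a larger block size and sequential access; again, the path and job parameters are illustrative assumptions, not the removed commands.

```bash
# Minimal sketch (assumed values): 64 KiB sequential reads to observe throughput (MiB/s)
# rather than IOPS; the aggregate bandwidth appears in the group summary.
fio --name=seqread-bw \
    --directory=/mnt/anfvol \
    --rw=read \
    --bs=64k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=16 \
    --numjobs=8 \
    --size=4g \
    --runtime=60 \
    --time_based \
    --group_reporting
```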
@@ -122,10 +81,11 @@ You can access Azure NetApp Files counters by using REST API calls. See [Support
 The following example shows a GET URL for viewing logical volume size:
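The GET URL itself falls outside this diff; as a hedged sketch of the general Azure Monitor metrics REST pattern for a volume (the resource path placeholders and the `VolumeLogicalSize` metric name are assumptions, not the article's exact example):

```bash
# Illustrative only: query a volume metric through the Azure Monitor metrics REST API.
# $TOKEN is assumed to hold a valid Azure Resource Manager bearer token.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>/providers/microsoft.insights/metrics?metricnames=VolumeLogicalSize&api-version=2018-01-01"
```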
articles/azure-netapp-files/large-volumes.md
+2 −1 (2 additions, 1 deletion)
@@ -55,10 +55,11 @@ Large volumes allow workloads to extend beyond the current limitations of regula
 | Volume type | Primary use cases |
 | - | -- |
 | Regular volumes | <ul><li>General file shares</li><li>SAP HANA and databases (Oracle, SQL Server, Db2, and others)</li><li>VDI/Azure VMware Service</li><li>Capacities less than 50 TiB</li></ul> |
-| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, FSI)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
+| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, financial services)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
 
 ## More information
 
 * [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
 * [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
 * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Understand workload types in Azure NetApp Files](workload-types.md)