articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md

---
author: b-hchen
ms.author: anfdocs
ms.service: azure-netapp-files
ms.topic: conceptual
ms.date: 10/31/2024
---
# Performance benchmark test recommendations for Azure NetApp Files
This article provides benchmark testing recommendations for volume performance and metrics using Azure NetApp Files.

To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate various workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.
> [!IMPORTANT]
> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analysis tools (for example, Oracle AWR with Oracle, or the IBM equivalent for Db2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machine-to-storage limits, matching the parameters of the test to the actual application workload mixture for the most useful results. However, it is always best to test with the real-world application.
### Virtual machine (VM) instance sizing
For best results, ensure that you are using a virtual machine (VM) instance that is appropriately sized to perform the tests. The following examples use a Standard_D32s_v3 instance. For more information about VM instance sizes, see [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-network%2ftoc.json) for Windows-based VMs, and [Sizes for Linux virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) for Linux-based VMs.

Follow the Getting started section in the SSB README file to install for the platform of your choice.
### FIO
Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification. FIO is available in binary format for both Linux and Windows.

For more information, see [Understand Azure NetApp Files testing methodology](testing-methodology.md).
## Volume metrics

The following example shows a GET URL for viewing the logical volume size:
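
The URL itself was collapsed out of this view. As a sketch, a metrics GET URL for an Azure NetApp Files volume typically follows the Azure Monitor REST pattern below; the metric name `VolumeLogicalSize`, the `api-version`, and all resource names are placeholder assumptions to adapt to your environment:

```shell
# Sketch only: assembles an Azure Monitor metrics GET URL for a volume's
# logical size. Replace the placeholders with your own identifiers; the
# metric name and api-version are assumptions, not taken from this article.
SUB="{subscriptionId}"
RG="{resourceGroupName}"
ACCT="{accountName}"
POOL="{poolName}"
VOL="{volumeName}"
URL="https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.NetApp/netAppAccounts/$ACCT/capacityPools/$POOL/volumes/$VOL/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=VolumeLogicalSize"
echo "$URL"
```

You can then issue the request with a bearer token, for example via `az rest --method get --url "$URL"` if you use the Azure CLI.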

## Next steps

* [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md)
* [Performance benchmarks for Linux](performance-benchmarks-linux.md)

articles/azure-netapp-files/large-volumes.md

Large volumes allow workloads to extend beyond the current limitations of regular volumes.

| Volume type | Primary use cases |
| - | -- |
| Regular volumes | <ul><li>General file shares</li><li>SAP HANA and databases (Oracle, SQL Server, Db2, and others)</li><li>VDI/Azure VMware Service</li><li>Capacities less than 50 TiB</li></ul> |
| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, financial services)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |

## More information
* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
* [Understand workload types in Azure NetApp Files](workload-types.md)

articles/azure-netapp-files/testing-methodology.md

---
title: Understand performance testing methodology in Azure NetApp Files
description: Learn how Azure NetApp Files benchmark tests are conducted.
services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: conceptual
ms.date: 10/31/2024
ms.author: anfdocs
---
# Understand performance testing methodology in Azure NetApp Files
The benchmark tool used in these tests is called [Flexible I/O Tester (FIO)](https://fio.readthedocs.io/en/latest/fio_doc.html).

When testing the edges of performance limits for storage, workload generation must be **highly parallelized** to achieve the maximum results possible.

That means:

- one to many clients
- multiple CPUs
- multiple threads
- performing I/O to multiple files
- multithreaded network connections (such as `nconnect`)
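
As a sketch, assuming illustrative flag values (not tuned recommendations), a FIO invocation embodying these points could look like the following. The command is echoed rather than executed so you can review it before running it against a mounted volume:

```shell
# Illustrative only: many jobs (CPUs/threads), deep queues, and multiple
# files per job, matching the parallelization points above. Pair this with
# an NFS mount using the nconnect option for multithreaded connections.
FIO_PARALLEL="fio --name=parallel-seq-read --rw=read --bs=64k --direct=1 \
 --ioengine=libaio --numjobs=16 --iodepth=64 --nrfiles=4 --size=4G \
 --runtime=300 --group_reporting"
echo "$FIO_PARALLEL"
```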

The end goal is to push the storage system as far as it can go before operations must begin to wait for other operations to finish. A single client traversing a single network flow, or reading from or writing to a single file (for instance, using `dd` or `diskspd` on a single client), doesn't deliver results indicative of Azure NetApp Files' capabilities. Instead, such a setup shows the performance of a single file, which generally trends with line speed and/or the Azure NetApp Files [QoS settings](azure-netapp-files-understand-storage-hierarchy.md#qos_types).

In addition, caching must be minimized as much as possible to achieve accurate, representative results of what the storage can accomplish. However, caching is a very real tool for modern applications to perform at their absolute best. These tests cover scenarios with some caching and with caching bypassed for random I/O workloads, using randomization of the workload via FIO options (specifically, `randrepeat=0` to prevent caching on the storage and [directio](performance-linux-direct-io.md) to prevent client caching).
## About Flexible I/O tester

Flexible I/O Tester (FIO) is an open-source workload generation tool commonly used for storage benchmarking due to its ease of use and flexibility in defining workload patterns. For information about its use with Azure NetApp Files, see [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md).
### Installation of FIO
Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
### FIO examples for IOPS
The FIO examples in this section use the following setup:
* VM instance size: D32s_v3
* Capacity pool service level and size: Premium / 50 TiB
* Volume quota size: 48 TiB

The following examples show the FIO random reads and writes.
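
The command lines were collapsed out of this view; the following is a representative sketch, with block size, job count, queue depth, and runtime as assumptions patterned on common 8 KiB random I/O tests rather than the article's exact values. The commands are echoed for review; run them from a client with the volume mounted:

```shell
# Sketch: 8 KiB random-read and random-write IOPS tests (illustrative values).
FIO_RANDREAD="fio --name=8krandomreads --rw=randread --direct=1 \
 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G \
 --runtime=600 --group_reporting"
FIO_RANDWRITE="fio --name=8krandomwrites --rw=randwrite --direct=1 \
 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G \
 --runtime=600 --group_reporting"
echo "$FIO_RANDREAD"
echo "$FIO_RANDWRITE"
```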
For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
### FIO examples for bandwidth
The examples in this section show the FIO sequential reads and writes.
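
As with the IOPS examples, the sequential command lines were collapsed here; this sketch assumes a typical 64 KiB sequential profile (all parameter values are illustrative). The commands are echoed for review; run them from a client with the volume mounted:

```shell
# Sketch: 64 KiB sequential read and write bandwidth tests (illustrative values).
FIO_SEQREAD="fio --name=64kseqreads --rw=read --direct=1 \
 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G \
 --runtime=600 --group_reporting"
FIO_SEQWRITE="fio --name=64kseqwrites --rw=write --direct=1 \
 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G \
 --runtime=600 --group_reporting"
echo "$FIO_SEQREAD"
echo "$FIO_SEQWRITE"
```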
For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
## Caching with FIO

FIO can be run with specific options to control how a performance benchmark reads and writes files. In the benchmark tests with caching excluded, the FIO flag `randrepeat=0` was used to avoid caching by running a truly random workload rather than a repeated pattern.

By default, when `randrepeat` isn't defined, FIO sets the value to true, meaning that the data produced in the files isn't truly random. Filesystem caches are therefore utilized to improve overall performance of the workload.

In the original benchmarks for Azure NetApp Files, `randrepeat` wasn't defined, so some filesystem caching was implemented. In the new tests, this option is set to `0` (false) to ensure there is adequate randomness in the data to avoid filesystem caches in the Azure NetApp Files service. This results in slightly lower overall numbers, but is a more accurate representation of what the storage itself is capable of when caching is bypassed.
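
Combining the two options discussed in this article, a cache-avoiding random-read run can be sketched as follows; `randrepeat=0` and direct I/O come from the text above, while the remaining parameter values are illustrative assumptions. The command is echoed for review:

```shell
# randrepeat=0 defeats storage-side caching with truly random data;
# direct=1 bypasses the client filesystem cache (O_DIRECT).
FIO_NOCACHE="fio --name=randread-nocache --rw=randread --randrepeat=0 \
 --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 \
 --size=4G --runtime=600 --group_reporting"
echo "$FIO_NOCACHE"
```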
## Next steps
* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)

articles/azure-netapp-files/workload-types.md

---
title: Understand workload types in Azure NetApp Files
description: Choose the correct volume type depending on your Azure NetApp Files workload.
services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: conceptual
ms.date: 10/31/2024
ms.author: anfdocs
---
# Understand workload types in Azure NetApp Files
When considering use cases for cloud storage, industry silos can often be broken down into workload types, since there can be commonalities across industries in specific workloads. For instance, a media workload can have a similar workload profile to an AI/ML training set with heavy sequential reads and writes.

Azure NetApp Files is well suited for any type of workload, from low to high I/O and low to high throughput: from home directories to electronic design automation (EDA). Learn about the different workload types and develop an understanding of which Azure NetApp Files [volume types](azure-netapp-files-understand-storage-hierarchy.md) are best suited for those workloads.

For more information, see [Understand large volumes in Azure NetApp Files](large-volumes.md).
## Workload types
* **Specific offset, streaming random read/write workloads:** OLTP databases are typical here. A signature of an OLTP workload is a dependence on random reads to find the desired file offset (such as a database table row) and write performance against a small number of files. With this type of workload, tens of thousands to hundreds of thousands of I/O operations are common. Application vendors and database administrators typically have specific latency targets for these workloads. Azure NetApp Files regular volumes are generally best suited for this workload.
* **Whole file streaming workloads:** Examples include post-production media rendering of media repositories, high-performance computing suites such as those seen in computer-aided engineering/design suites (for example, computational fluid dynamics), oil and gas suites, and machine learning fine-tuning frameworks. A hallmark of this type of workload is larger files read or written in a continuous manner. For these workloads, storage throughput is the most critical attribute as it has the biggest impact on time to completion. Latency sensitivity is common here as workloads typically use a fixed amount of concurrency, thus throughput is determined by latency. Workloads typical of post-production are latency sensitive to the degree that framerate is only achieved when specific latency values are met. Both Azure NetApp Files regular volumes and Azure NetApp Files large volumes are appropriate for these workloads, with large volumes providing [more capacity](azure-netapp-files-resource-limits.md) and [higher file count possibilities](maxfiles-concept.md).
* **Metadata-rich, high file count workloads:** Examples include software development, EDA, and financial services (FSI) applications. A typical signature of these types of workloads is millions of smaller files being created, statted alone, or statted then read or written. In high file count workloads, remote procedure calls (RPCs) other than read and write typically represent the majority of I/O. I/O rate (IOPS) is typically the most important attribute for these workloads. Latency is often less important as concurrency might be controlled by scaling out at the application. Some customers have latency expectations of 1 ms, while others might expect 10 ms. As long as the I/O rate is achieved, so is satisfaction. This type of workload is ideally suited for _Azure NetApp Files large volumes_.
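
As a sketch, the first and third profiles above might be approximated with FIO as follows; every flag value is an illustrative assumption, not a sizing recommendation. The commands are echoed for review before running against a mounted volume:

```shell
# OLTP-like: small-block random I/O with a ~70/30 read/write mix
# against a small number of files (illustrative values).
FIO_OLTP="fio --name=oltp-like --rw=randrw --rwmixread=70 --bs=8k \
 --direct=1 --ioengine=libaio --numjobs=8 --iodepth=64 --size=4G \
 --runtime=300 --group_reporting"
# High file count: small-block random reads spread across many small
# files per job (illustrative values).
FIO_HIGHFILE="fio --name=high-file-count --rw=randread --bs=4k --direct=1 \
 --ioengine=libaio --numjobs=8 --nrfiles=10000 --filesize=32k \
 --runtime=300 --group_reporting"
echo "$FIO_OLTP"
echo "$FIO_HIGHFILE"
```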
27
+
28
+
For more information on EDA workloads in Azure NetApp Files, see [Benefits of using Azure NetApp Files for Electronic Design Automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md).
## More information
* [General performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md)
* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)
* [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md)
* [Oracle database performance on Azure NetApp Files multiple volumes](performance-oracle-multiple-volumes.md)
* [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](performance-azure-vmware-solution-datastore.md)