
Commit 1064b96

testing methodology topic
1 parent 322cc41 commit 1064b96

File tree: 5 files changed, +139 -47 lines

articles/azure-netapp-files/TOC.yml

Lines changed: 4 additions & 0 deletions
@@ -109,6 +109,8 @@
   items:
   - name: General performance considerations for Azure NetApp Files
     href: azure-netapp-files-performance-considerations.md
+  - name: Understand workload types
+    href: workload-types.md
   - name: Linux direct I/O best practices
     href: performance-linux-direct-io.md
   - name: Linux filesystem cache best practices
@@ -133,6 +135,8 @@
   items:
   - name: Performance benchmark test recommendations for Azure NetApp Files
     href: azure-netapp-files-performance-metrics-volumes.md
+  - name: Testing methodology
+    href: testing-methodology.md
   - name: Regular volume performance benchmarks for Linux
     href: performance-benchmarks-linux.md
   - name: Large volume performance benchmarks for Linux

articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md

Lines changed: 6 additions & 46 deletions
@@ -5,7 +5,7 @@ author: b-hchen
 ms.author: anfdocs
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 05/08/2023
+ms.date: 10/31/2024
 ---
 # Performance benchmark test recommendations for Azure NetApp Files

@@ -16,9 +16,9 @@ This article provides benchmark testing recommendations for volume performance a
 To understand the performance characteristics of an Azure NetApp Files volume, you can use the open-source tool [FIO](https://github.com/axboe/fio) to run a series of benchmarks to simulate various workloads. FIO can be installed on both Linux and Windows-based operating systems. It is an excellent tool to get a quick snapshot of both IOPS and throughput for a volume.

 > [!IMPORTANT]
-> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for DB2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machines to storage limits, matching the parameters of the test to the actual application workload mixtures for most useful results. However, it is always best to test with the real-world application.
+> Azure NetApp Files does *not* recommend using the `dd` utility as a baseline benchmarking tool. You should use an actual application workload, workload simulation, and benchmarking and analyzing tools (for example, Oracle AWR with Oracle, or the IBM equivalent for Db2) to establish and analyze optimal infrastructure performance. Tools such as FIO, vdbench, and iometer have their places in determining virtual machines to storage limits, matching the parameters of the test to the actual application workload mixtures for most useful results. However, it is always best to test with the real-world application.

-### VM instance sizing
+### Virtual machine (VM) instance sizing

 For best results, ensure that you are using a virtual machine (VM) instance that is appropriately sized to perform the tests. The following examples use a Standard_D32s_v3 instance. For more information about VM instance sizes, see [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-network%2ftoc.json) for Windows-based VMs, and [Sizes for Linux virtual machines in Azure](/azure/virtual-machines/sizes?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) for Linux-based VMs.
@@ -48,50 +48,9 @@ Follow the Getting started section in the SSB README file to install for the pla

 ### FIO

-Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification.
+Flexible I/O Tester (FIO) is a free and open-source disk I/O tool used both for benchmark and stress/hardware verification. FIO is available in binary format for both Linux and Windows.

-FIO is available in binary format for both Linux and Windows.
-
-#### Installation of FIO
-
-Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
-
-#### FIO examples for IOPS
-
-The FIO examples in this section use the following setup:
-* VM instance size: D32s_v3
-* Capacity pool service level and size: Premium / 50 TiB
-* Volume quota size: 48 TiB
-
-The following examples show the FIO random reads and writes.
-
-##### FIO: 8k block size 100% random reads
-
-`fio --name=8krandomreads --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### FIO: 8k block size 100% random writes
-
-`fio --name=8krandomwrites --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### Benchmark results
-
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
-
-#### FIO examples for bandwidth
-
-The examples in this section show the FIO sequential reads and writes.
-
-##### FIO: 64k block size 100% sequential reads
-
-`fio --name=64kseqreads --rw=read --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### FIO: 64k block size 100% sequential writes
-
-`fio --name=64kseqwrites --rw=write --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
-
-##### Benchmark results
-
-For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+For more information, see [Understand Azure NetApp Files testing methodology](testing-methodology.md).

 ## Volume metrics

@@ -129,3 +88,4 @@ The following example shows a GET URL for viewing logical volume size:

 - [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md)
 - [Performance benchmarks for Linux](performance-benchmarks-linux.md)
+- [Understand Azure NetApp Files testing methodology](testing-methodology.md)

articles/azure-netapp-files/large-volumes.md

Lines changed: 2 additions & 1 deletion
@@ -55,10 +55,11 @@ Large volumes allow workloads to extend beyond the current limitations of regula
 | Volume type | Primary use cases |
 | - | -- |
 | Regular volumes | <ul><li>General file shares</li><li>SAP HANA and databases (Oracle, SQL Server, Db2, and others)</li><li>VDI/Azure VMware Service</li><li>Capacities less than 50 TiB</li></ul> |
-| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, FSI)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |
+| Large volumes | <ul><li>General file shares</li><li>High file count or high metadata workloads (such as electronic design automation, software development, financial services)</li><li>High capacity workloads (such as AI/ML/LLP, oil & gas, media, healthcare images, backup, and archives)</li><li>Large-scale workloads (many client connections such as FSLogix profiles)</li><li>High performance workloads</li><li>Capacity quotas between 50 TiB and 1 PiB</li></ul> |

 ## More information

 * [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
 * [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
 * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Understand workload types in Azure NetApp Files](workload-types.md)
articles/azure-netapp-files/testing-methodology.md

Lines changed: 89 additions & 0 deletions
@@ -0,0 +1,89 @@
+---
+title: Understand performance testing methodology in Azure NetApp Files
+description: Learn how Azure NetApp Files benchmark tests are conducted.
+services: azure-netapp-files
+author: b-ahibbard
+ms.service: azure-netapp-files
+ms.topic: conceptual
+ms.date: 10/31/2024
+ms.author: anfdocs
+---
+
+# Understand performance testing methodology in Azure NetApp Files
+
+The benchmark tool used in these tests is called [Flexible I/O Tester (FIO)](https://fio.readthedocs.io/en/latest/fio_doc.html).
+
+When testing the edges of performance limits for storage, workload generation must be **highly parallelized** to achieve the maximum possible results.
+
+That means:
+- one to many clients
+- multiple CPUs
+- multiple threads
+- performing I/O to multiple files
+- multi-threaded network connections (such as nconnect)
+
+The end goal is to push the storage system as far as it can go before operations must begin to wait for other operations to finish. Use of a single client traversing a single network flow, or reading/writing from/to a single file (for instance, using dd or diskspd on a single client) doesn't deliver results indicative of Azure NetApp Files' capability. Instead, these setups show the performance of a single file, which generally trends with line speed and/or the Azure NetApp Files [QoS settings](azure-netapp-files-understand-storage-hierarchy.md#qos_types).
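As an illustrative sketch (not one of the published benchmark configurations), several of these parallelism levers can be combined in a single FIO invocation; the job name, counts, and sizes below are assumptions:

```shell
# Hypothetical, illustrative FIO invocation for a highly parallel
# random-read workload. --numjobs spawns multiple workers (multiple
# CPUs/threads), --nrfiles spreads I/O across multiple files, and
# --iodepth keeps many I/Os in flight per worker.
FIO_CMD="fio --name=parallel-randread --rw=randread --direct=1 \
  --ioengine=libaio --bs=8k --numjobs=16 --nrfiles=8 --iodepth=64 \
  --size=4G --runtime=300 --group_reporting"
echo "$FIO_CMD"
```

The multi-threaded network connection item in the list is handled outside FIO, for example by mounting the NFS volume with the `nconnect` mount option and by running the same job from several clients.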
+
+In addition, caching must be minimized as much as possible to achieve accurate, representative results of what the storage can accomplish. However, caching is a real tool that modern applications rely on to perform at their best. These tests cover scenarios with some caching and with caching bypassed for random I/O workloads by randomizing the workload via FIO options (specifically, `randrepeat=0` to prevent caching on the storage and [directio](performance-linux-direct-io.md) to prevent client caching).
+
+## About Flexible I/O tester
+
+Flexible I/O tester (FIO) is an open-source workload generation tool commonly used for storage benchmarking due to its ease of use and flexibility in defining workload patterns. For information about its use with Azure NetApp Files, see [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md).
+
+### Installation of FIO
+
+Follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
+
+### FIO examples for IOPS
+
+The FIO examples in this section use the following setup:
+* VM instance size: D32s_v3
+* Capacity pool service level and size: Premium / 50 TiB
+* Volume quota size: 48 TiB
+
+The following examples show FIO random reads and writes.
+
+#### FIO: 8k block size 100% random reads
+
+`fio --name=8krandomreads --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### FIO: 8k block size 100% random writes
+
+`fio --name=8krandomwrites --rw=randwrite --direct=1 --ioengine=libaio --bs=8k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### Benchmark results
+
+For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+
+### FIO examples for bandwidth
+
+The examples in this section show FIO sequential reads and writes.
+
+#### FIO: 64k block size 100% sequential reads
+
+`fio --name=64kseqreads --rw=read --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### FIO: 64k block size 100% sequential writes
+
+`fio --name=64kseqwrites --rw=write --direct=1 --ioengine=libaio --bs=64k --numjobs=4 --iodepth=128 --size=4G --runtime=600 --group_reporting`
+
+#### Benchmark results
+
+For official benchmark results for how FIO performs in Azure NetApp Files, see [Azure NetApp Files performance benchmarks for Linux](performance-benchmarks-linux.md).
+
+## Caching with FIO
+
+FIO can be run with specific options to control how a performance benchmark reads and writes files. In the benchmark tests with caching excluded, the FIO flag `randrepeat=0` was used to avoid caching by running a truly random workload rather than a repeated pattern.
+
+**[`randrepeat`](https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-randrepeat)**
+
+By default, when `randrepeat` isn't defined, FIO sets the value to "true," meaning that the data produced in the files isn't truly random. Because the I/O pattern repeats, filesystem caches are utilized to improve overall performance of the workload.
+
+In the original benchmarks for Azure NetApp Files, `randrepeat` wasn't defined, so some filesystem caching was implemented. In the newer tests, this option is set to "0" (false) to ensure there's adequate randomness in the data to avoid filesystem caches in the Azure NetApp Files service. This results in slightly lower overall numbers, but it's a more accurate representation of what the storage itself is capable of when caching is bypassed.
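A minimal sketch of the distinction, with illustrative values (not the exact benchmark command lines): the two invocations differ only in the cache-related flags.

```shell
# Hypothetical comparison of the two test modes described above.
# Default FIO behavior repeats the random pattern (randrepeat=1), so
# filesystem caches can serve part of the repeated workload.
CACHED="fio --name=cached-randread --rw=randread --bs=8k --size=4G --iodepth=64"

# Cache-bypassed variant: --randrepeat=0 keeps the pattern truly random
# (defeating storage-side caching) and --direct=1 bypasses the client
# filesystem cache.
UNCACHED="fio --name=uncached-randread --rw=randread --bs=8k --size=4G \
  --iodepth=64 --randrepeat=0 --direct=1"
echo "$UNCACHED"
```

Per the explanation above, the cache-bypassed variant should be expected to report somewhat lower numbers that better reflect the storage itself.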
+
+## Next steps
+
+* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
+* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)
articles/azure-netapp-files/workload-types.md

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
+---
+title: Understand workload types in Azure NetApp Files
+description: Choose the correct volume type depending on your Azure NetApp Files workload.
+services: azure-netapp-files
+author: b-ahibbard
+ms.service: azure-netapp-files
+ms.topic: conceptual
+ms.date: 10/31/2024
+ms.author: anfdocs
+---
+
+# Understand workload types in Azure NetApp Files
+
+When considering use cases for cloud storage, industry silos can often be broken down into workload types, since there can be commonalities across industries in specific workloads. For instance, a media workload can have a profile similar to an AI/ML training set, with heavy sequential reads and writes.
+
+Azure NetApp Files is well suited to any type of workload, from low to high I/O and low to high throughput, from home directories to electronic design automation (EDA). Learn about the different workload types and develop an understanding of which Azure NetApp Files [volume types](azure-netapp-files-understand-storage-hierarchy.md) are best suited for those workloads.
+
+For more information, see [Understand large volumes in Azure NetApp Files](large-volumes.md).
+
+## Workload types
+
+* **Specific offset, streaming random read/write workloads:** OLTP databases are typical here. A signature of an OLTP workload is a dependence on random reads to find the desired file offset (such as a database table row) and write performance against a small number of files. With this type of workload, tens of thousands to hundreds of thousands of I/O operations are common. Application vendors and database administrators typically have specific latency targets for these workloads. Azure NetApp Files regular volumes are generally best suited for this workload.
+
+* **Whole file streaming workloads:** Examples include post-production rendering of media repositories, high-performance computing suites such as those seen in computer-aided engineering/design suites (for example, computational fluid dynamics), oil and gas suites, and machine learning fine-tuning frameworks. A hallmark of this type of workload is larger files read or written in a continuous manner. For these workloads, storage throughput is the most critical attribute because it has the biggest impact on time to completion. Latency sensitivity is common here as workloads typically use a fixed amount of concurrency, so throughput is determined by latency. Workloads typical of post-production are latency sensitive to the degree that framerate is only achieved when specific latency values are met. Both Azure NetApp Files regular volumes and Azure NetApp Files large volumes are appropriate for these workloads, with large volumes providing [more capacity](azure-netapp-files-resource-limits.md) and [higher file count possibilities](maxfiles-concept.md).
+
+* **Metadata rich, high file count workloads:** Examples include software development, EDA, and financial services (FSI) applications. A typical signature of these workloads is millions of smaller files being created, statted alone, or statted then read or written. In high file count workloads, remote procedure calls (RPCs) other than read and write typically represent the majority of I/O. I/O rate (I/OPS) is typically the most important attribute for these workloads. Latency is often less important because concurrency can be controlled by scaling out at the application. Some customers have latency expectations of 1 ms, while others might expect 10 ms. As long as the I/O rate is achieved, so is satisfaction. This type of workload is ideally suited for _Azure NetApp Files large volumes_.
+
+For more information on EDA workloads in Azure NetApp Files, see [Benefits of using Azure NetApp Files for Electronic Design Automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md).
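As a rough sketch, the three workload signatures above can be approximated with FIO parameter choices; every value below is an illustrative assumption, not an official test profile:

```shell
# Hypothetical FIO parameter sketches for the three workload types above.
# OLTP-style: small-block random read/write mix against few files.
OLTP="fio --name=oltp-sim --rw=randrw --rwmixread=70 --bs=8k --direct=1 --iodepth=64 --size=4G"

# Whole-file streaming: large-block sequential I/O, throughput-bound.
STREAM="fio --name=stream-sim --rw=read --bs=256k --direct=1 --iodepth=32 --size=16G"

# Metadata-rich, high file count: many small files, so opens/closes and
# other metadata RPCs dominate rather than data transfer.
META="fio --name=meta-sim --rw=randread --bs=4k --nrfiles=10000 --filesize=16k --openfiles=128"
echo "$OLTP"
```

The point of the sketch is the shape of each workload (block size, randomness, file count), not the specific numbers, which should be matched to the real application.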
+
+## More information
+
+* [General performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md)
+* [Performance benchmark test recommendations for Azure NetApp Files](azure-netapp-files-performance-metrics-volumes.md)
+* [Azure NetApp Files regular volume performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure NetApp Files large volume performance benchmarks for Linux](performance-large-volumes-linux.md)
+* [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md)
+* [Oracle database performance on Azure NetApp Files multiple volumes](performance-oracle-multiple-volumes.md)
+* [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](performance-azure-vmware-solution-datastore.md)
