
Commit 2f85ce7

new reg vol benchmark
1 parent 966ddac commit 2f85ce7

3 files changed, +39 -6 lines

articles/azure-netapp-files/performance-benchmarks-linux.md

Lines changed: 32 additions & 1 deletion
@@ -6,13 +6,44 @@ author: b-hchen
ms.service: azure-netapp-files
ms.custom: linux-related-content
ms.topic: conceptual
- ms.date: 03/24/2024
+ ms.date: 10/31/2024
ms.author: anfdocs
---

# Azure NetApp Files regular volume performance benchmarks for Linux

This article describes performance benchmarks Azure NetApp Files delivers for Linux with a [regular volume](azure-netapp-files-understand-storage-hierarchy.md#volumes).

## Whole file streaming workloads (scale-out benchmark tests)

The intent of a scale-out test is to show the performance of an Azure NetApp Files volume when scaling out (increasing) the number of clients generating simultaneous workload against the same volume. These tests can generally push a volume to the edge of its performance limits and are indicative of workloads such as media rendering, AI/ML, and other workloads that use large compute farms to perform work.

## High IOPS scale-out benchmark configuration

These benchmarks used the following (a sample mount and FIO invocation is sketched after the list):
- A single Azure NetApp Files 100-TiB regular volume with a 1-TiB data set using the Ultra performance tier
- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
- 4-KiB and 8-KiB block sizes
- 6 D32s_v5 virtual machines running RHEL 9.3
- NFSv3
- [Manual QoS](manage-manual-qos-capacity-pool.md)
- Mount options: `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg`
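For orientation only, here's a minimal sketch of how one of the six clients might be mounted and driven for the 4-KiB case. The commit doesn't include the actual job files, so the mount target IP (10.0.0.4), export path (/regvol1), and FIO parameters below are illustrative assumptions:

```bash
# Hypothetical mount target and export path; substitute the values for your own volume.
sudo mkdir -p /mnt/anf
sudo mount -t nfs -o rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
  10.0.0.4:/regvol1 /mnt/anf

# 4-KiB random read/write against the shared data set; randrepeat=0 avoids
# repeatable data patterns so that service-side caching doesn't inflate results.
fio --name=high-iops-4k --directory=/mnt/anf --rw=randrw --rwmixread=75 \
  --bs=4k --size=100G --numjobs=8 --iodepth=16 --ioengine=libaio --direct=1 \
  --randrepeat=0 --time_based --runtime=300 --group_reporting
```

The same invocation with `--bs=8k` covers the 8-KiB case; each of the six D32s_v5 clients runs the job concurrently against the same volume to produce the scale-out load.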

## High throughput scale-out benchmark configuration

These benchmarks used the following (a sample invocation is sketched after the list):

- A single Azure NetApp Files regular volume with a 1-TiB data set using the Ultra performance tier
- [FIO (with and without setting randrepeat=0)](testing-methodology.md)
- 64-KiB and 256-KiB block sizes
- 6 D32s_v5 virtual machines running RHEL 9.3
- NFSv3
- [Manual QoS](manage-manual-qos-capacity-pool.md)
- Mount options: `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg`
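Again as an illustrative sketch rather than the published job files (paths, sizes, and queue depths are assumptions), a 64-KiB sequential streaming pass on one client might look like this, using the same mount as above:

```bash
# Whole-file sequential reads at a 64-KiB block size; throughput is the metric of interest.
fio --name=high-throughput-64k --directory=/mnt/anf --rw=read \
  --bs=64k --size=100G --numjobs=8 --iodepth=32 --ioengine=libaio --direct=1 \
  --randrepeat=0 --time_based --runtime=300 --group_reporting
```

Substituting `--rw=write` (or `--rw=rw`) and `--bs=256k` covers the write and larger block-size variants.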

<!-- -->

## Linux scale-out

This section describes performance benchmarks of Linux workload throughput and workload IOPS.

articles/azure-netapp-files/testing-methodology.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ FIO can be run with specific options to control how a performance benchmark read

By default, when `randrepeat` isn't defined, the FIO tool sets the value to "true," meaning that the data produced in the files isn't truly random. Thus, filesystem caches are utilized, improving the apparent overall performance of the workload.

- In the original benchmarks for Azure NetApp Files, `randrepeat` wasn't defined, so some filesystem caching was implemented. In the new tests, this option is set to “0” (or, false) to ensure there is adequate randomness in the data to avoid filesystem caches in the Azure NetApp Files service. This results in slightly lower overall numbers, but is a more accurate representation of what the storage itself is capable of when caching is bypassed.
+ In earlier benchmarks for Azure NetApp Files, `randrepeat` wasn't defined, so some filesystem caching occurred. In more recent tests, this option is set to `0` (false) to ensure there is adequate randomness in the data and so avoid filesystem caches in the Azure NetApp Files service. This change results in slightly lower overall numbers, but is a more accurate representation of what the storage service is capable of when caching is bypassed.
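As a small, hedged illustration of the option in question (job names, path, and sizes are placeholders, not the benchmark's actual job definitions):

```bash
# Default behavior: randrepeat is effectively "true", so the pseudo-random pattern is
# repeatable and filesystem/service caching can flatter the results.
fio --name=repeatable-run --directory=/mnt/anf --rw=randread --bs=8k --size=50G \
  --iodepth=16 --ioengine=libaio --direct=1 --time_based --runtime=120

# Updated methodology: randrepeat=0 makes the pattern non-repeatable, bypassing caches
# and giving a truer picture of the underlying storage.
fio --name=nonrepeatable-run --directory=/mnt/anf --rw=randread --bs=8k --size=50G \
  --iodepth=16 --ioengine=libaio --direct=1 --time_based --runtime=120 --randrepeat=0
```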

## Next steps

articles/azure-netapp-files/workload-types.md

Lines changed: 6 additions & 4 deletions
@@ -1,6 +1,6 @@
---
title: Understand workload types in Azure NetApp Files
description: Choose the correct volume type depending on your Azure NetApp Files workload.
services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
@@ -11,18 +11,20 @@ ms.author: anfdocs

# Understand workload types in Azure NetApp Files

When considering use cases for cloud storage, industry silos can often be broken down into workload types, since there can be commonalities across industries in specific workloads. For instance, a media workload can have a similar workload profile to an AI/ML training set with heavy sequential reads and writes.

Azure NetApp Files is well suited for any type of workload, from low to high I/O and low to high throughput: from home directories to electronic design automation (EDA). Learn about the different workload types and develop an understanding of which Azure NetApp Files [volume types](azure-netapp-files-understand-storage-hierarchy.md) are best suited for those workloads.

For more information, see [Understand large volumes in Azure NetApp Files](large-volumes.md).

## Workload types

- * **Specific offset, streaming random read/write workloads:** OLTP Databases are typical here. A signature of an OLTP workload is a dependence on random read to find the desired file offset (such as a database table row) and write performance against a small number of files. With this type of workload, tens of thousands to hundreds of thousands of I/O operations are common. Application vendors and database administrators typically have specific latency targets for these workloads. Azure NetApp Files regular volumes are generally best suited for this workload.
+ * **Specific offset, streaming random read/write workloads:** Online transaction processing (OLTP) databases are typical here. A signature of an OLTP workload is a dependence on random reads to find the desired file offset (such as a database table row) and write performance against a small number of files. With this type of workload, tens of thousands to hundreds of thousands of I/O operations are common. Application vendors and database administrators typically have specific latency targets for these workloads. In most cases, Azure NetApp Files regular volumes are best suited for this workload.

- * **Whole file streaming workloads:** Examples include post-production media rendering of media repositories, high-performance computing suites such as those seen in computer-aided engineering/design suites (for example, computational fluid dynamics), oil and gas suites, and machine learning fine-tuning frameworks. A hallmark of this type of workload is larger files read or written in a continuous manner. For these workloads, storage throughput is the most critical attribute as it has the biggest impact on time to completion. Latency sensitivity is common here as workloads typically use a fixed amount of concurrency, thus throughput is determined by latency. Workloads typical of post-production are latency sensitive to the degree that framerate is only achieved when specific latency values are met. Both Azure NetApp Files regular volumes and Azure NetApp Files large volumes are appropriate for these workloads, with large volumes providing [more capacity](azure-netapp-files-resource-limits) and [higher file count possibilities](maxfiles-concept.md).
+ * **Whole file streaming workloads:** Examples include post-production media rendering of media repositories, high-performance computing suites such as those seen in computer-aided engineering/design suites (for example, computational fluid dynamics), oil and gas suites, and machine learning fine-tuning frameworks. A hallmark of this type of workload is larger files read or written in a continuous manner. For these workloads, storage throughput is the most critical attribute as it has the biggest impact on time to completion. Latency sensitivity is common here as workloads typically use a fixed amount of concurrency; thus, throughput is determined by latency. Workloads typical of post-production are latency sensitive to the degree that the target framerate is achieved only when specific latency values are met. Both Azure NetApp Files regular volumes and Azure NetApp Files large volumes are appropriate for these workloads, with large volumes providing [more capacity](azure-netapp-files-resource-limits.md) and [higher file count possibilities](maxfiles-concept.md).

<!-- linux `stat` -->
* **Metadata rich, high file count workloads:** Examples include software development, EDA, and financial services (FSI) applications. A typical signature of these types of workloads is millions of smaller files being created, statted alone, or statted then read or written. In high file count workloads, remote procedure calls (RPCs) other than read and write typically represent the majority of I/O. I/O rate (IOPS) is typically the most important attribute for these workloads. Latency is often less important as concurrency might be controlled by scaling out at the application. Some customers have latency expectations of 1 ms, while others might expect 10 ms. As long as the I/O rate is achieved, so is satisfaction. This type of workload is ideally suited for _Azure NetApp Files large volumes_, as illustrated in the sketch that follows.
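As a rough sketch of what "metadata rich" looks like from a client (the directory path is a placeholder and the commands aren't part of this commit), most of the time in such a workload goes to calls like `stat` rather than to reads and writes:

```bash
# Walk a large tree and stat every file; over NFS this generates mostly
# LOOKUP/GETATTR/ACCESS RPCs rather than READ/WRITE, the signature described above.
find /mnt/anf/build-tree -type f -print0 | xargs -0 stat --format='%n %s %Y' > /dev/null
```

Running `nfsstat -c` before and after such a pass shows the client RPC mix skewing heavily toward metadata operations.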

For more information on EDA workloads in Azure NetApp Files, see [Benefits of using Azure NetApp Files for Electronic Design Automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md).
