
Commit d282cdc

remove leftover content
1 parent eefedfb commit d282cdc

2 files changed: +5 -36 lines changed


articles/azure-netapp-files/performance-large-volumes-linux.md

Lines changed: 5 additions & 35 deletions
@@ -19,54 +19,27 @@ ms.author: anfdocs
This article describes the tested performance capabilities of a single [Azure NetApp Files large volume](large-volumes-requirements-considerations.md) as it pertains to Linux use cases. The tests explored scenarios for both scale-out and scale-up read and write workloads, involving one and many virtual machines (VMs). Knowing the performance envelope of large volumes helps you with volume sizing.

- ## Test methodologies
+ ## Testing summary

* The Azure NetApp Files large volumes feature offers three service levels, each with throughput limits. The service levels can be scaled up or down nondisruptively as your performance needs change.

* Ultra service level: 10,240 MiB/s
* Premium service level: 6,400 MiB/s
* Standard service level: 1,600 MiB/s

- * The Ultra service level was used for these tests.
+
+ The Ultra service level was used in these tests.

* Sequential I/O: 100% sequential writes max out at 8,500 MiB/second, while a single large volume is capable of 10 GiB/second (10,240 MiB/second) throughput.

* Random I/O: The same single large volume delivers over 700,000 operations per second.

* Metadata-heavy workloads are advantageous for Azure NetApp Files large volumes due to the large volume’s increased parallelism. Performance benefits are noticeable in workloads heavy in file creation, unlink, and file rename operations, as is typical of VCS applications and EDA workloads with high file counts. For more information on the performance of metadata-heavy workloads, see [Benefits of using Azure NetApp Files for electronic design automation](solutions-benefits-azure-netapp-files-electronic-design-automation.md).

- [FIO](https://fio.readthedocs.io/en/latest/fio_doc.html), a synthetic workload generator designed as a storage stress test, was used to drive these test results.
-
- * There are fundamentally two models of storage performance testing:
+ * [FIO](https://fio.readthedocs.io/en/latest/fio_doc.html), a synthetic workload generator designed as a storage stress test, was used to drive these test results. There are fundamentally two models of storage performance testing:

* **Scale-out compute**, which refers to using multiple VMs to generate the maximum load possible on a single Azure NetApp Files volume.
* **Scale-up compute**, which refers to using a large VM to test the upper boundaries of a single client on a single Azure NetApp Files volume.
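
As a sketch of the storage stress-test model described above, an FIO job along these lines drives a 256-KiB sequential read workload against an NFS-mounted large volume; the mount path, sizing, and parallelism values are illustrative assumptions, not the exact parameters used in these tests.

```bash
# Hypothetical 256-KiB sequential read job against an NFS-mounted Azure NetApp Files
# large volume; /mnt/anf and all sizing values are placeholder assumptions.
fio --name=seq-read-256k \
    --directory=/mnt/anf \
    --rw=read \
    --bs=256k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=16 \
    --numjobs=8 \
    --size=16g \
    --time_based --runtime=120 \
    --group_reporting
```

Running the same job from one large VM corresponds to the scale-up model; running it simultaneously from several VMs against the same volume corresponds to the scale-out model.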

- ## Summary
-
- * A single large volume can deliver sequential throughput up to the service level limits in all but the pure sequential write scenario. For sequential writes, the synthetic tests found the upper limit to be 8,500 MiB/s.
- <!-- * Using 8-KiB random workloads, 10,240 MiB/s isn't achievable. As such, more than 700,000 8-KiB operations were achieved. -->
-
- As I/O types shift toward metadata-intensive operations, the scenario changes again. Metadata workloads are particularly advantageous for Azure NetApp Files large volumes. When you run workloads rich in file creation, unlink, and file rename operations, you can notice a significant performance benefit. Typical of such primitives are VCS applications and EDA workloads, where files are created, renamed, or linked at very high rates.
-
- <!--
- ## Test methodologies and tools
-
- All scenarios documented in this article used FIO, a synthetic workload generator designed as a storage stress test. For this testing, the storage stress-test model was used.
-
- Fundamentally, there are two models of storage performance testing:
-
- * **Application level**
-   For application-level testing, the effort is to drive I/O through client buffer caches in the same way that a typical application drives I/O. In general, direct I/O isn't used when testing in this manner.
-   * Except for databases (for example, Oracle, SAP HANA, MySQL (InnoDB storage engine), PostgreSQL, and Teradata), few applications use direct I/O. Instead, most applications use a large memory cache for repeated reads and a write-behind cache for asynchronous writes.
-   * SPECstorage 2020 (EDA, VDA, AI, genomics, and software build), HammerDB for SQL Server, and Login VSI are typical examples of application-level testing tools. None of them uses direct I/O.
-
- * **Storage stress test**
-   The most common parameter used in storage performance benchmarking is direct I/O. It's supported by FIO and Vdbench; DISKSPD offers support for the similar construct of memory-mapped I/O. With direct I/O, the filesystem cache is bypassed, direct memory access copy operations are avoided, and storage tests are kept fast and simple.
-   * Using the direct I/O parameter makes storage testing easy. No data is read from the filesystem cache on the client, so the test stresses the storage protocol and service itself rather than memory access speeds. Also, without the DMA memory copies, read and write operations are efficient from a processing perspective.
-   * Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads of blocks already in memory aren't retrieved from storage. Reads resulting in a buffer-cache miss are read from storage using NFS read-ahead, with varying results depending on factors such as mount `rsize` and client read-ahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and uses a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. However, the operating system, being untuned, favors writes over reads and uses more parallelism for them.
-   * Except for databases, few applications use direct I/O. Instead, they take advantage of a large memory cache for repeated reads and a write-behind cache for asynchronous writes. In short, using direct I/O turns the test into a microbenchmark.
- -->

## Linux scale-out test

Tests observed performance thresholds of a single large volume on scale-out and were conducted with the following configuration:
@@ -79,7 +52,7 @@ Tests observed performance thresholds of a single large volume on scale-out and
| Large volume size | 101 TiB Ultra (10,240 MiB/s throughput) |
| Mount options | hard,rsize=65536,wsize=65536,vers=3 <br /> **NOTE:** Use of both 262144 and 65536 had similar performance results. |
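
As a sketch, the mount options from the table translate to an NFS mount command along these lines; the server address, export path, and mount point are placeholders, not values from the test environment.

```bash
# Hypothetical NFSv3 mount using the options from the configuration table;
# 10.0.0.4, /large-vol1, and /mnt/anf are placeholder values.
sudo mkdir -p /mnt/anf
sudo mount -t nfs -o hard,rsize=65536,wsize=65536,vers=3 10.0.0.4:/large-vol1 /mnt/anf
```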

- ### 256 KiB sequential workloads (MiB/s)
+ ### 256-KiB sequential workloads (MiB/s)

The graph represents a 256-KiB sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 9,970 MiB/s pure sequential reads.

@@ -110,7 +83,6 @@ The graphs in this section show the results for the client-side mount option of
The following graphs compare the advantages of `nconnect` with an NFS-mounted volume without `nconnect`. In the tests, FIO generated the workload from a single E104id-v5 instance in the East US Azure region using a 64-KiB sequential workload; a 256-KiB I/O size, the largest I/O size recommended by Azure NetApp Files, was also used and produced comparable performance numbers. For more information, see [`rsize` and `wsize`](performance-linux-mount-options.md#rsize-and-wsize).
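
A sketch of how the with/without `nconnect` comparison can be set up on the client side follows; the server address, export path, mount points, and the `nconnect` value are assumptions rather than the documented test configuration. Note that `nconnect` requires a Linux kernel with NFS client support for the option (kernel 5.3 or later).

```bash
# Hypothetical mounts for comparing nconnect against a baseline; the addresses,
# paths, and the nconnect value are placeholder assumptions.
sudo mount -t nfs -o hard,rsize=262144,wsize=262144,vers=3,nconnect=16 \
    10.0.0.4:/large-vol1 /mnt/anf-nconnect
sudo mount -t nfs -o hard,rsize=262144,wsize=262144,vers=3 \
    10.0.0.4:/large-vol1 /mnt/anf-baseline
```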

-
### Linux read throughput

The following graphs show 256-KiB sequential reads of ~10,000 MiB/s with `nconnect`, which is roughly ten times the throughput achieved without `nconnect`.
@@ -125,12 +97,10 @@ The following graphs show sequential writes. Using `nconnect` provides observabl
:::image type="content" source="./media/performance-large-volumes-linux/write-throughput-comparison.png" alt-text="Comparison of write throughput with and without nconnect." lightbox="./media/performance-large-volumes-linux/write-throughput-comparison.png":::

-
### Linux read IOPS

The following graphs show 8-KiB random reads of ~426,000 read IOPS with `nconnect`, roughly seven times what is observed without `nconnect`.

-
:::image type="content" source="./media/performance-large-volumes-linux/read-iops-comparison.png" alt-text="Charts comparing read IOPS with and without nconnect." lightbox="./media/performance-large-volumes-linux/read-iops-comparison.png":::
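
A random-read FIO job along the following lines is one way to generate the kind of 8-KiB IOPS load shown in these graphs; as before, the path and parallelism values are illustrative assumptions, not the documented test parameters.

```bash
# Hypothetical 8-KiB random read IOPS job; /mnt/anf and all sizing values are assumptions.
fio --name=rand-read-8k \
    --directory=/mnt/anf \
    --rw=randread \
    --bs=8k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=64 \
    --numjobs=16 \
    --size=8g \
    --time_based --runtime=120 \
    --group_reporting
```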

### Linux write IOPS

articles/azure-netapp-files/solutions-benefits-azure-netapp-files-electronic-design-automation.md

Lines changed: 0 additions & 1 deletion
@@ -61,7 +61,6 @@ The EDA workload in this test was generated using a standard industry benchmark
:::image type="content" source="./media/solutions-benefits-azure-netapp-files-electronic-design-automation/pie-chart-large-volume.png" alt-text="Pie chart depicting frontend OP type." lightbox="./media/solutions-benefits-azure-netapp-files-electronic-design-automation/pie-chart-large-volume.png":::

-
| EDA Frontend OP Type | Percentage of Total |
| - | - |
| Stat | 39% |
