---
title: Azure NetApp Files performance benchmarks for Linux | Microsoft Docs
description: Describes performance benchmarks Azure NetApp Files delivers for Linux.
services: azure-netapp-files
documentationcenter: ''
author: b-juche
manager: ''
editor: ''

ms.assetid:
ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 04/29/2020
ms.author: b-juche
---
# Azure NetApp Files performance benchmarks for Linux

This article describes performance benchmarks Azure NetApp Files delivers for Linux.

## Linux scale-out

This section describes performance benchmarks of Linux workload throughput and workload IOPS.

### Linux workload throughput

The graph below represents a 64-kibibyte (KiB) sequential workload and a 1-TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s of pure sequential writes and ~4,500 MiB/s of pure sequential reads.

The graph illustrates decreases in 10% steps, from pure read to pure write. It demonstrates what you can anticipate when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).



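Benchmarks like this are commonly driven with a load generator such as FIO. The following is a minimal sketch, not the exact configuration behind these results: it approximates the 64-KiB sequential workload with an illustrative job layout (16 jobs of 64 GiB each, for a roughly 1-TiB working set), and `/mnt/anf` is a hypothetical mount point for an Azure NetApp Files volume. The published scale-out numbers also spread load across multiple client VMs rather than one.

```bash
# Sweep the read/write mix from pure read (100) to pure write (0)
# over a 64-KiB sequential workload. Job count, queue depth, and
# /mnt/anf are illustrative assumptions.
for READPCT in 100 90 80 70 60 50 40 30 20 10 0; do
  fio --name=seq-64k-read${READPCT} \
      --directory=/mnt/anf \
      --rw=rw --rwmixread=${READPCT} \
      --bs=64k --size=64g --numjobs=16 \
      --ioengine=libaio --iodepth=16 --direct=1 \
      --time_based --runtime=60 --group_reporting
done
```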
### Linux workload IOPS

The following graph represents a 4-kibibyte (KiB) random workload and a 1-TiB working set. It shows that an Azure NetApp Files volume is capable of handling between ~130,000 pure random writes and ~460,000 pure random reads.

This graph illustrates decreases in 10% steps, from pure read to pure write. It demonstrates what you can anticipate when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).

| 41 | + |
| 42 | + |
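The random-IOPS curve can be approximated the same way; only the access pattern, block size, and queue depth differ from the sequential sketch above. Again, `/mnt/anf` and the job parameters are illustrative assumptions, not the benchmark's exact configuration.

```bash
# 4-KiB random workload; sweep --rwmixread from 100 down to 0 to
# reproduce the read/write-ratio curve. A 50/50 mix is shown here.
fio --name=rand-4k \
    --directory=/mnt/anf \
    --rw=randrw --rwmixread=50 \
    --bs=4k --size=64g --numjobs=16 \
    --ioengine=libaio --iodepth=64 --direct=1 \
    --time_based --runtime=60 --group_reporting
```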
## Linux scale-up

The Linux 5.3 kernel enables single-client scale-out networking for NFS through the `nconnect` mount option. This feature is available on SUSE (starting with SLES12SP4) and Ubuntu (starting with the 19.10 release). It is similar in concept to both SMB multichannel and Oracle Direct NFS.

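As a rough illustration, a mount that uses `nconnect` might look like the following. The server address, export path, and `nconnect=8` value are hypothetical examples; the option accepts up to 16 connections on kernels that support it.

```bash
# Mount an NFSv3 volume over 8 TCP connections instead of the default 1.
# 10.0.0.4:/myvolume and /mnt/anf are placeholder values.
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=65536,wsize=65536,nconnect=8 \
    10.0.0.4:/myvolume /mnt/anf
```

Omitting `nconnect` from the options yields a conventional single-connection mount, which is the baseline in the comparisons below.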
The graphs in this section show the results of validation testing for the client-side mount option with NFSv3. The graphs compare volumes mounted with `nconnect` to volumes mounted without it. In the graphs, FIO generated the workload from a single D32s_v3 instance in the us-west2 Azure region.

### Linux read throughput

The following graphs show sequential reads of ~3,500 MiB/s with `nconnect`, roughly 2.3 times the throughput without `nconnect`.



### Linux write throughput

The following graphs compare sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. Roughly 1,500 MiB/s is both the upper limit for sequential writes and the egress limit of the D32s_v3 instance.



### Linux read IOPS

The following graphs show random reads of ~200,000 read IOPS with `nconnect`, roughly three times the rate without `nconnect`.



### Linux write IOPS

The following graphs show random writes of ~135,000 write IOPS with `nconnect`, roughly three times the rate without `nconnect`.



## Next steps

- [Azure NetApp Files: Getting the Most Out of Your Cloud Storage](https://cloud.netapp.com/hubfs/Resources/ANF%20PERFORMANCE%20TESTING%20IN%20TEMPLATE.pdf?hsCtaTracking=f2f560e9-9d13-4814-852d-cfc9bf736c6a%7C764e9d9c-9e6b-4549-97ec-af930247f22f)