Commit d7a45a0: large volume throughput limits
1 parent 3728bb9
File tree

4 files changed: +7 additions, -13 deletions


articles/azure-netapp-files/large-volumes-requirements-considerations.md

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@ author: b-ahibbard
 ms.service: azure-netapp-files
 ms.custom: references_regions
 ms.topic: conceptual
-ms.date: 02/25/2025
+ms.date: 03/10/2025
 ms.author: anfdocs
 ---
 # Requirements and considerations for large volumes
@@ -24,7 +24,7 @@ The following requirements and considerations apply to large volumes. For perfor
 * Large volumes are currently not supported with Azure NetApp Files backup.
 * You can't create a large volume with application volume groups.
 * Currently, large volumes aren't suited for database (HANA, Oracle, SQL Server, etc.) data and log volumes. For database workloads requiring more than a single volume’s throughput limit, consider deploying multiple regular volumes. To optimize multiple volume deployments for databases, use [application volume groups](application-volume-group-concept.md).
-* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 1 PiB with the throughput ceiling per the following table:
+* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to one PiB with the throughput ceiling per the following table:

 <table><thead>
 <tr>
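The throughput ceilings referenced in the bullet above scale with provisioned quota up to a per-volume cap. A minimal Python sketch of that relationship, assuming the documented auto-QoS rates of 16, 64, and 128 MiB/s per provisioned TiB for Standard, Premium, and Ultra (these rates are not part of this diff) and the 12,800 MiB/s single-volume cap stated elsewhere in this commit:

```python
# Illustrative only: throughput ceiling = per-TiB rate x quota, capped at the
# single-volume maximum. Rates assume auto-QoS capacity pools.
RATE_MIB_S_PER_TIB = {"Standard": 16, "Premium": 64, "Ultra": 128}
SINGLE_VOLUME_CAP_MIB_S = 12_800


def throughput_ceiling(service_level: str, quota_tib: float) -> float:
    """Return the throughput ceiling in MiB/s for the given quota."""
    uncapped = RATE_MIB_S_PER_TIB[service_level] * quota_tib
    return min(uncapped, SINGLE_VOLUME_CAP_MIB_S)


# At the 100-TiB regular-volume maximum, this reproduces the limits
# in the tables below: 1,600 / 6,400 / 12,800 MiB/s.
assert throughput_ceiling("Standard", 100) == 1_600
assert throughput_ceiling("Premium", 100) == 6_400
assert throughput_ceiling("Ultra", 100) == 12_800
```

Growing the quota past 100 TiB raises the uncapped term, but the ceiling stays pinned at 12,800 MiB/s.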

articles/azure-netapp-files/large-volumes.md

Lines changed: 2 additions & 2 deletions

@@ -21,15 +21,15 @@ All resources in Azure NetApp files have [limits](azure-netapp-files-resource-li
 | - | - |
 | Capacity | <ul><li>50 GiB minimum</li><li>100 TiB maximum</li></ul> |
 | File count | 2,147,483,632 |
-| Performance | <ul><li>Standard: 1,600</li><li>Premium: 1,600</li><li>Ultra: 4,500</li></ul> |
+| Performance (MiB/s) | <ul><li>Standard: 1,600</li><li>Premium: 6,400</li><li>Ultra: 12,800</li></ul> |

 Large volumes have the following limits:

 | Limit type | Values |
 | - | - |
 | Capacity | <ul><li>50 TiB minimum</li><li>1 PiB maximum (or [2 PiB by special request](azure-netapp-files-resource-limits.md#request-limit-increase))</li></ul> |
 | File count | 15,938,355,048 |
-| Performance | <ul><li>Standard: 1,600</li><li>Premium: 6,400</li><li>Ultra: 12,800</li></ul> |
+| Performance | The large volume performance limit is 12,800 MiB/s on all service levels. |


 ## Large volumes effect on performance
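The capacity windows in the two limits tables above do not fully overlap: a quota can be valid for one volume type and out of range for the other. A small sketch in Python using the values from the tables (the helper name and GiB-based units are illustrative, not from the source):

```python
# Capacity ranges from the limits tables, expressed in GiB.
GIB_PER_TIB = 1_024
GIB_PER_PIB = 1_024 * GIB_PER_TIB

REGULAR_RANGE_GIB = (50, 100 * GIB_PER_TIB)        # 50 GiB to 100 TiB
LARGE_RANGE_GIB = (50 * GIB_PER_TIB, GIB_PER_PIB)  # 50 TiB to 1 PiB


def fits(quota_gib: int, capacity_range: tuple) -> bool:
    """Check whether a quota falls inside a volume type's capacity range."""
    lo, hi = capacity_range
    return lo <= quota_gib <= hi


# A 200-TiB quota exceeds the regular-volume maximum but is
# well within the large-volume range.
assert not fits(200 * GIB_PER_TIB, REGULAR_RANGE_GIB)
assert fits(200 * GIB_PER_TIB, LARGE_RANGE_GIB)
```

Quotas between 50 TiB and 100 TiB fall inside both ranges; the 2-PiB special-request ceiling is not modeled here.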

articles/azure-netapp-files/performance-benchmarks-linux.md

Lines changed: 1 addition & 1 deletion

@@ -139,7 +139,7 @@ The following tests show a high throughput benchmark using both 64-KiB and 256-K

 In this benchmark, FIO ran using looping logic that more aggressively populated the cache, so an indeterminate amount of caching influenced the results. This results in slightly better overall performance numbers than tests run without caching.

-In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 4,500MiB/s pure sequential 64-KiB reads and approximately 1,600MiB/s pure sequential 64-KiB writes. The read-write mix for the workload was adjusted by 10% for each run.
+In the graph below, testing shows that an Azure NetApp Files regular volume can handle between approximately 4,500MiB/s pure sequential 64-KiB reads and approximately 1,600 MiB/s pure sequential 64-KiB writes. The read-write mix for the workload was adjusted by 10% for each run.

 :::image type="content" source="./media/performance-benchmarks-linux/64K-sequential-read-write.png" alt-text="Diagram of 64-KiB benchmark tests with sequential I/O and caching included." lightbox="./media/performance-benchmarks-linux/64K-sequential-read-write.png":::
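A benchmark along the lines described above (64-KiB sequential I/O with the read-write mix swept in 10% steps) can be expressed as an fio job file. This is a hypothetical sketch, not the actual job file used for these tests; the mount path, sizes, and queue depth are placeholders:

```ini
; Hypothetical fio job approximating the 64-KiB sequential mixed workload.
; Rerun with rwmixread stepped 100, 90, ..., 0 to sweep the read-write mix.
[global]
rw=rw                ; mixed sequential reads and writes
rwmixread=90         ; percentage of reads; adjust by 10 per run
bs=64k               ; 64-KiB block size, as in the benchmark above
ioengine=libaio
iodepth=32           ; placeholder queue depth
direct=1
time_based=1
runtime=300
directory=/mnt/anf   ; placeholder mount point for the volume under test

[seq-mix]
numjobs=16           ; placeholder parallelism
size=64g             ; placeholder working-set size per job
```

The original tests also used looping logic to populate the cache, which this sketch does not reproduce.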

articles/azure-netapp-files/performance-large-volumes-linux.md

Lines changed: 2 additions & 8 deletions

@@ -13,7 +13,7 @@ ms.workload: storage
 ms.tgt_pltfrm: na
 ms.custom: linux-related-content
 ms.topic: conceptual
-ms.date: 10/25/2024
+ms.date: 03/10/2025
 ms.author: anfdocs
 ---
 # Azure NetApp Files large volume performance benchmarks for Linux
@@ -22,13 +22,7 @@ This article describes the tested performance capabilities of a single [Azure Ne

 ## Testing summary

-* The Azure NetApp Files large volumes feature offers three service levels, each with throughput limits. The service levels can be scaled up or down nondisruptively as your performance needs change.
-
-    * Ultra service level: 12,800 MiB/s
-    * Premium service level: 6,400 MiB/s
-    * Standard service level: 1,600 MiB/s
-
-    The Ultra service level was used in these tests.
+* Azure NetApp Files offers three service levels. All three service levels support large volumes. The service levels can be scaled up or down nondisruptively. Although the throughput levels differ on _regular_ volumes, the large volume performance limit is 12,800 MiB/s on all service levels. The Ultra service level was used in these tests.

 * Sequential writes: 100% sequential writes maxed out at ~8,500 MiB/second in these benchmarks. (A single large volume’s maximum throughput is capped at 12,800 MiB/second by the service, so more potential throughput is possible.)
 * Sequential reads: 100% sequential reads maxed out at ~12,761 MiB/second in these benchmarks. (A single large volume's throughput is capped at 12,800 MiB/second. This result is near the maximum achievable throughput at this time.)
