
Commit d29571d

Merge pull request #250040 from b-hchen/patch-120
Clarify azure-netapp-files-performance-considerations.md is for regul…
2 parents 101c90d + 77b3717 · commit d29571d

File tree

2 files changed: +14 -8 lines changed

articles/azure-netapp-files/azure-netapp-files-performance-considerations.md

Lines changed: 11 additions & 7 deletions
@@ -12,24 +12,28 @@ ms.service: azure-netapp-files
 ms.workload: storage
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 08/02/2022
+ms.date: 08/31/2023
 ms.author: anfdocs
 ---
 # Performance considerations for Azure NetApp Files

-The [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS is determined by a combination of the quota assigned to the volume and the service level selected. For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
+> [!IMPORTANT]
+> This article addresses performance considerations for *regular volumes* only.
+> For *large volumes*, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md#requirements-and-considerations).
+
+The combination of the quota assigned to the volume and the selected service level determins the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS . For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.

 ## Quota and throughput

-Throughput limits are a combination of read and write speed. The throughput limit is only one determinant of the actual performance that will be realized.
+Throughput limits are a combination of read and write speed. The throughput limit is only one determinant of the actual performance to be realized.

-Typical storage performance considerations, including read and write mix, the transfer size, random or sequential patterns, and many other factors will contribute to the total performance delivered.
+Typical storage performance considerations contribute to the total performance delivered. The considerations include read and write mix, the transfer size, random or sequential patterns, and many other factors.

 Metrics are reported as aggregates of multiple data points collected during a five-minute interval. For more information about metrics aggregation, see [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md).

 The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB will provision a throughput limit that is high enough to achieve this level of performance.

-In the case of automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing additional data. However, the added quota will not result in a further increase in actual throughput.
+For automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing more data. However, the added quota doesn't result in a further increase in actual throughput.

 The same empirical throughput ceiling applies to volumes with manual QoS. The maximum throughput can assign to a volume is 4,500 MiB/s.
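As context for the paragraphs changed in the hunk above: for an automatic QoS volume, the throughput limit scales with quota at a per-TiB rate set by the service level, up to the empirical ceiling the article cites. Below is a minimal Python sketch of that arithmetic, assuming the per-TiB rates stated in the article's examples (Premium 64 MiB/s per TiB, Ultra 128 MiB/s per TiB) and treating 4,500 MiB/s as a hard cap; the function and constant names are illustrative, not from the article or any SDK.

```python
# Illustrative only: rates come from the article examples quoted in this diff
# (Premium: 64 MiB/s per TiB, Ultra: 128 MiB/s per TiB); 4,500 MiB/s is the
# empirical ceiling the article mentions.
MIBPS_PER_TIB = {"Premium": 64, "Ultra": 128}
EMPIRICAL_CEILING_MIBPS = 4_500


def auto_qos_throughput_limit(quota_tib: float, service_level: str) -> float:
    """Throughput limit (MiB/s) of an automatic QoS volume: quota x per-TiB rate,
    capped at the empirical ceiling (quota beyond ~70.31 TiB stores more data
    but does not raise actual throughput)."""
    raw = quota_tib * MIBPS_PER_TIB[service_level]
    return min(raw, EMPIRICAL_CEILING_MIBPS)


if __name__ == "__main__":
    print(auto_qos_throughput_limit(70.31, "Premium"))  # ~4,500 MiB/s (4499.84)
    print(auto_qos_throughput_limit(100, "Premium"))    # stays capped at 4500
    print(auto_qos_throughput_limit(1, "Ultra"))        # 128
```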

@@ -43,15 +47,15 @@ If a workload’s performance is throughput-limit bound, it is possible to overp

 For example, if an automatic QoS volume in the Premium storage tier has only 500 GiB of data but requires 128 MiB/s of throughput, you can set the quota to 2 TiB so that the throughput level is set accordingly (64 MiB/s per TB * 2 TiB = 128 MiB/s).

-If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In the example above, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
+If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In this example, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).

 ### Dynamically increasing or decreasing volume quota

 If your performance requirements are temporary in nature, or if you have increased performance needs for a fixed period of time, you can dynamically increase or decrease volume quota to instantaneously adjust the throughput limit. Note the following considerations:

 * Volume quota can be increased or decreased without any need to pause IO, and access to the volume is not interrupted or impacted.

-You can adjust the quota during an active I/O transaction against a volume. Note that volume quota can never be decreased below the amount of logical data that is stored in the volume.
+You can adjust the quota during an active I/O transaction against a volume. Volume quota can never be decreased below the amount of logical data that is stored in the volume.

 * When volume quota is changed, the corresponding change in throughput limit is nearly instantaneous.
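The overprovisioning example and the dynamic-quota notes in the hunk above reduce to two small rules: required quota is the target throughput divided by the service level's per-TiB rate, and quota can never be decreased below the logical data stored in the volume. A hedged sketch using the same illustrative rates as the earlier block; the helper names are mine, not part of the article or the Azure SDK.

```python
# Same illustrative per-TiB rates as above, taken from the article's examples.
MIBPS_PER_TIB = {"Premium": 64, "Ultra": 128}


def required_quota_tib(target_mibps: float, service_level: str) -> float:
    """Smallest quota (TiB) whose automatic QoS limit reaches the target throughput."""
    return target_mibps / MIBPS_PER_TIB[service_level]


def clamp_quota_tib(desired_quota_tib: float, logical_data_tib: float) -> float:
    """Quota can be resized online, but never below the logical data stored."""
    return max(desired_quota_tib, logical_data_tib)


if __name__ == "__main__":
    # The article's example: 128 MiB/s needs 2 TiB at Premium, or 1 TiB at Ultra.
    print(required_quota_tib(128, "Premium"))  # 2.0
    print(required_quota_tib(128, "Ultra"))    # 1.0
    # A volume holding 0.5 TiB of data can only be shrunk down to 0.5 TiB.
    print(clamp_quota_tib(desired_quota_tib=0.25, logical_data_tib=0.5))  # 0.5
```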

articles/azure-netapp-files/large-volumes-requirements-considerations.md

Lines changed: 3 additions & 1 deletion
@@ -13,7 +13,7 @@ ms.workload: storage
 ms.custom: references_regions
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 03/27/2023
+ms.date: 08/31/2023
 ms.author: anfdocs
 ---
 # Requirements and considerations for large volumes (preview)
@@ -28,6 +28,8 @@ To enroll in the preview for large volumes, use the [large volumes preview sign-

 ## Requirements and considerations

+The following requirements and considerations apply to large volumes. For performance considerations of *regular volumes*, see [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md).
+
 * Existing regular volumes can't be resized over 100 TiB.
 * You can't convert regular Azure NetApp Files volumes to large volumes.
 * You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB.
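The requirements bullets in the second file pin down the size boundaries for large volumes. Here is a small illustrative check of those limits as quoted in the diff (regular volumes top out at 100 TiB, a large volume must be created above 100 TiB, and a single volume can't exceed 500 TiB); the function name and return strings are made up for the sketch and not an official validator.

```python
# Size boundaries as stated in the requirements bullets quoted in this diff.
REGULAR_VOLUME_MAX_TIB = 100   # existing regular volumes can't be resized over this
LARGE_VOLUME_MAX_TIB = 500     # a single volume can't exceed this


def classify_requested_size(size_tib: float) -> str:
    """Classify a requested Azure NetApp Files volume size against the quoted limits
    (illustration only)."""
    if size_tib <= REGULAR_VOLUME_MAX_TIB:
        return "regular volume range (can't be resized over 100 TiB)"
    if size_tib <= LARGE_VOLUME_MAX_TIB:
        return "large volume range (must be created as a large volume; no conversion from regular)"
    return "not supported (a single volume can't exceed 500 TiB)"


if __name__ == "__main__":
    for size in (50, 100, 250, 600):
        print(size, "TiB ->", classify_requested_size(size))
```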