articles/azure-netapp-files/azure-netapp-files-performance-considerations.md (14 additions & 7 deletions)
> This article addresses performance considerations for *regular volumes* only.
> For *large volumes*, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md#requirements-and-considerations).
The combination of the quota assigned to the volume and the selected service level determines the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS. For volumes with manual QoS, the throughput limit can be defined individually. When you plan Azure NetApp Files performance, you need to understand several considerations.
## Quota and throughput
Metrics are reported as aggregates of multiple data points collected during a five-minute interval. For more information about metrics aggregation, see [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md).
The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB provisions a throughput limit high enough to achieve this performance level.
For automatic QoS volumes, if you're considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing more data. However, the added quota doesn't result in a further increase in actual throughput.
The same empirical throughput ceiling applies to volumes with manual QoS. The maximum throughput that can be assigned to a volume is 4,500 MiB/s.
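
As a rough illustration of how these numbers combine, the following minimal Python sketch (an illustration only, not product tooling) derives an automatic QoS volume's effective throughput limit from its quota, using the Premium and Ultra per-TiB rates from the examples later in this article, and caps the result at the empirically observed 4,500 MiB/s ceiling:

```python
# Minimal sketch: estimate the effective throughput limit of an automatic QoS volume.
# Per-TiB rates reflect the Premium (64 MiB/s per TiB) and Ultra (128 MiB/s per TiB)
# examples in this article; 4,500 MiB/s is the empirical ceiling noted above.

EMPIRICAL_CEILING_MIBPS = 4500

SERVICE_LEVEL_MIBPS_PER_TIB = {
    "Premium": 64,
    "Ultra": 128,
}

def auto_qos_throughput_limit(quota_tib: float, service_level: str) -> float:
    """Quota (TiB) x per-TiB rate, capped at the observed 4,500 MiB/s ceiling."""
    provisioned = quota_tib * SERVICE_LEVEL_MIBPS_PER_TIB[service_level]
    return min(provisioned, EMPIRICAL_CEILING_MIBPS)

# 70.31 TiB at Premium provisions roughly the ceiling: 70.31 * 64 ≈ 4,500 MiB/s.
print(auto_qos_throughput_limit(70.31, "Premium"))  # ~4499.8
# Quota beyond 70.31 TiB stores more data but doesn't raise actual throughput further.
print(auto_qos_throughput_limit(100, "Premium"))    # capped at 4500
```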
## Automatic QoS volume quota and throughput
Learn about quota management and throughput for volumes with the automatic QoS type.
### Overprovisioning the volume quota
If a workload’s performance is throughput-limit bound, it's possible to overprovision the automatic QoS volume quota to set a higher throughput level and achieve higher performance.
For example, if an automatic QoS volume in the Premium storage tier has only 500 GiB of data but requires 128 MiB/s of throughput, you can set the quota to 2 TiB so the throughput level is set accordingly (64 MiB/s per TiB * 2 TiB = 128 MiB/s).
If you consistently overprovision a volume to achieve higher throughput, consider using manual QoS volumes or a higher service level instead. In this example, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
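
A complementary sketch of the same arithmetic works backwards from a target throughput to the quota you would need to set. It uses the Premium and Ultra per-TiB rates from the examples above and is illustrative only:

```python
# Per-TiB rates from the examples above (Premium 64 MiB/s per TiB, Ultra 128 MiB/s per TiB).
SERVICE_LEVEL_MIBPS_PER_TIB = {"Premium": 64, "Ultra": 128}

def required_quota_tib(target_mibps: float, service_level: str) -> float:
    """Smallest quota (TiB) whose provisioned throughput meets the target."""
    return target_mibps / SERVICE_LEVEL_MIBPS_PER_TIB[service_level]

# 128 MiB/s needs a 2 TiB quota at Premium, but only 1 TiB at Ultra.
print(required_quota_tib(128, "Premium"))  # 2.0
print(required_quota_tib(128, "Ultra"))    # 1.0
```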
### Dynamically increasing or decreasing volume quota
If you use manual QoS volumes, you don’t have to overprovision the volume quota to achieve a higher throughput because the throughput can be assigned to each volume independently. However, you still need to ensure that the capacity pool is pre-provisioned with sufficient throughput for your performance needs. The throughput of a capacity pool is provisioned according to its size and service level. See [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) for more details.
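
The sketch below illustrates the pool-level budgeting this paragraph describes, assuming a hypothetical 10 TiB Ultra capacity pool (128 MiB/s per TiB, as in the examples above) and illustrative per-volume throughput assignments; none of these values come from the article:

```python
# Hypothetical manual QoS capacity pool: its throughput budget comes from pool size x service level.
POOL_SIZE_TIB = 10
ULTRA_MIBPS_PER_TIB = 128  # from the examples above
pool_budget_mibps = POOL_SIZE_TIB * ULTRA_MIBPS_PER_TIB  # 1,280 MiB/s

# Illustrative per-volume assignments (manual QoS lets you set throughput per volume).
volume_assignments_mibps = {"vol-db": 800, "vol-logs": 300, "vol-app": 150}

assigned = sum(volume_assignments_mibps.values())
print(f"Assigned {assigned} of {pool_budget_mibps} MiB/s")
if assigned > pool_budget_mibps:
    print("Capacity pool lacks sufficient throughput; grow the pool or lower assignments.")
```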
## Monitoring volumes for performance
Azure NetApp Files volumes can be monitored using available [Performance metrics](azure-netapp-files-metrics.md#performance-metrics-for-volumes).
When volume throughput reaches its maximum (as determined by the QoS setting), the volume response times (latency) increase. This effect can be incorrectly perceived as a performance issue caused by the storage. Increasing the volume QoS setting (manual QoS) or increasing the volume size (auto QoS) increases the allowable volume throughput.
To check if the maximum throughput limit has been reached, monitor the metric [Throughput limit reached](azure-netapp-files-metrics.md#volumes). For more recommendations, see [Performance FAQs for Azure NetApp Files](faq-performance.md#what-should-i-do-to-optimize-or-tune-azure-netapp-files-performance).
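
If you want to check this programmatically rather than in the portal, a minimal sketch using the `azure-monitor-query` Python SDK might look like the following. The resource ID placeholders and the metric name string are assumptions for illustration; confirm the exact metric name in the metrics reference linked above.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Hypothetical volume resource ID; replace the placeholders with your own values.
volume_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.NetApp"
    "/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# "throughputLimitReached" is assumed here as the internal name of the
# "Throughput limit reached" metric; check the metrics reference for the exact name.
result = client.query_resource(
    volume_id,
    metric_names=["throughputLimitReached"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.maximum:  # nonzero means the volume hit its throughput limit
                print(point.timestamp, point.maximum)
```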
articles/azure-netapp-files/faq-performance.md (6 additions & 2 deletions)
There is no need to set accelerated networking for the NICs in the dedicated subnet of Azure NetApp Files. [Accelerated networking](../virtual-network/virtual-machine-network-throughput.md) is a capability that only applies to Azure virtual machines. Azure NetApp Files NICs are optimized by design.
## How do I monitor Azure NetApp Files volume performance?
Azure NetApp Files volume performance can be monitored through [available metrics](azure-netapp-files-metrics.md).
## How do I convert throughput-based service levels of Azure NetApp Files to IOPS?
You can convert MB/s to IOPS by using the following formula:
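
As a minimal illustration of that kind of conversion (assuming the conventional relationship of throughput = IOPS × I/O size, with 1 MB treated as 1,024 KB), a small sketch:

```python
def mbps_to_iops(throughput_mbps: float, io_size_kb: float) -> float:
    """Approximate IOPS for a given throughput (MB/s) at a fixed I/O size (KB)."""
    return throughput_mbps * 1024 / io_size_kb

# Example: 250 MB/s of 8 KB I/O corresponds to roughly 32,000 IOPS.
print(mbps_to_iops(250, 8))  # 32000.0
```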
## Is NIC Teaming supported in Azure?
NIC Teaming isn't supported in Azure. Although multiple network interfaces are supported on Azure virtual machines, they represent a logical rather than a physical construct. As such, they provide no fault tolerance. Also, the bandwidth available to an Azure virtual machine is calculated for the machine itself and not any individual network interface.
## Are jumbo frames supported?
Jumbo frames aren't supported with Azure virtual machines.