articles/azure-netapp-files/performance-considerations-cool-access.md (6 additions, 41 deletions)
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 09/05/2024
+ms.date: 10/24/2024
 ms.author: anfdocs
 ---
 # Performance considerations for Azure NetApp Files storage with cool access
@@ -20,6 +20,9 @@ When the default cool access retrieval policy is selected, sequential I/O reads
 
 In a recent test performed using Standard storage with cool access for Azure NetApp Files, the following results were obtained.
 
+>[!NOTE]
+>All results published are for reference purposes only. Results are not guaranteed, as performance in production workloads can vary due to numerous factors.
+
 ## 100% sequential reads on hot/cool tier (single job)
 
 In the following scenario, a single job on one D32_V5 virtual machine (VM) was used on a 50-TiB Azure NetApp Files volume using the Ultra performance tier. Different block sizes were used to test performance on hot and cool tiers.
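The tests are described in terms of jobs, queue depth (`iodepth`), and block size, which is the vocabulary of the fio benchmarking tool. As a hypothetical sketch only (the mount path, file size, and runtime are assumptions; the article doesn't publish its exact job files), a single-job sequential-read run along these lines might look like:

```ini
; Hypothetical fio job file illustrating the single-job sequential-read
; scenario above. Paths, sizes, and runtime are assumptions, not the
; exact configuration used in the published tests.
[global]
directory=/mnt/anf-volume   ; assumed NFS mount point of the ANF volume
ioengine=libaio
direct=1
rw=read                     ; 100% sequential reads
bs=256k                     ; block size, varied per test run
iodepth=16                  ; queue depth, varied per test run
size=10g
runtime=300
time_based
group_reporting

[seq-read]
numjobs=1                   ; the multiple-job test described later used numjobs=16
```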
@@ -39,65 +42,27 @@ This graph shows a side-by-side comparison of cool and hot tier performance with
 
 :::image type="content" source="./media/performance-considerations-cool-access/throughput-graph.png" alt-text="Chart of throughput at varying `iodepths` with one job." lightbox="./media/performance-considerations-cool-access/throughput-graph.png":::
 
-## 100% sequential reads on hot/cool tier (multiple jobs)
-
-For this scenario, the test was conducted with 16 jobs using a 256-KB block size on a single D32_V5 VM on a 50-TiB Azure NetApp Files volume using the Ultra performance tier.
-
->[!NOTE]
->The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput of up to approximately 5,000 MiB/s.
-
-It's possible to push for more throughput for the hot and cool tiers using a single VM when running multiple jobs. The performance difference between hot and cool tiers is less drastic when running multiple jobs. The following graph displays results for hot and cool tiers when running 16 jobs with 16 threads at a 256-KB block size.
-
-:::image type="content" source="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png" alt-text="Chart of throughput at varying `iodepths` with 16 jobs." lightbox="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png":::
-
-- Throughput improved by nearly three times for the hot tier.
-- Throughput improved by 6.5 times for the cool tier.
-- The performance difference for the hot and cool tiers decreased from 2.9x to just 1.3x.
-
-## Maximum viable job scale for cool tier – 100% sequential reads
-
-The cool tier has a limit on how many jobs can be pushed to a single Azure NetApp Files volume before latency starts to spike to levels that are generally unusable for most workloads.
-
-In the case of cool tiering, that limit is around 16 jobs with a queue depth of no more than 15. The following graph shows that latency spikes to approximately 23 milliseconds (ms) with 16 jobs and a queue depth of 15, with slightly less throughput than at a queue depth of 14. Latency spikes as high as about 63 ms when pushing 32 jobs, and throughput drops by roughly 14%.
-
-:::image type="content" source="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png" alt-text="Chart of throughput and latency for tests with 16 jobs." lightbox="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png":::
-
 ## What causes latency in hot and cool tiers?
 
 Latency in the hot tier is a factor of the storage system itself: system resources are exhausted when more I/O is sent to the service than can be handled at any given time. As a result, operations need to queue until previously sent operations complete.
 
 Latency in the cool tier is generally seen with the cloud retrieval operations: either requests over the network for I/O to the object store (sequential workloads) or cool block rehydration into the hot tier (random workloads).
 
-## Mixed workload: sequential and random
-
-A mixed workload contains both random and sequential I/O patterns. In mixed workloads, performance profiles for hot and cool tiers can have drastically different results compared to a purely sequential I/O workload but are very similar to a workload that's 100% random.
-
-The following graph shows the results using 16 jobs on a single VM with a queue depth of one and varying random/sequential ratios.
-
-:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput.png" alt-text="Chart showing throughput for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput.png":::
-
-The impact on performance when mixing workloads can also be observed in the latency as the workload mix changes. The graphs show the latency impact for cool and hot tiers as the workload mix goes from 100% sequential to 100% random. Latency starts to spike for the cool tier at around a 60/40 sequential/random mix (greater than 12 ms), while latency remains the same (under 2 ms) for the hot tier.
-
-:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png" alt-text="Chart showing throughput and latency for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png":::
-
-
 ## Results summary
 
 - When a workload is 100% sequential, the cool tier's throughput decreases by roughly 47% versus the hot tier (3,330 MiB/s compared to 1,742 MiB/s).
 - When a workload is 100% random, the cool tier's throughput decreases by roughly 88% versus the hot tier (2,479 MiB/s compared to 280 MiB/s).
 - The performance drop for the hot tier between 100% sequential (3,330 MiB/s) and 100% random (2,479 MiB/s) workloads was roughly 25%. The performance drop for the cool tier between 100% sequential (1,742 MiB/s) and 100% random (280 MiB/s) workloads was roughly 84%.
-- Hot tier throughput maintains about 2,300 MiB/s regardless of the workload mix.
 - When a workload contains any percentage of random I/O, overall throughput for the cool tier is closer to 100% random than 100% sequential.
 - Reads from the cool tier dropped by about 50% when moving from 100% sequential to an 80/20 sequential/random mix.
 - Sequential I/O can take advantage of a `readahead` cache in Azure NetApp Files that random I/O doesn't. This benefit to sequential I/O helps reduce the overall performance differences between the hot and cool tiers.
 
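The percentages in the results summary follow directly from the quoted throughput figures. A minimal sanity check, using only numbers stated above:

```python
# Throughput figures (MiB/s) quoted in the results summary.
HOT_SEQ, HOT_RAND = 3330, 2479
COOL_SEQ, COOL_RAND = 1742, 280

def drop_pct(baseline: float, value: float) -> int:
    """Percentage decrease of `value` relative to `baseline`, rounded."""
    return round(100 * (1 - value / baseline))

print(drop_pct(HOT_SEQ, COOL_SEQ))    # cool vs. hot, 100% sequential -> 48
print(drop_pct(HOT_RAND, COOL_RAND))  # cool vs. hot, 100% random -> 89
print(drop_pct(HOT_SEQ, HOT_RAND))    # hot tier, sequential to random -> 26
print(drop_pct(COOL_SEQ, COOL_RAND))  # cool tier, sequential to random -> 84
```

Note that the sequential-to-random drop within the cool tier works out to about 84%, distinct from the 88% cool-versus-hot gap at 100% random.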
-## General recommendations
-
-To avoid worst-case scenario performance with cool access in Azure NetApp Files, follow these recommendations:
+## Considerations and recommendations
 
 - If your workload frequently changes access patterns in an unpredictable manner, cool access may not be ideal due to the performance differences between hot and cool tiers.
 - If your workload contains any percentage of random I/O, performance expectations when accessing data on the cool tier should be adjusted accordingly.
 - Configure the coolness window and cool access retrieval settings to match your workload patterns and to minimize the amount of cool tier retrieval.
+- Performance from cool access can vary depending on the dataset and system load where the application is running. It's recommended to conduct relevant tests with your dataset to understand and account for performance variability from cool access.
 
 ## Next steps
 * [Azure NetApp Files storage with cool access](cool-access-introduction.md)
articles/azure-netapp-files/performance-large-volumes-linux.md (4 additions, 4 deletions)
@@ -13,7 +13,7 @@ ms.workload: storage
 ms.tgt_pltfrm: na
 ms.custom: linux-related-content
 ms.topic: conceptual
-ms.date: 10/16/2024
+ms.date: 10/24/2024
 ms.author: anfdocs
 ---
 # Azure NetApp Files large volume performance benchmarks for Linux
@@ -30,8 +30,8 @@ This article describes the tested performance capabilities of a single [Azure Ne
 
 The Ultra service level was used in these tests.
 
-* Sequential writes: 100% sequential writes maxed out at 8,500 MiB/second in these benchmarks. (A single large volume’s maximum throughput is capped at 12,800 MiB/second by the service.)
-* Sequential reads: 100% sequential reads maxed out at 10,000 MiB/second in these benchmarks. (At the time of these benchmarks, this limit was the maximum allowed throughput. The limit has increased to 12,800 MiB/second.)
+* Sequential writes: 100% sequential writes maxed out at ~8,500 MiB/second in these benchmarks. (A single large volume’s maximum throughput is capped at 12,800 MiB/second by the service, so more potential throughput is possible.)
+* Sequential reads: 100% sequential reads maxed out at ~12,761 MiB/second in these benchmarks. (A single large volume's throughput is capped at 12,800 MiB/second. This result is near the maximum achievable throughput at this time.)
 
 * Random I/O: The same single large volume delivers over 700,000 operations per second.
 
@@ -56,7 +56,7 @@ Tests observed performance thresholds of a single large volume on scale-out and
 
 ### 256-KiB sequential workloads (MiB/s)
 
-The graph represents a 256KiB sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 9,970 MiB/s pure sequential reads.
+The graph represents a 256-KiB sequential workload using 12 virtual machines reading and writing to a single large volume using a 1-TiB working set. The graph shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 12,761 MiB/s pure sequential reads.
 
 :::image type="content" source="./media/performance-large-volumes-linux/256-kib-sequential-reads.png" alt-text="Bar chart of a 256-KiB sequential workload on a large volume." lightbox="./media/performance-large-volumes-linux/256-kib-sequential-reads.png":::
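The measured maximums can be related to the 12,800 MiB/s single-large-volume cap with simple arithmetic, using the figures quoted above:

```python
CAP = 12_800  # single large volume throughput cap, MiB/s

# Measured maximums (MiB/s) from the benchmarks above.
results = {"sequential writes": 8_500, "sequential reads": 12_761}

for name, mibs in results.items():
    pct = 100 * mibs / CAP
    print(f"{name}: {mibs} MiB/s = {pct:.1f}% of the {CAP} MiB/s cap")
```

The reads land within about 0.3% of the service cap, while the writes leave roughly a third of the cap as headroom, matching the parenthetical notes in the bullets.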
articles/azure-signalr/howto-enable-geo-replication.md (10 additions)
@@ -192,3 +192,13 @@ Specifically, if your application typically broadcasts to larger groups (size >1
 To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](signalr-howto-scale-autoscale.md) to manage this.
 
 For more performance evaluation, refer to [Performance](signalr-concept-performance.md).
+
+## Non-inherited and inherited configurations
+
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log destination settings**: If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**
+
+All other configurations, such as access keys, identity, application firewall, custom domains, private endpoints, and access control, are inherited from the primary resource.
articles/azure-web-pubsub/howto-enable-geo-replication.md (10 additions)
@@ -200,4 +200,14 @@ To ensure effective failover management, it is recommended to set each replica's
 
 For more performance evaluation, refer to [Performance](concept-performance.md).
 
+## Non-inherited and inherited configurations
+
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log destination settings**: If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**
+
+All other configurations, such as access keys, identity, application firewall, custom domains, private endpoints, and access control, are inherited from the primary resource.