
Commit 991db89

Merge pull request #289205 from MicrosoftDocs/main
10/25 11:00 AM IST Publish
2 parents c4ce87b + ba34eca commit 991db89


55 files changed: +1527 −1178 lines


articles/azure-netapp-files/performance-considerations-cool-access.md

Lines changed: 6 additions & 41 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 09/05/2024
+ms.date: 10/24/2024
 ms.author: anfdocs
 ---
 # Performance considerations for Azure NetApp Files storage with cool access
@@ -20,6 +20,9 @@ When the default cool access retrieval policy is selected, sequential I/O reads
 
 In a recent test performed using Standard storage with cool access for Azure NetApp Files, the following results were obtained.
 
+>[!NOTE]
+>All results published are for reference purposes only. Results are not guaranteed as performance in production workloads can vary due to numerous factors.
+
 ## 100% sequential reads on hot/cool tier (single job)
 
 In the following scenario, a single job on one D32_V5 virtual machine (VM) was used on a 50-TiB Azure NetApp Files volume using the Ultra performance tier. Different block sizes were used to test performance on hot and cool tiers.
@@ -39,65 +42,27 @@ This graph shows a side-by-side comparison of cool and hot tier performance with
 
 :::image type="content" source="./media/performance-considerations-cool-access/throughput-graph.png" alt-text="Chart of throughput at varying `iodepths` with one job." lightbox="./media/performance-considerations-cool-access/throughput-graph.png":::
 
-## 100% sequential reads on hot/cool tier (multiple jobs)
-
-For this scenario, the test was conducted with 16 job using a 256=KB block size on a single D32_V5 VM on a 50-TiB Azure NetApp Files volume using the Ultra performance tier.
-
->[!NOTE]
->The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput of up to approximately 5,000 MiB/s.
-
-It's possible to push for more throughput for the hot and cool tiers using a single VM when running multiple jobs. The performance difference between hot and cool tiers is less drastic when running multiple jobs. The following graph displays results for hot and cool tiers when running 16 jobs with 16 threads at a 256-KB block size.
-
-:::image type="content" source="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png" alt-text="Chart of throughput at varying `iodepths` with 16 jobs." lightbox="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png":::
-
-- Throughput improved by nearly three times for the hot tier.
-- Throughput improved by 6.5 times for the cool tier.
-- The performance difference for the hot and cool tier decreased from 2.9x to just 1.3x.
-
-## Maximum viable job scale for cool tier – 100% sequential reads
-
-The cool tier has a limit of how many jobs can be pushed to a single Azure NetApp Files volume before latency starts to spike to levels that are generally unusable for most workloads.
-
-In the case of cool tiering, that limit is around 16 jobs with a queue depth of no more than 15. The following graph shows that latency spikes from approximately 23 milliseconds (ms) with 16 jobs/15 queue depth with slightly less throughput than with a queue depth of 14. Latency spikes as high as about 63 ms when pushing 32 jobs and throughput drops by roughly 14%.
-
-:::image type="content" source="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png" alt-text="Chart of throughput and latency for tests with 16 jobs." lightbox="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png":::
-
 ## What causes latency in hot and cool tiers?
 
 Latency in the hot tier is a factor of the storage system itself, where system resources are exhausted when more I/O is sent to the service than can be handled at any given time. As a result, operations need to queue until previously sent operations can be complete.
 
 Latency in the cool tier is generally seen with the cloud retrieval operations: either requests over the network for I/O to the object store (sequential workloads) or cool block rehydration into the hot tier (random workloads).
 
-## Mixed workload: sequential and random
-
-A mixed workload contains both random and sequential I/O patterns. In mixed workloads, performance profiles for hot and cool tiers can have drastically different results compared to a purely sequential I/O workload but are very similar to a workload that's 100% random.
-
-The following graph shows the results using 16 jobs on a single VM with a queue depth of one and varying random/sequential ratios.
-
-:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput.png" alt-text="Chart showing throughput for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput.png":::
-
-The impact on performance when mixing workloads can also be observed when looking at the latency as the workload mix changes. The graphs show how latency impact for cool and hot tiers as the workload mix goes from 100% sequential to 100% random. Latency starts to spike for the cool tier at around a 60/40 sequential/random mix (greater than 12 ms), while latency remains the same (under 2 ms) for the hot tier.
-
-:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png" alt-text="Chart showing throughput and latency for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png":::
-
 ## Results summary
 
 - When a workload is 100% sequential, the cool tier's throughput decreases by roughly 47% versus the hot tier (3330 MiB/s compared to 1742 MiB/s).
 - When a workload is 100% random, the cool tier’s throughput decreases by roughly 88% versus the hot tier (2,479 MiB/s compared to 280 MiB/s).
 - The performance drop for hot tier when doing 100% sequential (3,330 MiB/s) and 100% random (2,479 MiB/s) workloads was roughly 25%. The performance drop for the cool tier when doing 100% sequential (1,742 MiB/s) and 100% random (280 MiB/s) workloads was roughly 88%.
-- Hot tier throughput maintains about 2,300 MiB/s regardless of the workload mix.
 - When a workload contains any percentage of random I/O, overall throughput for the cool tier is closer to 100% random than 100% sequential.
 - Reads from cool tier dropped by about 50% when moving from 100% sequential to an 80/20 sequential/random mix.
 - Sequential I/O can take advantage of a `readahead` cache in Azure NetApp Files that random I/O doesn't. This benefit to sequential I/O helps reduce the overall performance differences between the hot and cool tiers.
 
-## General recommendations
-
-To avoid worst-case scenario performance with cool access in Azure NetApp Files, follow these recommendations:
+## Considerations and recommendations
 
 - If your workload frequently changes access patterns in an unpredictable manner, cool access may not be ideal due to the performance differences between hot and cool tiers.
 - If your workload contains any percentage of random I/O, performance expectations when accessing data on the cool tier should be adjusted accordingly.
 - Configure the coolness window and cool access retrieval settings to match your workload patterns and to minimize the amount of cool tier retrieval.
+- Performance from cool access can vary depending on the dataset and system load where the application is running. It's recommended to conduct relevant tests with your dataset to understand and account for performance variability from cool access.
 
 ## Next steps
 * [Azure NetApp Files storage with cool access](cool-access-introduction.md)
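The cool-access tests described in this article's diff (sequential reads at varying block sizes, job counts, and queue depths on a mounted volume) read like standard fio benchmarks. A job file in that spirit might look like the following sketch; the tool choice, mount path, working-set size, and runtime are assumptions, not details taken from the commit.

```ini
# Hypothetical fio job file approximating the sequential-read scenario
# from the diff above: 16 jobs, 256-KiB block size, queue depth 15.
# The mount point /mnt/anfvol, per-job size, and runtime are assumed.
[global]
directory=/mnt/anfvol
rw=read
bs=256k
ioengine=libaio
direct=1
iodepth=15
numjobs=16
size=10g
runtime=300
time_based
group_reporting

[seqread]
```

Run with `fio seqread.fio`; vary `rw` (e.g. `randread`, `rw` with `rwmixread`) and `iodepth` to sweep the other mixes the article charts.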

articles/azure-netapp-files/performance-large-volumes-linux.md

Lines changed: 4 additions & 4 deletions
@@ -13,7 +13,7 @@ ms.workload: storage
 ms.tgt_pltfrm: na
 ms.custom: linux-related-content
 ms.topic: conceptual
-ms.date: 10/16/2024
+ms.date: 10/24/2024
 ms.author: anfdocs
 ---
 # Azure NetApp Files large volume performance benchmarks for Linux
@@ -30,8 +30,8 @@ This article describes the tested performance capabilities of a single [Azure Ne
 
 The Ultra service level was used in these tests.
 
-* Sequential writes: 100% sequential writes maxed out at 8,500 MiB/second in these benchmarks. (A single large volume’s maximum throughput is capped at 12,800 MiB/second by the service.)
-* Sequential reads: 100% sequential reads maxed out at 10,000 MiB/second in these benchmarks. (At the time of these benchmarks, this limit was the maximum allowed throughput. The limit has increased to 12,800 MiB/second.)
+* Sequential writes: 100% sequential writes maxed out at ~8,500 MiB/second in these benchmarks. (A single large volume’s maximum throughput is capped at 12,800 MiB/second by the service, so more potential throughput is possible.)
+* Sequential reads: 100% sequential reads maxed out at ~12,761 MiB/second in these benchmarks. (A single large volume's throughput is capped at 12,800 MiB/second. This result is near the maximum achievable throughput at this time.)
 
 * Random I/O: The same single large volume delivers over 700,000 operations per second.
 
@@ -56,7 +56,7 @@ Tests observed performance thresholds of a single large volume on scale-out and
 
 ### 256-KiB sequential workloads (MiB/s)
 
-The graph represents a 256 KiB sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 9,970 MiB/s pure sequential reads.
+The graph represents a 256-KiB sequential workload using 12 virtual machines reading and writing to a single large volume using a 1-TiB working set. The graph shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 12,761 MiB/s pure sequential reads.
 
 :::image type="content" source="./media/performance-large-volumes-linux/256-kib-sequential-reads.png" alt-text="Bar chart of a 256-KiB sequential workload on a large volume." lightbox="./media/performance-large-volumes-linux/256-kib-sequential-reads.png":::
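The revised benchmark figures in this diff can be cross-checked against the 12,800 MiB/s per-volume cap the text cites. A small sketch (the throughput numbers come from the diff above; the helper function is illustrative, not from the source):

```python
# Sanity-check the updated large-volume benchmark figures against the
# 12,800 MiB/s single-volume throughput cap quoted in the diff.
VOLUME_CAP_MIB_S = 12_800

def percent_of_cap(observed_mib_s: float) -> float:
    """Return observed throughput as a percentage of the volume cap."""
    return round(observed_mib_s / VOLUME_CAP_MIB_S * 100, 1)

seq_write = percent_of_cap(8_518)    # sequential writes: 66.5% of cap
seq_read = percent_of_cap(12_761)    # sequential reads: 99.7% of cap
print(seq_write, seq_read)
```

The read result sitting at ~99.7% of the cap is why the updated text calls it "near the maximum achievable throughput," while writes leave headroom.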

articles/azure-signalr/howto-enable-geo-replication.md

Lines changed: 10 additions & 0 deletions
@@ -192,3 +192,13 @@ Specifically, if your application typically broadcasts to larger groups (size >1
 To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](signalr-howto-scale-autoscale.md) to manage this.
 
 For more performance evaluation, refer to [Performance](signalr-concept-performance.md).
+
+## Non-Inherited and Inherited Configurations
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log Destination Settings**. If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**.
+
+All other configurations are inherited from the primary resource. For example, access keys, identity, application firewall, custom domains, private endpoints, and access control.

articles/azure-web-pubsub/howto-enable-geo-replication.md

Lines changed: 10 additions & 0 deletions
@@ -200,4 +200,14 @@ To ensure effective failover management, it is recommended to set each replica's
 
 For more performance evaluation, refer to [Performance](concept-performance.md).
 
+## Non-Inherited and Inherited Configurations
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log Destination Settings**. If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**.
+
+All other configurations are inherited from the primary resource. For example, access keys, identity, application firewall, custom domains, private endpoints, and access control.
+
articles/communication-services/concepts/rooms/room-concept.md

Lines changed: 1 addition & 1 deletion
@@ -146,7 +146,7 @@ Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in
 | Virtual Rooms SDKs | 2023-06-14 | Generally Available - Fully supported |
 | Virtual Rooms SDKs | 2023-10-30 | Public Preview - Fully supported |
 | Virtual Rooms SDKs | 2023-03-31 | Public Preview - retired |
-| Virtual Rooms SDKs | 2022-02-01 | Will be retired on April 30, 2024 |
+| Virtual Rooms SDKs | 2022-02-01 | Public Preview - retired |
 | Virtual Rooms SDKs | 2021-04-07 | Public Preview - retired |
 
 ## Predefined participant roles and permissions in Virtual Rooms calls
