Commit c875e06 (1 parent 49d441d): remove extra graph

File changed (+12, -25 lines): articles/azure-netapp-files/performance-considerations-cool-access.md
In the following scenario, a single job on one D32_V5 virtual machine (VM) was used on a 50-TiB Azure NetApp Files volume using the Ultra performance tier. Different block sizes were used to test performance on hot and cool tiers.

>[!NOTE]
>The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput up to approximately 5,000 MiB/s.
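
These two limits can be combined into a quick sizing check. The following sketch (a hypothetical helper, not part of any Azure SDK or tool) estimates the throughput ceiling for the 50-TiB test volume under the limits quoted above:

```python
# Hypothetical helper illustrating the two Ultra service level limits quoted
# above: 128 MiB/s per allocated TiB, capped by the ~5,000 MiB/s maximum for
# a regular volume.
ULTRA_MIBPS_PER_TIB = 128
REGULAR_VOLUME_CAP_MIBPS = 5_000

def ultra_throughput_ceiling_mibps(allocated_tib: float) -> float:
    """Return the effective throughput limit for an Ultra volume."""
    return min(allocated_tib * ULTRA_MIBPS_PER_TIB, REGULAR_VOLUME_CAP_MIBPS)

# For the 50-TiB test volume: 50 * 128 = 6,400 MiB/s by capacity, so the
# regular-volume cap of ~5,000 MiB/s is the binding limit.
print(ultra_throughput_ceiling_mibps(50))  # 5000
```

For volumes smaller than about 39 TiB, the capacity-based limit is the smaller of the two; a 10-TiB volume, for example, works out to 1,280 MiB/s.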
The following graph shows the cool tier performance for this test using a variety of queue depths. The maximum throughput for a single VM was approximately 400 MiB/s.

:::image type="content" source="./media/performance-considerations-cool-access/cool-tier-block-sizes.png" alt-text="Chart of cool tier throughput at varying block sizes." lightbox="./media/performance-considerations-cool-access/cool-tier-block-sizes.png":::

Hot tier performance was around 2.75x better, capping out at approximately 1,180 MiB/s.

:::image type="content" source="./media/performance-considerations-cool-access/hot-tier-block-sizes.png" alt-text="Chart of hot tier throughput at varying block sizes." lightbox="./media/performance-considerations-cool-access/hot-tier-block-sizes.png":::


## 100% sequential reads on hot/cool tier (multiple jobs)

For this scenario, the test was conducted with 16 jobs using a 256-KB block size on a single D32_V5 VM on a 50-TiB Azure NetApp Files volume using the Ultra performance tier.

>[!NOTE]
>The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput of up to approximately 5,000 MiB/s.
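
The article doesn't name the load generator, but the jobs and queue-depth terminology (and the `iodepths` label in the charts) matches fio. A job file approximating this 16-job sequential read test might look like the following sketch; the mount path and runtime are assumptions, not values from the test:

```ini
# Hypothetical fio job file for the 16-job, 256-KB sequential read scenario.
# The directory and runtime are assumptions, not values from the article.
[global]
rw=read
bs=256k
iodepth=16
direct=1
time_based=1
runtime=300
directory=/mnt/anf-volume

[sequential-read]
numjobs=16
```

Varying `iodepth` between runs would reproduce the queue-depth sweep shown in the graphs.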
It's possible to push more throughput to the hot and cool tiers from a single VM by running multiple jobs. The performance difference between the hot and cool tiers is less drastic when running multiple jobs. The following graph displays results for the hot and cool tiers when running 16 jobs with 16 threads at a 256-KB block size.

:::image type="content" source="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png" alt-text="Chart of throughput at varying `iodepths` with 16 jobs." lightbox="./media/performance-considerations-cool-access/throughput-sixteen-jobs.png":::

- Throughput improved by nearly three times for the hot tier.
- Throughput improved by 6.5 times for the cool tier.
- The performance difference for the hot and cool tier decreased from 2.9x to just 1.3x.
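
These multipliers are roughly consistent with the single-job figures from the first test. A quick derivation (the 16-job values below are computed from the stated multipliers, not separate measurements):

```python
# Single-VM, single-job figures from the block-size test above (MiB/s).
cool_1job, hot_1job = 400, 1180

# Apply the improvement factors reported for 16 jobs.
hot_16jobs = hot_1job * 3        # "improved by nearly three times"
cool_16jobs = cool_1job * 6.5    # "improved by 6.5 times"

gap_1job = hot_1job / cool_1job        # hot-tier advantage with one job
gap_16jobs = hot_16jobs / cool_16jobs  # advantage with 16 jobs

print(f"{gap_1job:.2f}x -> {gap_16jobs:.2f}x")  # 2.95x -> 1.36x
```

This matches the cited narrowing of the gap from about 2.9x to about 1.3x.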

## Maximum viable job scale for cool tier – 100% sequential reads
The cool tier has a limit on the number of jobs that can be pushed to a single Azure NetApp Files volume before latency spikes to levels that are generally unusable for most workloads.

In the case of cool tiering, that limit is around 16 jobs with a queue depth of no more than 15. The following graph shows that latency reaches approximately 23 milliseconds (ms) at 16 jobs with a queue depth of 15, with slightly less throughput than at a queue depth of 14. Latency spikes as high as about 63 ms when pushing 32 jobs, and throughput drops by roughly 14%.

:::image type="content" source="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png" alt-text="Chart of throughput and latency for tests with 16 jobs." lightbox="./media/performance-considerations-cool-access/sixteen-jobs-line-graph.png":::


:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput.png" alt-text="Chart showing throughput for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput.png":::

The impact on performance when mixing workloads can also be observed by looking at latency as the workload mix changes. The graphs show the latency impact for the cool and hot tiers as the workload mix goes from 100% sequential to 100% random. Latency starts to spike for the cool tier at around a 60/40 sequential/random mix (greater than 12 ms), while latency remains steady (under 2 ms) for the hot tier.

:::image type="content" source="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png" alt-text="Chart showing throughput and latency for mixed workloads." lightbox="./media/performance-considerations-cool-access/mixed-workload-throughput-latency.png":::


## Results summary

- When a workload is 100% sequential, the cool tier's throughput decreases by roughly 47% versus the hot tier (3,330 MiB/s compared to 1,742 MiB/s).
- When a workload is 100% random, the cool tier’s throughput decreases by roughly 88% versus the hot tier (2,479 MiB/s compared to 280 MiB/s).
- The performance drop for hot tier when doing 100% sequential (3,330 MiB/s) and 100% random (2,479 MiB/s) workloads was roughly 25%. The performance drop for the cool tier when doing 100% sequential (1,742 MiB/s) and 100% random (280 MiB/s) workloads was roughly 88%.
- Hot tier throughput maintains about 2,300 MiB/s regardless of the workload mix.
- When a workload contains any percentage of random I/O, overall throughput for the cool tier is closer to 100% random than 100% sequential.
- Reads from cool tier dropped by about 50% when moving from 100% sequential to an 80/20 sequential/random mix.
- Sequential I/O can take advantage of a `readahead` cache in Azure NetApp Files that random I/O doesn't. This benefit to sequential I/O helps reduce the overall performance differences between the hot and cool tiers.
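
Most of the percentages above can be recomputed directly from the stated throughput figures (a sanity check, not new data):

```python
def pct_drop(higher: float, lower: float) -> float:
    """Percentage decrease of `lower` relative to `higher`."""
    return (1 - lower / higher) * 100

# Cool versus hot tier at 100% sequential: roughly 47%.
print(round(pct_drop(3330, 1742)))  # 48
# Cool versus hot tier at 100% random: roughly 88%.
print(round(pct_drop(2479, 280)))   # 89
# Hot tier, 100% sequential versus 100% random: roughly 25%.
print(round(pct_drop(3330, 2479)))  # 26
```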
## Next steps
* [Azure NetApp Files storage with cool access](cool-access-introduction.md)
* [Manage Azure NetApp Files storage with cool access](manage-cool-access.md)