
Commit 49d441d

remove existing performance section

1 parent f424b5d

File tree: 2 files changed (+1, -78 lines)

articles/azure-netapp-files/cool-access-introduction.md

Lines changed: 0 additions & 77 deletions
@@ -79,83 +79,6 @@ Azure NetApp Files storage with cool access is supported for the following regio
* West US 2
* West US 3

## Effects of cool access on data

This section describes a long-duration warming test on a large dataset. It shows an example scenario in which 100% of the data starts in the cool tier and warms over time.

Typical randomly accessed data starts as part of a working set (read, modify, and write). As data loses relevance, it becomes "cool" and is eventually moved to the cool tier.

Cool data might become hot again. It's not typical for an entire working set to start cold, but some scenarios do exist: audits, year-end processing, quarter-end processing, lawsuits, and end-of-year licensure reviews.

This scenario provides insight into the warming behavior of a 100% cooled dataset. The insight applies whether the cooled data is a small percentage of the dataset or the entire dataset.
### 4k random-read test

This section describes a 4k random-read test across 160 files totaling 10 TB of data.

#### Setup

**Capacity pool size:** 100 TB <br>
**Volume allocated capacity:** 100 TB <br>
**Working Dataset:** 10 TB <br>
**Service Level:** Standard storage with cool access <br>
**Volume Count/Size:** 1 <br>
**Client Count:** Four standard 8-s clients <br>
**OS:** RHEL 8.3 <br>
**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard`
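For reference, a minimal sketch of how a client might mount the volume with these options; the server IP address, export path, and mount point below are illustrative placeholders, not values from the test.

```bash
# Hypothetical mount of the test volume using the options listed above.
# The IP address, export path, and mount point are placeholders.
sudo mkdir -p /mnt/cooltest
sudo mount -t nfs -o rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg \
  10.0.0.4:/cool-access-vol /mnt/cooltest
```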
#### Methodology
This test was set up via FIO to run a 4k random-read workload across 160 files that total 10 TB of data. FIO was configured to randomly read blocks across the entire working dataset; it can read any block any number of times during the test rather than touching each block exactly once. The script was invoked once every 5 minutes, and a performance data point was collected at each invocation. When blocks are randomly read, they're moved back to the hot tier.
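The article doesn't include the FIO script itself; a minimal sketch consistent with this description might look like the following, where the I/O engine and queue depth are illustrative assumptions, and 160 files of about 64 GiB each give the stated 10-TB working set.

```bash
# Hypothetical 4k random-read job: random 4k reads across 160 files
# (~64 GiB each, ~10 TB total); any block may be read any number of
# times. Invoke repeatedly (for example, every 5 minutes) and record
# the reported IOPS as a data point. iodepth and the I/O engine are
# assumptions; in the actual test the load was spread across four clients.
fio --name=warming-test \
    --directory=/mnt/cooltest \
    --rw=randread --bs=4k \
    --nrfiles=160 --filesize=64g \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --time_based --runtime=300 \
    --group_reporting
```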
This test used a large dataset and ran for several days, starting from the worst case of fully aged data (all caches dumped). The time component of the x-axis is removed because the total time to rewarm varies with dataset size: the curve could span days, hours, minutes, or even seconds depending on the dataset.
#### Results
The following chart shows a test that ran over 2.5 days against the 10-TB working dataset, which was 100% cooled with all buffers cleared (the absolute worst case for aged data).

:::image type="content" source="./media/cool-access-introduction/cool-access-test-chart.png" alt-text="Diagram that shows cool access read IOPS warming cooled tier, long duration, and 10-TB working set. The y-axis is titled IOPS, ranging from 0 to 140,000 in increments of 20,000. The x-axis is titled Behavior Over Time. A line charting Read IOPs is roughly flat until the right-most third of the x-axis where growth is exponential." lightbox="./media/cool-access-introduction/cool-access-test-chart.png":::
### 64k sequential-read test
#### Setup
**Capacity pool size:** 100 TB <br>
**Volume allocated capacity:** 100 TB <br>
**Working Dataset:** 10 TB <br>
**Service Level:** Standard storage with cool access <br>
**Volume Count/Size:** 1 <br>
**Client Count:** One large client <br>
**OS:** RHEL 8.3 <br>
**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard`
#### Methodology
Sequentially read blocks aren't rewarmed to the hot tier. However, small datasets might see performance improvements because of caching (no performance change is guaranteed).
This test provides the following data points:

* 100% hot tier dataset
* 100% cool tier dataset

This test ran for 30 minutes to obtain a stable performance number.
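As with the 4k test, the exact FIO configuration isn't included in the article; a sketch of a 64k sequential-read job consistent with the description might look like this, with the queue depth, I/O engine, and file layout carried over as illustrative assumptions from the previous sketch.

```bash
# Hypothetical 64k sequential-read job over the same ~10-TB file set,
# run for 30 minutes (1,800 seconds) to obtain a stable throughput
# number. iodepth, the I/O engine, and the file layout are assumptions.
fio --name=seq-read-test \
    --directory=/mnt/cooltest \
    --rw=read --bs=64k \
    --nrfiles=160 --filesize=64g \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --time_based --runtime=1800 \
    --group_reporting
```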
#### Results
The following table summarizes the test results:

| 64k sequential | Read throughput |
| - | - |
| Hot data | 1,683 MB/s |
| Cool data | 899 MB/s |
### Test conclusions

Data read from the cool tier incurs a performance penalty. If you size your cooling period correctly, you might not experience a penalty at all: a workload with little cool-tier access and a 30-day cooling window keeps warm data warm.

Avoid situations that churn blocks between the hot tier and the cool tier. For instance, if you set the cooling period to seven days but randomly read a large percentage of the dataset every 11 days, blocks repeatedly cool down and then rewarm.

In summary, if your working set is predictable, you can save cost by moving infrequently accessed data blocks to the cool tier. The 7-to-30-day cooling window is large enough for working sets that are rarely accessed once dormant or that don't require hot-tier speeds when they are accessed.
## Metrics

Cool access offers [performance metrics](azure-netapp-files-metrics.md#cool-access-metrics) to understand usage patterns on a per volume basis:

articles/azure-netapp-files/performance-considerations-cool-access.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: conceptual
- ms.date: 08/23/2024
+ ms.date: 09/05/2024
ms.author: anfdocs
---
# Performance considerations for Azure NetApp Files storage with cool access

0 commit comments
