Commit 2b9b662

Merge pull request #285430 from b-ahibbard/cool-access-performance
Cool access performance
2 parents c5f953d + d909439 commit 2b9b662

12 files changed: +115 −84 lines

articles/azure-netapp-files/TOC.yml

Lines changed: 2 additions & 0 deletions

@@ -111,6 +111,8 @@
   href: performance-large-volumes-linux.md
 - name: Performance impact of Kerberos on NFSv4.1
   href: performance-impact-kerberos.md
+- name: Performance considerations for cool access
+  href: performance-considerations-cool-access.md
 - name: Oracle database performance on Azure NetApp Files single volumes
   href: performance-oracle-single-volumes.md
 - name: Oracle database performance on Azure NetApp Files multiple volumes

articles/azure-netapp-files/azure-netapp-files-service-levels.md

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 08/20/2024
+ms.date: 09/05/2024
 ms.author: anfdocs
 ---
 # Service levels for Azure NetApp Files
@@ -25,7 +25,7 @@ Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Stand
 The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned.

 * Storage with cool access:
-  Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md#effects-of-cool-access-on-data).
+  Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels is the same as for data in the hot tier; it may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).

 ## Throughput limits
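The Ultra figure above (up to 128 MiB/s of throughput per 1 TiB of provisioned capacity) implies a throughput ceiling that scales linearly with volume quota; a minimal sketch of that arithmetic (function and constant names are illustrative, not an Azure API):

```python
# Throughput ceiling for an Ultra volume, using the figure stated above:
# up to 128 MiB/s of throughput per 1 TiB of provisioned capacity.
ULTRA_MIBPS_PER_TIB = 128

def ultra_throughput_ceiling(quota_tib: float) -> float:
    """Maximum expected throughput (MiB/s) for a given provisioned quota."""
    return quota_tib * ULTRA_MIBPS_PER_TIB

print(ultra_throughput_ceiling(10.0))  # 10 TiB Ultra volume -> 1280.0 MiB/s
```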

articles/azure-netapp-files/cool-access-introduction.md

Lines changed: 1 addition & 77 deletions
@@ -79,83 +79,6 @@ Azure NetApp Files storage with cool access is supported for the following regio
 * West US 2
 * West US 3

-## Effects of cool access on data
-
-This section describes a large-duration, large-dataset warming test. It shows an example scenario of a dataset where 100% of the data is in the cool tier and how it warms over time.
-
-Typical randomly accessed data starts as part of a working set (read, modify, and write). As data loses relevance, it becomes "cool" and is eventually tiered off to the cool tier.
-
-Cool data might become hot again. It’s not typical for the entire working set to start as cold, but some scenarios do exist, for example, audits, year-end processing, quarter-end processing, lawsuits, and end-of-year licensure reviews.
-
-This scenario provides insight to the warming performance behavior of a 100% cooled dataset. The insight applies whether it's a small percentage or the entire dataset.
-
-### 4k random-read test
-
-This section describes a 4k random-read test across 160 files totaling 10 TB of data.
-
-#### Setup
-
-**Capacity pool size:** 100-TB capacity pool <br>
-**Volume allocated capacity:** 100-TB volumes <br>
-**Working Dataset:** 10 TB <br>
-**Service Level:** Standard storage with cool access <br>
-**Volume Count/Size:** 1 <br>
-**Client Count:** Four standard 8-s clients <br>
-**OS:** RHEL 8.3 <br>
-**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard`
-
-#### Methodology
-
-This test was set up via FIO to run a 4k random-read test across 160 files that total 10 TB of data. FIO was configured to randomly read each block across the entire working dataset. (It can read any block any number of times as part of the test instead of touching each block once). This script was called once every 5 minutes and then a data point collected on performance. When blocks are randomly read, they're moved to the hot tier.
-
-This test had a large dataset and ran several days starting the worst-case most-aged data (all caches dumped). The time component of the X axis has been removed because the total time to rewarm varies due to the dataset size. This curve could be in days, hours, minutes, or even seconds depending on the dataset.
-
-#### Results
-
-The following chart shows a test that ran over 2.5 days on the 10-TB working dataset that has been 100% cooled and the buffers cleared (absolute worst-case aged data).
-
-:::image type="content" source="./media/cool-access-introduction/cool-access-test-chart.png" alt-text="Diagram that shows cool access read IOPS warming cooled tier, long duration, and 10-TB working set. The y-axis is titled IOPS, ranging from 0 to 140,000 in increments of 20,000. The x-axis is titled Behavior Over Time. A line charting Read IOPs is roughly flat until the right-most third of the x-axis where growth is exponential." lightbox="./media/cool-access-introduction/cool-access-test-chart.png":::
-
-### 64k sequential-read test
-
-#### Setup
-
-**Capacity pool size:** 100-TB capacity pool <br>
-**Volume allocated capacity:** 100-TB volumes <br>
-**Working Dataset:** 10 TB <br>
-**Service Level:** Standard storage with cool access <br>
-**Volume Count/Size:** 1 <br>
-**Client Count:** One large client <br>
-**OS:** RHEL 8.3 <br>
-**Mount Option:** `rw,nconnect=8,hard,rsize=262144,wsize=262144,vers=3,tcp,bg,hard` <br>
-
-#### Methodology
-
-Sequentially read blocks aren't rewarmed to the hot tier. However, small dataset sizes might see performance improvements because of caching (no performance change guarantees).
-
-This test provides the following data points:
-* 100% hot tier dataset
-* 100% cool tier dataset
-
-This test ran for 30 minutes to obtain a stable performance number.
-
-#### Results
-
-The following table summarizes the test results:
-
-| 64-k sequential | Read throughput |
-|-|-|
-| Hot data | 1,683 MB/s |
-| Cool data | 899 MB/s |
-
-### Test conclusions
-
-Data read from the cool tier experiences a performance hit. If you size your time to cool off correctly, then you might not experience a performance hit at all. You might have little cool tier access, and a 30-day window is perfect for keeping warm data warm.
-
-You should avoid a situation that churns blocks between the hot tier and the cool tier. For instance, you set a workload for data to cool seven days, and you randomly read a large percentage of the dataset every 11 days.
-
-In summary, if your working set is predictable, you can save cost by moving infrequently accessed data blocks to the cool tier. The 7 to 30 day wait range before cooling provides a large window for working sets that are rarely accessed after they're dormant or don't require the hot-tier speeds when they're accessed.
-
 ## Metrics

 Cool access offers [performance metrics](azure-netapp-files-metrics.md#cool-access-metrics) to understand usage patterns on a per volume basis:
@@ -335,3 +258,4 @@ Your first twelve-month savings:

 * [Manage Azure NetApp Files storage with cool access](manage-cool-access.md)
 * [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
+* [Performance considerations for Azure NetApp Files storage with cool access](performance-considerations-cool-access.md)
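The 4k random-read methodology in the section removed above (160 files totaling 10 TB, read in 4k random blocks in 5-minute sampling windows) maps onto an FIO job file along these lines. This is a sketch, not the test's actual script: the directory, job name, per-file size split, and queue depth are assumptions.

```ini
; Sketch of the removed 4k random-read warming test:
; 160 files totaling ~10 TB, read randomly in 4k blocks.
[global]
rw=randread              ; random reads rewarm cool-tier blocks to the hot tier
bs=4k                    ; 4k block size, as in the test
direct=1                 ; bypass the client page cache
ioengine=libaio
iodepth=16               ; assumed queue depth
time_based=1
runtime=300              ; one 5-minute sampling window per invocation
directory=/mnt/coolvol   ; assumed NFS mount point (see mount options above)

[warming-read]
nrfiles=160              ; 160 files of ~64 GiB each ~= 10 TB working set
filesize=64g
```

For the 64k sequential case, the removed results table implies cool-tier reads delivered roughly 899 / 1,683 ≈ 53% of hot-tier throughput, consistent with sequential reads being served directly from the cool tier rather than rewarmed.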

articles/azure-netapp-files/manage-cool-access.md

Lines changed: 6 additions & 5 deletions
@@ -126,13 +126,13 @@ Azure NetApp Files storage with cool access can be enabled during the creation o

 * *Cool access is **enabled***:
   * If no value is set for cool access retrieval policy:
-    The retrieval policy will be set to `Default`, and cold data will be retrieved to the hot tier only when performing random reads. Sequential reads will be served directly from the cool tier.
+    The retrieval policy is set to `Default`. Cold data is retrieved to the hot tier only when performing random reads. Sequential reads are served directly from the cool tier.
   * If cool access retrieval policy is set to `Default`:
-    Cold data will be retrieved only by performing random reads.
+    Cold data is retrieved only by performing random reads.
   * If cool access retrieval policy is set to `On-Read`:
-    Cold data will be retrieved by performing both sequential and random reads.
+    Cold data is retrieved by performing both sequential and random reads.
   * If cool access retrieval policy is set to `Never`:
-    Cold data is served directly from the cool tier and not be retrieved to the hot tier.
+    Cold data is served directly from the cool tier and is not retrieved to the hot tier.
 * *Cool access is **disabled**:*
   * You can set a cool access retrieval policy if cool access is disabled only if there's existing data on the cool tier.
   * Once you disable the cool access setting on the volume, the cool access retrieval policy remains the same.
@@ -151,7 +151,7 @@ In a cool-access enabled capacity pool, you can enable an existing volume to sup
 1. Right-click the volume for which you want to enable the cool access.
 1. In the **Edit** window that appears, set the following options for the volume:
    * **Enable Cool Access**
-     This option specifies whether the volume will support cool access.
+     This option specifies whether the volume supports cool access.
    * **Coolness Period**
      This option specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account. The default value is 31 days. The supported values are between 2 and 183 days.
    * **Cool Access Retrieval Policy**
@@ -185,3 +185,4 @@ Based on the client read/write patterns, you can modify the cool access configur

 ## Next steps
 * [Azure NetApp Files storage with cool access](cool-access-introduction.md)
+* [Performance considerations for Azure NetApp Files storage with cool access](performance-considerations-cool-access.md)
