
Commit 8d8744e

Merge pull request #185463 from b-hchen/patch-21

Added FAQ: What are the rules behind the proposed throughput for my H…

2 parents 22d68fb + 6768938

File tree

1 file changed: +17 -1 lines changed

articles/azure-netapp-files/faq-application-volume-group.md

Lines changed: 17 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.workload: storage
 ms.topic: conceptual
 author: b-hchen
 ms.author: anfdocs
-ms.date: 11/19/2021
+ms.date: 01/19/2022
 ---
 # Application volume group FAQs
 
@@ -105,6 +105,22 @@ Creating a volume group involves many different steps, and not all of them can b
 
 In the current implementation, the application volume group has a focus on the initial creation and deletion of a volume group only.
 
+## What are the rules behind the proposed throughput for my HANA data and log volumes?
+
+SAP defines the Key Performance Indicators (KPIs) for the HANA data and log volumes as 400 MiB/s for the data volume and 250 MiB/s for the log volume. This definition is independent of the size or the workload of the HANA database. Application volume group scales the throughput values so that even the smallest database meets the SAP HANA KPIs, while larger databases benefit from higher throughput levels, scaling the proposal based on the entered HANA database size.
+
+The following table describes the memory range and proposed throughput ***for the HANA data volume***:
+
+<table><thead><tr><th colspan="2">Memory range (in TB)</th><th rowspan="2">Proposed throughput (MiB/s)</th></tr><tr><th>Minimum</th><th>Maximum</th></tr></thead><tbody><tr><td>0</td><td>1</td><td>400</td></tr><tr><td>1</td><td>2</td><td>600</td></tr><tr><td>2</td><td>4</td><td>800</td></tr><tr><td>4</td><td>6</td><td>1000</td></tr><tr><td>6</td><td>8</td><td>1200</td></tr><tr><td>8</td><td>10</td><td>1400</td></tr><tr><td>10</td><td>unlimited</td><td>1500</td></tr></tbody></table>
+
+The following table describes the memory range and proposed throughput ***for the HANA log volume***:
+
+<table><thead><tr><th colspan="2">Memory range (in TB)</th><th rowspan="2">Proposed throughput (MiB/s)</th></tr><tr><th>Minimum</th><th>Maximum</th></tr></thead><tbody><tr><td>0</td><td>4</td><td>250</td></tr><tr><td>4</td><td>unlimited</td><td>500</td></tr></tbody></table>
+
+Higher throughput for the data volume matters most during the startup of larger databases, when data is read into memory. At runtime, most of the I/O is write I/O, for which even the KPIs specify lower values. Experience shows that, for smaller databases, the HANA KPI values are often higher than what is required most of the time.
+
+The throughput of each Azure NetApp Files volume can be adjusted at runtime. As such, you can adjust the performance of your database at any time by tuning the data and log volume throughput to your specific requirements. For instance, you can fine-tune performance and reduce costs by allowing higher throughput at startup and reducing it to the KPI values for normal operation.
+
 ## Next steps
 
 * [Understand Azure NetApp Files application volume group for SAP HANA](application-volume-group-introduction.md)
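The tier tables in the added FAQ section amount to a simple lookup rule from database memory size to proposed throughput. A minimal sketch in Python (the function names are illustrative, and how an exact tier boundary such as 2 TB is assigned is an assumption, not documented behavior):

```python
def proposed_data_throughput_mibps(memory_tb: float) -> int:
    """Proposed HANA data-volume throughput (MiB/s) for a given memory size (TB).

    Tier bounds follow the data-volume table above; treating a boundary
    value as belonging to the lower tier is an assumption here.
    """
    tiers = [(1, 400), (2, 600), (4, 800), (6, 1000), (8, 1200), (10, 1400)]
    for max_tb, mibps in tiers:
        if memory_tb <= max_tb:
            return mibps
    return 1500  # above 10 TB the proposed throughput is capped at 1500 MiB/s


def proposed_log_throughput_mibps(memory_tb: float) -> int:
    """Proposed HANA log-volume throughput (MiB/s): 250 up to 4 TB, 500 beyond."""
    return 250 if memory_tb <= 4 else 500
```

For example, a 3 TB HANA database would get a proposed 800 MiB/s for the data volume and 250 MiB/s for the log volume. Note that even the smallest memory size meets the SAP KPIs (400/250 MiB/s), matching the scaling rule described in the text.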

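The runtime throughput adjustment the FAQ mentions can be done with the Azure CLI. A hedged sketch (resource names are placeholders; changing `--throughput-mibps` on a volume requires a capacity pool with manual QoS, and you should confirm the available flags for your CLI version with `az netappfiles volume update --help`):

```shell
# Sketch: lower a HANA data volume's throughput after startup,
# e.g. back to the 400 MiB/s SAP KPI for normal operation.
# All resource names below are hypothetical examples.
az netappfiles volume update \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myManualQosPool \
  --volume-name hanadata \
  --throughput-mibps 400
```

The same command with a higher value could be run before a planned restart to speed up loading data into memory, then reverted once the database is online.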