
Commit db2b337

Merge pull request #187721 from msjuergent/avgs
Avgs
2 parents 9b403b0 + 1f1f8a2

File tree: 3 files changed (+18 -4 lines changed)

articles/virtual-machines/workloads/sap/hana-vm-operations-netapp.md

Lines changed: 15 additions & 3 deletions
@@ -12,7 +12,7 @@ ms.service: virtual-machines-sap
 ms.topic: article
 ms.tgt_pltfrm: vm-linux
 ms.workload: infrastructure
-ms.date: 09/08/2021
+ms.date: 02/07/2022
 ms.author: juergent
 ms.custom: H1Hack27Feb2017

@@ -34,12 +34,12 @@ When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware
 - The minimum volume size is 100 GiB
 - Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes are mounted, must be in the same Azure Virtual Network or in [peered virtual networks](../../../virtual-network/virtual-network-peering-overview.md) in the same region
 - It is important to have the virtual machines deployed in close proximity to the Azure NetApp storage for low latency.
-- The selected virtual network must have a subnet, delegated to Azure NetApp Files
+- The selected virtual network must have a subnet delegated to Azure NetApp Files. The subnet requires a minimum /28 IP address range; ideally a /26 range if many ANF volumes are to be mounted to VMs in the specific virtual network (a CLI sketch follows after this hunk)
 - Make sure the latency from the database server to the ANF volume is measured and stays below 1 millisecond
 - The throughput of an Azure NetApp volume is a function of the volume quota and service level, as documented in [Service levels for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements. Alternatively, consider using a [manual QoS capacity pool](../../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type), where volume capacity and throughput can be configured and scaled independently (SAP HANA-specific examples are in [this document](../../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type))
 - Try to “consolidate” volumes to achieve more performance in a larger volume: for example, use one volume for /sapmnt, /usr/sap/trans, … if possible
 - Azure NetApp Files offers an [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients and the access type (Read&Write, Read Only, and so on)
-- Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
+- Azure NetApp Files isn't zone aware yet and currently isn't deployed in all Availability Zones of every Azure region. Be aware of the potential latency implications in some Azure regions. To achieve proximity, the [Application Volume Groups](../../../azure-netapp-files/application-volume-group-introduction.md) functionality is in public preview; see also later in this article
 - The User ID for <b>sid</b>adm and the Group ID for `sapsys` on the virtual machines must match the configuration in Azure NetApp Files.
 
 > [!IMPORTANT]
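
Editor's note: the delegated-subnet and manual QoS capacity pool points in the hunk above translate into a handful of CLI calls. The following is an illustrative Azure CLI sketch, not part of the committed article; all resource names (`saprg`, `sapvnet`, `anf-subnet`, `sapanfacct`, `hanapool`, `hanadata`) and all sizes are hypothetical.

```bash
# Delegated subnet for Azure NetApp Files: /28 is the minimum, a /26 leaves
# room for many ANF volumes mounted by VMs in this virtual network
az network vnet subnet create \
  --resource-group saprg --vnet-name sapvnet --name anf-subnet \
  --address-prefixes 10.0.1.0/26 \
  --delegations "Microsoft.NetApp/volumes"

# NetApp account plus a manual QoS capacity pool, so that volume capacity
# and throughput can be configured and scaled independently
az netappfiles account create \
  --resource-group saprg --name sapanfacct --location westeurope

az netappfiles pool create \
  --resource-group saprg --account-name sapanfacct --name hanapool \
  --size 8 --service-level Premium --qos-type Manual

# Example HANA data volume: with an auto QoS pool, Premium gives 64 MiB/s
# per TiB of quota (4 TiB quota => 256 MiB/s); with manual QoS, throughput
# is assigned explicitly, independent of the 4 TiB (4096 GiB) quota
az netappfiles volume create \
  --resource-group saprg --account-name sapanfacct --pool-name hanapool \
  --name hanadata --file-path hanadata \
  --vnet sapvnet --subnet anf-subnet \
  --service-level Premium --protocol-types NFSv4.1 \
  --usage-threshold 4096 --throughput-mibps 400
```

The sub-millisecond latency requirement from the list above would then be verified from the database VM after mounting, for example with SAP's niping utility or a small read test against the mounted volume.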
@@ -120,6 +120,18 @@ Therefore you could consider to deploy similar throughput for the ANF volumes as
 
 Documentation on how to deploy an SAP HANA scale-out configuration with standby node using NFS v4.1 volumes that are hosted in ANF is published in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md).
 
+## Deployment through application volume groups for SAP HANA (AVG)
+
+To deploy ANF volumes in proximity to your VMs, new functionality called application volume groups (AVGs) was developed. The functionality is currently in public preview. A series of articles documents it; the best starting point is [Understand Azure NetApp Files application volume group for SAP HANA](../../../azure-netapp-files/application-volume-group-introduction.md). As you read the articles, it becomes clear that using AVGs also involves Azure proximity placement groups, which the new functionality uses to tie the volumes that are being created to the VMs. To ensure that the VMs won't be moved away from the ANF volumes over the lifetime of the HANA system, we recommend using a combination of Avset and PPG for each of the zones you deploy into.
+
+The order of deployment would look like this (a CLI sketch follows after this hunk):
+
+- Using the [form](https://aka.ms/HANAPINNING), request a pinning of the empty Avset to compute hardware to ensure that VMs won't move
+- Assign a PPG to the Avset and start a VM assigned to this Avset
+- Use Azure application volume group to deploy your HANA volumes
+
+The proximity placement group configuration to use AVGs in an optimal way would look like this:
+
+![ANF application volume group and PPG architecture](media/hana-vm-operations-netapp/avg-ppg-architecture.png)
 
 
 ## Availability
 ANF system updates and upgrades are applied without impacting the customer environment. The defined [SLA is 99.99%](https://azure.microsoft.com/support/legal/sla/netapp/).
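
Editor's note: the three deployment steps in the hunk above can be scaffolded roughly as follows. This is an illustrative Azure CLI sketch only, with hypothetical names (`saprg`, `hana-ppg-z1`, `hana-avset-z1`, `hanavm1`, `sapvnet`); the pinning request itself goes through the linked form, and the HANA volumes are then created through the application volume group workflow described in the linked ANF articles, not through a single CLI call.

```bash
# Proximity placement group, and the initially empty availability set whose
# pinning to compute hardware is requested via https://aka.ms/HANAPINNING
az ppg create --resource-group saprg --name hana-ppg-z1 --location westeurope

az vm availability-set create \
  --resource-group saprg --name hana-avset-z1 --ppg hana-ppg-z1

# The first VM started in the availability set anchors the PPG to a
# datacenter; the ANF application volume group then uses the same PPG to
# place the HANA volumes close to this VM
az vm create \
  --resource-group saprg --name hanavm1 \
  --image SLES --size Standard_M64s \
  --availability-set hana-avset-z1 \
  --vnet-name sapvnet --subnet vm-subnet \
  --admin-username azureuser --generate-ssh-keys
```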
articles/virtual-machines/workloads/sap/media/hana-vm-operations-netapp/avg-ppg-architecture.png (new image, 722 KB)

articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md

Lines changed: 3 additions & 1 deletion
@@ -12,7 +12,7 @@ ms.service: virtual-machines-sap
 ms.topic: article
 ms.tgt_pltfrm: vm-linux
 ms.workload: infrastructure
-ms.date: 11/14/2021
+ms.date: 02/07/2022
 ms.author: juergent
 ms.custom: H1Hack27Feb2017

@@ -91,6 +91,8 @@ Based on many improvements deployed by Microsoft into the Azure regions to reduc
 
 The difference to the recommendation given so far is that the database VMs in the two zones are no longer part of the proximity placement groups. The proximity placement groups per zone are now scoped with the deployment of the VM running the SAP ASCS/SCS instances. This also means that, for regions where an Availability Zone is made up of multiple datacenters, the ASCS/SCS instance and the application tier could run under one network spine while the database VMs run under another network spine. Though with the network improvements made, the network latency between the SAP application tier and the DBMS tier should still be low enough for sufficiently good performance and throughput. The advantage of this new configuration is that you have more flexibility in resizing VMs or moving to new VM types in the DBMS layer and/or the application layer of the SAP system.
 
+For the special case of using Azure NetApp Files (ANF) for the DBMS environment and the related new functionality of [Azure application volume groups for SAP HANA](../../../azure-netapp-files/application-volume-group-introduction.md), which requires proximity placement groups, check the document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md). A CLI sketch of this zonal PPG scoping follows after this hunk.
+
 
 ### Proximity placement groups with availability set deployments
 In this case, the purpose is to use proximity placement groups to collocate the VMs that are deployed through different availability sets. In this usage scenario, you are not using a controlled deployment across different Availability Zones in a region. Instead, you want to deploy the SAP system by using availability sets. As a result, you have at least one availability set each for the DBMS VMs, the ASCS/SCS VMs, and the application tier VMs. Since you cannot specify an availability set AND an Availability Zone at deployment time of a VM, you can't control where the VMs in the different availability sets are going to be allocated. In some Azure regions, this could result in network latency between different VMs that is still too high to give a sufficiently good performance experience. So the resulting architecture would look like:
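
Editor's note: to make the zonal recommendation in the hunk above concrete, here is an illustrative Azure CLI sketch of a PPG scoped by the ASCS/SCS VM; all names and VM sizes are hypothetical. The database VM is deployed into the zone only, outside the PPG, unless ANF application volume groups require the PPG for volume placement.

```bash
# One PPG per Availability Zone, scoped by the ASCS/SCS VM deployment
az ppg create --resource-group saprg --name sap-ppg-z1 --location westeurope

# The ASCS/SCS VM defines where the PPG gets anchored within zone 1;
# application-tier VMs would join the same zone and PPG
az vm create \
  --resource-group saprg --name ascsvm1 --zone 1 --ppg sap-ppg-z1 \
  --image SLES --size Standard_E4s_v4 \
  --vnet-name sapvnet --subnet vm-subnet \
  --admin-username azureuser --generate-ssh-keys

# The database VM is pinned to the zone but intentionally NOT to the PPG,
# which keeps the flexibility to resize or change VM types later
az vm create \
  --resource-group saprg --name dbvm1 --zone 1 \
  --image SLES --size Standard_M64s \
  --vnet-name sapvnet --subnet vm-subnet \
  --admin-username azureuser --generate-ssh-keys
```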
