articles/sap/workloads/high-availability-zones.md
@@ -23,7 +23,7 @@ As of the typical SAP NetWeaver or S/4HANA architecture, you need to protect thr
The major differences between deploying your critical VMs through availability sets or Availability Zones are:
- Deploying with an availability set lines up the VMs within the set in a single zone or datacenter (whichever applies for the specific region). As a result, a deployment through an availability set isn't protected against power, cooling, or networking issues that affect the datacenter(s) of the zone as a whole. With availability sets, there's also no forced alignment between a VM and its disks. That means the disks can be in any datacenter of the Azure region, independent of the zonal structure of the region. On the plus side, the VMs are aligned with update and fault domains within that zone or datacenter. Specifically for the SAP ASCS or database layer, where we protect two VMs per availability set, the alignment with fault domains prevents both VMs from ending up on the same host hardware.
- Deploying VMs through Azure Availability Zones and choosing different zones (a maximum of three is possible) places the VMs across different physical locations and with that adds protection from power, cooling, or networking issues that affect the datacenter(s) of a zone as a whole. VMs and their related disks are colocated in the same Availability Zone. However, when you deploy more than one VM of the same VM family into the same Availability Zone, there's no protection against those VMs ending up on the same host or in the same fault domain. As a result, deploying through Availability Zones is ideal for the SAP ASCS and database layer, where we usually look at two VMs each. For the SAP application layer, which can consist of drastically more than two VMs, you might need to fall back to a different deployment model (see later). A deployment sketch follows this list.
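
The following Azure PowerShell sketch only illustrates the placement difference at deployment time. It's a minimal example, not a complete SAP deployment; the resource group, VM, credential, and availability set names are hypothetical placeholders, and it assumes the Az modules are installed and you're signed in to the target subscription.

```powershell
# Minimal placement sketch; all resource names are hypothetical placeholders.
$cred = Get-Credential   # local administrator credentials for the new VMs

# Zonal deployment: the VM and its managed disks are pinned to Availability Zone 1.
# The image alias depends on your Az module version.
New-AzVM -ResourceGroupName "rg-sap-demo" -Name "sap-db-vm1" `
  -Location "westeurope" -Image "Ubuntu2204" -Size "Standard_E16s_v5" `
  -Zone "1" -Credential $cred

# Availability set deployment: fault and update domain alignment within one zone or
# datacenter, but no control over the zone; assumes the availability set already exists.
New-AzVM -ResourceGroupName "rg-sap-demo" -Name "sap-app-vm1" `
  -Location "westeurope" -Image "Ubuntu2204" -Size "Standard_E8s_v5" `
  -AvailabilitySetName "avset-sap-app" -Credential $cred
```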
Your motivation for a deployment across Azure Availability Zones should be that you, on top of covering the failure of a single critical VM or the ability to reduce downtime for software patching within a critical VM, want to protect from larger infrastructure issues that might affect the availability of one or multiple Azure datacenters.
@@ -57,8 +57,8 @@ Unless you configure the business process assignment with SAP functionalities li
If you want to deploy an SAP NetWeaver or S/4HANA system across zones, there are two architecture patterns you can deploy:
- Active/active: The pair of VMs running ASCS/SCS and the pair of VMs running the database layer are distributed across two zones. The VMs running the SAP application layer are deployed in even numbers across the same two zones. If a database or ASCS/SCS VM fails over, some of the open and active transactions might be rolled back, but users remain logged in. It doesn't really matter in which of the zones the active database VM and the application instances run. This architecture is the preferred architecture to deploy across zones. In cases where network latencies between zones cause larger differences when executing business processes, you can use functionalities like SAP Logon Groups, RFC Server Groups, Batch Server Groups, and similar to route the execution of the business processes to specific dialog instances that are in the same zone as the active database instance.
- Active/passive: The pair of VMs running ASCS/SCS and the pair of VMs running the database layer are distributed across two zones. The VMs running the SAP application layer are deployed into one of the Availability Zones. You run the application layer in the same zone as the active ASCS/SCS and database instance. You can use this deployment architecture if you deem the network latency across the different zones too high and with that causing intolerable differences in the runtime of your business processes, or if you want to use Availability Zone deployments as Short Distance DR deployments. If an ASCS/SCS or database VM fails over to the secondary zone, you might encounter higher network latency and with that a reduction of throughput. You're then required to fail back the previously failed-over VM as soon as possible to get back to the previous throughput levels. If a zonal outage occurs, the application layer needs to be failed over to the secondary zone, an activity that users experience as a complete system shutdown.
So before you decide how to use Availability Zones, you need to determine:
@@ -72,7 +72,7 @@ To determine the latency between the different zones, you need to:
- Deploy the VM SKU you want to use for your database instance in all three zones. Make sure [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled when you take this measurement. Accelerated Networking has been the default setting for a few years now. Nevertheless, check whether it's enabled and working (see the sketch after this list).
- When you find the two zones with the least network latency, deploy another three VMs of the VM SKU that you want to use as the application layer VM across the three Availability Zones. Measure the network latency against the two DBMS VMs in the two DBMS zones that you selected.
- Use **`niping`** as a measuring tool. This tool, from SAP, is described in SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Treat the network latency classification in SAP Note [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E) as rough guidance. A network latency larger than 0.7 milliseconds doesn't mean that the system isn't going to work technically or that business processes won't satisfy your individual SLAs. The note isn't meant to state what is supported or not supported by SAP and/or Microsoft. Focus on the commands documented for latency measurements. Because **ping** doesn't work through the Azure Accelerated Networking code paths, we don't recommend that you use it.
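
As a rough sketch of the manual measurement, assuming hypothetical resource and host names: first confirm Accelerated Networking on the test VM's NIC with Azure PowerShell, then run **`niping`** between the VMs along the lines of the commands in SAP Note #500235 (server on the DBMS test VM, client on the application-layer test VM).

```powershell
# Verify Accelerated Networking on a test VM's NIC (resource names are placeholders).
(Get-AzNetworkInterface -ResourceGroupName "rg-sap-latency" -Name "sap-db-vm1-nic").EnableAcceleratedNetworking

# On the DBMS test VM, start the niping server:
#   niping -s -I 0
# On the application-layer test VM, measure round-trip latency against the DBMS VM,
# for example 100 loops with a 1-byte buffer, and note the reported average time:
#   niping -c -H <dbms-vm-hostname> -B 1 -L 100
```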
You don't need to perform these tests manually. You can find a PowerShell procedure [Availability Zone Latency Test](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/master/AvZone-Latency-Test) that automates the latency tests described.
@@ -108,13 +108,13 @@ The following considerations apply for this configuration:
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks and subnets for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workloads. Therefore, the zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonally replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services (a deployment sketch follows this list). Documents that cover these cases are:
- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
- [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)
- [High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications](./high-availability-guide-windows-azure-files-smb.md)
- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent, or to host more application instances.
- To achieve run time consistency for critical business processes, you can try directing certain batch jobs and users to application instances that are in-zone with the active database instance by using SAP batch server groups, SAP logon groups, or RFC groups. However, in a zonal failover process, you would need to manually move these groups to instances running on VMs that are in-zone with the active DB VM.
- You might want to deploy dormant dialog instances in each of the zones.
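
As an illustration of a zone-redundant share deployment for the scenario above, the following sketch creates a ZRS premium Azure Files account and an NFS share. It's a minimal example with hypothetical names; it assumes ZRS for premium file shares is available in your target region, and for NFS shares, secure transfer (HTTPS only) must be disabled.

```powershell
# Zone-redundant premium file storage account; names and quotas are placeholders.
New-AzStorageAccount -ResourceGroupName "rg-sap-shared" -Name "sapzrspremfiles" `
  -Location "westeurope" -SkuName "Premium_ZRS" -Kind "FileStorage" `
  -EnableHttpsTrafficOnly $false   # NFS 4.1 shares require secure transfer to be disabled

# NFS share, for example for /sapmnt or the central services cluster file systems.
New-AzRmStorageShare -ResourceGroupName "rg-sap-shared" -StorageAccountName "sapzrspremfiles" `
  -Name "sapmnt" -EnabledProtocol "NFS" -QuotaGiB 256
```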
> [!IMPORTANT]
@@ -137,7 +137,7 @@ The following considerations apply for this configuration:
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workloads. Therefore, the zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonally replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services. Documents that cover these cases are:
- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
- [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)
@@ -159,7 +159,7 @@ Here's one example of how such a configuration might look:
The following considerations apply for this configuration:
- You're either assuming that there's a significant distance between the facilities hosting an Availability Zone, or you're forced to stay within a certain Azure region. Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure proximity placement groups as documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md) (see the sketch after this list).
- When you use this architecture, you need to monitor the status closely and try to keep the active database instance and SAP Central Services instances in the same zone as your deployed application layer. If SAP Central Services or the DBMS instance fails over, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible.
- You should have production application instances preinstalled in the VMs that run the active QA application instances.
- In a zonal failure case, shut down the QA application instances and start the production instances instead. You need to use virtual names for the application instances to make this work.
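
To sketch how proximity placement groups can compensate for the missing zone pinning of availability sets, the following minimal Azure PowerShell example (all names are hypothetical placeholders) creates a proximity placement group and binds the application layer's availability set to it. The zonal ASCS/SCS and database VMs would reference the same group so that the availability set VMs land close to them.

```powershell
# Proximity placement group; resource names are hypothetical placeholders.
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "rg-sap-prod" -Name "ppg-sap-zone1" `
  -Location "westeurope" -ProximityPlacementGroupType "Standard"

# Availability set for the SAP application layer, bound to the proximity placement group.
New-AzAvailabilitySet -ResourceGroupName "rg-sap-prod" -Name "avset-sap-app" `
  -Location "westeurope" -Sku "Aligned" -PlatformFaultDomainCount 2 `
  -PlatformUpdateDomainCount 5 -ProximityPlacementGroupId $ppg.Id
```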
@@ -168,7 +168,7 @@ The following considerations apply for this configuration:
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workloads. Therefore, the zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonally replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services. Documents that cover these cases are:
- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
- [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)