articles/sap/workloads/disaster-recovery-sap-hana.md (1 addition, 1 deletion)
@@ -54,7 +54,7 @@ Requirements for a third HSR site are different for HANA scale-up and HANA scale
## HANA scale-up: Add HANA multitarget system replication for DR purposes
- With SAP HANA HA hook SAPHanaSR for [SLES](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv) and [RHEL](./sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr), you can add a third node for DR purposes. The Pacemaker environment is aware of a HANA multitarget DR setup.
+ With SAP HANA HA hooks SAPHanaSR/susHanaSR for [SLES](./sap-hana-high-availability.md#implement-hana-resource-agents) and [RHEL](./sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr), you can add a third node for DR purposes. The Pacemaker environment is aware of a HANA multitarget DR setup.
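For illustration, registering a third site for HANA multitarget system replication could look roughly like the following sketch. The SID `HN1`, instance number `03`, primary host `hn1-db-0`, and site name `SITE-DR` are assumptions for this example, not values from the article.

```bash
# Hedged sketch: run as the <sid>adm user on the DR (third-site) node after HANA is installed there.
# Assumed values: primary host hn1-db-0, instance number 03, async replication for the DR site.
su - hn1adm -c "hdbnsutil -sr_register \
  --remoteHost=hn1-db-0 \
  --remoteInstance=03 \
  --replicationMode=async \
  --operationMode=logreplay \
  --name=SITE-DR"
```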
Failure of the third node won't trigger any cluster action. The cluster detects the replication status of connected sites, and the monitored attribute for the third site can change between `SOK` and `SFAIL` states. Before any takeover test to the third/DR site, or before executing your DR exercise process, place the cluster resources into maintenance mode to prevent any undesired cluster action.
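As a hedged illustration, on a SLES cluster the attribute check and the maintenance-mode switch could look like this sketch (the `crm` shell and the `SAPHanaSR-showAttr` tool from the SAPHanaSR package are assumed; RHEL clusters use the equivalent `pcs` commands):

```bash
# Show the SAPHanaSR attributes per site; the third/DR site typically toggles between SOK and SFAIL.
sudo SAPHanaSR-showAttr

# Put the cluster into maintenance mode before a takeover test to the DR site ...
sudo crm configure property maintenance-mode=true

# ... run the DR exercise, then hand control back to the cluster.
sudo crm configure property maintenance-mode=false
```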
articles/sap/workloads/get-started.md (1 addition, 0 deletions)
@@ -55,6 +55,7 @@ In the SAP workload documentation space, you can find the following areas:
## Change Log
+ - August 22, 2024: Added documentation for SAPHanaSR-angi as a separate tab in [High availability for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [High availability of SAP HANA scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md).
- July 29, 2024: Changes in [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](./high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md), and [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md) with instructions for managing SAP ASCS and ERS instances in the SAP startup framework when configured with systemd.
- July 24, 2024: Release of SBD STONITH support using iSCSI target server or Azure shared disk in [Configuring Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md).
- July 19, 2024: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add a statement about clusters spanning virtual networks (VNets)/subnets.
articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md (25 additions, 1 deletion)
@@ -411,7 +411,7 @@ Follow the steps in [Setting up Pacemaker on SUSE Enterprise Linux](./high-avail
## Implement HANA hooks SAPHanaSR and susChkSrv
- This important step optimizes the integration with the cluster and improves the detection when a cluster failover is needed. We highly recommend that you configure both SAPHanaSR and susChkSrv Python hooks. Follow the steps in [Implement the Python system replication hooks SAPHanaSR and susChkSrv](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv).
+ This important step optimizes the integration with the cluster and improves the detection when a cluster failover is needed. We highly recommend that you configure both SAPHanaSR and susChkSrv Python hooks. Follow the steps in [Implement the Python system replication hooks SAPHanaSR/SAPHanaSR-angi and susChkSrv](./sap-hana-high-availability.md#implement-hana-resource-agents).
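As a quick sanity check after following those steps, verifying that the hooks were loaded might look like the sketch below; the SID `HN1`, instance number `03`, and trace file names are assumptions and aren't part of the linked guide.

```bash
# Hedged sketch: as the HANA administration user, look for hook entries in the nameserver trace files.
# Adjust SID (HN1), instance number (03), and paths to your system.
sudo -u hn1adm sh -c 'grep -l "ha_dr_SAPHanaSR" /usr/sap/HN1/HDB03/$(hostname)/trace/nameserver_*.trc'
sudo -u hn1adm sh -c 'grep -l "susChkSrv.init" /usr/sap/HN1/HDB03/$(hostname)/trace/nameserver_*.trc'
```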
## Configure SAP HANA cluster resources
@@ -443,6 +443,10 @@ Example output:
### Create file system resources
+ The file system /hana/shared/SID is necessary both for HANA operation and for the Pacemaker monitoring actions that determine HANA's state. Implement resource agents to monitor the file system and act in case of failures. This section contains two options: one for `SAPHanaSR` and one for `SAPHanaSR-angi`.
+ #### [SAPHanaSR](#tab/saphanasr)
Create a dummy file system cluster resource. It monitors and reports failures if there's a problem accessing the NFS-mounted file system /hana/shared. That allows the cluster to trigger failover if there's a problem accessing /hana/shared. For more information, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904).
1. **[A]** Create the directory structure on both nodes.
@@ -508,6 +512,26 @@ Create a dummy file system cluster resource. It monitors and reports failures if
The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
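For orientation, a condensed sketch of such a dummy file system resource with an `on-fail=fence` monitor operation is shown below; the resource name, NFS paths, mount options, and timeouts are assumptions, and the linked article remains the authoritative reference.

```bash
# Hedged sketch: bind-mount a small subdirectory of the NFS share and fence the node if the monitor fails.
sudo crm configure primitive fs_hana_shared_check ocf:heartbeat:Filesystem \
  params device="/hana/shared/HN1/check" directory="/hana/shared/check" \
         fstype="nfs4" options="bind" \
  op monitor interval="120" timeout="120" on-fail="fence" \
  op start interval="0" timeout="120" \
  op stop interval="0" timeout="120"

sudo crm configure clone cln_fs_hana_shared_check fs_hana_shared_check \
  meta clone-node-max="1" interleave="true"
```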
+ #### [SAPHanaSR-angi](#tab/saphanasr-angi)
+ When you use the SAPHanaSR-angi package and resource agents, the new SAPHanaFilesystem agent monitors read/write access to /hana/shared/SID. The file system /hana/shared is already mounted through entries in /etc/fstab on each host. SAPHanaFilesystem and Pacemaker don't mount the file system for HANA and don't need any additional mount or pre-created subdirectory.
+ 1. **[1]** Configure the SAPHanaFilesystem agent. Replace `<placeholders>` with your instance number and HANA system ID.
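A minimal sketch of what this step might contain follows; the resource names, operation timeouts, and the `ON_FAIL_ACTION=fence` parameter are assumptions about the SAPHanaSR-angi `SAPHanaFilesystem` agent and need to be adapted to your system.

```bash
# Replace <placeholders> with your instance number and HANA system ID.
# Hedged sketch of a SAPHanaFilesystem primitive and its clone.
sudo crm configure primitive rsc_SAPHanaFilesystem_<SID>_HDB<InstNr> ocf:suse:SAPHanaFilesystem \
  op start interval="0" timeout="10" \
  op stop interval="0" timeout="20" \
  op monitor interval="120" timeout="120" \
  params SID="<SID>" InstanceNumber="<InstNr>" ON_FAIL_ACTION="fence"

sudo crm configure clone cln_SAPHanaFilesystem_<SID>_HDB<InstNr> rsc_SAPHanaFilesystem_<SID>_HDB<InstNr> \
  meta clone-node-max="1" interleave="true"
```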
> Timeouts in the preceding configuration might need to be adapted to the specific HANA setup to avoid unnecessary fence actions. Don't set the timeout values too low. Be aware that the file system monitor isn't related to the HANA system replication. For more information, see the [SUSE documentation](https://www.suse.com/support/kb/doc/?id=000019904).