Commit 9e15c08

Merge pull request #292500 from msftrobiro/sap-hana-scaleout-rhel-chksrv-add
Add ChkSrv hook for scale-out
2 parents: b93719b + 333684a

File tree

1 file changed: +38 −27 lines


articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md

@@ -8,7 +8,7 @@ ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
 ms.service: sap-on-azure
 ms.subservice: sap-vm-workloads
 ms.topic: article
-ms.date: 06/18/2024
+ms.date: 12/31/2024
 ms.author: radeltch
 ---

@@ -943,38 +943,45 @@ Now you're ready to create the cluster resources:
    > [!NOTE]
    > For the minimum supported version of package `resource-agents-sap-hana-scaleout` for your operating system release, see [Support policies for RHEL HA clusters - Management of SAP HANA in a cluster](https://access.redhat.com/articles/3397471).
 
-2. **[1,2]** Install the HANA system replication hook on one HANA DB node on each system replication site. SAP HANA should still be down.
+2. **[1,2]** Configure the HANA system replication hooks on one HANA DB node on each system replication site. SAP HANA should still be down.
+
+   `resource-agents-sap-hana-scaleout` version 0.185.3-0 or newer includes both the SAPHanaSR and ChkSrv hooks. Enabling the SAPHanaSR hook is mandatory for correct cluster operation. We highly recommend that you configure both the SAPHanaSR and ChkSrv Python hooks.
 
-   1. Prepare the hook as `root`.
-
-      ```bash
-      mkdir -p /hana/shared/myHooks
-      cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
-      chown -R hn1adm:sapsys /hana/shared/myHooks
-      ```
-
-   2. Adjust `global.ini`.
+   1. Adjust `global.ini`.
 
      ```bash
      # add to global.ini
      [ha_dr_provider_SAPHanaSR]
      provider = SAPHanaSR
-     path = /hana/shared/myHooks
+     path = /usr/share/SAPHanaSR-ScaleOut
      execution_order = 1
+
+     [ha_dr_provider_chksrv]
+     provider = ChkSrv
+     path = /usr/share/SAPHanaSR-ScaleOut
+     execution_order = 2
+     action_on_lost = kill
+
      [trace]
      ha_dr_saphanasr = info
+     ha_dr_chksrv = info
      ```
 
+   If you point the `path` parameter to the default `/usr/share/SAPHanaSR-ScaleOut` location, the Python hook code updates automatically through OS updates, and HANA uses the updated hook code at its next restart. With an optional custom path like `/hana/shared/myHooks`, you can decouple OS updates from the hook version that HANA uses.
+
+   You can adjust the behavior of the `ChkSrv` hook by using the `action_on_lost` parameter. Valid values are [ `ignore` | `stop` | `kill` ].
+
+   For more information on the implementation of the SAP HANA hooks, see [Enabling the SAP HANA srConnectionChanged() hook](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html-single/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/index#proc_instances_automating-sap-hana-scale-out-v9) and [Enabling the SAP HANA srServiceStateChanged() hook for hdbindexserver process failure action (optional)](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html-single/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/index#con_hooks_automating-sap-hana-scale-out-v9).
+
 3. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example, you achieve this by creating a new file. Run the commands as `root`.
 
    ```bash
   sudo visudo -f /etc/sudoers.d/20-saphana
   # Insert the following lines and then save
   Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
   Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
-  hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
-  Defaults!SOK, SFAIL !requiretty
+  Cmnd_Alias SRREBOOT = /usr/sbin/crm_attribute -n hana_hn1_gsh -v * -l reboot -t crm_config -s SAPHanaSR
+  hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL, SRREBOOT
+  Defaults!SOK, SFAIL, SRREBOOT !requiretty
   ```
 
 4. **[1,2]** Start SAP HANA on both replication sites. Run as <sid\>adm.
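As a quick sanity check (not part of the commit), you can confirm that both hook sections made it into `global.ini` before starting HANA. The sketch below runs against a local sample copy, so the path `/tmp/global.ini` is an illustrative assumption; on a real system you would grep the actual `global.ini` of your SID.

```shell
# Hypothetical sanity check: write a sample global.ini fragment matching the
# diff above, then confirm both [ha_dr_provider_*] sections and both trace
# switches are present.
cat > /tmp/global.ini <<'EOF'
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 1

[ha_dr_provider_chksrv]
provider = ChkSrv
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 2
action_on_lost = kill

[trace]
ha_dr_saphanasr = info
ha_dr_chksrv = info
EOF

# Count provider sections and trace switches
grep -c '^\[ha_dr_provider_' /tmp/global.ini   # → 2
grep -c '^ha_dr_' /tmp/global.ini              # → 2
```

On a live system, a mismatch here usually means one of the two sections was added on only one replication site.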
@@ -987,19 +994,23 @@ Now you're ready to create the cluster resources:
 
   ```bash
   cdtrace
-  awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
-  { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
-
-  # Example entries
-  # 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
-  # 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
-  # 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
-  # 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
-  # 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
-  # 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
+  awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
+  { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
+
+  # Example entries
+  # 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
+  # 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
+  # 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
+  ```
+
+6. **[1]** Verify the ChkSrv hook installation. Run as <sid\>adm on the active HANA system replication site.
+
+  ```bash
+  cdtrace
+  tail -20 nameserver_chksrv.trc
   ```
 
-6. **[1]** Create the HANA cluster resources. Run the following commands as `root`.
+7. **[1]** Create the HANA cluster resources. Run the following commands as `root`.
   1. Make sure the cluster is already in maintenance mode.
 
   2. Next, create the HANA topology resource.
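The `awk` filter from the verification step above can be tried offline. A minimal sketch, assuming a fabricated trace file: the line below is made up, but its whitespace-separated fields are laid out so that the date, time, hook name, and srHook value land in `$2`, `$3`, `$5`, and `$16`, mirroring the positions the doc's command extracts from a real nameserver trace.

```shell
# Offline demo of the trace filter; /tmp/hanatrace and the trace content
# are fabricated for illustration.
mkdir -p /tmp/hanatrace && cd /tmp/hanatrace
cat > nameserver_demo.trc <<'EOF'
[140121]{-1}[-1/-1] 2020-07-21 22:04:32.364379 i ha_dr_SAPHanaSR SAPHanaSR.py(00073) : site=2 in SAPHanaSR.srConnectionChanged(): calling crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
[140121]{-1}[-1/-1] 2020-07-21 22:06:35.599324 i ha_dr_SAPHanaSR SAPHanaSR.py(00073) : site=2 in SAPHanaSR.srConnectionChanged(): calling crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
EOF

# Same filter as in the doc: keep date, time, hook name, and srHook value
awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# → 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
# → 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
```

The key signal is the final column: the hook reports `SFAIL` while replication is broken and `SOK` once it is back in sync.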
@@ -1089,7 +1100,7 @@ Now you're ready to create the cluster resources:
   pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true
   ```
 
-7. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is `ok`, and that all of the resources are started.
+8. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is `ok`, and that all of the resources are started.
 
   ```bash
   sudo pcs property set maintenance-mode=false
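Not part of the commit, but a quick way to eyeball the widened sudoers rule from the diff above: write the fragment to a scratch file and check that all three command aliases are defined and granted. The scratch path `/tmp/20-saphana` is illustrative; on a real node the file lives at `/etc/sudoers.d/20-saphana` and is edited via `visudo` as shown in the diff.

```shell
# Illustrative check on a scratch copy of the sudoers fragment.
cat > /tmp/20-saphana <<'EOF'
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias SRREBOOT = /usr/sbin/crm_attribute -n hana_hn1_gsh -v * -l reboot -t crm_config -s SAPHanaSR
hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL, SRREBOOT
Defaults!SOK, SFAIL, SRREBOOT !requiretty
EOF

# Expect three Cmnd_Alias definitions, all granted NOPASSWD to hn1adm
grep -c '^Cmnd_Alias' /tmp/20-saphana                              # → 3
grep -q 'NOPASSWD: SOK, SFAIL, SRREBOOT' /tmp/20-saphana && echo ok   # → ok
```

The new `SRREBOOT` alias is what lets the ChkSrv/SAPHanaSR hooks write the reboot-lifetime `hana_hn1_gsh` attribute without a password prompt.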
