1. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet).
1. Select **Add**.
2. Select **Save**.
1. Next, create a health probe:
> [!NOTE]
> When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure load balancer, there's no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details, see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
> See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
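One way to apply the setting persistently is a sysctl drop-in file. The sketch below only stages the file in the working directory (the file name is an assumption); copy it to `/etc/sysctl.d/` and reload with `sysctl --system` as root on every VM behind the load balancer:

```shell
# Stage the drop-in locally; install to /etc/sysctl.d/ and run
# 'sysctl --system' as root on each VM behind the load balancer.
cat <<'EOF' > ./91-disable-tcp-timestamps.conf
# Azure Load Balancer health probes fail when TCP timestamps are enabled
net.ipv4.tcp_timestamps = 0
EOF
cat ./91-disable-tcp-timestamps.conf
```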
> [!TIP]
> You chose to deploy `/hana/shared` on [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md) or [NFS volume on Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
#### Deploy the Azure NetApp Files infrastructure
Deploy ANF volumes for the `/hana/shared` file system. You will need a separate `/hana/shared` volume for each HANA system replication site. For more information, see [Set up the Azure NetApp Files infrastructure](./sap-hana-scale-out-standby-netapp-files-suse.md#set-up-the-azure-netapp-files-infrastructure).
#### Deploy the Azure Files infrastructure
Deploy Azure Files NFS shares for the `/hana/shared` file system. You will need a separate `/hana/shared` Azure Files NFS share for each HANA system replication site. For more information, see [How to create an NFS share](../../../storage/files/storage-files-how-to-create-nfs-shares.md?tabs=azure-portal).
## Operating system configuration and preparation
The instructions in the next sections are prefixed with one of the following abbreviations:
* **[A]**: Applicable to all nodes, including majority maker
* **[AH]**: Applicable to all HANA DB nodes
* **[M]**: Applicable to the majority maker node only
* **[AH1]**: Applicable to all HANA DB nodes on SITE 1
* **[AH2]**: Applicable to all HANA DB nodes on SITE 2
* **[1]**: Applicable only to HANA DB node 1, SITE 1
Configure and prepare your OS by doing the following steps:
3. **[A]** SUSE delivers special resource agents for SAP HANA, and by default, agents for SAP HANA scale-up are installed. Uninstall the packages for scale-up, if installed, and install the packages for scenario SAP HANA scale-out. The step needs to be performed on all cluster VMs, including the majority maker.
> [!NOTE]
> SAPHanaSR-ScaleOut version 0.181 or higher must be installed.
```bash
# Uninstall scale-up packages and patterns
sudo zypper remove patterns-sap-hana
```
In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section only if you are using NFS on Azure NetApp Files.
1. **[AH]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
<pre><code>
vi /etc/sysctl.d/91-NetApp-HANA.conf
net.ipv4.tcp_sack = 1
</code></pre>
2. **[AH]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code>
vi /etc/modprobe.d/sunrpc.conf
</code></pre>
Create a dummy file system cluster resource, which will monitor and report failures.

The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
## Implement HANA HA hooks SAPHanaSrMultiTarget and susChkSrv
This important step optimizes the integration with the cluster and the detection when a cluster failover is possible. It's highly recommended to configure the SAPHanaSrMultiTarget Python hook. For HANA 2.0 SP5 and above, implementing both SAPHanaSrMultiTarget and susChkSrv hooks is recommended.
> [!NOTE]
> SAPHanaSrMultiTarget HA provider replaces SAPHanaSR for HANA scale-out. SAPHanaSR was described in an earlier version of this document.
> See [SUSE blog post](https://www.suse.com/c/sap-hana-scale-out-multi-target-upgrade/) about changes with the new HANA HA hook.
> This document provides steps for a new installation with the new provider. Upgrading an existing environment from SAPHanaSR to SAPHanaSrMultiTarget provider requires several changes and is _NOT_ described in this document. If the existing environment uses no third site for disaster recovery and [HANA multi-target system replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/ba457510958241889a459e606bbcf3d3.html) is not used, SAPHanaSR HA provider can remain in use.
SusChkSrv extends the functionality of the main SAPHanaSrMultiTarget HA provider. It acts in the situation when HANA process hdbindexserver crashes. If a single process crashes, HANA typically tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database is not responsive. With susChkSrv implemented, an immediate and configurable action is executed instead of waiting for the hdbindexserver process to restart on the same node. In HANA scale-out, susChkSrv acts for every HANA VM independently. The configured action will kill HANA or fence the affected VM, which triggers a failover by SAPHanaSR in the configured timeout period.
SUSE SLES 15 SP1 or higher is required for operation of both HANA HA hooks. The following table shows other dependencies.
| SAP HANA HA hook | HANA version required | SAPHanaSR-ScaleOut version required |
| --- | --- | --- |
| SAPHanaSrMultiTarget | HANA 2.0 SPS4 or higher | 0.181 or higher |
| susChkSrv | HANA 2.0 SPS5 or higher | 0.184.1 or higher |
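To check an installed package against the minimum versions in the table, `sort -V` can compare version strings. The sketch below uses a hard-coded example value; on a real node the installed version would come from `rpm -q --queryformat '%{VERSION}' SAPHanaSR-ScaleOut`:

```shell
# Compare an installed version against a required minimum using sort -V.
installed="0.185.1"   # example value; query the real one with rpm on your node
required="0.184.1"    # susChkSrv minimum from the table above
# sort -V sorts by version; if the lowest entry is 'required', the check passes
lowest=$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "OK: $installed >= $required"
else
  echo "Too old: $installed < $required"
fi
```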
1. **[1,2]** Stop HANA on both system replication sites. Execute as <sid\>adm:
```bash
sapcontrol -nr 03 -function StopSystem
```
2. **[1,2]** Adjust `global.ini` on each cluster site. If the prerequisites for susChkSrv hook are not met, remove the entire block `[ha_dr_provider_suschksrv]` from the section below.
You can adjust the behavior of susChkSrv with parameter `action_on_lost`. Valid values are [ ignore | stop | kill | fence ].
```bash
# add to global.ini
[ha_dr_provider_saphanasrmultitarget]
provider = SAPHanaSrMultiTarget
path = /usr/share/SAPHanaSR-ScaleOut/
execution_order = 1
[ha_dr_provider_suschksrv]
action_on_lost = kill
[trace]
ha_dr_saphanasrmultitarget = info
```
Pointing the configuration to the standard location /usr/share/SAPHanaSR-ScaleOut has the benefit that the Python hook code is automatically updated through OS or package updates, and HANA uses the updated code at its next restart. With an optional own path, such as /hana/shared/myHooks, you can decouple OS updates from the hook version in use.
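The mechanics of the own-path option can be sketched as follows. The directories are staged locally here purely for illustration; on a real node the source is /usr/share/SAPHanaSR-ScaleOut, the target is /hana/shared/myHooks (readable by <sid\>adm), and `path =` in global.ini points at the target:

```shell
# Illustration only: stand-in directories for the shipped hook location and
# a private hook path that OS package updates will not touch.
src=./usr-share-SAPHanaSR-ScaleOut     # real path: /usr/share/SAPHanaSR-ScaleOut
dst=./myHooks                          # real path: /hana/shared/myHooks
mkdir -p "$src" "$dst"
: > "$src/SAPHanaSrMultiTarget.py"     # stand-in for the shipped hook file
cp "$src/SAPHanaSrMultiTarget.py" "$dst/"
ls "$dst"                              # this copy is what 'path =' should reference
```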
3. **[AH]** The cluster requires sudoers configuration on the cluster nodes for <sid\>adm. In this example, that is achieved by creating a new file. Execute the commands as `root`, adapting the values of hn1 with the correct lowercase SID.
```bash
cat <<EOF > /etc/sudoers.d/20-saphana
# SAPHanaSR-ScaleOut needs for srHook
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
EOF
```
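Since the file content depends on the SID, generating it from a variable avoids typos. The sketch below stages the file locally; the attribute and alias names mirror the example above, so verify them against your SAPHanaSR-ScaleOut version before installing the file to /etc/sudoers.d/ as root:

```shell
# Generate the sudoers drop-in from a lowercase SID variable; stage it locally,
# review it (e.g. with visudo -c), then install it as /etc/sudoers.d/20-saphana.
sid=hn1
cat <<EOF > ./20-saphana
# SAPHanaSR-ScaleOut needs for srHook
Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_${sid}_glob_srHook -v SOK -t crm_config -s SAPHanaSR
EOF
grep "hana_${sid}_glob_srHook" ./20-saphana
```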
4. **[1]** Verify the communication between the HANA HA hook and the cluster. The output shows status SOK for the SID, and both replication sites with status P(rimary) or S(econdary).
```bash
sudo /usr/sbin/SAPHanaSR-showAttr
# Expected result
# Global cib-time maintenance prim sec sync_state upd
# HN1 Fri Jan 27 10:38:46 2023 false HANA_S1 - SOK ok
#
# Sites lpt lss mns srHook srr
# -----------------------------------------------
# HANA_S1 1674815869 4 hana-s1-db1 PRIM P
# HANA_S2 30 4 hana-s2-db1 SWAIT S
```
> [!NOTE]
> The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout if it takes longer to start the SAP HANA database.
## (Optional) Enabling HANA multi-target system replication for DR purposes
<details>
<summary>Expand</summary>
With the new SAP HANA HA provider SAPHanaSrMultiTarget, a third system replication site for disaster recovery (DR) can be used with a HANA scale-out system. The cluster environment is aware of a multi-target DR setup. Failure of the third site won't trigger any cluster action. The cluster detects the replication status of connected sites, and the monitored attribute can change between SOK and SFAIL. A maximum of one system replication to a HANA database outside the Linux cluster is supported.
> [!NOTE]
> Example of a multi-target system replication system. See [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/2e6c71ab55f147e19b832565311a8e4e.html) for further details.
If cluster parameter AUTOMATED_REGISTER="true" is set in the cluster after conclusion of testing, HANA parameter `register_secondaries_on_takeover = true` can be configured in the `[system_replication]` block of global.ini on the two SAP HANA sites in the Linux cluster. Such configuration would re-register the third site after a takeover between the first two sites, to keep a multi-target setup.
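As a sketch, the corresponding global.ini fragment on both clustered HANA sites would look like the following (parameter name as given above; apply it on each site's global.ini):

```bash
# add to global.ini on both SAP HANA sites in the Linux cluster
[system_replication]
register_secondaries_on_takeover = true
```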