
Commit c96bddd

Change in SLES HA documentation to switch to RA azure-lb
1 parent 7f341cf commit c96bddd

4 files changed: 33 additions and 30 deletions

articles/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid.md

Lines changed: 1 addition & 1 deletion
@@ -251,7 +251,7 @@ This documentation assumes that:
 > - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
 >
 > Note that the change will require brief downtime.
-> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediatelly to azure-lb resource agent.
+> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
 
 ```
 sudo crm configure primitive fs_NW2_ASCS Filesystem device='nw2-nfs:/NW2/ASCS' directory='/usr/sap/NW2/ASCS10' fstype='nfs4' \
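The note above allows existing clusters that were already moved from netcat to socat to stay on socat for now. A minimal sketch of how to see which probe mechanism a given cluster currently uses (the grep patterns are illustrative, not part of the documented procedure):

```
# Dump the Pacemaker configuration and look for the probe resource agent in use:
# "anything" with /usr/bin/socat indicates the socat-based workaround,
# "azure-lb" indicates the new resource agent.
sudo crm configure show | grep -E "anything|socat|azure-lb"
```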

articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md

Lines changed: 1 addition & 1 deletion
@@ -508,7 +508,7 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 > - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
 >
 > Note that the change will require brief downtime.
-> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediatelly to azure-lb resource agent.
+> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
 
 <pre><code>sudo crm node standby <b>anftstsapcl2</b>
 # If using NFSv3

articles/virtual-machines/workloads/sap/high-availability-guide-suse.md

Lines changed: 1 addition & 1 deletion
@@ -411,7 +411,7 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
 > - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
 >
 > Note that the change will require brief downtime.
-> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediatelly to azure-lb resource agent.
+> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
 
 <pre><code>sudo crm node standby <b>nw1-cl-1</b>

articles/virtual-machines/workloads/sap/sap-hana-high-availability.md

Lines changed: 30 additions & 27 deletions
@@ -12,7 +12,7 @@ ms.service: virtual-machines-linux
 ms.topic: article
 ms.tgt_pltfrm: vm-linux
 ms.workload: infrastructure
-ms.date: 11/06/2019
+ms.date: 03/06/2020
 ms.author: radeltch
 
 ---
@@ -512,7 +512,12 @@ Next, create the HANA resources:
 
 > [!IMPORTANT]
 > Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the floating IP becomes unavailable.
-> For existing Pacemaker clusters, we recommend replacing netcat with socat, following the instructions in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128). Note that the change will require brief downtime.
+> For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend using azure-lb resource agent, which is part of package resource-agents, with the following package version requirements:
+> - For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
+> - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
+>
+> Note that the change will require brief downtime.
+> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
 
 <pre><code># Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
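The added note ties the azure-lb agent to minimum resource-agents package versions. A quick, hedged way to verify the installed version on each node before making the change (standard SLES tooling, not a command from the guide itself):

```
# Show the installed resource-agents package version on this node
rpm -q resource-agents
# Or query package details and available updates from the configured repositories
sudo zypper info resource-agents
```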

@@ -536,9 +541,7 @@ sudo crm configure primitive rsc_ip_<b>HN1</b>_HDB<b>03</b> ocf:heartbeat:IPaddr
 op monitor interval="10s" timeout="20s" \
 params ip="<b>10.0.0.13</b>"
 
-sudo crm configure primitive rsc_nc_<b>HN1</b>_HDB<b>03</b> anything \
-params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:625<b>03</b>,backlog=10,fork,reuseaddr /dev/null" \
-op monitor timeout=20s interval=10 depth=0
+sudo crm configure primitive rsc_nc_<b>HN1</b>_HDB<b>03</b> azure-lb port=625<b>03</b>
 
 sudo crm configure group g_ip_<b>HN1</b>_HDB<b>03</b> rsc_ip_<b>HN1</b>_HDB<b>03</b> rsc_nc_<b>HN1</b>_HDB<b>03</b>
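Once rsc_nc_HN1_HDB03 has been switched to azure-lb, one way to confirm the health-probe listener is up on the active node is to check the port directly; a sketch assuming instance number 03 and therefore probe port 62503, as in the example above:

```
# On the node where g_ip_HN1_HDB03 is running, confirm a listener on the probe port
sudo ss -tnlp | grep 62503
```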

@@ -573,7 +576,7 @@ Make sure that the cluster status is ok and that all of the resources are starte
 # Slaves: [ hn1-db-1 ]
 # Resource Group: g_ip_HN1_HDB03
 # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-# rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 ## Test the cluster setup
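To re-check only the virtual IP and probe group after the agent switch, without scanning the full status output, something like the following can be used (resource names as defined earlier in this guide):

```
# Status of the IP/probe group only
sudo crm resource status g_ip_HN1_HDB03
# One-shot overview of the whole cluster, including inactive resources
sudo crm_mon -r -1
```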
@@ -617,7 +620,7 @@ stonith-sbd (stonith:external/sbd): Started hn1-db-1
 Stopped: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 
 Failed Actions:
 * rsc_SAPHana_HN1_HDB03_start_0 on hn1-db-0 'not running' (7): call=84, status=complete, exitreason='none',
@@ -659,7 +662,7 @@ stonith-sbd (stonith:external/sbd): Started hn1-db-1
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 ### Test the Azure fencing agent (not SBD)
@@ -747,7 +750,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-0:
@@ -774,7 +777,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 1. TEST 2: STOP PRIMARY DATABASE ON NODE 2
@@ -788,7 +791,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-1:
@@ -815,7 +818,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 1. TEST 3: CRASH PRIMARY DATABASE ON NODE
@@ -829,7 +832,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-0:
@@ -856,7 +859,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 1. TEST 4: CRASH PRIMARY DATABASE ON NODE 2
@@ -870,7 +873,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-1:
@@ -897,7 +900,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 1. TEST 5: CRASH PRIMARY SITE NODE (NODE 1)
@@ -911,7 +914,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as root on node hn1-db-0:
@@ -948,7 +951,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 1. TEST 6: CRASH SECONDARY SITE NODE (NODE 2)
@@ -962,7 +965,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-0 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-1
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
 </code></pre>
 
 Run the following commands as root on node hn1-db-1:
@@ -999,7 +1002,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 1. TEST 7: STOP THE SECONDARY DATABASE ON NODE 2
@@ -1013,7 +1016,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-1:
@@ -1036,7 +1039,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 1. TEST 8: CRASH THE SECONDARY DATABASE ON NODE 2
@@ -1050,7 +1053,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as <hanasid\>adm on node hn1-db-1:
@@ -1073,7 +1076,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 1. TEST 9: CRASH SECONDARY SITE NODE (NODE 2) RUNNING SECONDARY HANA DATABASE
@@ -1087,7 +1090,7 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 Run the following commands as root on node hn1-db-1:
@@ -1120,12 +1123,12 @@ NOTE: The following tests are designed to be run in sequence and depend on the e
 Slaves: [ hn1-db-1 ]
 Resource Group: g_ip_HN1_HDB03
 rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-rsc_nc_HN1_HDB03 (ocf::heartbeat:anything): Started hn1-db-0
+rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
 </code></pre>
 
 ## Next steps
 
 * [Azure Virtual Machines planning and implementation for SAP][planning-guide]
 * [Azure Virtual Machines deployment for SAP][deployment-guide]
 * [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](hana-overview-high-availability-disaster-recovery.md)
+
