Commit 4058937

Append SLES 15 SP04 changes in documents
1 parent de70d70 commit 4058937

2 files changed: +30 −26 lines

articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md

Lines changed: 25 additions & 21 deletions
@@ -13,7 +13,7 @@ ms.topic: article
 ms.tgt_pltfrm: vm-windows
 ms.workload: infrastructure-services
 ms.custom: subject-rbac-steps
-ms.date: 10/26/2022
+ms.date: 12/05/2022
 ms.author: radeltch

 ---
@@ -541,12 +541,17 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    <pre><code>sudo zypper update
    </code></pre>

-1. **[A]** Install the component, which you'll need for the cluster resources.
+   > [!NOTE]
+   > On SLES 15 SP04, check the versions of the *crmsh* and *pacemaker* packages and make sure that the minimum version requirements are met:
+   > - crmsh-4.4.0+20221028.3e41444-150400.3.9.1 or later
+   > - pacemaker-2.1.2+20211124.ada5c3b36-150400.4.6.1 or later
+
+2. **[A]** Install the component, which you'll need for the cluster resources.

    <pre><code>sudo zypper in socat
    </code></pre>

-1. **[A]** Install the azure-lb component, which you'll need for the cluster resources.
+3. **[A]** Install the azure-lb component, which you'll need for the cluster resources.

    <pre><code>sudo zypper in resource-agents
    </code></pre>
@@ -556,7 +561,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    > - **SLES 12 SP4/SP5**: The version must be resource-agents-4.3.018.a7fb5035-3.30.1 or later.
    > - **SLES 15/15 SP1**: The version must be resource-agents-4.3.0184.6ee15eb2-4.13.1 or later.

-1. **[A]** Configure the operating system.
+4. **[A]** Configure the operating system.

    a. Pacemaker occasionally creates many processes, which can exhaust the allowed number. When this happens, a heartbeat between the cluster nodes might fail and lead to a failover of your resources. We recommend increasing the maximum number of allowed processes by setting the following parameter:
@@ -589,7 +594,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    vm.swappiness = 10
    </code></pre>

-1. **[A]** Configure *cloud-netconfig-azure* for the high availability cluster.
+5. **[A]** Configure *cloud-netconfig-azure* for the high availability cluster.

    >[!NOTE]
    > Check the installed version of the *cloud-netconfig-azure* package by running **zypper info cloud-netconfig-azure**. If the version in your environment is 1.3 or later, it's no longer necessary to suppress the management of network interfaces by the cloud network plug-in. If the version is earlier than 1.3, we recommend that you update the *cloud-netconfig-azure* package to the latest available version.
@@ -604,7 +609,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    CLOUD_NETCONFIG_MANAGE="no"
    </code></pre>

-1. **[1]** Enable SSH access.
+6. **[1]** Enable SSH access.

    <pre><code>sudo ssh-keygen
@@ -616,7 +621,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    sudo cat /root/.ssh/id_rsa.pub
    </code></pre>

-1. **[2]** Enable SSH access.
+7. **[2]** Enable SSH access.

    <pre><code>sudo ssh-keygen
@@ -631,13 +636,13 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    sudo cat /root/.ssh/id_rsa.pub
    </code></pre>

-1. **[1]** Enable SSH access.
+8. **[1]** Enable SSH access.

    <pre><code># insert the public key you copied in the last step into the authorized keys file on the first server
    sudo vi /root/.ssh/authorized_keys
    </code></pre>

-1. **[A]** Install the *fence-agents* package if you're using a fencing device, based on the Azure fence agent.
+9. **[A]** Install the *fence-agents* package if you're using a fencing device based on the Azure fence agent.

    <pre><code>sudo zypper install fence-agents
    </code></pre>
@@ -651,7 +656,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    > SLES 15 SP1 and higher: fence-agents 4.5.2+git.1592573838.1eee0863 or later.
    > Earlier versions will not work correctly with a managed identity configuration.

-1. **[A]** Install the Azure Python SDK and Azure Identity Python module.
+10. **[A]** Install the Azure Python SDK and Azure Identity Python module.

    Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:
    <pre><code># You might need to activate the public cloud extension first
@@ -665,7 +670,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    SUSEConnect -p sle-module-public-cloud/15.1/x86_64
    sudo zypper install python3-azure-mgmt-compute
    sudo zypper install python3-azure-identity
-</code></pre>
+   </code></pre>

    >[!IMPORTANT]
    >Depending on your version and image type, you might need to activate the public cloud extension for your OS release before you can install the Azure Python SDK.
@@ -674,7 +679,7 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    > - On SLES 12 SP4 or SLES 12 SP5, install version 4.6.2 or later of the *python-azure-mgmt-compute* package.
    > - If your *python-azure-mgmt-compute or python**3**-azure-mgmt-compute* package version is 17.0.0-6.7.1, follow the instructions in [SUSE KBA](https://www.suse.com/support/kb/doc/?id=000020377) to update the fence-agents version and install the Azure Identity client library for Python module if it is missing.

-1. **[A]** Set up the hostname resolution.
+11. **[A]** Set up the hostname resolution.

    You can either use a DNS server or modify the */etc/hosts* file on all nodes. This example shows how to use the */etc/hosts* file.

@@ -696,11 +701,11 @@ Make sure to assign the custom role to the service principal at all VM (cluster
    <b>10.0.0.7 prod-cl1-1</b>
    </code></pre>

-1. **[1]** Install the cluster.
+12. **[1]** Install the cluster.

    - If you're using SBD devices for fencing (for either the iSCSI target server or Azure shared disk):

-      <pre><code>sudo ha-cluster-init -u
+      <pre><code>sudo crm cluster init
       # ! NTP is not configured to start at system boot.
       # Do you want to continue anyway (y/n)? <b>y</b>
       # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
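The command swap in this hunk reflects newer crmsh releases folding the standalone bootstrap scripts into `crm cluster` subcommands: `ha-cluster-init` becomes `crm cluster init` (and, in the next hunk, `ha-cluster-join` becomes `crm cluster join`). A minimal sketch, assuming a script that must run on both old and new crmsh versions, of selecting whichever form is available:

```shell
#!/bin/sh
# Pick the cluster bootstrap command available on this node.
# `crm cluster init` is the form used by newer crmsh releases;
# `ha-cluster-init -u` is the older standalone script it replaces.
if command -v crm >/dev/null 2>&1; then
  init_cmd="crm cluster init"
else
  init_cmd="ha-cluster-init -u"
fi
echo "selected: $init_cmd"
```

Either form runs the same interactive bootstrap; only the entry point changed.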
@@ -712,33 +717,32 @@ Make sure to assign the custom role to the service principal at all VM (cluster

    - If you're *not* using SBD devices for fencing:

-      <pre><code>sudo ha-cluster-init -u
+      <pre><code>sudo crm cluster init
       # ! NTP is not configured to start at system boot.
       # Do you want to continue anyway (y/n)? <b>y</b>
       # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
       # Address for ring0 [10.0.0.6] <b>Select Enter</b>
       # Port for ring0 [5405] <b>Select Enter</b>
       # Do you wish to use SBD (y/n)? <b>n</b>
-      #WARNING: Not configuring SBD - STONITH will be disabled.
-
+      # WARNING: Not configuring SBD - STONITH will be disabled.
       # Do you wish to configure an administration IP (y/n)? <b>n</b>
       </code></pre>

-1. **[2]** Add the node to the cluster.
+13. **[2]** Add the node to the cluster.

-   <pre><code>sudo ha-cluster-join
+   <pre><code>sudo crm cluster join
    # ! NTP is not configured to start at system boot.
    # Do you want to continue anyway (y/n)? <b>y</b>
    # IP address or hostname of existing node (for example, 192.168.1.1) []<b>10.0.0.6</b>
    # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
    </code></pre>

-1. **[A]** Change the hacluster password to the same password.
+14. **[A]** Change the hacluster password to the same password.

    <pre><code>sudo passwd hacluster
    </code></pre>

-1. **[A]** Adjust the corosync settings.
+15. **[A]** Adjust the corosync settings.

    <pre><code>sudo vi /etc/corosync/corosync.conf
    </code></pre>
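The note added in this file asks readers to make sure minimum *crmsh* and *pacemaker* versions are met on SLES 15 SP04. One way to sketch such a check, using GNU `sort -V` for version comparison (the `installed` value below is illustrative; on a real node you would read it from `rpm -q`):

```shell
#!/bin/sh
# Compare an installed version string against a required minimum using
# GNU version sort: if the minimum sorts first (or equal), the
# requirement is met. Illustrative values; on a node, obtain the real
# one with: rpm -q --qf '%{VERSION}-%{RELEASE}\n' crmsh
minimum="4.4.0+20221028.3e41444"
installed="4.4.1+20230101.0000000"   # hypothetical newer build
lowest=$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
  echo "crmsh meets the minimum version"
fi
```

The same comparison applies to the *pacemaker* minimum listed in the note.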

articles/virtual-machines/workloads/sap/high-availability-guide-suse.md

Lines changed: 5 additions & 5 deletions
@@ -13,7 +13,7 @@ ms.service: virtual-machines-sap
 ms.topic: article
 ms.tgt_pltfrm: vm-windows
 ms.workload: infrastructure-services
-ms.date: 11/18/2022
+ms.date: 12/05/2022
 ms.author: radeltch

 ---
@@ -1068,10 +1068,10 @@ The following tests are a copy of the test cases in the best practices guides of

    Run the following commands as root to identify the process of the message server and kill it.

-   <pre><code>nw1-cl-1:~ # pgrep ms.sapNW1 | xargs kill -9
+   <pre><code>nw1-cl-1:~ # pgrep -f ms.sapNW1 | xargs kill -9
    </code></pre>

-   If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.
+   If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node in the case of ENSA1. Run the following commands as root to clean up the resource state of the ASCS and ERS instances after the test.

    <pre><code>nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ASCS00
    nw1-cl-0:~ # crm resource cleanup rsc_sap_NW1_ERS02
@@ -1113,7 +1113,7 @@ The following tests are a copy of the test cases in the best practices guides of

    <pre><code>nw1-cl-0:~ #
    #If using ENSA1
-   pgrep en.sapNW1 | xargs kill -9
+   pgrep -f en.sapNW1 | xargs kill -9
    #If using ENSA2
    pgrep -f enq.sapNW1 | xargs kill -9
    </code></pre>
@@ -1158,7 +1158,7 @@ The following tests are a copy of the test cases in the best practices guides of

    Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process.

-   <pre><code>nw1-cl-0:~ # pgrep er.sapNW1 | xargs kill -9
+   <pre><code>nw1-cl-0:~ # pgrep -f er.sapNW1 | xargs kill -9
    </code></pre>

    If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart will not restart the process and the resource will be in a stopped state. Run the following commands as root to clean up the resource state of the ERS instance after the test.
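The `-f` added to the `pgrep` calls throughout this file changes what the pattern is matched against: without `-f`, pgrep matches only the process name, while with `-f` it matches the full command line. A minimal sketch of the difference, using `sleep` as a stand-in for the SAP processes:

```shell
#!/bin/sh
# Start a background process whose command line contains an argument.
sleep 300 &
bg=$!
# With -f, the pattern is matched against the full command line, so a
# pattern spanning the name and its argument matches.
full=$(pgrep -f "sleep 300")
# Without -f, the pattern is matched against the process name alone
# ("sleep"), so the same pattern finds nothing.
name=$(pgrep "sleep 300" || true)
kill "$bg"
echo "with -f: ${full:-none}; without -f: ${name:-none}"
```

Matching on the full command line is the more robust choice for the SAP process patterns used in these failover tests.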
