articles/sap/workloads/high-availability-guide-suse-multi-sid.md
This documentation assumes that:

* The Pacemaker cluster is already configured and running.
* At least one SAP system (ASCS / ERS instance) is already deployed and is running in the cluster.
* The cluster failover functionality is tested.
* The NFS shares for all SAP systems are deployed.
### Prepare for SAP NetWeaver Installation

```bash
10.3.1.32 nw3-nfs
```

3. **[A]** Create the shared directories for the additional **NW2** and **NW3** SAP systems that you're deploying to the cluster.

```bash
sudo mkdir -p /sapmnt/NW2
sudo chattr +i /usr/sap/NW3/ERS22
```
4. **[A]** Configure `autofs` to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP systems that you're deploying to the cluster. In this example, **NW2** and **NW3**.

   Update the file `/etc/auto.direct` with the file systems for the additional SAP systems that you're deploying to the cluster.

* If using NFS file server, follow the instructions on the [Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md#prepare-for-sap-netweaver-installation) page
* If using Azure NetApp Files, follow the instructions on the [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md#prepare-for-sap-netweaver-installation) page
You need to restart the `autofs` service to mount the newly added shares.

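For illustration, the `/etc/auto.direct` entries could take the following shape. The server names follow the host names used earlier in this article, but the export paths and NFS mount options are assumptions, not values from this excerpt:

```shell
# Hypothetical /etc/auto.direct entries for the NW2 and NW3 shares.
# Export paths and NFS options are illustrative only; adjust to your NFS setup.
#   /sapmnt/NW2        -nfsvers=4,nosymlink,sync nw2-nfs:/sapmntNW2
#   /usr/sap/NW2/SYS   -nfsvers=4,nosymlink,sync nw2-nfs:/usrsapNW2sys
#   /sapmnt/NW3        -nfsvers=4,nosymlink,sync nw3-nfs:/sapmntNW3
#   /usr/sap/NW3/SYS   -nfsvers=4,nosymlink,sync nw3-nfs:/usrsapNW3sys

# Restart autofs so the newly added shares are mounted on demand
sudo systemctl restart autofs
```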
### Install ASCS / ERS

1. Create the virtual IP and health probe cluster resources for the ASCS instance of the additional SAP system you're deploying to the cluster. The example shown here is for **NW2** and **NW3** ASCS, using a highly available NFS server.

> [!IMPORTANT]
> Recent testing revealed situations where netcat stops responding to requests due to a backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load Balancer requests, and the floating IP becomes unavailable.

```bash
meta resource-stickiness=3000
```

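The truncated snippet above ends with the group's stickiness setting. For orientation, a full set of definitions for one ASCS group might look like the following sketch. The resource names follow the naming convention used later in this article, but the IP address, probe port, and monitor settings are assumptions, not values from this excerpt:

```shell
# Hypothetical ASCS resource definitions for NW2 (all values illustrative)

# Virtual IP that the load balancer frontend points to
sudo crm configure primitive vip_NW2_ASCS IPaddr2 \
  params ip=10.3.1.16 \
  op monitor interval=10 timeout=20

# Health probe listener for the Azure load balancer (port is an assumption)
sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010

# Group the file system, probe listener, and virtual IP so they fail over together
sudo crm configure group g-NW2_ASCS fs_NW2_ASCS nc_NW2_ASCS vip_NW2_ASCS \
  meta resource-stickiness=3000
```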
As you create the resources, they may be assigned to different cluster nodes. When you group them, they'll migrate to one of the cluster nodes. Make sure that the cluster status is OK and that all resources are started. It isn't important on which node the resources are running.

245
245
2. **[1]** Install SAP NetWeaver ASCS
If the installation fails to create a subfolder in /usr/sap/**SID**/ASCS**Instance#**, try setting the owner of ASCS**Instance#** to **sid**adm and the group to sapsys, and retry.

3. **[1]** Create the virtual IP and health-probe cluster resources for the ERS instance of the additional SAP system you're deploying to the cluster. The example shown here is for **NW2** and **NW3** ERS, using a highly available NFS server.

```bash
sudo crm configure group g-NW3_ERS fs_NW3_ERS nc_NW3_ERS vip_NW3_ERS
```

As you create the resources, they may be assigned to different cluster nodes. When you group them, they'll migrate to one of the cluster nodes. Make sure that the cluster status is OK and that all resources are started.

Next, make sure that the resources of the newly created ERS group are running on the cluster node opposite to the one where the ASCS instance for the same SAP system was installed. For example, if NW2 ASCS was installed on `slesmsscl1`, then make sure the NW2 ERS group is running on `slesmsscl2`. You can migrate the NW2 ERS group to `slesmsscl2` by running the following command:
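The exact command is elided in this excerpt; with crmsh it would typically take this form (group and node names as used above):

```shell
# Move the NW2 ERS group to the node opposite its ASCS instance
crm resource migrate g-NW2_ERS slesmsscl2
```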
4. **[2]** Install SAP NetWeaver ERS

Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS. For example, for system **NW2**, the virtual host name is **msnw2ers**, the IP address is **10.3.1.17**, and the instance number is the one that you used for the probe of the load balancer, for example **12**. For system **NW3**, the virtual host name is **msnw3ers**, the IP address is **10.3.1.19**, and the instance number is, for example, **22**.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME to install SAP using the virtual host name.
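As a sketch, an invocation combining both parameters could look like the following. The user name `sapadmin` is an assumption, and the virtual host name is the NW2 ERS name from the example above:

```shell
# Hypothetical sapinst call; adjust the remote-access user and host name to your setup
sudo ./sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=msnw2ers
```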

```bash
crm resource unmigrate g-NW3_ERS
```

5. **[1]** Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example shown below is for NW2. You'll need to adapt the ASCS/SCS and ERS profiles for all SAP instances added to the cluster.

* ASCS/SCS profile
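The concrete profile changes are elided in this excerpt. In SUSE Pacemaker setups, the adaptation typically follows this pattern — the profile path and parameter numbers below are assumptions, so verify them against your installed profiles:

```shell
# Hypothetical excerpt of /sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs
#
# Let the cluster, not sapstartsrv, restart the enqueue server:
#   was: Restart_Program_01 = local $(_EN) pf=$(_PF)
#   now: Start_Program_01 = local $(_EN) pf=$(_PF)
#
# Hook in the SUSE cluster connector:
#   service/halib = $(DIR_CT_RUN)/saphascriptco.so
#   service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
```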

If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).

Make sure that the cluster status is OK and that all resources are started. It isn't important on which node the resources are running.

The following example shows the cluster resource status after SAP systems **NW2** and **NW3** were added to the cluster.

```bash
# (cluster resource status output)
```

## Test the multi-SID cluster setup

The following tests are a subset of the test cases in the SUSE best practices guides. They're included for your convenience. For the full list of cluster tests, see the following documentation:
* If using highly available NFS server, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications](./high-availability-guide-suse.md).
* If using Azure NetApp Files NFS volumes, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md).
Always read the SUSE best practices guides and perform all additional tests that might have been added.
The tests presented are in a two-node, multi-SID cluster with three SAP systems installed.

1. Test HAGetFailoverConfig and HACheckFailoverConfig
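These checks are run with sapcontrol as the instance's adm user. A sketch for one system — the user name `nw2adm` and instance number **10** are assumptions carried over from the examples above, not values from this excerpt:

```shell
# Query the failover configuration and verify it as the <sid>adm user
sudo su - nw2adm -c "sapcontrol -nr 10 -function HAGetFailoverConfig"
sudo su - nw2adm -c "sapcontrol -nr 10 -function HACheckFailoverConfig"
```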
```bash
slesmsscl2:~ # echo b > /proc/sysrq-trigger
```

If you use SBD, Pacemaker shouldn't automatically start on the killed node. The status after the node is started again should look like this.
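In that case, the cluster stack has to be started manually once the node is back up, for example with crmsh (a sketch, not taken from this excerpt):

```shell
# Start Pacemaker/Corosync on the rebooted node after an SBD fence
sudo crm cluster start
```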