# Azure Large Instances high availability for SAP on RHEL
> [!NOTE]
> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate an SAP HANA database failover. You need to have a good understanding of Linux, SAP HANA, and Pacemaker to complete the steps in this guide.
6. Update the System
1. First, install the latest updates on the system before you start to install the SBD device.
1. Make sure that you have at least version 4.1.1-12.el7_6.26 of the resource-agents-sap-hana package installed, as documented in [Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster](https://access.redhat.com/articles/3397471).
1. If you don’t want a complete update of the system, even though it is recommended, update the following packages at a minimum (a sample command follows the list).
1. `resource-agents-sap-hana`
1. `selinux-policy`
1. `iscsi-initiator-utils`
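
   A minimal sketch of the version check and the targeted update, assuming RHEL 7 with `yum`:

   ```bash
   # Verify the installed version meets the 4.1.1-12.el7_6.26 minimum.
   rpm -q resource-agents-sap-hana

   # Update only the minimum set of packages (a full update is still recommended).
   sudo yum update resource-agents-sap-hana selinux-policy iscsi-initiator-utils
   ```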
8. Install the Pacemaker, SBD, OpenIPMI, ipmitool, and fencing_sbd tools on all nodes.
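
   A hedged installation sketch; the package names are assumptions based on the standard RHEL 7 High Availability Add-On, where `fence-agents-sbd` supplies the SBD fencing agent:

   ```bash
   # Run on all cluster nodes. Assumes the RHEL HA Add-On repositories are enabled.
   sudo yum install -y pcs pacemaker sbd fence-agents-sbd ipmitool OpenIPMI
   ```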
In this section, you learn how to configure Watchdog.
2. The default Linux watchdog, which is installed during the installation, is the iTCO watchdog, which is not supported by UCS and HPE SDFlex systems. Therefore, this watchdog must be disabled.
1. The wrong watchdog is installed and loaded on the system:
   ```
   sollabdsm35:~ # lsmod |grep iTCO
   ```
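
   To disable it, a minimal sketch (the file name under `/etc/modprobe.d/` is an assumption) prevents the iTCO modules from loading:

   ```bash
   # Blacklist the unsupported iTCO watchdog modules; the file name is an assumption.
   cat <<'EOF' > /etc/modprobe.d/iTCO-blacklist.conf
   blacklist iTCO_wdt
   blacklist iTCO_vendor_support
   EOF

   # Unload the modules if they are currently loaded.
   modprobe -r iTCO_wdt iTCO_vendor_support
   ```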
In this section, you learn how to configure SBD.
   ```
   `- 10:0:3:2 sdl 8:176 active ready running
   ```
4. Create the SBD discs and set up the cluster primitive fencing, as sketched below. This step must be executed on the first node.
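
   A minimal sketch, with the multipath device path as a placeholder (use one of the SBD LUNs from the multipath output above; your device ID will differ):

   ```bash
   # Placeholder: replace with your multipath SBD device.
   SBD_DEV="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"

   # Initialize the SBD device; this overwrites any existing SBD metadata on it.
   sbd -d "$SBD_DEV" create

   # Create the SBD fencing primitive (run once, on the first node).
   pcs stonith create SBD fence_sbd devices="$SBD_DEV"
   ```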
| Replication mode | Description |
|---|---|
| **Synchronous in-memory (default)** | Synchronous in memory (mode=syncmem) means the log write is considered successful when the log entry has been written to the log volume of the primary and sending the log has been acknowledged by the secondary instance after copying to memory. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. Data loss can occur when the primary and secondary fail at the same time while the secondary system is connected, or when a takeover is executed while the secondary system is disconnected. This option provides better performance because it is not necessary to wait for disk I/O on the secondary instance, but it is more vulnerable to data loss. |
| **Synchronous** | Synchronous (mode=sync) means the log write is considered successful when the log entry has been written to the log volume of the primary and the secondary instance. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. No data loss occurs in this scenario as long as the secondary system is connected. Data loss can occur when a takeover is executed while the secondary system is disconnected. Additionally, this replication mode can run with a full sync option, which means the log write is successful only when the log buffer has been written to the log file of both the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure), the primary system suspends transaction processing until the connection to the secondary system is reestablished. No data loss occurs in this scenario. You can set the full sync option for system replication only with the parameter \[system\_replication\]/enable\_full\_sync. For more information on how to enable the full sync option, see Enable Full Sync Option for System Replication. |
| **Asynchronous** | Asynchronous (mode=async) means the primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it is more vulnerable to data loss. Data changes may be lost on takeover. |
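
As a hedged illustration of how these modes are selected (the host, instance number, and site name below are placeholders), the mode is passed when the secondary is registered for system replication:

```bash
# Run as the <sid>adm user on the secondary node; all values are placeholders.
hdbnsutil -sr_register --remoteHost=sollabdsm35 --remoteInstance=00 \
  --replicationMode=syncmem --operationMode=logreplay --name=SITE2
```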
1. These are the actions to execute on node1 (the primary); a sketch of the likely first step follows.
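
   A hedged sketch, assuming the elided steps begin by enabling system replication on node1 (the site name is a placeholder, and an initial backup of the database must already exist):

   ```bash
   # Run as the <sid>adm user on node1 (primary); SITE1 is a placeholder.
   hdbnsutil -sr_enable --name=SITE1

   # Confirm that the node now reports itself as primary.
   hdbnsutil -sr_state
   ```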
| Attribute Name | Description |
|---|---|
| SID | SAP System Identifier (SID) of the SAP HANA installation. Must be the same for all nodes. |
| InstanceNumber | 2-digit SAP instance identifier. |
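
A hedged sketch showing where these attributes are supplied, assuming placeholder SID `HN1`, instance number `00`, and operation timeouts:

```bash
# SID and InstanceNumber must match the SAP HANA installation on all nodes.
pcs resource create SAPHanaTopology_HN1_00 SAPHanaTopology \
  SID=HN1 InstanceNumber=00 \
  op monitor interval=10 timeout=600 \
  clone clone-max=2 clone-node-max=1 interleave=true
```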