
Commit dc10bd6

Formatting and improved clarity
1 parent 6678935 commit dc10bd6

2 files changed (+36, -36 lines)

articles/sap/workloads/disaster-recovery-sap-hana.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ Requirements for additional HSR sites are different for HANA scale-up and HANA s
 > [!NOTE]
 >
 > - Requirements in this article are only valid for a Pacemaker-enabled landscape. Without Pacemaker, SAP HANA version requirements apply for the chosen replication mode.
-> - Pacemaker and the HANA cluster resource agent manage only two sites. The additional HSR sites isn't controlled by the Pacemaker cluster.
+> - Pacemaker and the HANA cluster resource agent manage only two sites. Any additional HSR sites aren't controlled by the Pacemaker cluster.
 
 - RedHat supports one or more additional system replication sites to an SAP HANA database outside the Pacemaker cluster.
   - **HANA scale-up only**: See RedHat [support policies for RHEL HA clusters](https://access.redhat.com/articles/3397471) for details on the minimum OS, SAP HANA, and cluster resource agents version.
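
Such an additional, cluster-external HSR site is registered with `hdbnsutil` on the extra site itself. The following is only a minimal sketch; the replication mode, operation mode, and every `<placeholder>` value are assumptions, not values from the article:

```bash
# Minimal sketch: register a third HSR site that the Pacemaker cluster does not manage.
# Run as the <sid>adm user on the additional site's HANA host after installing SAP HANA there.
hdbnsutil -sr_register \
    --remoteHost=<primary-site-host> \
    --remoteInstance=<InstNum> \
    --replicationMode=async \
    --operationMode=logreplay \
    --name=<third-site-name>
```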

articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md

Lines changed: 35 additions & 35 deletions
@@ -75,7 +75,7 @@ In the presented architecture, you can deploy the HANA shared file system `/hana
 For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
 
 > [!IMPORTANT]
-> If deploying all HANA file systems on Azure NetApp Files, for production systems, where performance is a key, we recommend to evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
+> If deploying all HANA file systems on Azure NetApp Files, for production systems, where performance is a key, we recommend that you evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
 
 > [!WARNING]
 > Deploying `/hana/data` and `/hana/log` on NFS on Azure Files isn't supported.
@@ -118,7 +118,7 @@ In the following instructions, it's assumed that you've already created the reso
 > [!IMPORTANT]
 >
 > * Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
-> * If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend to deploy on SUSE Linux Enterprise Server (SLES) 15 SP2 and later.
+> * If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend that you deploy on SUSE Linux Enterprise Server (SLES) 15 SP2 and later.
 
 2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
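
Step 2 above amounts to creating one NIC per HANA DB VM in the `inter` subnet. A minimal Azure CLI sketch, in which the resource group and virtual network names are placeholders rather than values from the article:

```bash
# Minimal sketch with placeholder names: create one NIC in the `inter` subnet,
# then repeat for the remaining five HANA DB VMs.
az network nic create \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --subnet inter \
    --name hana-s1-db1-inter \
    --accelerated-networking true
```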

@@ -162,26 +162,26 @@ In the following instructions, it's assumed that you've already created the reso
 
 ### Configure Azure load balancer
 
-During VM configuration, you have an option to create or select exiting load balancer in networking section. Follow below steps, to setup standard load balancer for high availability setup of HANA database.
+During VM configuration, you have an option to create or select exiting load balancer in networking section. Follow below steps to set up standard load balancer for high availability setup of HANA database.
 
 > [!NOTE]
 >
 > * For HANA scale out, select the network interface for the `client` subnet when adding the virtual machines in the backend pool.
-> * The full set of command in Azure CLI and PowerShell adds the VMs with primary Network interface in the backend pool.
+> * The full set of commands in Azure CLI and PowerShell adds the VMs with primary Network interface in the backend pool.
 
 #### [Azure portal](#tab/lb-portal)
 
 [!INCLUDE [Configure Azure standard load balancer using Azure portal](../../../includes/sap-load-balancer-db-portal.md)]
 
 #### [Azure CLI](#tab/lb-azurecli)
 
-The full set of Azure CLI codes display the setup of the load balancer, which include two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you could add more VMs in the backend pool.
+The full set of Azure CLI codes display the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you could add more VMs in the backend pool.
 
 [!INCLUDE [Configure Azure standard load balancer using Azure CLI](../../../includes/sap-load-balancer-db-azurecli.md)]
 
 #### [PowerShell](#tab/lb-powershell)
 
-The full set of PowerShell code display the setup of the load balancer, which include two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you could add more VMs in the backend pool.
+The full set of PowerShell code display the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you could add more VMs in the backend pool.
 
 [!INCLUDE [Configure Azure standard load balancer using PowerShell](../../../includes/sap-load-balancer-db-powershell.md)]
 
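To make the note in this hunk concrete: the CLI include adds each VM's primary NIC to the backend pool, so for HANA scale-out you would instead add the IP configuration of the `client`-subnet NIC. A rough sketch only; the load balancer, pool, NIC, and IP configuration names are placeholders, not values from the include:

```bash
# Minimal sketch with placeholder names: add the client-subnet NIC of one HANA VM
# to the load balancer backend pool (repeat for each VM).
az network nic ip-config address-pool add \
    --resource-group <resource-group> \
    --lb-name <load-balancer-name> \
    --address-pool <backend-pool-name> \
    --nic-name <client-nic-name> \
    --ip-config-name <ip-config-name>
```
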
@@ -770,7 +770,7 @@ SUSE provides two different software packages for the Pacemaker resource agent t
 > [!WARNING]
 > Don't replace the package SAPHanaSR-ScaleOut by SAPHanaSR-angi in an already configured cluster. Upgrading from SAPHanaSR to SAPHanaSR-angi requires a specific procedure. For more details, see SUSE's blog post [How to upgrade to SAPHanaSR-angi](https://www.suse.com/c/how-to-upgrade-to-saphanasr-angi/).
 
-1. **[A]** Install the SAP HANA high availability packages:
+- **[A]** Install the SAP HANA high availability packages:
 
 ### [SAPHanaSR-angi](#tab/saphanasr-angi)
 > [!NOTE]
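
For orientation, the **[A]** step comes down to installing the chosen resource agent package on every cluster node. A minimal sketch on SLES, assuming you pick the package that matches the tab you follow:

```bash
# Minimal sketch: install the resource agent package on all cluster nodes.
sudo zypper install SAPHanaSR-angi        # new SAPHanaSR-angi resource agent
# or, for the classic scale-out resource agent:
# sudo zypper install SAPHanaSR-ScaleOut
```
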
@@ -1064,41 +1064,41 @@ sudo crm configure location loc_SAPHanaFilesystem_not_on_majority_maker cln_SAPH
 
 Create a dummy file system cluster resource, which will monitor and report failures, in case there's a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger failover, in case there's a problem accessing `/hana/shared`. For more information, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904)
 
-1. **[1,2]** Create the directory on the NFS mounted file system /hana/shared, which will be used in the special file system monitoring resource. The directories need to be created on both sites.
+- **[1,2]** Create the directory on the NFS mounted file system /hana/shared, which will be used in the special file system monitoring resource. The directories need to be created on both sites.
 
   ```bash
   mkdir -p /hana/shared/HN1/check
   ```
 
-2. **[AH]** Create the directory, which will be used to mount the special file system monitoring resource. The directory needs to be created on all HANA cluster nodes.
+- **[AH]** Create the directory, which will be used to mount the special file system monitoring resource. The directory needs to be created on all HANA cluster nodes.
 
   ```bash
   mkdir -p /hana/check
   ```
 
-3. **[1]** Create the file system cluster resources.
+- **[1]** Create the file system cluster resources.
 
   ```bash
   # Replace <placeholders> with your instance number and HANA system ID
 
   crm configure primitive fs_<SID>_HDB<InstNum>_fscheck Filesystem \
     params device="/hana/shared/<SID>/check" \
     directory="/hana/check" fstype=nfs4 \
     options="bind,defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock" \
     op monitor interval=120 timeout=120 on-fail=fence \
     op_params OCF_CHECK_LEVEL=20 \
     op start interval=0 timeout=120 op stop interval=0 timeout=120
 
   crm configure clone cln_fs_<SID>_HDB<InstNum>_fscheck fs_<SID>_HDB<InstNum>_fscheck \
     meta clone-node-max=1 interleave=true
   # Add a location constraint to not run filesystem check on majority maker VM
   crm configure location loc_cln_fs_<SID>_HDB<InstNum>_fscheck_not_on_mm \
     cln_fs_<SID>_HDB<InstNum>_fscheck -inf: hana-s-mm
   ```
 
   `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system may remain mounted, despite being inaccessible.
 
   `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
 
 ---
 
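Once the file system check resources are configured, it's worth confirming that the clone runs on the HANA nodes and stays off the majority maker. A minimal verification sketch, using the same crmsh tooling as the commands above (resource names follow the `<SID>`/`<InstNum>` placeholders):

```bash
# Review the configured filesystem-check resources and constraints.
sudo crm configure show | grep -A 3 fscheck
# One-shot cluster status, including inactive resources.
sudo crm_mon -r -1
```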
