articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
For the next part of this process, you need to create file system resources.
1. **[1]** Create the file system cluster resources for `/hana/shared` in the disabled state. You use `--disabled` because you have to define the location constraints before the mounts are enabled.
You chose to deploy `/hana/shared` on an [NFS share on Azure Files](../../storage/files/files-nfs-protocol.md) or an [NFS volume on Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md).
- In this example, the `/hana/shared` file system is deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section only if you're using NFS on Azure NetApp Files.
  ```bash
  # Options continuation of the `pcs resource create fs_hana_shared_s1/s2 --disabled ...`
  # commands; the device and directory arguments are not shown in this diff.
  fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

  # clone the /hana/shared file system resources for both site1 and site2
  pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
  pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
  ```
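Before pasting the option string into `pcs resource create`, it can be worth a quick sanity check that the critical NFS options are present. The following is a minimal bash sketch; the `opts` value is copied from the example above, and the set of checked options is illustrative:

```bash
# Sanity-check sketch: confirm critical NFS options are present in the
# option string before using it in `pcs resource create`.
opts='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev'

for required in hard nfsvers=4.1 _netdev; do
  if [[ ",$opts," == *",$required,"* ]]; then
    echo "ok: $required"
  else
    echo "missing: $required"
  fi
done
```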
  The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals on Azure NetApp Files. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
- In this example, the `/hana/shared` file system is deployed on NFS on Azure Files. Follow the steps in this section only if you're using NFS on Azure Files.
  ```bash
  # Options continuation of the `pcs resource create fs_hana_shared_s1/s2 --disabled ...`
  # commands; the device and directory arguments are not shown in this diff.
  fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  op start interval=0 timeout=120 op stop interval=0 timeout=120

  # clone the /hana/shared file system resources for both site1 and site2
  pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
  pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
  ```
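Because the resources are created with `--disabled`, the follow-up is to define site-scoped location constraints and only then enable the mounts. The sketch below is an assumption for illustration: the clone names derive from the resources above, but the `NFS_SID_SITE` attribute name is hypothetical, not taken from this diff.

```bash
# Hypothetical sketch (attribute name assumed): keep each site's file
# system clone off the other site's nodes, then enable the resources.
pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S1
pcs constraint location fs_hana_shared_s2-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S2

pcs resource enable fs_hana_shared_s1
pcs resource enable fs_hana_shared_s2
```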
The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because, when connectivity is lost, the file system might remain mounted despite being inaccessible.
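If a `Filesystem` resource was already created without the read/write check, one way to add it afterwards is `pcs resource update`. This is a hedged sketch; the resource name is taken from the examples above, and the command assumes a live Pacemaker cluster:

```bash
# Hypothetical adjustment: redefine the monitor operation on an existing
# Filesystem resource to include the read/write check and fencing on failure.
pcs resource update fs_hana_shared_s1 \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20
```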
The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on it. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it can also fail altogether. The SAP HANA resource can't stop successfully if the NFS share holding the HANA binaries is inaccessible.
The timeouts in the preceding configurations might need to be adapted to the specific SAP setup.
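For example, if start or stop operations are slow in a given environment, the operation timeouts can be raised in place. The resource name and the 180-second value below are illustrative assumptions:

```bash
# Hypothetical tuning: lengthen start/stop timeouts for a slower setup.
pcs resource update fs_hana_shared_s1 \
  op start interval=0 timeout=180 op stop interval=0 timeout=180
```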
1. **[1]** Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned attribute `S1`, and all SAP HANA DB nodes on replication site 2 are assigned attribute `S2`.
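The step above can be sketched with `pcs node attribute`. The node names and the attribute name here are assumptions for illustration; only the values `S1` and `S2` come from the text:

```bash
# Hypothetical node and attribute names: tag site 1 and site 2 DB nodes.
pcs node attribute hana-s1-db1 NFS_SID_SITE=S1
pcs node attribute hana-s1-db2 NFS_SID_SITE=S1
pcs node attribute hana-s2-db1 NFS_SID_SITE=S2
pcs node attribute hana-s2-db2 NFS_SID_SITE=S2

# Listing without arguments prints all node attributes for verification.
pcs node attribute
```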
Now you're ready to create the cluster resources:
```bash
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03
```
1. Create the cluster constraints.
If you're building a RHEL **7.x** cluster, use the following commands: