articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
15 additions & 11 deletions
@@ -1322,13 +1322,12 @@ When you're testing a HANA cluster configured with a read-enabled secondary, be
 #site id: 1
 #site name: HANA_S1
 ```
-DO TUK
 
 1. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (`/hana/shared`).
 
-The SAP HANA resource agents depend on binaries, stored on `/hana/shared`, to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. One test that you can perform is to remount the `/hana/shared` file system as *Read only*. This approach validates that the cluster will fail over, if access to `/hana/shared` is lost on the active system replication site.
+The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. One test that you can perform is to create a temporary firewall rule that blocks access to the `/hana/shared` NFS-mounted file system on one of the primary site VMs. This approach validates that the cluster will fail over if access to `/hana/shared` is lost on the active system replication site.
 
-**Expected result**: When you remount `/hana/shared` as *Read only*, the monitoring operation that performs a read/write operation on the file system will fail. This is because it isn't able to write to the file system, and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
+**Expected result**: When you block access to the `/hana/shared` NFS-mounted file system on one of the primary site VMs, the monitoring operation that performs a read/write operation on the file system will fail, because it can't access the file system, and it will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
 
 You can check the state of the cluster resources by running `crm_mon` or `pcs status`. Resource state before starting the test:
 
 ```bash
@@ -1359,14 +1358,19 @@ DO TUK
 # vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1
 ```
 
-To simulate failure for`/hana/shared` on one of the primary replication site VMs, run the following command:
-```bash
-# Execute as root
-mount -o ro /hana/shared
-# Or if the preceding command returns an error
-sudo mount -o ro 10.23.1.7/HN1-shared-s1 /hana/shared
-```
-
+To simulate failure for `/hana/shared`:
+
+* If you're using NFS on ANF, first confirm the IP address of the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh | grep /hana/shared`.
+* If you're using NFS on Azure Files, first determine the IP address of the private endpoint for your storage account.
+
+Then, set up a temporary firewall rule that blocks access to the IP address of the `/hana/shared` NFS file system, by running the following command on one of the primary HANA system replication site VMs.
+
+In this example, the command was run on hana-s1-db1 for the ANF volume `/hana/shared`.
+
+```bash
+iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP
+```
+
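
A note on cleanup: the `iptables` rules added above are intended to be temporary, and the article may handle their removal elsewhere. A minimal sketch for clearing them once the failover test is complete, assuming the same ANF IP address (10.23.1.7) and the plain `iptables` backend:

```bash
# Remove the temporary DROP rules that blocked the /hana/shared NFS traffic
iptables -D INPUT -s 10.23.1.7 -j DROP
iptables -D OUTPUT -d 10.23.1.7 -j DROP
```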
 The HANA VM that lost access to `/hana/shared` should restart or stop, depending on the cluster configuration. The cluster resources are migrated to the other HANA system replication site.
 
 If the cluster hasn't started on the VM that was restarted, start the cluster by running the following:
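
The code block that this last context line refers to falls outside the diff excerpt shown here. As a minimal sketch, assuming the `pcs` tooling used elsewhere in this RHEL-based article, starting the cluster on the restarted node could look like this:

```bash
# Run as root on the VM that was restarted
pcs cluster start

# Verify that the node rejoined and the resources are recovering
pcs status --full
```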