---
title: Troubleshoot Unexpected Node Reboots in Azure Linux SUSE Pacemaker Cluster
description: This article provides troubleshooting steps for resolving unexpected node restarts in SUSE Linux Pacemaker clusters.
author: rnirek
ms.author: rnirek
ms.reviewer: divargas, rnirek, lariasjaen
ms.topic: troubleshooting
ms.date: 1/13/2025
ms.service: azure-virtual-machines
ms.collection: linux
ms.custom: sap:Issue with Pacemaker cluster, and fencing
---

# Troubleshooting unexpected node restarts in Azure Linux SUSE Pacemaker Cluster nodes

**Applies to:** :heavy_check_mark: Linux VMs

This article provides guidance for troubleshooting, analysis, and resolution of the most common scenarios for unexpected node restarts in SUSE Pacemaker clusters.

## Prerequisites

- Make sure that the Pacemaker cluster setup is correctly configured by following the guidelines that are provided in [SUSE - set up Pacemaker on SUSE Linux Enterprise Server in Azure](/azure/sap/workloads/high-availability-guide-suse-pacemaker).
- For a Microsoft Azure Pacemaker cluster that uses the Azure Fence Agent as the STONITH (Shoot-The-Other-Node-In-The-Head) device, refer to the documentation that's provided in [SUSE - Create Azure Fence agent STONITH device](/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi#use-an-azure-fence-agent-1).
- For a Microsoft Azure Pacemaker cluster that uses SBD (STONITH Block Device) storage protection as the STONITH device, choose one of the following setup options (see the articles for detailed information):
  - [SBD with an iSCSI target server](/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi#sbd-with-an-iscsi-target-server)
  - [SBD with an Azure shared disk](/azure/sap/workloads/high-availability-guide-suse-pacemaker?tabs=msi#sbd-with-an-azure-shared-disk)
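
Before you analyze an unexpected reboot, it can help to confirm the overall cluster state and that a fencing device is registered. The following commands are a minimal sketch that uses standard Pacemaker and Corosync tooling on SLES; no environment-specific values are assumed:

```bash
# Show a one-shot view of cluster, node, and resource status (including inactive resources).
sudo crm_mon -1r

# List the fencing (STONITH) devices that Pacemaker currently has registered.
sudo stonith_admin --list-registered

# Check quorum and membership information reported by corosync.
sudo corosync-quorumtool -s
```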

## Scenario 1: Network outage
- The cluster nodes experience `corosync` communication errors. This causes continuous retransmissions because communication can't be established between the nodes. The issue triggers application time-outs, ultimately causing node fencing and subsequent restarts.
- Additionally, services that depend on network connectivity, such as `waagent`, generate communication-related error messages in the logs. This further indicates network-related disruptions.

The following messages are logged in the `/var/log/messages` log:

From `node 01`:
```output
Aug 21 01:48:00 node 01 corosync[19389]: [TOTEM ] Token has not been received in 30000 ms
Aug 21 01:48:00 node 01 corosync[19389]: [TOTEM ] A processor failed, forming new configuration: token timed out (40000ms), waiting 48000ms for consensus.
```
From `node 02`:
```output
Aug 21 01:47:27 node 02 corosync[15241]: [KNET ] link: host: 2 link: 0 is down
Aug 21 01:47:27 node 02 corosync[15241]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Aug 21 01:47:27 node 02 corosync[15241]: [KNET ] host: host: 2 has no active links
Aug 21 01:47:31 node 02 corosync[15241]: [TOTEM ] Token has not been received in 30000 ms
```
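
When corosync logs `KNET` link-down messages like these, you can check the current link state on each node and review the surrounding log entries. This is a minimal sketch that uses standard corosync tooling; no environment-specific values are assumed:

```bash
# Show the status of corosync links (rings) from this node's point of view.
sudo corosync-cfgtool -s

# Review recent corosync and pacemaker messages around the restart timestamp.
sudo grep -Ei 'corosync|pacemaker' /var/log/messages | tail -n 200
```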

### Cause for scenario 1
An unexpected node restart occurs because of a network maintenance activity or an outage. For confirmation, you can match the timestamp by reviewing the [Azure Maintenance Notification](/azure/virtual-machines/linux/maintenance-notifications) in the Azure portal. For more information about Azure Scheduled Events, see [Azure Metadata Service: Scheduled Events for Linux VMs](/azure/virtual-machines/linux/scheduled-events).
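
You can also query the Azure Instance Metadata Service (IMDS) from inside the VM to see whether a maintenance event is currently scheduled. The following is a minimal sketch; the API version shown is one documented version and might differ in your environment:

```bash
# Query IMDS for scheduled events (an empty "Events" list means nothing is currently scheduled).
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | python3 -m json.tool
```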

### Resolution for scenario 1
If the unexpected restart timestamp aligns with a maintenance activity, the analysis confirms that either platform or network maintenance affected the cluster.

For further assistance or other queries, you can open a support request by following [these instructions](#next-steps).

## Scenario 2: Cluster misconfiguration
The cluster nodes experience unexpected failovers or node restarts. These are often caused by cluster misconfigurations that affect the stability of Pacemaker clusters.

To review the cluster configuration, run the following command:
```bash
sudo crm configure show
```

### Cause for scenario 2
Unexpected restarts in an Azure SUSE Pacemaker cluster often occur because of misconfigurations:

- Incorrect STONITH configuration:
  - Missing or misconfigured STONITH/fencing: Not configuring STONITH correctly can cause nodes to be marked as unhealthy and trigger unnecessary restarts.
  - Wrong STONITH resource settings: Incorrect parameters for Azure fencing agents, such as `fence_azure_arm`, can cause nodes to restart unexpectedly during failovers.
  - Insufficient permissions: The Azure resource group or credentials that are used for fencing might lack required permissions and cause STONITH failures.

- Missing or incorrect resource constraints:
  Poorly set constraints can cause resources to be redistributed unnecessarily. This can cause node overload and restarts. Misaligned resource dependency configurations can cause nodes to fail or go into a restart loop.

- Cluster threshold and time-out misconfigurations:
  - Incorrectly tuned `failure-timeout`, `migration-threshold`, or monitor operation time-out values can cause resources or nodes to be restarted prematurely.
  - Heartbeat time-out settings: Incorrect `corosync` time-out settings for heartbeat intervals can cause each node to assume that the other node is offline, triggering unnecessary restarts.

- Lack of proper health checks:
  Not setting correct health-check intervals for critical services such as SAP HANA (High-performance ANalytic Appliance) can cause resource or node failures.

- Resource agent misconfiguration:
  - Custom resource agents misaligned with the cluster: Resource agents that don't adhere to Pacemaker standards can create unpredictable behavior, including node restarts.
  - Wrong resource start/stop parameters: Incorrectly tuned start/stop parameters in the cluster configuration can cause nodes to restart during resource recovery.

- Corosync configuration issues:
  - Non-optimized network settings: Incorrect multicast/unicast configurations can cause heartbeat communication failures. Mismatched `ring0` and `ring1` network configurations can cause split-brain scenarios and node fencing.
  - Token time-out mismatches: Token time-out values that aren't aligned with the environment's latency can trigger node isolation and restarts (see the example `totem` settings after this list).
  - To review the Corosync configuration, run the following command:

    ```bash
    sudo cat /etc/corosync/corosync.conf
    ```
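
For reference, the Azure setup guide for SUSE Pacemaker uses `token` and `consensus` values along the lines of the following excerpt. This snippet is only an illustration of which settings to compare against your own file, not a drop-in configuration; confirm the exact values against the setup guide that's linked in the prerequisites:

```output
# Illustrative excerpt of /etc/corosync/corosync.conf - verify the values
# against the Azure SUSE Pacemaker setup guide and your own environment.
totem {
    token: 30000
    consensus: 36000
    # ... other totem settings unchanged ...
}
```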

### Resolution for scenario 2
- Follow the proper guidelines to set up a [SUSE Pacemaker cluster](#prerequisites). Additionally, make sure that appropriate resources are allocated for applications such as [SAP HANA](/azure/sap/workloads/sap-hana-high-availability) or [SAP NetWeaver](/azure/sap/workloads/high-availability-guide-suse), as specified in the Microsoft documentation.
- To make the necessary changes to the cluster configuration, follow these steps:
  1. Stop the application on both nodes.
  2. Put the cluster into maintenance mode:
     ```bash
     crm configure property maintenance-mode=true
     ```
  3. Edit the cluster configuration:
     ```bash
     crm configure edit
     ```
  4. Save the changes.
  5. Take the cluster out of maintenance mode:
     ```bash
     crm configure property maintenance-mode=false
     ```
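
After the cluster leaves maintenance mode, it can help to confirm that all resources return to a clean state. The following is a minimal sketch that uses standard Pacemaker tooling; `RSC_NAME` is a placeholder for one of your resource IDs:

```bash
# One-shot status view: all nodes should be online and no resources should show as FAILED.
sudo crm_mon -1r

# Optionally clear stale failure counts for a specific resource after the fix
# (replace RSC_NAME with a resource ID from "crm status").
sudo crm resource cleanup RSC_NAME
```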

## Scenario 3: Migration from on-premises to Azure
When you migrate a SUSE Pacemaker cluster from on-premises to Azure, unexpected restarts can occur because of specific misconfigurations or overlooked dependencies.

### Cause for scenario 3
The following are common mistakes in this category:

- Incomplete or incorrect STONITH configuration:
  - Missing or misconfigured STONITH/fencing: Not configuring STONITH (Shoot-The-Other-Node-In-The-Head) correctly can cause nodes to be marked as unhealthy and trigger unnecessary restarts.
  - Wrong STONITH resource settings: Incorrect parameters for Azure fencing agents such as `fence_azure_arm` can cause nodes to restart unexpectedly during failovers.
  - Insufficient permissions: The Azure resource group or credentials that are used for fencing might lack required permissions and cause STONITH failures. Key Azure-specific parameters, such as the subscription ID, resource group, or VM names, must be correctly configured in the fencing agent. Omissions here can cause fencing failures and unexpected restarts. (A role-assignment check appears after this list.)

  For more information, see [Troubleshoot Azure Fence Agent startup issues in SUSE](troubleshoot-azure-fence-agent-startup-suse.md) and [Troubleshoot SBD service failure in SUSE Pacemaker clusters](troubleshoot-sbd-issues-sles.md).

- Network misconfigurations:
  Misconfigured VNets, subnets, or security group rules can block essential cluster communication and cause perceived node failures and restarts.

  For more information, see [Virtual networks and virtual machines in Azure](/azure/virtual-machines/linux/network-overview).

- Metadata Service issues:
  Azure's cloud metadata services must be correctly handled. Otherwise, resource detection or startup processes can fail.

  For more information, see [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service) and [Azure Metadata Service: Scheduled Events for Linux VMs](/azure/virtual-machines/linux/scheduled-events).

- Performance and latency mismatches:
  - Inadequate VM sizing: Migrated workloads might not align with the selected Azure VM (virtual machine) size. This causes excessive resource use and triggers restarts.
  - Disk I/O mismatches: On-premises workloads with high IOPS (input/output operations per second) demands must be paired with the appropriate Azure disk or storage performance tier.

  For more information, see [Collect performance metrics for a Linux VM](collect-performance-metrics-from-a-linux-system.md).

- Security and firewall rules:
  - Blocked ports: On-premises clusters often rely on open internal communication, but Azure NSGs (network security groups) or firewalls might block ports that are required for Pacemaker or Corosync communication (see the port check that follows this list).

  For more information, see [Network security group test](/azure/virtual-machines/network-security-group-test).
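
The following is a minimal sketch of how you might confirm from a cluster node that corosync is listening and that the peer node is reachable on a cluster-related port. Port 5405/UDP is the corosync default and port 7630/TCP is the Hawk web console; verify the actual ports used in your environment, and note that `node02` is a placeholder host name:

```bash
# Confirm that corosync is listening on its UDP port (5405 by default) on this node.
sudo ss -ulpn | grep corosync

# Check TCP reachability of the peer node on a cluster-related port (Hawk, 7630/TCP).
# node02 is a placeholder; replace it with the peer's host name or IP address.
nc -zv node02 7630
```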
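
If you suspect that the identity used by `fence_azure_arm` lacks permissions, you can list its role assignments with the Azure CLI. This is a hedged sketch: `<identity-object-id>` and `myResourceGroup` are placeholders, and the required role and its permissions should be validated against the fencing setup guide that's linked in the prerequisites:

```bash
# List role assignments for the managed identity or service principal used for fencing.
# <identity-object-id> and myResourceGroup are placeholders for your environment.
az role assignment list \
    --assignee "<identity-object-id>" \
    --resource-group "myResourceGroup" \
    --output table
```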

### Resolution for scenario 3

Follow the proper guidelines to set up a [SUSE Pacemaker cluster](#prerequisites). Additionally, make sure that appropriate resources are allocated for applications such as [SAP HANA](/azure/sap/workloads/sap-hana-high-availability) or [SAP NetWeaver](/azure/sap/workloads/high-availability-guide-suse), as specified in the Microsoft documentation.

## Scenario 4: `HANA_CALL` time-out after 60 seconds

The Azure SUSE Pacemaker cluster is running SAP HANA as an application, and it experiences unexpected restarts on one or both of the nodes in the Pacemaker cluster. Per the `/var/log/messages` or `/var/log/pacemaker.log` log entries, the node restart is caused by a `HANA_CALL` time-out, as follows:

```output
2024-06-04T09:25:37.772406+00:00 node01 SAPHanaTopology(rsc_SAPHanaTopology_H00_HDB02)[99440]: WARNING: RA: HANA_CALL timed out after 60 seconds running command 'hdbnsutil -sr_stateConfiguration --sapcontrol=1'
2024-06-04T09:25:38.711650+00:00 node01 SAPHana(rsc_SAPHana_H00_HDB02)[99475]: WARNING: RA: HANA_CALL timed out after 60 seconds running command 'hdbnsutil -sr_stateConfiguration'
2024-06-04T09:25:38.724146+00:00 node01 SAPHana(rsc_SAPHana_H00_HDB02)[99475]: ERROR: ACT: check_for_primary: we didn't expect node_status to be: <>
2024-06-04T09:25:38.736748+00:00 node01 SAPHana(rsc_SAPHana_H00_HDB02)[99475]: ERROR: ACT: check_for_primary: we didn't expect node_status to be: DUMP <00000000 0a |.|#01200000001>
```
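
To gauge whether the command that the resource agent runs is genuinely slow, you can time it manually as the HANA administrator user. This is a minimal sketch, assuming the instance from the log above (SID H00, so the administrator user would typically be `h00adm`); adjust the user and command for your system:

```bash
# Run the same command that the resource agent calls, timed, as the <sid>adm user.
# "h00adm" is assumed from the H00 SID in the example logs; replace it for your SID.
sudo su - h00adm -c "time hdbnsutil -sr_stateConfiguration --sapcontrol=1"
```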

### Cause for scenario 4
The SAP HANA time-out messages are commonly considered internal application time-outs. Therefore, the SAP vendor should be engaged.

### Resolution for scenario 4
- To identify the root cause of the issue, review the [OS performance](collect-performance-metrics-from-a-linux-system.md) (see the example commands after this list).
- Pay particular attention to memory pressure and to storage devices and their configuration. This is especially true if HANA is hosted on Network File System (NFS), Azure NetApp Files (ANF), or Azure Files.
- After you rule out external factors, such as platform or network outages, we recommend that you contact the application vendor for trace call analysis and log review.
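
The following is a minimal sketch of commands that you might use to spot memory pressure or slow storage around the time of the restart. It assumes that the `sysstat` package (for `sar` and `iostat`) and the NFS client utilities (for `nfsiostat`) are installed:

```bash
# Memory and swap usage at the time of collection.
free -m

# Historical memory utilization from sysstat (if sar data collection is enabled).
sar -r

# Extended per-device I/O statistics: 5 samples at 2-second intervals.
iostat -xz 2 5

# Per-mount NFS latency and throughput (only relevant for NFS-hosted file systems).
nfsiostat
```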

## Scenario 5: `ASCS/ERS` time-out in SAP NetWeaver clusters

The Azure SUSE Pacemaker cluster is running SAP NetWeaver ASCS/ERS as an application, and it experiences unexpected restarts on one or both of the nodes in the Pacemaker cluster. The following messages are logged in the `/var/log/messages` log:

```output
2024-11-09T07:36:42.037589-05:00 node 01 SAPInstance(RSC_SAP_ERS10)[8689]: ERROR: SAP instance service enrepserver is not running with status GRAY !
2024-11-09T07:36:42.044583-05:00 node 01 pacemaker-controld[2596]: notice: Result of monitor operation for RSC_SAP_ERS10 on node01: not running
```

```output
2024-11-09T07:39:42.789404-05:00 node01 SAPInstance(RSC_SAP_ASCS00)[16393]: ERROR: SAP Instance CP2-ASCS00 start failed: #01109.11.2024 07:39:42#012WaitforStarted#012FAIL: process msg_server MessageServer not running
2024-11-09T07:39:42.796280-05:00 node01 pacemaker-execd[2404]: notice: RSC_SAP_ASCS00 start (call 78, PID 16393) exited with status 7 (execution time 23.488s)
2024-11-09T07:39:42.828845-05:00 node 01 pacemaker-schedulerd[2406]: warning: Unexpected result (not running) was recorded for start of RSC_SAP_ASCS00 on node01 at Nov 9 07:39:42 2024
2024-11-09T07:39:42.828955-05:00 node 01 pacemaker-schedulerd[2406]: warning: Unexpected result (not running) was recorded for start of RSC_SAP_ASCS00 on node01 at Nov 9 07:39:42 2024
```

### Cause for scenario 5
The `ASCS/ERS` resource is considered to be the application for SAP NetWeaver clusters. When the corresponding cluster monitoring resource times out, it triggers a failover process.
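
When you investigate these time-outs, you can query the instance status directly with `sapcontrol` to see how the SAP processes report outside of Pacemaker. This is a minimal sketch; the instance number `00` is a placeholder taken from the example `ASCS00` resource name, so adjust it and the user for your system:

```bash
# Query the process list of the ASCS instance (instance number 00 is a placeholder).
# Run this as the <sid>adm user of your SAP system.
sapcontrol -nr 00 -function GetProcessList
```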

### Resolution for scenario 5
- To identify the root cause of the issue, we recommend that you review the [OS performance](collect-performance-metrics-from-a-linux-system.md).
- Pay particular attention to memory pressure and to storage devices and their configuration. This is especially true if SAP NetWeaver is hosted on Network File System (NFS), Azure NetApp Files (ANF), or Azure Files.
- After you rule out external factors, such as platform or network outages, we recommend that you engage the application vendor for trace call analysis and log review.

## Next steps
For additional help, open a support request, and submit your request by attaching [supportconfig](https://documentation.suse.com/smart/systems-management/html/supportconfig/index.html) and [hb_report](https://www.suse.com/support/kb/doc/?id=000019142) logs for troubleshooting.

[!INCLUDE [Third-party disclaimer](../../../includes/third-party-disclaimer.md)]

[!INCLUDE [Third-party contact disclaimer](../../../includes/third-party-contact-disclaimer.md)]

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]