@@ -114,7 +114,7 @@ If you prefer to use Azure CLI to view the activity log for a failed cluster, fo
In the Azure portal, navigate to your AKS cluster resource and select **Diagnose and solve problems** from the left menu. You'll see a list of categories and scenarios that you can select to run diagnostic checks and get recommended solutions.
- In the Azure CLI, use the `az aks collect` command with the `--name` and `--resource-group` parameters to collect diagnostic data from your cluster nodes. You can also use the `--storage-account` and `--sas-token` parameters to specify an Azure Storage account where the data will be uploaded. The output will include a link to the **Diagnose and Solve Problems** blade where you can view the results and suggested actions.
+ In the Azure CLI, use the `az aks kollect` command with the `--name` and `--resource-group` parameters to collect diagnostic data from your cluster nodes. You can also use the `--storage-account` and `--sas-token` parameters to specify an Azure Storage account where the data will be uploaded. The output will include a link to the **Diagnose and Solve Problems** blade where you can view the results and suggested actions.
In the **Diagnose and Solve Problems** blade, you can select **Cluster Issues** as the category. If any issues are detected, you'll see a list of possible solutions that you can follow to fix them.
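As a minimal sketch of the `az aks kollect` call described above (the cluster, resource group, and storage account names are placeholders, and the SAS token is truncated):

```bash
# Collect node-level diagnostic data and upload it to a storage account.
az aks kollect \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --storage-account myDiagStorageAccount \
    --sas-token "?sv=2021-08-06&ss=b..."
```

Replace the placeholder values with your own cluster, resource group, and storage details.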
ms.custom: sap:Load balancer and Ingress controller
ms.topic: how-to
- ms.date: 10/17/2024
+ ms.date: 03/10/2025
---
# Create an unmanaged ingress controller
@@ -574,7 +574,7 @@ Alternatively, a more granular approach is to delete the individual resources cr
To configure TLS with your existing ingress components, see [Use TLS with an ingress controller](/previous-versions/azure/aks/ingress-tls).
- To configure your AKS cluster to use HTTP application routing, see [Enable the HTTP application routing add-on](/previous-versions/azure/aks/http-application-routing).
+ To configure your AKS cluster to use application routing, see [Application routing add-on](/azure/aks/app-routing).
This article included some external components to AKS. To learn more about these components, see the following project pages:
| United Kingdom | UK South, UK West | 20.58.68.62, 20.58.68.63, 20.90.32.180, 20.90.132.144, 20.90.132.145, 51.104.30.169, 172.187.0.26, 172.187.65.53 |
| United States | US Central, US East, US East 2, US East 2 EUAP, US North, US South, US West, US West 2, US West 3 | 4.149.249.197, 4.150.239.210, 20.14.127.175, 20.40.200.175, 20.45.242.18, 20.45.242.19, 20.45.242.20, 20.47.232.186, 20.51.21.252, 20.69.5.160, 20.69.5.161, 20.69.5.162, 20.83.222.100, 20.83.222.101, 20.83.222.102, 20.98.146.84, 20.98.146.85, 20.98.194.64, 20.98.194.65, 20.98.194.66, 20.168.188.34, 20.241.116.153, 52.159.214.194, 57.152.124.244, 68.220.123.194, 74.249.127.175, 74.249.142.218, 157.55.93.0, 168.61.232.59, 172.183.234.204, 172.191.219.35 |
- | USGov | All US Government Cloud regions | 20.140.104.48, 20.140.105.3, 20.140.144.58, 20.140.144.59, 20.140.147.168, 20.140.53.121, 20.141.10.130, 20.141.10.131, 20.141.13.121, 20.141.15.104, 52.127.55.131, 52.235.252.252, 52.235.252.253, 52.243.247.124, 52.245.155.139, 52.245.156.185, 62.10.196.24, 62.10.196.25, 62.10.84.240, 62.11.6.64, 62.11.6.65|
+ | USGov | All US Government Cloud regions | 20.140.104.48, 20.140.105.3, 20.140.144.58, 20.140.144.59, 20.140.147.168, 20.140.53.121, 20.141.10.130, 20.141.10.131, 20.141.13.121, 20.141.15.104, 52.127.55.131, 52.235.252.252, 52.235.252.253, 52.243.247.124, 52.245.155.139, 52.245.156.185, 62.10.84.240 |
> [!IMPORTANT]
> - The IPs that need to be permitted are specific to the region where the VM is located. For example, a virtual machine deployed in the North Europe region needs to add the following IP exclusions to the storage account firewall for the Europe geography: 52.146.139.220 and 20.105.209.72. View the table above to find the correct IPs for your region and geography.
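For example, an illustrative sketch of permitting those two North Europe IPs on a storage account firewall by using Azure CLI (the resource group and storage account names are placeholders):

```bash
# Permit the regional service IPs on the storage account firewall.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mydiagnosticsstorage \
    --ip-address 52.146.139.220
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mydiagnosticsstorage \
    --ip-address 20.105.209.72
```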
support/azure/virtual-machines/linux/troubleshoot-rhel-pacemaker-cluster-services-resources-startup-issues.md (94 additions, 10 deletions)
@@ -1,11 +1,11 @@
---
- title: Troubleshoot RHEL pacemaker cluster services and resources startup issues in Azure
+ title: Troubleshoot RHEL Pacemaker Cluster Services and Resources Startup Issues in Azure
description: Provides troubleshooting guidance for issues related to cluster resources or services in a Red Hat Enterprise Linux (RHEL) Pacemaker cluster
ms.reviewer: rnirek,srsakthi
- ms.author: skarthikeyan
+ ms.author: rnirek
author: skarthikeyan7-msft
ms.topic: troubleshooting
- ms.date: 01/22/2025
+ ms.date: 02/24/2025
ms.service: azure-virtual-machines
ms.collection: linux
ms.custom: sap:Issue with Pacemaker clustering, and fencing
@@ -71,7 +71,7 @@ quorum {
### Resolution for scenario 1
- 1. Before you make any changes, ensure you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+ 1. Before you make any changes, make sure that you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
2. Check for a missing quorum section in `/etc/corosync/corosync.conf`. Compare the existing `corosync.conf` with any backup that's available in `/etc/corosync/` (a typical quorum section is shown after this step).
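For reference, a typical quorum section for a two-node cluster looks like the following. Treat it as an illustrative sketch and match the values to your own backup of `corosync.conf`:

```config
quorum {
    provider: corosync_votequorum
    two_node: 1
}
```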
@@ -125,7 +125,7 @@ quorum {
}
```
- 5. Remove the cluster from maintenance-mode.
+ 5. Remove the cluster from maintenance mode.
```bash
sudo pcs property set maintenance-mode=false
@@ -149,7 +149,7 @@ quorum {
A virtual IP resource (`IPaddr2` resource) didn't start or stop in Pacemaker.
- The following error messages are logged in `/var/log/pacemaker.log`:
+ The following error entries are logged in `/var/log/pacemaker.log`:
```output
25167 IPaddr2(VIP)[16985]: 2024/09/07_15:44:19 ERROR: Unable to find nic or netmask.
If a route that matches the `VIP` isn't in the default routing table, you can specify the `NIC` name in the Pacemaker resource so that it bypasses the check (see the sketch after the following steps):
- 1. Before you make any changes, ensure you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+ 1. Before you make any changes, make sure that you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
2. Put the cluster into maintenance mode:
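After the cluster is in maintenance mode, the later steps update the virtual IP resource so that it names the NIC explicitly. A minimal sketch, assuming the resource is named `VIP` (as in the log output above) and the interface is `eth0`:

```bash
# Specify the NIC on the IPaddr2 resource so that the nic/netmask lookup is bypassed.
sudo pcs resource update VIP nic=eth0
```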
@@ -334,7 +334,7 @@ The SAP HANA resource can't be started by Pacemaker if there are `SYN` failures
> [!Important]
> Steps 2, 3, and 4 must be performed by using an SAP administrator account. This is because these steps use an SAP System ID to stop, start, and re-enable replication manually.
- 1. Before you make any changes, ensure you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+ 1. Before you make any changes, make sure that you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
2. Put the cluster into maintenance mode:
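The steps that follow stop, start, and re-enable HANA system replication manually under an SAP administrator account. As an optional, illustrative check before and after those steps, you can query the replication state as the `<sid>adm` user (`hn1adm` below is a placeholder):

```bash
# Show the current HANA system replication state (run as the <sid>adm user).
sudo su - hn1adm -c "hdbnsutil -sr_state"
```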
@@ -512,7 +512,7 @@ This issue frequently occurs if the database is modified (manually stopped or st
> [!Note]
> Steps 1 through 5 should be performed by an SAP administrator.
- 1. Before you make any changes, ensure you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+ 1. Before you make any changes, make sure that you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
2. Put the cluster into maintenance mode:
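After the database is back in its expected state, one common follow-up is to clear the failure history on the affected resource so that Pacemaker re-probes it. This is an illustrative sketch, and `SAPHana_HN1_HDB03` is a hypothetical resource name:

```bash
# Clear failed actions for the resource so that the cluster re-evaluates it.
sudo pcs resource cleanup SAPHana_HN1_HDB03
```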
@@ -620,7 +620,7 @@ Because of incorrect `InstanceName` and `START_PROFILE` attributes, the SAP inst
> [!Note]
> This resolution is applicable if `InstanceName` and `START_PROFILE` are separate files.
- 1. Before you make any changes, ensure you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+ 1. Before you make any changes, make sure that you have a backup or snapshot. For more information, see [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
2. Put the cluster into maintenance mode:
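With the cluster in maintenance mode, the incorrect `InstanceName` and `START_PROFILE` attributes can then be corrected on the SAPInstance resource. A minimal sketch, assuming a hypothetical resource named `rsc_sap_NW1_ASCS00` for SID `NW1` and instance `ASCS00`:

```bash
# Point the SAPInstance resource at the correct instance and start profile (placeholder values).
sudo pcs resource update rsc_sap_NW1_ASCS00 \
    InstanceName=NW1_ASCS00_sapascs \
    START_PROFILE=/sapmnt/NW1/profile/NW1_ASCS00_sapascs
```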
@@ -659,6 +659,90 @@ Because of incorrect `InstanceName` and `START_PROFILE` attributes, the SAP inst
sudo pcs property set maintenance-mode=false
```
## Scenario 5: Fenced node doesn't rejoin cluster

### Symptom for scenario 5

After the fencing operation is finished, the affected node typically doesn't rejoin the Pacemaker cluster, and both the Pacemaker and Corosync services remain stopped unless they're started manually to bring the node back online.
### Cause for scenario 5

After the node is fenced, restarted, and has started its cluster services again, it receives a message that states `We were allegedly just fenced`. This message causes it to shut down its Pacemaker and Corosync services and prevents it from rejoining the cluster. In the following example, node1 initiates a STONITH action against node2. At `03:27:23`, when the network issue is resolved, node2 rejoins the Corosync membership, and a new two-node membership is established, as shown in `/var/log/messages` on node1:
```output
Feb 20 03:26:56 node1 corosync[1722]: [TOTEM ] A processor failed, forming new configuration.
Feb 20 03:27:23 node1 corosync[1722]: [TOTEM ] A new membership (1.116f4) was formed. Members left: 2
Feb 20 03:27:24 node1 corosync[1722]: [QUORUM] Members[1]: 1
...
Feb 20 03:27:24 node1 pacemaker-schedulerd[1739]: warning: Cluster node node2 will be fenced: peer is no longer part of the cluster
...
Feb 20 03:27:24 node1 pacemaker-fenced[1736]: notice: Delaying 'reboot' action targeting node2 using for 20s
Feb 20 03:27:25 node1 corosync[1722]: [TOTEM ] A new membership (1.116f8) was formed. Members joined: 2
Feb 20 03:27:25 node1 corosync[1722]: [QUORUM] Members[2]: 1 2
Feb 20 03:27:25 node1 corosync[1722]: [MAIN ] Completed service synchronization, ready to provide service.
```
Node1 received confirmation that node2 was successfully restarted, as shown in `/var/log/messages` on node1:
```output
Feb 20 03:27:46 node1 pacemaker-fenced[1736]: notice: Operation 'reboot' [43895] (call 28 from pacemaker-controld.1740) targeting node2 using xvm2 returned 0 (OK)
```
To fully complete the STONITH action, the confirmation message has to be delivered to every node. Because node2 rejoined the group at `03:27:25`, before the token and consensus timeouts expired, no new membership that excluded node2 was formed, and the confirmation message was delayed until node2 restarted its cluster services after startup. When it received the message, node2 recognized that it had been fenced and, consequently, shut down its services, as shown in the following logs:
`/var/log/messages` on node1:

```output
Feb 20 03:29:02 node1 corosync[1722]: [TOTEM ] A processor failed, forming new configuration.
Feb 20 03:29:10 node1 corosync[1722]: [TOTEM ] A new membership (1.116fc) was formed. Members joined: 2 left: 2
Feb 20 03:29:10 node1 corosync[1722]: [QUORUM] Members[2]: 1 2
Feb 20 03:29:10 node1 pacemaker-fenced[1736]: notice: Operation 'reboot' targeting node2 by node1 for pacemaker-controld.1740@node1: OK
Feb 20 03:29:10 node1 pacemaker-controld[1740]: notice: Peer node2 was terminated (reboot) by node1 on behalf of pacemaker-controld.1740: OK
...
Feb 20 03:29:11 node1 corosync[1722]: [CFG ] Node 2 was shut down by sysadmin
Feb 20 03:29:11 node1 corosync[1722]: [TOTEM ] A new membership (1.11700) was formed. Members left: 2
Feb 20 03:29:11 node1 corosync[1722]: [QUORUM] Members[1]: 1
Feb 20 03:29:11 node1 corosync[1722]: [MAIN ] Completed service synchronization, ready to provide service.
```
`/var/log/messages` on node2:

```output
Feb 20 03:29:11 [1155] node2 corosync notice [TOTEM ] A new membership (1.116fc) was formed. Members joined: 1
Feb 20 03:29:09 node2 pacemaker-controld [1323] (tengine_stonith_notify) crit: We were allegedly just fenced by node1 for node1!
```
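The token and consensus timeouts mentioned above govern how quickly a membership that excludes the fenced node can form. As an illustrative check, you can read the values that a node is currently using from the Corosync runtime configuration:

```bash
# Print the token and consensus timeout values that Corosync is running with.
sudo corosync-cmapctl | grep -E 'totem\.(token|consensus)'
```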
### Resolution for scenario 5

Configure a startup delay for the Corosync service. This pause gives a new Closed Process Group (CPG) membership enough time to form and exclude the fenced node, so the STONITH restart process can finish by making sure that the completion message reaches all nodes in the membership.
To achieve this effect, run the following commands:

1. Put the cluster into maintenance mode:

    ```bash
    sudo pcs property set maintenance-mode=true
    ```

2. Create a systemd drop-in file on all the nodes in the cluster:

    - Edit the Corosync file:

        ```bash
        sudo systemctl edit corosync.service
        ```

    - Add the following lines:

        ```config
        [Service]
        ExecStartPre=/bin/sleep 60
        ```

    - After you save the file and exit the text editor, reload the systemd manager configuration:

        ```bash
        sudo systemctl daemon-reload
        ```

3. Remove the cluster from maintenance mode:

    ```bash
    sudo pcs property set maintenance-mode=false
    ```
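As an optional check on each node, you can confirm that the drop-in is in effect by printing the merged unit definition; the `ExecStartPre=/bin/sleep 60` line should appear under the drop-in file:

```bash
# Show corosync.service together with any drop-in files.
sudo systemctl cat corosync.service
```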
For more information, see [Fenced Node Fails to Rejoin Cluster Without Manual Intervention](https://access.redhat.com/solutions/5644441).
## Next steps
For additional help, open a support request by using the following instructions. When you submit your request, attach the [SOS report](https://access.redhat.com/solutions/3592) from all the nodes in the cluster for troubleshooting.
| United Kingdom | UK South, UK West | 20.58.68.62, 20.58.68.63, 20.90.32.180, 20.90.132.144, 20.90.132.145, 51.104.30.169, 172.187.0.26, 172.187.65.53 |
| United States | US Central, US East, US East 2, US East 2 EUAP, US North, US South, US West, US West 2, US West 3 | 4.149.249.197, 4.150.239.210, 20.14.127.175, 20.40.200.175, 20.45.242.18, 20.45.242.19, 20.45.242.20, 20.47.232.186, 20.51.21.252, 20.69.5.160, 20.69.5.161, 20.69.5.162, 20.83.222.100, 20.83.222.101, 20.83.222.102, 20.98.146.84, 20.98.146.85, 20.98.194.64, 20.98.194.65, 20.98.194.66, 20.168.188.34, 20.241.116.153, 52.159.214.194, 57.152.124.244, 68.220.123.194, 74.249.127.175, 74.249.142.218, 157.55.93.0, 168.61.232.59, 172.183.234.204, 172.191.219.35 |
- | USGov | All US Government Cloud regions | 20.140.104.48, 20.140.105.3, 20.140.144.58, 20.140.144.59, 20.140.147.168, 20.140.53.121, 20.141.10.130, 20.141.10.131, 20.141.13.121, 20.141.15.104, 52.127.55.131, 52.235.252.252, 52.235.252.253, 52.243.247.124, 52.245.155.139, 52.245.156.185, 62.10.196.24, 62.10.196.25, 62.10.84.240, 62.11.6.64, 62.11.6.65|
+ | USGov | All US Government Cloud regions | 20.140.104.48, 20.140.105.3, 20.140.144.58, 20.140.144.59, 20.140.147.168, 20.140.53.121, 20.141.10.130, 20.141.10.131, 20.141.13.121, 20.141.15.104, 52.127.55.131, 52.235.252.252, 52.235.252.253, 52.243.247.124, 52.245.155.139, 52.245.156.185, 62.10.84.240 |
> [!IMPORTANT]
> - The IPs that need to be permitted are specific to the region where the VM is located. For example, a virtual machine deployed in the North Europe region needs to add the following IP exclusions to the storage account firewall for the Europe geography: 52.146.139.220 and 20.105.209.72. View the table above to find the correct IPs for your region and geography.