Commit 431106d

Merge pull request #8902 from MicrosoftDocs/main

Auto push to live 2025-05-12 02:32:50

2 parents 1130216 + d6d0852 commit 431106d

10 files changed: +115 -84 lines changed

support/azure/azure-kubernetes/create-upgrade-delete/cannot-scale-cluster-autoscaler-enabled-node-pool.md

Lines changed: 47 additions & 17 deletions
@@ -3,14 +3,14 @@ title: Cluster autoscaler fails to scale with cannot scale cluster autoscaler en
 description: Learn how to troubleshoot the cannot scale cluster autoscaler enabled node pool error when your autoscaler isn't scaling up or down.
 author: sgeannina
 ms.author: ninasegares
-ms.date: 10/17/2023
-ms.reviewer: aritraghosh, chiragpa
+ms.date: 04/17/2025
+ms.reviewer: aritraghosh, chiragpa.momajed
 ms.service: azure-kubernetes-service
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
 ---
 # Cluster autoscaler fails to scale with "cannot scale cluster autoscaler enabled node pool" error

-This article discusses how to resolve the "cannot scale cluster autoscaler enabled node pool" error that appears when scaling a cluster with an autoscaler enabled node pool.
+This article discusses how to resolve the "cannot scale cluster autoscaler enabled node pool" error that occurs when you scale a cluster that has an autoscaler-enabled node pool.

 ## Symptoms

@@ -22,45 +22,75 @@ You receive an error message that resembles the following message:
 ## Troubleshooting checklist

-Azure Kubernetes Service (AKS) uses virtual machine scale sets-based agent pools, which contain cluster nodes and [cluster autoscaling capabilities](/azure/aks/cluster-autoscaler) if enabled.
+Azure Kubernetes Service (AKS) uses Azure Virtual Machine Scale Sets-based agent pools. These pools contain cluster nodes and [cluster autoscaling capabilities](/azure/aks/cluster-autoscaler), if they're enabled.

 ### Check that the cluster virtual machine scale set exists

-1. Sign in to [Azure portal](https://portal.azure.com).
-1. Find the node resource group by searching the following names:
-
-   - The default name `MC_{AksResourceGroupName}_{YourAksClusterName}_{AksResourceLocation}`.
-   - The custom name (if it was provided at creation).
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Find the node resource group by searching for the following names:

+   - The default name `MC_{AksResourceGroupName}_{YourAksClusterName}_{AksResourceLocation}`
+   - The custom name (if it was provided at creation)
+   >
    > [!NOTE]
-   > When you create a new cluster, AKS automatically creates a second resource group to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](/azure/aks/faq#why-are-two-resource-groups-created-with-aks)
+   > When you create a cluster, AKS automatically creates a second resource group to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](/azure/aks/faq#why-are-two-resource-groups-created-with-aks)

-1. Check the list of resources and make sure that there's a virtual machine scale set.
+1. Check the list of resources to make sure that a virtual machine scale set exists.
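The portal steps in this hunk can also be scripted. A minimal Azure CLI sketch, assuming `az` is installed and signed in, and that `<resource-group-name>` and `<aks-cluster-name>` are placeholders for your own values (they aren't part of the commit):

```bash
# Look up the node resource group that AKS created for the cluster
az aks show --resource-group <resource-group-name> --name <aks-cluster-name> --query nodeResourceGroup --output tsv

# List the virtual machine scale sets in that group; the agent pool's scale set should appear here
az vmss list --resource-group <node-resource-group-name> --output table
```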

 ## Cause 1: The cluster virtual machine scale set was deleted

-Deleting the virtual machine scale set attached to the cluster causes the cluster autoscaler to fail. It also causes issues when provisioning resources such as nodes and pods.
+If you delete the virtual machine scale set that's attached to the cluster, this action causes the cluster autoscaler to fail. It also causes issues when you provision resources such as nodes and pods.

 > [!NOTE]
-> Modifying any resource under the node resource group in the AKS cluster is an unsupported action and will cause cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](/azure/aks/cluster-configuration#fully-managed-resource-group-preview) managed by the AKS cluster.
+> Modifying any resource under the node resource group in the AKS cluster is an unsupported action and causes cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](/azure/aks/cluster-configuration#fully-managed-resource-group-preview) that are managed by the AKS cluster.
+
+### Reconcile node pool
+
+If the cluster virtual machine scale set is accidentally deleted, you can reconcile the node pool by using `az aks nodepool update`:
+
+```bash
+# Update Node Pool Configuration
+az aks nodepool update --resource-group <resource-group-name> --cluster-name <cluster-name> --name <nodepool-name> --tags <tags> --node-taints <taints> --labels <labels>
+
+# Verify the Update
+az aks nodepool show --resource-group <resource-group-name> --cluster-name <cluster-name> --name <nodepool-name>
+```
+Monitor the node pool to make sure that it's functioning as expected and that all nodes are operational.
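As a usage example for the reconcile command that this hunk adds, with hypothetical names filled in (`myResourceGroup`, `myAKSCluster`, and `nodepool1` are illustrative values, not taken from the commit):

```bash
# Hypothetical example: reapply a tag on the node pool to trigger reconciliation
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --tags dept=IT

# Confirm the node pool's provisioning state afterward; expect "Succeeded"
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --query provisioningState --output tsv
```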

 ## Cause 2: Tags or any other properties were modified from the node resource group

-You may receive scaling errors if you modify or delete Azure-created tags and other resource properties in the node resource group. For more information, see [Can I modify tags and other properties of the AKS resources in the node resource group?](/azure/aks/faq#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group)
+You may experience scaling errors if you modify or delete Azure-created tags and other resource properties in the node resource group. For more information, see [Can I modify tags and other properties of the AKS resources in the node resource group?](/azure/aks/faq#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group)
+
+### Reconcile node resource group tags
+
+Use the Azure CLI to make sure that the node resource group has the correct tags for AKS name and the AKS group name:
+
+```bash
+# Add or update tags for AKS name and AKS group name
+az group update --name <node-resource-group-name> --set tags.AKS-Managed-Cluster-Name=<aks-managed-cluster-name> tags.AKS-Managed-Cluster-RG=<aks-managed-cluster-rg>
+
+# Verify the tags
+az group show --name <node-resource-group-name> --query "tags"
+```
+Monitor the resource group to make sure that the tags are correctly applied and that the resource group is functioning as expected.
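The tag values in the preceding snippet can also be derived from the cluster itself instead of typed by hand. A hedged bash sketch that assumes the tag names shown in this commit and the same placeholder cluster values:

```bash
# Assumed placeholders: the cluster's resource group and name
CLUSTER_RG=<resource-group-name>
CLUSTER_NAME=<aks-cluster-name>

# Read the node resource group from the cluster, then reapply the expected tags
NODE_RG=$(az aks show --resource-group "$CLUSTER_RG" --name "$CLUSTER_NAME" --query nodeResourceGroup --output tsv)
az group update --name "$NODE_RG" --set tags.AKS-Managed-Cluster-Name="$CLUSTER_NAME" tags.AKS-Managed-Cluster-RG="$CLUSTER_RG"
```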

 ## Cause 3: The cluster node resource group was deleted

-Deleting the cluster node resource group causes issues when provisioning the infrastructure resources required by the cluster, which causes the cluster autoscaler to fail.
+Deleting the cluster node resource group causes issues when you provision the infrastructure resources that are required by the cluster. This action causes the cluster autoscaler to fail.

 ## Solution: Update the cluster to the goal state without changing the configuration

-To resolve this issue, you can run the following command to recover the deleted virtual machine scale set or any tags (missing or modified):
+To resolve this issue, run the following command to recover the deleted virtual machine scale set or any tags (missing or modified).

 > [!NOTE]
-> It might take a few minutes until the operation completes.
+> It might take a few minutes until the operation finishes.

 ```azurecli
 az aks update --resource-group <resource-group-name> --name <aks-cluster-name>
 ```
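After `az aks update` returns, one way to confirm that the cluster reconciled to its goal state (a sketch using the same placeholders as above):

```bash
# Expect "Succeeded" once the update operation completes
az aks show --resource-group <resource-group-name> --name <aks-cluster-name> --query provisioningState --output tsv
```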

+### Additional troubleshooting tips
+
+- Check the Azure Activity Log for any recent changes or deletions.
+
 [!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
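The Activity Log tip that this hunk adds can be scripted as well. A minimal sketch, assuming a seven-day lookback window and the node resource group placeholder used earlier:

```bash
# List recent control-plane operations (including deletions) against the node resource group
az monitor activity-log list --resource-group <node-resource-group-name> --offset 7d --output table
```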

support/azure/virtual-machines/linux/suse-public-cloud-connectivity-registration-issues.md

Lines changed: 3 additions & 3 deletions
@@ -1,10 +1,10 @@
 ---
 title: Troubleshoot connectivity and registration for SUSE SLES VMs
 description: Troubleshoot scenarios in which an Azure VM that has a SUSE Linux Enterprise Server image can't connect to the SUSE Subscription Management Tool (SMT) repository.
-ms.date: 02/26/2025
+ms.date: 05/12/2025
 author: rnirek
 ms.author: hokamath
-ms.reviewer: adelgadohell, mahuss, esanchezvela, scotro, v-weizhu, divargas
+ms.reviewer: adelgadohell, mahuss, esanchezvela, scotro, v-weizhu, divargas, vkchilak
 editor: v-jsitser
 ms.service: azure-virtual-machines
 ms.custom: sap:VM Admin - Linux (Guest OS), linux-related-content
@@ -219,7 +219,7 @@ If instances aren't regularly updated, they can become incompatible with our upd
 3. Download the following packages:

    ```bash
-   sudo zypper --pkg-cache-dir /root/packages/ download cloud-regionsrv-client cloud-regionsrv-client-plugin-azure regionServiceClientConfigAzure python3-azuremetadata SUSEConnect python3-cssselect python3-toml python3-lxml python3-M2Crypto python3-zypp-plugin libsuseconnect suseconnect-ruby-bindings docker libcontainers-common
+   sudo zypper --pkg-cache-dir /root/packages/ download cloud-regionsrv-client cloud-regionsrv-client-plugin-azure regionServiceClientConfigAzure python3-azuremetadata SUSEConnect python3-cssselect python3-toml python3-lxml python3-M2Crypto python3-zypp-plugin libsuseconnect suseconnect-ruby-bindings docker libcontainers-common containerd libcontainers-sles-mounts runc
    ```
 4. Run the following commands:
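As a hedged follow-up to step 3 (an editor's assumption, not part of this commit): the downloaded packages land under the cache directory passed to `zypper`, so a quick listing confirms the download before continuing to step 4:

```bash
# zypper stores downloaded packages under the given cache directory, grouped by repository
ls -R /root/packages/
```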

support/sql/analysis-services/cannot-connect-named-instance.md

Lines changed: 5 additions & 6 deletions
@@ -1,15 +1,15 @@
 ---
 title: Can't connect to a named instance
 description: This article provides resolutions where you might not be able to connect to a named instance of Analysis Services that is installed on a failover cluster.
-ms.date: 07/22/2020
+ms.date: 05/09/2025
 ms.custom: sap:Analysis Services
-ms.reviewer: karang
+ms.reviewer: karang, gasadas
 ---
 # Cannot connect to a named instance of a clustered analysis service

 This article helps you resolve the problem where you might not be able to connect to a named instance of Analysis Services that is installed on a failover cluster.

-_Original product version:_ &nbsp; SQL Server 2008 R2 Enterprise, SQL Server 2008 Enterprise, Microsoft SQL Server 2005 Enterprise Edition
+_Original product version:_ &nbsp; SQL Server
 _Original KB number:_ &nbsp; 2429685

 ## Symptoms
@@ -27,14 +27,13 @@ You might not able to connect to a named instance of Analysis Services that is i
 ## Cause

 The problem occurs when you start the named instance of SQL Server Analysis services (SSAS) using either SQL Server Configuration Manager or the Services applet in the Control panel.
-When you start an SSAS instance on a failover cluster using a tool other than Failover Cluster Management (Cluster administrator on older Operating Systems), that SSAS instance will run as a stand-alone instance and will listen on a non-default port resulting in connection failures from various applications.
+When you start an SSAS instance on a failover cluster using a tool other than Failover Cluster Management (Cluster administrator on older Operating Systems), that SSAS instance will run as a stand-alone instance and will listen on a nondefault port resulting in connection failures from various applications.

 ## Resolution

 Stop and restart the SQL Server Analysis services using the Failover Cluster Management tool.

 ## More information

-An SSAS instance started on a cluster (default or named instance) will start listening on all IP addresses of the cluster group using the default port of 2383. The server setting `<Port>` property does not change the port number of SSAS service on a cluster.
+An SSAS instance started on a cluster (default or named instance) will start listening on all IP addresses of the cluster group using the default port of 2383. The server setting `<Port>` property doesn't change the port number of SSAS service on a cluster.

-For more information, see the KB article: [How to determine and change the port of an SSAS Instance](https://support.microsoft.com/help/2466860)

support/sql/analysis-services/writeback-performance-issue-cell-security-enable.md

Lines changed: 6 additions & 6 deletions
@@ -1,27 +1,27 @@
 ---
 title: Writeback performance issue
 description: This article provides workarounds for the writeback performance problem that occurs when cell security is enabled in SQL Server Analysis Services.
-ms.date: 07/22/2020
+ms.date: 05/09/2025
 ms.custom: sap:Analysis Services
-ms.reviewer: haidongh, heidist
+ms.reviewer: haidongh, heidist, gasadas
 ---
 # Writeback performance problem when cell security is enabled in SQL Server Analysis Services

 This article helps you work around the writeback performance problem that occurs when cell security is enabled in SQL Server Analysis Services.

-_Original product version:_ &nbsp; SQL Server 2012 Analysis Services
+_Original product version:_ &nbsp; SQL Server 2012 Analysis Services and later versions
 _Original KB number:_ &nbsp; 2747616

 ## Symptoms

-Assume that you are running Microsoft SQL Server Analysis Services (SSAS) under a role for which cell security is enabled. When you try to execute an UPDATE CUBE Multidimensional Expressions (MDX) statement, the statement execution may take longer to execute than for a role for which cell security is not enabled.
+Assume that you're running Microsoft SQL Server Analysis Services (SSAS) under a role for which cell security is enabled. When you try to execute an UPDATE CUBE Multidimensional Expressions (MDX) statement, the statement execution may take longer to execute than for a role for which cell security isn't enabled.

 ## Cause

 This behavior is by design. When cell security is enabled, the Analysis Services engine executes the queries in cell-by-cell mode. If the writeback operation performs allocation at a high level, the space of leaf level cells will be large.

 > [!NOTE]
-> The space is not the number of rows in the fact table. The space is the full cross join space of all dimension granularity attributes. It takes a long time to enumerate those cells one-by-one in order to check the cell security.
+> The space isn't the number of rows in the fact table. The space is the full cross join space of all dimension granularity attributes. It takes a long time to enumerate those cells one-by-one in order to check the cell security.

 ## Workaround

@@ -36,7 +36,7 @@ To work around this issue, use one of the following methods.

 - Method 2

-  Perform the writeback operation at the lowest granularity level of a certain member. You cannot allocate for many detailed granularity members.
+  Perform the writeback operation at the lowest granularity level of a certain member. You can't allocate for many detailed granularity members.

   > [!NOTE]
   > You may have to create dummy members in dimension tables that are marked as adjustment members in each dimension, to support the writeback operation.

support/sql/analytics-platform-system/detect-data-skew-distribution-key-values.md

Lines changed: 5 additions & 4 deletions
@@ -1,15 +1,16 @@
 ---
 title: Detect skew on distribution key values
 description: This article describes how to detect skew on the distribution key of a distributed table in a Parallel Data Warehouse appliance.
-ms.date: 07/22/2020
+ms.date: 05/09/2025
 ms.custom: sap:Parallel Data Warehouse (APS)
 ms.topic: how-to
+ms.reviewer: jopilov, ccaldera
 ---
 # Detect data skew on the distribution key values

 This article shows how to detect skew on the distribution key of a distributed table in a Parallel Data Warehouse appliance.

-_Original product version:_ &nbsp; SQL Server 2008 R2 Parallel Data Warehouse
+_Original product version:_ &nbsp; SQL Server Parallel Data Warehouse
 _Original KB number:_ &nbsp; 3046863

 ## Summary
@@ -29,9 +30,9 @@ order by count(distribtuion_key) desc
 ```

 > [!NOTE]
-> The `having` clause is commented out. However, if you want to perform a quick check of whether there is significant skew, this clause may tell you. You may have to adjust the having value to something that makes sense for your result set. For example, if all values have 5,000 records, we recommend that you set this value to 7,500 or 10,000 to indicate an issue.
+> The `having` clause is commented out. However, if you want to perform a quick check of whether there's significant skew, this clause may tell you. You may have to adjust the having value to something that makes sense for your result set. For example, if all values have 5,000 records, we recommend that you set this value to 7,500 or 10,000 to indicate an issue.

-The question of when skew becomes a problem does not have a deterministic answer. Skew becomes a problem when performance of skewed distributions becomes noticeable and the application cannot tolerate the situation. The rule of thumb is that the appliance can tolerate a skew of 10 to 20 percent across all the tables. Within this threshold, the skewed distributions should even out under concurrency. Above this threshold, you may start to see some long-running distributions when the data is processed. Some implementations may be able to tolerate greater skew, and some implementations may be unable to tolerate this much. Testing is required to determine the actual threshold for your implementation.
+The question of when skew becomes a problem doesn't have a deterministic answer. Skew becomes a problem when performance of skewed distributions becomes noticeable and the application can't tolerate the situation. The rule of thumb is that the appliance can tolerate a skew of 10 to 20 percent across all the tables. Within this threshold, the skewed distributions should even out under concurrency. Above this threshold, you may start to see some long-running distributions when the data is processed. Some implementations may be able to tolerate greater skew, and some implementations may be unable to tolerate this much. Testing is required to determine the actual threshold for your implementation.

 ## More information

support/sql/analytics-platform-system/error-cetas-to-blob-storage.md

Lines changed: 3 additions & 3 deletions
@@ -1,15 +1,15 @@
 ---
 title: Error 105005 when you do CETAS to blob storage
 description: This article provides resolutions for the error that occurs when you do a CETAS operation to Azure Blob storage by using PolyBase.
-ms.date: 08/05/2020
+ms.date: 05/09/2025
 ms.custom: sap:Parallel Data Warehouse (APS)
-ms.reviewer: daleche, christys
+ms.reviewer: daleche, christys, ccaldera
 ---
 # Error 105005 when you do CETAS operation to Azure blob storage

 This article helps you resolve the problem that occurs when you do a `CREATE EXTERNAL TABLE AS SELECT (CETAS)` operation to Azure Blob storage by using PolyBase.

-_Original product version:_ &nbsp; SQL Server 2012 Parallel Data Warehouse (APS), SQL Server 2008 R2 Parallel Data Warehouse
+_Original product version:_ &nbsp; SQL Server Parallel Data Warehouse (APS)
 _Original KB number:_ &nbsp; 3210540

 ## Symptoms
