Commit e711e11

Merge pull request #280158 from MicrosoftDocs/main

7/5/2024 PM Publish

2 parents d9a06a6 + 355c0f1 commit e711e11

File tree: 112 files changed (+603, -356 lines)


articles/aks/advanced-network-observability-cli.md

Lines changed: 1 addition & 1 deletion
@@ -585,7 +585,7 @@ rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
1. Set up port forwarding for Hubble UI using the `kubectl port-forward` command.

    ```azurecli-interactive
-    kubectl port-forward svc/hubble-ui 12000:80
+    kubectl -n kube-system port-forward svc/hubble-ui 12000:80
    ```

1. Access Hubble UI by entering `http://localhost:12000/` into your web browser.
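The corrected command above assumes the Hubble UI service runs in `kube-system`. If the port-forward still fails in a given deployment, a minimal sketch for confirming the service's namespace before forwarding:

```azurecli-interactive
# Locate the hubble-ui service and confirm its namespace (expected: kube-system).
kubectl get svc --all-namespaces | grep hubble-ui

# Forward local port 12000 to the service's port 80 in kube-system.
kubectl -n kube-system port-forward svc/hubble-ui 12000:80
```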

articles/aks/cluster-autoscaler-overview.md

Lines changed: 7 additions & 7 deletions
@@ -11,7 +11,7 @@ ms.author: schaffererin

# Cluster autoscaling in Azure Kubernetes Service (AKS) overview

-To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
+To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects unscheduled pods, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes that don't have any scheduled pods and scales down the number of nodes as needed.

This article helps you understand how the cluster autoscaler works in AKS. It also provides guidance, best practices, and considerations when configuring the cluster autoscaler for your AKS workloads. If you want to enable, disable, or update the cluster autoscaler for your AKS workloads, see [Use the cluster autoscaler in AKS](./cluster-autoscaler.md).
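For context on the behavior described in the updated paragraph, the cluster autoscaler is enabled per node pool with a minimum and maximum node count. A minimal sketch using placeholder resource names (`myResourceGroup`, `myAKSCluster`, `nodepool1`):

```azurecli-interactive
# Enable the cluster autoscaler on an existing node pool and set its node count range.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```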

@@ -25,7 +25,7 @@ Clusters often need a way to scale automatically to adjust to changing applicati

:::image type="content" source="media/cluster-autoscaler/cluster-autoscaler.png" alt-text="Screenshot of how the cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands.":::

-It's a common practice to enable cluster autoscaler for nodes and either the Vertical Pod Autoscaler or Horizontal Pod Autoscaler for pods. When you enable the cluster autoscaler, it applies the specified scaling rules when the node pool size is lower than the minimum or greater than the maximum. The cluster autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
+It's a common practice to enable cluster autoscaler for nodes and either the Vertical Pod Autoscaler or Horizontal Pod Autoscaler for pods. When you enable the cluster autoscaler, it applies the specified scaling rules when the node pool size is lower than the minimum node count, up to the maximum node count. The cluster autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)

## Best practices and considerations
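To illustrate the pairing described above, the Horizontal Pod Autoscaler scales pod replicas while the cluster autoscaler adds or removes nodes to fit them. A minimal sketch, assuming a placeholder deployment named `my-app` with CPU requests set:

```azurecli-interactive
# Scale pods on CPU utilization; the cluster autoscaler then adds nodes when the
# resulting pods can't be scheduled and removes nodes when they become empty.
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```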

@@ -34,7 +34,7 @@ It's a common practice to enable cluster autoscaler for nodes and either the Ver
* To **effectively run workloads concurrently on both Spot and Fixed node pools**, consider using [*priority expanders*](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders). This approach allows you to schedule pods based on the priority of the node pool.
* Exercise caution when **assigning CPU/Memory requests on pods**. The cluster autoscaler scales up based on pending pods rather than CPU/Memory pressure on nodes.
* For **clusters concurrently hosting both long-running workloads, like web apps, and short/bursty job workloads**, we recommend separating them into distinct node pools with [Affinity Rules](./operator-best-practices-advanced-scheduler.md#node-affinity)/[expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) or using [PriorityClass](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) to help prevent unnecessary node drain or scale down operations.
-* In an autoscaler-enabled node pool, scale down nodes by removing workloads, instead of manually reducing the node count. This can be problematic if the node pool is already at maximum capacity or if there are active workloads running on the nodes, potentially causing unexpected behavior by the cluster autoscaler
+* In an autoscaler-enabled node pool, scale down nodes by removing workloads instead of manually reducing the node count. Manually reducing the node count can be problematic if the node pool is already at maximum capacity or if there are active workloads running on the nodes, and might cause unexpected cluster autoscaler behavior.
* Nodes don't scale up if pods have a PriorityClass value below -10. Priority -10 is reserved for [overprovisioning pods](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). For more information, see [Using the cluster autoscaler with Pod Priority and Preemption](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption).
* **Don't combine other node autoscaling mechanisms**, such as Virtual Machine Scale Set autoscalers, with the cluster autoscaler.
* The cluster autoscaler **might be unable to scale down if pods can't move, such as in the following situations**:
@@ -43,7 +43,8 @@ It's a common practice to enable cluster autoscaler for nodes and either the Ver
    * A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
    For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node).
    >[!IMPORTANT]
-    > **Do not make changes to individual nodes within the autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
+    > **Don't make changes to individual nodes within the autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
+* The cluster autoscaler isn't responsible for enforcing a "maximum node count" in a cluster node pool irrespective of pod scheduling considerations. If any non-cluster autoscaler actor sets the node pool count to a number beyond the cluster autoscaler's configured maximum, the cluster autoscaler doesn't automatically remove nodes. The cluster autoscaler scale down behaviors remain scoped to removing only nodes that have no scheduled pods. The sole purpose of the cluster autoscaler's max node count configuration is to enforce an upper limit for scale up operations. It doesn't have any effect on scale down considerations.

## Cluster autoscaler profile
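One way to see how the max node count bullet added in the hunk above plays out is to compare a node pool's current node count against its configured bounds. A sketch with placeholder resource names:

```azurecli-interactive
# Compare the current node count against the autoscaler's configured bounds.
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --query "{count:count, minCount:minCount, maxCount:maxCount, autoscaling:enableAutoScaling}" \
    --output table
```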

@@ -57,7 +58,7 @@ It's important to note that the cluster autoscaler profile settings are cluster-

#### Example 1: Optimizing for performance

-For clusters that handle substantial and bursty workloads with a primary focus on performance, we recommend increasing the `scan-interval` and decreasing the `scale-down-utilization-threshold`. These settings help batch multiple scaling operations into a single call, optimizing scaling time and the utilization of compute read/write quotas. It also helps mitigate the risk of swift scale down operations on underutilized nodes, enhancing the pod scheduling efficiency. Also increase `ok-total-unready-count`and `max-total-unready-percentage`.
+For clusters that handle substantial and bursty workloads with a primary focus on performance, we recommend increasing the `scan-interval` and decreasing the `scale-down-utilization-threshold`. These settings help batch multiple scaling operations into a single call, optimizing scaling time and the utilization of compute read/write quotas. It also helps mitigate the risk of swift scale down operations on underutilized nodes, enhancing the pod scheduling efficiency. Also increase `ok-total-unready-count` and `max-total-unready-percentage`.

For clusters with daemonset pods, we recommend setting `ignore-daemonset-utilization` to `true`, which effectively ignores node utilization by daemonset pods and minimizes unnecessary scale down operations. See [profile for bursty workloads](./cluster-autoscaler.md#configure-cluster-autoscaler-profile-for-bursty-workloads)
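For reference, the profile settings named in the hunk above are applied cluster-wide with `az aks update --cluster-autoscaler-profile`. The values below are illustrative placeholders, not recommendations from the article:

```azurecli-interactive
# Illustrative performance-oriented profile values; tune them for your workloads.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile \
        scan-interval=30s \
        scale-down-utilization-threshold=0.3 \
        ok-total-unready-count=10 \
        max-total-unready-percentage=60 \
        ignore-daemonset-utilization=true
```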

@@ -69,7 +70,7 @@ If you want a [cost-optimized profile](./cluster-autoscaler.md#configure-cluster
* Increase `scale-down-utilization-threshold`, which is the utilization threshold for removing nodes.
* Increase `max-empty-bulk-delete`, which is the maximum number of nodes that can be deleted in a single call.
* Set `skip-nodes-with-local-storage` to false.
-* Increase `ok-total-unready-count`and `max-total-unready-percentage`
+* Increase `ok-total-unready-count` and `max-total-unready-percentage`.

## Common issues and mitigation recommendations
View scaling failures and scale-up not triggered events via [CLI or Portal](./cluster-autoscaler.md#retrieve-cluster-autoscaler-logs-and-status).
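Similarly, the cost-optimized bullets in the hunk above map to a single profile update. Again, the values are illustrative placeholders only:

```azurecli-interactive
# Illustrative cost-oriented profile values; tune them for your workloads.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile \
        scale-down-utilization-threshold=0.7 \
        max-empty-bulk-delete=30 \
        skip-nodes-with-local-storage=false \
        ok-total-unready-count=10 \
        max-total-unready-percentage=60
```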
@@ -113,4 +114,3 @@ Depending on how long the scaling operations have been experiencing failures, it
<!-- LINKS --->
[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
[horizontal-pod-autoscaler]:concepts-scale.md#horizontal-pod-autoscaler
-
[Binary media files changed: three images (25.2 KB, 11.4 KB, 3.92 KB); previews not shown.]

articles/aks/private-clusters.md

Lines changed: 4 additions & 0 deletions
@@ -2,6 +2,8 @@
title: Create a private Azure Kubernetes Service (AKS) cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster
ms.topic: article
+ms.author: schaffererin
+author: schaffererin
ms.date: 06/29/2023
ms.custom: references_regions, devx-track-azurecli
---
@@ -211,6 +213,7 @@ The API server endpoint has no public IP address. To manage the API server, you'
* Use an [Express Route or VPN][express-route-or-VPN] connection.
* Use the [AKS `command invoke` feature][command-invoke].
* Use a [private endpoint][private-endpoint-service] connection.
+* Use a [Cloud Shell][cloud-shell-vnet] instance deployed into a subnet that's connected to the API server for the cluster.

> [!NOTE]
> Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
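Of the options listed above, the `command invoke` route needs no network line of sight to the private API server. A minimal sketch with placeholder resource names:

```azurecli-interactive
# Run a kubectl command against a private cluster through the AKS "command invoke" feature.
az aks command invoke \
    --resource-group myResourceGroup \
    --name myPrivateAKSCluster \
    --command "kubectl get nodes -o wide"
```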
@@ -401,3 +404,4 @@ For associated best practices, see [Best practices for network connectivity and
[az-network-vnet-peering-create]: /cli/azure/network/vnet/peering#az_network_vnet_peering_create
[az-network-vnet-peering-list]: /cli/azure/network/vnet/peering#az_network_vnet_peering_list
[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
+[cloud-shell-vnet]: ../cloud-shell/vnet/overview.md

articles/aks/stop-cluster-upgrade-api-breaking-changes.md

Lines changed: 16 additions & 18 deletions
@@ -4,17 +4,20 @@ description: Learn how to stop Azure Kubernetes Service (AKS) cluster upgrades a
ms.topic: article
ms.custom: azure-kubernetes-service
ms.subservice: aks-upgrade
-ms.date: 10/19/2023
+ms.date: 07/05/2024
author: schaffererin
ms.author: schaffererin
-
---

# Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes

+This article shows you how to stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes.
+
+## Overview
+
To stay within a supported Kubernetes version, you have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and Container Storage Interface (CSI). It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.

-AKS now automatically stops upgrade operations consisting of a minor version change with deprecated APIs and sends you an error message to alert you about the issue.
+You can configure your AKS cluster to automatically stop upgrade operations consisting of a minor version change with deprecated APIs and alert you to the issue. This feature helps you avoid unexpected disruptions and gives you time to address the deprecated APIs before proceeding with the upgrade.

## Before you begin

@@ -36,43 +39,38 @@ Bad Request({
})
```

-You have two options to mitigate the issue. You can either [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
+You have two options to mitigate the issue: you can [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).

### Remove usage of deprecated APIs (recommended)

-1. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**.
-
-2. Navigate to the **Create, Upgrade, Delete, and Scale** category, and select **Kubernetes API deprecations**.
+1. In the Azure portal, navigate to your cluster resource and select **Diagnose and solve problems**.
+2. Select **Create, Upgrade, Delete, and Scale** > **Kubernetes API deprecations**.

    :::image type="content" source="./media/upgrade-cluster/applens-api-detection-full-v2.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::

-3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to know if it's a [watch][k8s-api].
-
+3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to know if it's a [watch][k8s-api]. If it's a watch, you can wait for the usage to drop to zero. (You can also check past API usage by enabling [Container insights][container-insights] and exploring kube audit logs.)
4. Retry your cluster upgrade.

-You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs. Check the verb in the deprecated API usage to understand if it's a [watch][k8s-api] use case.
-
### Bypass validation to ignore API changes
5655

5756
> [!NOTE]
58-
> This method requires you to use the Azure CLI version 2.53 or later. If you have the `aks-preview` CLI extension installed, you'll need to update to version `0.5.154` or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
57+
> This method requires you to use the Azure CLI version 2.53 or later. If you have the `aks-preview` CLI extension installed, you need to update to version `0.5.154` or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version might not work long term. We recommend removing them as soon as possible after the upgrade completes.
5958
60-
* Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
59+
1. Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
6160

6261
```azurecli-interactive
63-
az aks update --name myAKSCluster --resource-group myResourceGroup --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
62+
az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
6463
```
6564
6665
> [!NOTE]
6766
> `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
6867
69-
* Once the previous command has succeeded, you can retry the upgrade operation.
68+
2. Retry your cluster upgrade using the [`az aks upgrade`][az-aks-upgrade] command.
7069
7170
```azurecli-interactive
72-
az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version <KUBERNETES_VERSION>
71+
az aks upgrade --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --kubernetes-version $KUBERNETES_VERSION
7372
```
7473
75-
7674
## Next steps
7775
7876
This article showed you how to stop AKS cluster upgrades automatically on API breaking changes. To learn more about more upgrade options for AKS clusters, see [Upgrade options for Azure Kubernetes Service (AKS) clusters](./upgrade-cluster.md).
@@ -82,5 +80,5 @@ This article showed you how to stop AKS cluster upgrades automatically on API br
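As a follow-up to step 2 in the hunk above, one way to confirm the retried upgrade landed is to read back the cluster version and provisioning state; the variable names follow the placeholders used in the updated commands:

```azurecli-interactive
# Confirm the cluster finished upgrading and returned to a steady state.
az aks show \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP_NAME \
    --query "{kubernetesVersion: kubernetesVersion, provisioningState: provisioningState}" \
    --output table
```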
8280
8381
<!-- LINKS - internal -->
8482
[az-aks-update]: /cli/azure/aks#az_aks_update
83+
[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
8584
[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
86-

articles/azure-arc/servers/prepare-extended-security-updates.md

Lines changed: 12 additions & 1 deletion
@@ -1,7 +1,7 @@
---
title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc
description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc.
-ms.date: 01/03/2024
+ms.date: 07/03/2024
ms.topic: conceptual
---

@@ -68,6 +68,17 @@ Connectivity options include public endpoint, proxy server, and private link or
> [!TIP]
> To take advantage of the full range of offerings for Arc-enabled servers, such as extensions and remote connectivity, ensure that you allow the additional URLs that apply to your scenario. For more information, see [Connected machine agent networking requirements](network-requirements.md).

+## Required Certificate Authorities
+
+The following [Certificate Authorities](/azure/security/fundamentals/azure-ca-details?tabs=root-and-subordinate-cas-list) are required for Extended Security Updates for Windows Server 2012:
+
+- [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt)
+
+If necessary, these Certificate Authorities can be [manually downloaded and installed](troubleshoot-extended-security-updates.md#option-2-manually-download-and-install-the-intermediate-ca-certificates).
+
## Next steps

* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
