Commit 1024969

Commit message: More articles
1 parent ef657dd commit 1024969

5 files changed (+51, -57 lines)


AKS-Arc/app-availability.md

Lines changed: 14 additions & 14 deletions
````diff
@@ -3,13 +3,13 @@ title: Application availability in AKS enabled by Azure Arc
 description: Learn about application availability in AKS enabled by Arc
 author: sethmanheim
 ms.topic: conceptual
-ms.date: 04/17/2024
+ms.date: 04/01/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
 ms.reviewer: rbaziwane
 
-# Intent: As an IT Pro, I need to understand how disruptions can impact the availability of applications on my AKS deployments on Azure Local and Windows Server.
-# Keyword: AKS on Azure Local and Windows Server architecture live migration disruption Kubernetes container orchestration
+# Intent: As an IT Pro, I need to understand how disruptions can impact the availability of applications on my AKS deployments on Windows Server.
+# Keyword: AKS on Windows Server architecture live migration disruption Kubernetes container orchestration
 ---
 
 # Application availability in AKS enabled by Azure Arc
@@ -18,25 +18,25 @@ ms.reviewer: rbaziwane
 
 Azure Kubernetes Service (AKS) enabled by Azure Arc offers a fully supported container platform that can run cloud-native applications on the [Kubernetes container orchestration platform](https://kubernetes.io/). The architecture supports running virtualized Windows and Linux workloads.
 
-The AKS architecture is built with failover clustering and live migration that is automatically enabled for target (workload) clusters. During various disruption events, virtual machines that host customer workloads are freely moved around without perceived application downtime. This architecture means that a traditional enterprise customer, who's managing a legacy application as a singleton to AKS on Azure Local or Windows Server, gets similar (or better) uptime than what's currently experienced on a legacy VM application.
+The AKS architecture is built with failover clustering and live migration that is automatically enabled for target (workload) clusters. During various disruption events, virtual machines that host customer workloads are freely moved around without perceived application downtime. This architecture means that a traditional enterprise customer, who's managing a legacy application as a singleton to AKS on Windows Server, gets similar (or better) uptime than what's currently experienced on a legacy VM application.
 
 This article describes some fundamental concepts for users who want to run containerized applications on AKS Arc with live migration enabled in order to ensure applications are available during a disruption. Kubernetes terminology, such as *voluntary disruption* and *involuntary disruption*, is used to refer to downtime of an application running in a pod.
 
 ## What is live migration?
 
 [*Live migration*](/windows-server/virtualization/hyper-v/manage/live-migration-overview) is a Hyper-V feature that allows you to transparently move running virtual machines from one Hyper-V host to another without perceived downtime. The primary benefit of live migration is flexibility; running virtual machines is not tied to a single host machine. This allows users to perform actions such as draining a specific host of virtual machines before decommissioning or upgrading the host. When paired with Windows Failover Clustering, live migration enables the creation of highly available and fault tolerant systems.
 
-The current architecture of AKS on Azure Local and Windows Server assumes that you enabled live migration in your Azure Local clustered environment. Therefore, all Kubernetes worker node VMs are created with live migration configured. These nodes can be moved around physical hosts in the event of a disruption to ensure the platform is highly available.
+The current architecture of AKS on Windows Server assumes that you enabled live migration in your Windows Server clustered environment. Therefore, all Kubernetes worker node VMs are created with live migration configured. These nodes can be moved around physical hosts in the event of a disruption to ensure the platform is highly available.
 
-:::image type="content" source="media/app-availability/cluster-architecture.png" alt-text="Diagram showing AKS on Azure Local and Windows Server with Failover Clustering enabled." lightbox="media/app-availability/cluster-architecture.png":::
+:::image type="content" source="media/app-availability/cluster-architecture.png" alt-text="Diagram showing AKS on Windows Server with Failover Clustering enabled." lightbox="media/app-availability/cluster-architecture.png":::
 
 When you run a legacy application as a singleton on top of Kubernetes, this architecture meets your high availability needs. Kubernetes manages the scheduling of pods on available worker nodes while live migration manages the scheduling of worker node VMs on available physical hosts.
 
 :::image type="content" source="media/app-availability/singleton.png" alt-text="Diagram showing an example legacy application running as a singleton." lightbox="media/app-availability/singleton.png":::
 
 ## Application disruption scenarios
 
-A comparative study of the recovery times for applications running in VMs on AKS on Azure Local and Windows Server clearly shows that there is minimal impact on the application when common disruption events occur. Three example disruption scenarios include:
+A comparative study of the recovery times for applications running in VMs on AKS on Windows Server clearly shows that there is minimal impact on the application when common disruption events occur. Three example disruption scenarios include:
 
 - Applying an update that results in a reboot of the physical machine.
 - Applying an update that involves recreating the worker node.
@@ -45,11 +45,11 @@ A comparative study of the recovery times for applications running in VMs on AKS
 > [!NOTE]
 > These scenarios assume that the application owner still uses Kubernetes affinity and anti-affinity settings to ensure proper scheduling of pods across worker nodes.
 
-| Disruption event | Running applications in VMs on Azure Local | Running applications in VMs on AKS on Azure Local or Windows Server |
-| ------------------------------------------------------------ | ---------------------------- | ----------------- |
-| Apply an update that results in a reboot of the physical machine | No impact | No impact |
-| Apply an update that involves recreating the worker node (or rebooting the VM) | No impact | Varies |
-| Unplanned hardware failure of a physical machine | 6-8 minutes | 6-8 minutes |
+| Disruption event | Running applications in VMs on Windows Server |
+| ------------------------------------------------------------ | ---------------------------- |
+| Apply an update that results in a reboot of the physical machine | No impact |
+| Apply an update that involves recreating the worker node (or rebooting the VM) | Varies |
+| Unplanned hardware failure of a physical machine | 6-8 minutes |
 
 ### Apply an update that results in a reboot of the physical machine
 
@@ -68,8 +68,8 @@ In this scenario, an involuntary disruption event occurs to a physical machine h
 
 ## Conclusion
 
-AKS failover clustering technologies are designed to ensure that computing environments in both Azure Local and Windows Server are highly available and fault tolerant. However, the application owner still has to configure deployments to use Kubernetes features, such as `Deployments`, `Affinity Mapping`, `RelicaSets`, to ensure that the pods are resilient in disruption scenarios.
+AKS failover clustering technologies are designed to ensure that computing environments in Windows Server are highly available and fault tolerant. However, the application owner still has to configure deployments to use Kubernetes features, such as `Deployments`, `Affinity Mapping`, `ReplicaSets`, to ensure that the pods are resilient in disruption scenarios.
 
 ## Next steps
 
-[AKS on Windows Server and Azure Local overview](overview.md)
+[AKS on Windows Server overview](overview.md)
````
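The conclusion of this article leaves pod resilience to the application owner: Deployments, ReplicaSets, and affinity mapping. A minimal sketch of what that configuration looks like, assuming a hypothetical app label `legacy-app` and a placeholder container image (no cluster access is assumed; the manifest is only generated and inspected):

```shell
# Sketch only: a Deployment (which manages a ReplicaSet) with pod
# anti-affinity so replicas are spread across worker nodes. The name
# "legacy-app" and the image are hypothetical placeholders.
cat > legacy-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: legacy-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: legacy-app
          image: example.azurecr.io/legacy-app:1.0   # placeholder image
EOF
echo "wrote legacy-app.yaml"
# Against a real workload cluster you would then run:
#   kubectl apply -f legacy-app.yaml
```

With `topologyKey: kubernetes.io/hostname`, the scheduler refuses to place two replicas on the same worker node, so a single node loss takes down at most one replica.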

AKS-Arc/azure-hybrid-benefit-22h2.md

Lines changed: 9 additions & 12 deletions
````diff
@@ -1,9 +1,9 @@
 ---
-title: Azure Hybrid Benefit for AKS enabled by Azure Arc (AKS on Azure Local 22H2)
-description: Activate Azure Hybrid Benefit for AKS enabled by Arc on Azure Local 22H2.
+title: Azure Hybrid Benefit for AKS enabled by Azure Arc
+description: Activate Azure Hybrid Benefit for AKS enabled by Arc on Windows Server.
 author: sethmanheim
 ms.author: sethm
-ms.date: 01/09/2025
+ms.date: 04/01/2025
 ms.topic: conceptual
 ms.reviewer: rbaziwane
 ms.lastreviewed: 01/30/2024
@@ -14,7 +14,7 @@ ms.custom:
 # Keyword: Azure Hybrid Benefit for AKS
 ---
 
-# Azure Hybrid Benefit for AKS enabled by Azure Arc (AKS on Azure Local 22H2)
+# Azure Hybrid Benefit for AKS enabled by Azure Arc
 
 [!INCLUDE [aks-hybrid-applies-to-azure-stack-hci-windows-server-sku](includes/aks-hci-applies-to-skus/aks-hybrid-applies-to-azure-stack-hci-windows-server-sku.md)]
 
@@ -25,22 +25,19 @@ Azure Hybrid Benefit is a program that enables you to significantly reduce the c
 Azure Hybrid Benefit for AKS enabled by Arc is a new benefit that can help you significantly reduce the cost of running Kubernetes on-premises or at the edge. It works by letting you apply your on-premises Windows Server Datacenter or Standard licenses with Software Assurance (SA) to pay for AKS. Each Windows Server core license entitles use on 1 virtual core of AKS. There are a few important details to note regarding activation of the benefit for AKS:
 
 - Azure Hybrid Benefit for AKS is enabled at the management cluster (or AKS host) level. You don't need to enable the benefit for workload clusters.
-- If you have multiple AKS on Azure Local or Windows Server deployments, you must enable Azure Hybrid Benefit individually for each deployment.
+- If you have multiple AKS on Windows Server deployments, you must enable Azure Hybrid Benefit individually for each deployment.
 - If you enable Azure Hybrid Benefit on an AKS Arc deployment during the trial period, it doesn't nullify your trial period. The benefit is activated immediately, and is applied at the end of the trial period.
 - Reinstalling AKS Arc doesn't automatically reinstate the benefit. You must reactivate this benefit for the new deployment.
 
 For more information about Software Assurance and with which agreements it's available, see [Benefits of Software Assurance](https://www.microsoft.com/licensing/licensing-programs/software-assurance-by-benefits).
 
-The rest of this article describes how to activate this benefit for AKS on Azure Local or Windows Server.
-
-> [!TIP]
-> You can maximize cost savings by also using Azure Hybrid Benefit for Azure Local. For more information, see [Azure Hybrid Benefit for Azure Local](/azure/azure-local/concepts/azure-hybrid-benefit).
+The rest of this article describes how to activate this benefit for AKS on Windows Server.
 
 ## Activate Azure Hybrid Benefit for AKS
 
 ### Prerequisites
 
-Make sure you have an AKS cluster deployed on either an Azure Local or a Windows Server host.
+Make sure you have an AKS cluster deployed on a Windows Server host.
 
 # [Azure PowerShell](#tab/powershell)
 
@@ -146,7 +143,7 @@ az connectedk8s update -n <name> -g <resource group name> --azure-hybrid-benefit
 #### Sample output
 
 ```shell
-I confirm I have an eligible Windows Server license with Azure Hybrid Benefit to apply this benefit to AKS on Azure Local or Windows Server. Visit https://aka.ms/ahb-aks for details (y/n)
+I confirm I have an eligible Windows Server license with Azure Hybrid Benefit to apply this benefit to AKS on Windows Server. Visit https://aka.ms/ahb-aks for details (y/n)
 ```
 
 > [!NOTE]
@@ -174,7 +171,7 @@ az connectedk8s show -n <management cluster name> -g <resource group>
 
 After activating Azure Hybrid Benefit for AKS, you must regularly check and maintain compliance for Azure Hybrid Benefit. You can perform an inventory of how many units you're running, and check this list against the Software Assurance licenses you have. To determine how many clusters with Azure Hybrid Benefit for AKS you're running, you can look at your Microsoft Azure bill.
 
-To qualify for the Azure Hybrid Benefit for AKS, you must be running AKS on first-party Microsoft infrastructure such as Azure Local or Windows Server 2019/2022 and have the appropriate license to cover the underlying infrastructure. You can only use Azure Hybrid Benefit for AKS during the Software Assurance term. When the Software Assurance term is nearing expiry, you must either renew your agreement with Software Assurance, or deactivate the Azure Hybrid Benefit functionality.
+To qualify for the Azure Hybrid Benefit for AKS, you must be running AKS on first-party Microsoft infrastructure such as Windows Server 2019/2022 and have the appropriate license to cover the underlying infrastructure. You can only use Azure Hybrid Benefit for AKS during the Software Assurance term. When the Software Assurance term is nearing expiry, you must either renew your agreement with Software Assurance, or deactivate the Azure Hybrid Benefit functionality.
 
 ### Verify that Azure Hybrid Benefit for AKS is applied to my Microsoft Azure Bill
 
````
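Since each Windows Server core license entitles use on 1 virtual core of AKS, the compliance inventory described in this article reduces to summing vCPUs across your AKS node VMs. A sketch of that arithmetic, using a hypothetical three-node inventory (the vCPU counts are made-up example values):

```shell
# Each Windows Server core license with Software Assurance covers 1 AKS
# virtual core, so licenses required = total vCPUs across AKS node VMs.
# The per-VM vCPU counts below are hypothetical.
total_vcpus=0
for vcpus in 4 4 8; do
  total_vcpus=$((total_vcpus + vcpus))
done
echo "Windows Server core licenses required: $total_vcpus"
# prints: Windows Server core licenses required: 16
```

In practice you would pull the real per-VM vCPU counts from your deployment inventory rather than hard-coding them.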

AKS-Arc/concepts-cluster-autoscaling.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -1,10 +1,10 @@
 ---
 title: Cluster autoscaling in AKS enabled by Azure Arc
-description: Learn about automatically scaling node pools in AKS Arc on Azure Local
+description: Learn about automatically scaling node pools in AKS enabled by Azure Arc
 ms.topic: conceptual
 author: sethmanheim
 ms.author: sethm
-ms.date: 01/29/2024
+ms.date: 04/04/2025
 
 # Intent: As a Kubernetes user, I want to use cluster autoscaler to grow my nodes to keep up with application demand.
 # Keyword: cluster autoscaling
````

AKS-Arc/concepts-container-networking.md

Lines changed: 8 additions & 10 deletions
````diff
@@ -1,12 +1,11 @@
 ---
 title: Container networking concepts
 description: Learn about container networking in AKS enabled by Azure Arc.
+author: sethmanheim
 ms.topic: conceptual
-ms.date: 10/21/2024
+ms.date: 04/01/2025
 ms.author: sethm
 ms.lastreviewed: 05/31/2022
-ms.reviewer: mikek
-author: sethmanheim
 
 # Intent: As an IT Pro, I want to learn about the advantages of using container networking in AKS Arc.
 # Keyword: Container applications networking
@@ -51,7 +50,7 @@ For other control and routing of the inbound traffic, you can use an ingress con
 
 **ExternalName**: creates a specific DNS entry for easier application access. The IP addresses for load balancers and services can be internal or external addresses depending on your overall network setup and can be dynamically assigned. Or, you can specify an existing static IP address to use. An existing static IP address is often tied to a DNS entry. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.
 
-## Kubernetes networking basics on Azure Local
+## Kubernetes networking basics
 
 To allow access to your applications, or for application components to communicate with each other, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes are connected to the virtual network and can provide inbound and outbound connectivity for pods. The *kube-proxy* component running on each node provides these network features.
 
@@ -60,22 +59,21 @@ In Kubernetes, *Services* logically group pods to allow:
 - Direct access via a single IP address or DNS name and a specific port.
 - Distribute traffic using a *load balancer* between multiple pods hosting the same service or application.
 
-The Azure Local platform also helps to simplify virtual networking for AKS on Azure Local clusters by providing the "underlay" network in a highly available manner.
 When you create an AKS cluster, we also create and configure an underlying `HAProxy` load balancer resource. As you deploy applications in a Kubernetes cluster, IP addresses are configured for your pods and Kubernetes services as endpoints in this load balancer.
 
 ## IP address resources
 
 To simplify the network configuration for application workloads, AKS Arc assigns IP addresses to the following objects in a deployment:
 
 - **Kubernetes cluster API server**: the API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Static IP addresses are always allocated to API servers irrespective of the underlying networking model.
-- **Kubernetes nodes (virtual machines)**: a Kubernetes cluster consists of a set of worker machines, called nodes, and the nodes host containerized applications. In addition to the control plane nodes, every cluster has at least one worker node. For an AKS cluster, Kubernetes nodes are configured as virtual machines. These virtual machines are created as highly available virtual machines in Azure Local, for more information, see [Node networking concepts](concepts-node-networking.md).
+- **Kubernetes nodes (virtual machines)**: a Kubernetes cluster consists of a set of worker machines, called nodes, and the nodes host containerized applications. In addition to the control plane nodes, every cluster has at least one worker node. For an AKS cluster, Kubernetes nodes are configured as virtual machines. These virtual machines are created as highly available virtual machines. For more information, see [Node networking concepts](concepts-node-networking.md).
 - **Kubernetes services**: in Kubernetes, *Services* logically group pod IP addresses to allow for direct access via a single IP address or DNS name on a specific port. Services can also distribute traffic using a *load balancer*. Static IP addresses are always allocated to Kubernetes services irrespective of the underlying networking model.
-- **HAProxy load balancers**: [HAProxy](https://www.haproxy.org/#desc) is a TCP/HTTP load balancer and proxy server that spreads incoming requests across multiple endpoints. Every workload cluster in an AKS on Azure Local deployment has a HAProxy load balancer deployed and configured as a specialized virtual machine.
-- **Microsoft On-premises Cloud Service**: This is the Azure Local cloud provider that enables the creation and management of the virtualized environment hosting Kubernetes on an on-premises Azure Local cluster or Windows Server cluster. The networking model followed by your Azure Local or Windows Server cluster determines the IP address allocation method used by the Microsoft On-Premises Cloud Service. To learn more about the networking concepts implemented by the Microsoft On-Premises Cloud Service, see [Node networking concepts](concepts-node-networking.md).
+- **HAProxy load balancers**: [HAProxy](https://www.haproxy.org/#desc) is a TCP/HTTP load balancer and proxy server that spreads incoming requests across multiple endpoints. Every workload cluster in an AKS on Windows Server deployment has a HAProxy load balancer deployed and configured as a specialized virtual machine.
+- **Microsoft On-premises Cloud Service**: This is the cloud provider that enables the creation and management of the virtualized environment hosting Kubernetes on an on-premises Windows Server cluster. The networking model followed by your Windows Server cluster determines the IP address allocation method used by the Microsoft On-Premises Cloud Service. To learn more about the networking concepts implemented by the Microsoft On-Premises Cloud Service, see [Node networking concepts](concepts-node-networking.md).
 
 ## Kubernetes networks
 
-In AKS on Azure Local, you can deploy a cluster that uses one of the following network models:
+In AKS on Windows Server, you can deploy a cluster that uses one of the following network models:
 
 - Flannel Overlay networking - The network resources are typically created and configured as the cluster is deployed.
 - Project Calico networking - This model offers additional networking features, such as network policies and flow control.
@@ -130,7 +128,7 @@ New-AksHciCluster -name MyCluster -primaryNetworkPlugin 'flannel'
 
 ## Next steps
 
-This article covers networking concepts for containers in AKS nodes on Azure Local. For more information about AKS on Azure Local concepts, see the following articles:
+This article covers networking concepts for containers in AKS nodes on Windows Server. For more information about AKS on Windows Server concepts, see the following articles:
 
 - [Network concepts for AKS nodes](./concepts-node-networking.md)
 - [Clusters and workloads](./kubernetes-concepts.md)
````
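The Service concepts this article describes (a stable service IP, fronted by the per-cluster HAProxy load balancer VM) can be sketched as a manifest. The service name `web` and the port numbers are hypothetical, and no cluster access is assumed; the manifest is only generated and inspected:

```shell
# Sketch only: a Service of type LoadBalancer. In an AKS on Windows Server
# deployment, the HAProxy load balancer VM is configured with the pod IPs
# behind this service as its endpoints. Name, selector, and ports are
# placeholder values.
cat > web-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # port exposed on the load balancer
      targetPort: 8080  # port the pods listen on
EOF
echo "wrote web-service.yaml"
# To create it in a workload cluster:
#   kubectl apply -f web-service.yaml
```

A `ClusterIP` or `NodePort` service would use the same shape with a different `type`; only `LoadBalancer` services get an endpoint on the HAProxy VM.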
