AKS-Arc/app-availability.md: 14 additions & 14 deletions
@@ -3,13 +3,13 @@ title: Application availability in AKS enabled by Azure Arc
 description: Learn about application availability in AKS enabled by Arc
 author: sethmanheim
 ms.topic: conceptual
-ms.date: 04/17/2024
+ms.date: 04/01/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
 ms.reviewer: rbaziwane

-# Intent: As an IT Pro, I need to understand how disruptions can impact the availability of applications on my AKS deployments on Azure Local and Windows Server.
-# Keyword: AKS on Azure Local and Windows Server architecture live migration disruption Kubernetes container orchestration
+# Intent: As an IT Pro, I need to understand how disruptions can impact the availability of applications on my AKS deployments on Windows Server.
+# Keyword: AKS on Windows Server architecture live migration disruption Kubernetes container orchestration
 ---

 # Application availability in AKS enabled by Azure Arc
@@ -18,25 +18,25 @@ ms.reviewer: rbaziwane
Azure Kubernetes Service (AKS) enabled by Azure Arc offers a fully supported container platform that can run cloud-native applications on the [Kubernetes container orchestration platform](https://kubernetes.io/). The architecture supports running virtualized Windows and Linux workloads.

-The AKS architecture is built with failover clustering and live migration that is automatically enabled for target (workload) clusters. During various disruption events, virtual machines that host customer workloads are freely moved around without perceived application downtime. This architecture means that a traditional enterprise customer, who's managing a legacy application as a singleton to AKS on Azure Local or Windows Server, gets similar (or better) uptime than what's currently experienced on a legacy VM application.
+The AKS architecture is built with failover clustering and live migration that is automatically enabled for target (workload) clusters. During various disruption events, virtual machines that host customer workloads are freely moved around without perceived application downtime. This architecture means that a traditional enterprise customer, who's managing a legacy application as a singleton to AKS on Windows Server, gets similar (or better) uptime than what's currently experienced on a legacy VM application.

This article describes some fundamental concepts for users who want to run containerized applications on AKS Arc with live migration enabled in order to ensure applications are available during a disruption. Kubernetes terminology, such as *voluntary disruption* and *involuntary disruption*, is used to refer to downtime of an application running in a pod.
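To make the *voluntary disruption* term concrete, here is a minimal hedged sketch of a PodDisruptionBudget for a hypothetical app labeled `sample-legacy-app`; the manifest is illustrative and not taken from this article:

```shell
# Minimal sketch (hypothetical names): keep at least one replica of the app available
# during voluntary disruptions such as a node drain.
kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sample-legacy-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: sample-legacy-app
EOF
```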
## What is live migration?

[*Live migration*](/windows-server/virtualization/hyper-v/manage/live-migration-overview) is a Hyper-V feature that allows you to transparently move running virtual machines from one Hyper-V host to another without perceived downtime. The primary benefit of live migration is flexibility; running virtual machines is not tied to a single host machine. This allows users to perform actions such as draining a specific host of virtual machines before decommissioning or upgrading the host. When paired with Windows Failover Clustering, live migration enables the creation of highly available and fault tolerant systems.

-The current architecture of AKS on Azure Local and Windows Server assumes that you enabled live migration in your Azure Local clustered environment. Therefore, all Kubernetes worker node VMs are created with live migration configured. These nodes can be moved around physical hosts in the event of a disruption to ensure the platform is highly available.
+The current architecture of AKS on Windows Server assumes that you enabled live migration in your Windows Server clustered environment. Therefore, all Kubernetes worker node VMs are created with live migration configured. These nodes can be moved around physical hosts in the event of a disruption to ensure the platform is highly available.

-:::image type="content" source="media/app-availability/cluster-architecture.png" alt-text="Diagram showing AKS on Azure Local and Windows Server with Failover Clustering enabled." lightbox="media/app-availability/cluster-architecture.png":::
+:::image type="content" source="media/app-availability/cluster-architecture.png" alt-text="Diagram showing AKS on Windows Server with Failover Clustering enabled." lightbox="media/app-availability/cluster-architecture.png":::

When you run a legacy application as a singleton on top of Kubernetes, this architecture meets your high availability needs. Kubernetes manages the scheduling of pods on available worker nodes while live migration manages the scheduling of worker node VMs on available physical hosts.

:::image type="content" source="media/app-availability/singleton.png" alt-text="Diagram showing an example legacy application running as a singleton." lightbox="media/app-availability/singleton.png":::

## Application disruption scenarios

-A comparative study of the recovery times for applications running in VMs on AKS on Azure Local and Windows Server clearly shows that there is minimal impact on the application when common disruption events occur. Three example disruption scenarios include:
+A comparative study of the recovery times for applications running in VMs on AKS on Windows Server clearly shows that there is minimal impact on the application when common disruption events occur. Three example disruption scenarios include:

- Applying an update that results in a reboot of the physical machine.
- Applying an update that involves recreating the worker node.
@@ -45,11 +45,11 @@ A comparative study of the recovery times for applications running in VMs on AKS
> [!NOTE]
> These scenarios assume that the application owner still uses Kubernetes affinity and anti-affinity settings to ensure proper scheduling of pods across worker nodes.

-| Disruption event | Running applications in VMs on Azure Local | Running applications in VMs on AKS on Azure Local or Windows Server |
| Apply an update that results in a reboot of the physical machine | No impact |
+| Apply an update that involves recreating the worker node (or rebooting the VM) | Varies |
+| Unplanned hardware failure of a physical machine | 6-8 minutes |

### Apply an update that results in a reboot of the physical machine
@@ -68,8 +68,8 @@ In this scenario, an involuntary disruption event occurs to a physical machine h
## Conclusion

-AKS failover clustering technologies are designed to ensure that computing environments in both Azure Local and Windows Server are highly available and fault tolerant. However, the application owner still has to configure deployments to use Kubernetes features, such as `Deployments`, `Affinity Mapping`, `RelicaSets`, to ensure that the pods are resilient in disruption scenarios.
+AKS failover clustering technologies are designed to ensure that computing environments in Windows Server are highly available and fault tolerant. However, the application owner still has to configure deployments to use Kubernetes features, such as `Deployments`, `Affinity Mapping`, `ReplicaSets`, to ensure that the pods are resilient in disruption scenarios.
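For illustration, a minimal sketch of such a configuration, assuming a hypothetical app labeled `sample-legacy-app` and a placeholder container image: a two-replica Deployment with pod anti-affinity so the scheduler prefers to place the replicas on different worker nodes.

```shell
# Minimal sketch (hypothetical names and image): a two-replica Deployment with pod
# anti-affinity so replicas prefer to land on different worker nodes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-legacy-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-legacy-app
  template:
    metadata:
      labels:
        app: sample-legacy-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: sample-legacy-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx:1.25   # placeholder image
EOF
# Confirm the replicas landed on different nodes:
kubectl get pods -l app=sample-legacy-app -o wide
```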
## Next steps
-[AKS on Windows Server and Azure Local overview](overview.md)
@@ -25,22 +25,19 @@ Azure Hybrid Benefit is a program that enables you to significantly reduce the c
Azure Hybrid Benefit for AKS enabled by Arc is a new benefit that can help you significantly reduce the cost of running Kubernetes on-premises or at the edge. It works by letting you apply your on-premises Windows Server Datacenter or Standard licenses with Software Assurance (SA) to pay for AKS. Each Windows Server core license entitles use on 1 virtual core of AKS. There are a few important details to note regarding activation of the benefit for AKS:

- Azure Hybrid Benefit for AKS is enabled at the management cluster (or AKS host) level. You don't need to enable the benefit for workload clusters.
-- If you have multiple AKS on Azure Local or Windows Server deployments, you must enable Azure Hybrid Benefit individually for each deployment.
+- If you have multiple AKS on Windows Server deployments, you must enable Azure Hybrid Benefit individually for each deployment.
- If you enable Azure Hybrid Benefit on an AKS Arc deployment during the trial period, it doesn't nullify your trial period. The benefit is activated immediately, and is applied at the end of the trial period.
- Reinstalling AKS Arc doesn't automatically reinstate the benefit. You must reactivate this benefit for the new deployment.

For more information about Software Assurance and with which agreements it's available, see [Benefits of Software Assurance](https://www.microsoft.com/licensing/licensing-programs/software-assurance-by-benefits).

-The rest of this article describes how to activate this benefit for AKS on Azure Local or Windows Server.
-
-> [!TIP]
-> You can maximize cost savings by also using Azure Hybrid Benefit for Azure Local. For more information, see [Azure Hybrid Benefit for Azure Local](/azure/azure-local/concepts/azure-hybrid-benefit).
+The rest of this article describes how to activate this benefit for AKS on Windows Server.

## Activate Azure Hybrid Benefit for AKS

### Prerequisites

-Make sure you have an AKS cluster deployed on either an Azure Local or a Windows Server host.
+Make sure you have an AKS cluster deployed on a Windows Server host.

# [Azure PowerShell](#tab/powershell)
@@ -146,7 +143,7 @@ az connectedk8s update -n <name> -g <resource group name> --azure-hybrid-benefit
#### Sample output

```shell
-I confirm I have an eligible Windows Server license with Azure Hybrid Benefit to apply this benefit to AKS on Azure Local or Windows Server. Visit https://aka.ms/ahb-aks for details (y/n)
+I confirm I have an eligible Windows Server license with Azure Hybrid Benefit to apply this benefit to AKS on Windows Server. Visit https://aka.ms/ahb-aks for details (y/n)
```
> [!NOTE]
@@ -174,7 +171,7 @@ az connectedk8s show -n <management cluster name> -g <resource group>
After activating Azure Hybrid Benefit for AKS, you must regularly check and maintain compliance for Azure Hybrid Benefit. You can perform an inventory of how many units you're running, and check this list against the Software Assurance licenses you have. To determine how many clusters with Azure Hybrid Benefit for AKS you're running, you can look at your Microsoft Azure bill.

-To qualify for the Azure Hybrid Benefit for AKS, you must be running AKS on first-party Microsoft infrastructure such as Azure Local or Windows Server 2019/2022 and have the appropriate license to cover the underlying infrastructure. You can only use Azure Hybrid Benefit for AKS during the Software Assurance term. When the Software Assurance term is nearing expiry, you must either renew your agreement with Software Assurance, or deactivate the Azure Hybrid Benefit functionality.
+To qualify for the Azure Hybrid Benefit for AKS, you must be running AKS on first-party Microsoft infrastructure such as Windows Server 2019/2022 and have the appropriate license to cover the underlying infrastructure. You can only use Azure Hybrid Benefit for AKS during the Software Assurance term. When the Software Assurance term is nearing expiry, you must either renew your agreement with Software Assurance, or deactivate the Azure Hybrid Benefit functionality.
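As a hedged starting point for that inventory, you can enumerate your connected clusters with the Azure CLI commands used elsewhere in this article; the exact property in the returned JSON that reflects the benefit isn't shown here, so review the full output.

```shell
# Sketch only: list connected clusters in the current subscription, then inspect one
# management cluster. Check the returned JSON for the Azure Hybrid Benefit setting.
az connectedk8s list --output table
az connectedk8s show -n <management cluster name> -g <resource group> --output json
```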
### Verify that Azure Hybrid Benefit for AKS is applied to my Microsoft Azure Bill
AKS-Arc/concepts-container-networking.md: 8 additions & 10 deletions
@@ -1,12 +1,11 @@
---
title: Container networking concepts
description: Learn about container networking in AKS enabled by Azure Arc.
+author: sethmanheim
ms.topic: conceptual
-ms.date: 10/21/2024
+ms.date: 04/01/2025
ms.author: sethm
ms.lastreviewed: 05/31/2022
-ms.reviewer: mikek
-author: sethmanheim

# Intent: As an IT Pro, I want to learn about the advantages of using container networking in AKS Arc.
# Keyword: Container applications networking
@@ -51,7 +50,7 @@ For other control and routing of the inbound traffic, you can use an ingress con
**ExternalName**: creates a specific DNS entry for easier application access. The IP addresses for load balancers and services can be internal or external addresses depending on your overall network setup and can be dynamically assigned. Or, you can specify an existing static IP address to use. An existing static IP address is often tied to a DNS entry. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.

-## Kubernetes networking basics on Azure Local
+## Kubernetes networking basics

To allow access to your applications, or for application components to communicate with each other, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes are connected to the virtual network and can provide inbound and outbound connectivity for pods. The *kube-proxy* component running on each node provides these network features.
@@ -60,22 +59,21 @@ In Kubernetes, *Services* logically group pods to allow:
- Direct access via a single IP address or DNS name and a specific port.
- Distribute traffic using a *load balancer* between multiple pods hosting the same service or application.

-The Azure Local platform also helps to simplify virtual networking for AKS on Azure Local clusters by providing the "underlay" network in a highly available manner.
When you create an AKS cluster, we also create and configure an underlying `HAProxy` load balancer resource. As you deploy applications in a Kubernetes cluster, IP addresses are configured for your pods and Kubernetes services as endpoints in this load balancer.
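To illustrate how a workload gets an endpoint on that load balancer, here is a minimal sketch of a `LoadBalancer` Service for a hypothetical app labeled `sample-legacy-app`; the names are placeholders and the manifest is standard Kubernetes rather than anything specific to this article.

```shell
# Minimal sketch (hypothetical names): expose pods through a LoadBalancer Service.
# Per the paragraph above, the service IP is configured as an endpoint on the HAProxy load balancer.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sample-legacy-app
spec:
  type: LoadBalancer
  selector:
    app: sample-legacy-app
  ports:
  - port: 80
    targetPort: 80
EOF
# The assigned address appears under EXTERNAL-IP once allocated:
kubectl get service sample-legacy-app
```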
## IP address resources
To simplify the network configuration for application workloads, AKS Arc assigns IP addresses to the following objects in a deployment:

-**Kubernetes cluster API server**: the API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Static IP addresses are always allocated to API servers irrespective of the underlying networking model.
--**Kubernetes nodes (virtual machines)**: a Kubernetes cluster consists of a set of worker machines, called nodes, and the nodes host containerized applications. In addition to the control plane nodes, every cluster has at least one worker node. For an AKS cluster, Kubernetes nodes are configured as virtual machines. These virtual machines are created as highly available virtual machines in Azure Local, for more information, see [Node networking concepts](concepts-node-networking.md).
+-**Kubernetes nodes (virtual machines)**: a Kubernetes cluster consists of a set of worker machines, called nodes, and the nodes host containerized applications. In addition to the control plane nodes, every cluster has at least one worker node. For an AKS cluster, Kubernetes nodes are configured as virtual machines. These virtual machines are created as highly available virtual machines. For more information, see [Node networking concepts](concepts-node-networking.md).
-**Kubernetes services**: in Kubernetes, *Services* logically group pod IP addresses to allow for direct access via a single IP address or DNS name on a specific port. Services can also distribute traffic using a *load balancer*. Static IP addresses are always allocated to Kubernetes services irrespective of the underlying networking model.
--**HAProxy load balancers**: [HAProxy](https://www.haproxy.org/#desc) is a TCP/HTTP load balancer and proxy server that spreads incoming requests across multiple endpoints. Every workload cluster in an AKS on Azure Local deployment has a HAProxy load balancer deployed and configured as a specialized virtual machine.
--**Microsoft On-premises Cloud Service**: This is the Azure Local cloud provider that enables the creation and management of the virtualized environment hosting Kubernetes on an on-premises Azure Local cluster or Windows Server cluster. The networking model followed by your Azure Local or Windows Server cluster determines the IP address allocation method used by the Microsoft On-Premises Cloud Service. To learn more about the networking concepts implemented by the Microsoft On-Premises Cloud Service, see [Node networking concepts](concepts-node-networking.md).
+-**HAProxy load balancers**: [HAProxy](https://www.haproxy.org/#desc) is a TCP/HTTP load balancer and proxy server that spreads incoming requests across multiple endpoints. Every workload cluster in an AKS on Windows Server deployment has a HAProxy load balancer deployed and configured as a specialized virtual machine.
+-**Microsoft On-premises Cloud Service**: This is the cloud provider that enables the creation and management of the virtualized environment hosting Kubernetes on an on-premises Windows Server cluster. The networking model followed by your Windows Server cluster determines the IP address allocation method used by the Microsoft On-Premises Cloud Service. To learn more about the networking concepts implemented by the Microsoft On-Premises Cloud Service, see [Node networking concepts](concepts-node-networking.md).

## Kubernetes networks
-In AKS on Azure Local, you can deploy a cluster that uses one of the following network models:
+In AKS on Windows Server, you can deploy a cluster that uses one of the following network models:

- Flannel Overlay networking - The network resources are typically created and configured as the cluster is deployed.
- Project Calico networking - This model offers additional networking features, such as network policies and flow control.
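Since the Calico option above mentions network policies, here is a minimal hedged sketch of a NetworkPolicy (hypothetical names; it only takes effect when the cluster uses a network provider that enforces policies, such as Calico).

```shell
# Minimal sketch (hypothetical names): allow ingress to app=sample-legacy-app pods only
# from pods labeled role=frontend, on TCP port 80.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: sample-legacy-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
EOF
```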
-This article covers networking concepts for containers in AKS nodes on Azure Local. For more information about AKS on Azure Local concepts, see the following articles:
+This article covers networking concepts for containers in AKS nodes on Windows Server. For more information about AKS on Windows Server concepts, see the following articles:

-[Network concepts for AKS nodes](./concepts-node-networking.md)
-[Clusters and workloads](./kubernetes-concepts.md)