@@ -23,19 +23,19 @@ Security in AKS enabled by Azure Arc involves securing the infrastructure and th
AKS enabled by Arc applies various security measures to secure its infrastructure. The following diagram highlights these measures:
-:::image type="content" source="media/concepts-security/security-infrastructure.png" alt-text="Illustration showing the infrastructure security of Azure Kubernetes Service on Azure Local." lightbox="media/concepts-security/security-infrastructure.png":::
+:::image type="content" source="media/concepts-security/security-infrastructure.png" alt-text="Illustration showing the infrastructure security of Azure Kubernetes Service." lightbox="media/concepts-security/security-infrastructure.png":::
-The following table describes the security-hardening aspects of AKS on Azure Local that are shown in the previous diagram. For conceptual background information on the infrastructure for an AKS deployment, see [Clusters and workloads](./kubernetes-concepts.md).
+The following table describes the security-hardening aspects of AKS on Windows Server that are shown in the previous diagram. For conceptual background information on the infrastructure for an AKS deployment, see [Clusters and workloads](./kubernetes-concepts.md).
| Security aspect | Description |
| ------ | --------|
| 1 | Because the AKS host has access to all of the workload (target) clusters, this cluster can be a single point of compromise. However, access to the AKS host is carefully controlled as the management cluster's purpose is limited to provisioning workload clusters and collecting aggregated cluster metrics. |
| 2 | To reduce deployment cost and complexity, workload clusters share the underlying Windows Server. However, depending on the security needs, admins can choose to deploy a workload cluster on a dedicated Windows Server. When workload clusters share the underlying Windows Server, each cluster is deployed as a virtual machine, which ensures strong isolation guarantees between the workload clusters. |
| 3 | Customer workloads are deployed as containers and share the same virtual machine. The containers are process-isolated from one another, which is a weaker form of isolation compared to strong isolation guarantees offered by virtual machines. |
| 4 | Containers communicate with each other over an overlay network. Admins can configure Calico policies to define networking isolation rules between containers. Calico policy support on AKS Arc is only for Linux containers, and is supported as-is. |
-| 5 | Communication between built-in Kubernetes components of AKS on Azure Local, including communication between the API server and the container host, is encrypted via certificates. AKS offers an out-of-the-box certificate provisioning, renewal, and revocation for built-in certificates. |
+| 5 | Communication between built-in Kubernetes components of AKS on Windows Server, including communication between the API server and the container host, is encrypted via certificates. AKS offers an out-of-the-box certificate provisioning, renewal, and revocation for built-in certificates. |
| 6 | Communication with the API server from Windows client machines is secured using Microsoft Entra credentials for users. |
-| 7 | For every release, Microsoft provides the VHDs for AKS VMs on Azure Local and applies the appropriate security patches when needed. |
+| 7 | For every release, Microsoft provides the VHDs for AKS VMs on Windows Server and applies the appropriate security patches when needed. |
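Row 4 in the table above mentions Calico policies for isolating Linux containers on the overlay network. Purely as a minimal sketch and not part of this change, the following Python snippet uses the official `kubernetes` client to create a default-deny ingress `NetworkPolicy`, the portable policy type that Calico enforces; the namespace name and kubeconfig are illustrative assumptions.

```python
# Minimal sketch: create a default-deny ingress NetworkPolicy in a namespace.
# Assumes the official "kubernetes" Python client and a kubeconfig pointing at
# the target (workload) cluster; the namespace name is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
net_v1 = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules => deny all ingress
    ),
)

net_v1.create_namespaced_network_policy(namespace="demo-workloads", body=policy)
```

Calico also supports its own richer policy CRDs; the standard `NetworkPolicy` shown here is the subset that behaves the same across network plugins.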
AKS-Arc/concepts-support.md (15 additions, 15 deletions)
@@ -3,12 +3,12 @@ title: Tested resource limits, VM sizes, and regions for AKS enabled by Azure Ar
description: Resource limits, VM sizes, regions for Azure Kubernetes Service (AKS) enabled by Azure Arc.
author: sethmanheim
ms.topic: conceptual
-ms.date: 02/20/2025
+ms.date: 03/31/2025
ms.author: sethm
ms.lastreviewed: 02/03/2022
ms.reviewer: mamezgeb
ms.custom: references_regions
-#intent: As an IT Pro, I need to understand and also leverage how resource limits, VM sizes, and regions work together for AKS on Azure Local or Windows Server.
+#intent: As an IT Pro, I need to understand and also leverage how resource limits, VM sizes, and regions work together for AKS on Windows Server.
-This article provides information about tested configurations, resource limits, VM sizes, and regions for Azure Kubernetes Service (AKS) enabled by Azure Arc. The tests used the latest release of AKS on Azure Local.
+This article provides information about tested configurations, resource limits, VM sizes, and regions for Azure Kubernetes Service (AKS) enabled by Azure Arc. The tests used the latest release of AKS enabled by Azure Arc.
## Maximum specifications
-AKS enabled by Arc deployments have been validated with the following configurations, including the specified maximums. Keep in mind that exceeding these maximums is at your own risk and might lead to unexpected behaviors and failures. This article provides some guidance on how to avoid common configuration mistakes and can help you create a larger configuration. If in doubt, contact your local Microsoft office for assistance or submit a question in the [Azure Local community](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
+AKS enabled by Arc deployments have been validated with the following configurations, including the specified maximums. Keep in mind that exceeding these maximums is at your own risk and might lead to unexpected behaviors and failures. This article provides some guidance on how to avoid common configuration mistakes and can help you create a larger configuration. If in doubt, contact your local Microsoft office for assistance or [submit a question to the community](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
| Resource | Maximum |
| ---------------------------- | --------|
@@ -38,23 +38,23 @@ The recommended limits were tested with the default virtual machine (VM) sizes,
|Target Cluster Linux worker node|**Standard_K8S3_v1**|
|Target Cluster Windows worker node|**Standard_K8S3_v1**|
-The hardware configuration of each physical node in the Azure Local cluster is as follows:
+The hardware configuration of each physical node in the AKS Arc cluster is as follows:
- Chassis: Dell PowerEdge R650 Server or similar.
- RAM: RDIMM, 3200 MT/s, Dual Rank, total of 256 GB.
- Disk: 8x HDDs (2 TB or larger) and 2x 1.6 TB NVMe to support S2D storage configurations.
- Network: Four (4) 100-Gbit NICs (Mellanox or Intel).
-Microsoft engineering tested AKS enabled by Arc using the above configuration. For single node. 2 node, 4 node and 8 node Windows failover clusters. If you have a requirement to exceed the tested configuration, see [Scaling AKS on Azure Local](#scaling-aks-on-azure-local).
+Microsoft engineering tested AKS enabled by Arc using the above configuration, for single-node, 2-node, 4-node, and 8-node Windows failover clusters. If you have a requirement to exceed the tested configuration, see [Scaling AKS](#scaling-aks).
> [!IMPORTANT]
> When you upgrade a deployment of AKS, extra resources are temporarily consumed.
> Each virtual machine is upgraded in a rolling update flow, starting with the control plane nodes. For each node in the Kubernetes cluster, a new node VM is created. The old node VM is restricted in order to prevent workloads from being deployed to it. The restricted VM is then drained of all containers to distribute the containers to other VMs in the system. The drained VM is then removed from the cluster, shut down, and replaced by the new, updated VM. This process repeats until all VMs are updated.
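The note above describes the rolling replacement flow that AKS runs automatically during an upgrade: restrict (cordon) the old node, drain its workloads, then replace the VM. Purely as an illustration of that cordon-and-drain pattern, and not AKS's actual implementation, a sketch with the `kubernetes` Python client could look like the following; the node name is hypothetical.

```python
# Illustrative sketch of the cordon/drain pattern described above; AKS performs
# this automatically during upgrades. Assumes the "kubernetes" Python client and
# a kubeconfig for the cluster; the node name is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

node_name = "moc-worker-0001"  # hypothetical worker node

# 1. Cordon: mark the node unschedulable so no new workloads land on it.
core_v1.patch_node(node_name, {"spec": {"unschedulable": True}})

# 2. Drain: evict the pods so they are rescheduled onto other nodes.
#    (Real drain logic also skips DaemonSet and mirror pods and honors
#    PodDisruptionBudgets; this sketch evicts everything on the node.)
pods = core_v1.list_pod_for_all_namespaces(
    field_selector=f"spec.nodeName={node_name}"
)
for pod in pods.items:
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace
        )
    )
    core_v1.create_namespaced_pod_eviction(
        name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction
    )

# 3. The platform then removes the old node VM and joins the new, updated VM.
```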
## Available VM sizes
-The following VM sizes for control plane nodes, Linux worker nodes, and Windows worker nodes are available for AKS on Azure Local. While VM sizes such as **Standard_K8S2_v1** and **Standard_K8S_v1** are supported for testing and low resource requirement deployments, use these sizes with care and apply stringent testing as they may result in unexpected failures due to out of memory conditions.
+The following VM sizes for control plane nodes, Linux worker nodes, and Windows worker nodes are available for AKS on Windows Server. While VM sizes such as **Standard_K8S2_v1** and **Standard_K8S_v1** are supported for testing and low resource requirement deployments, use these sizes with care and apply stringent testing as they may result in unexpected failures due to out of memory conditions.
| VM Size | CPU | Memory (GB) | GPU type | GPU count |
@@ -86,9 +86,9 @@ AKS enabled by Arc is supported in the following Azure regions:
- Southeast Asia
- West Europe
-## Scaling AKS on Azure Local
+## Scaling AKS
-Scaling an AKS deployment on Azure Local involves planning ahead by knowing your workloads and target cluster utilization. Additionally, consider hardware resources in your underlying infrastructure such as total CPU cores, total memory, storage, IP Addresses and so on.
+Scaling an AKS deployment involves planning ahead by knowing your workloads and target cluster utilization. Additionally, consider hardware resources in your underlying infrastructure such as total CPU cores, total memory, storage, IP addresses, and so on.
The following examples assume that only AKS-based workloads are deployed on the underlying infrastructure. Deploying non-AKS workloads such as stand-alone or clustered virtual machines, or database servers, reduces the resources available to AKS, which you must take into account.
@@ -151,16 +151,16 @@ Other considerations:
The following scaling example is based on these general assumptions/use cases:
-- You want to be able to completely tolerate the loss of one physical node in the Azure Local cluster.
+- You want to be able to completely tolerate the loss of one physical node in the Kubernetes cluster.
- You want to support upgrading target clusters to newer versions.
- You want to allow for high availability of the target cluster control plane nodes and load balancer nodes.
-- You want to reserve a part of the overall Azure Local capacity for these cases.
+- You want to reserve a part of the overall Windows Server capacity for these cases.
#### Suggestions
- For optimal performance, make sure to set at least 15 percent (100/8=12.5) of cluster capacity aside to allow all resources from one physical node to be redistributed to the other seven (7) nodes. This configuration ensures that you have some reserve available to do an upgrade or other AKS day two operations.
-- If you want to grow beyond the 200-VM limit for a maximum hardware sized eight (8) node Azure Local cluster, increase the size of the AKS host VM. Doubling in size results in roughly double the number of VMs it can manage. In an 8-node Azure Local cluster, you can get to 8,192 (8x1024) VMs based on the Azure Local recommended resource limits documented in the [Maximum supported hardware specifications](/azure-stack/hci/concepts/system-requirements#maximum-supported-hardware-specifications). You should reserve approximately 30% of capacity, which leaves you with a theoretical limit of 5,734 VMs across all nodes.
+- If you want to grow beyond the 200-VM limit for a maximum hardware sized eight (8) node cluster, increase the size of the AKS host VM. Doubling in size results in roughly double the number of VMs it can manage. In an 8-node Kubernetes cluster, you can get to 8,192 (8x1024) VMs based on the recommended resource limits documented in the [Maximum supported hardware specifications](/azure/azure-local/concepts/system-requirements#maximum-supported-hardware-specifications). You should reserve approximately 30% of capacity, which leaves you with a theoretical limit of 5,734 VMs across all nodes.
- **Standard_D32s_v3**, for the AKS host with 32 cores and 128 GB - can support a maximum of 1,600 nodes.
@@ -171,17 +171,17 @@ The following scaling example is based on these general assumptions/use cases:
- To run 200 worker nodes in one target cluster, you can use the default control plane and load balancer size. Depending on the number of pods per node, you can go up at least one size on the control plane and use **Standard_D8s_v3**.
- Depending on the number of Kubernetes services hosted in each target cluster, you might have to increase the size of the load balancer VM as well at target cluster creation to ensure that services can be reached with high performance and traffic is routed accordingly.
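The reserve percentages and VM counts in these suggestions can be sanity-checked with a few lines of arithmetic. The following snippet is only a worked restatement of the figures already quoted above (one node out of eight is 12.5 percent of capacity, and 8,192 VMs minus a roughly 30 percent reserve is about 5,734).

```python
# Worked restatement of the capacity figures quoted in the suggestions above.

physical_nodes = 8

# Losing one node out of eight removes 1/8 = 12.5% of capacity, hence the
# recommendation to keep at least ~15% of cluster capacity aside.
one_node_share = 100 / physical_nodes
print(f"Capacity of one physical node: {one_node_share:.1f}%")   # 12.5%

# Theoretical VM ceiling: 1,024 VMs per node across 8 nodes, minus ~30% reserve
# for upgrades, failover, and other day-two operations.
max_vms = physical_nodes * 1024          # 8,192
usable_vms = int(max_vms * 0.70)         # ~5,734
print(f"Theoretical usable VM limit: {usable_vms}")
```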
-The deployment of AKS enabled by Arc distributes the worker nodes for each node pool in a target cluster across the available Azure Local nodes using the Azure Local placement logic.
+The deployment of AKS enabled by Arc distributes the worker nodes for each node pool in a target cluster across the available nodes using placement logic.
> [!IMPORTANT]
-> The node placement is not preserved during platform and AKS upgrades and will change over time. A failed physical node will also impact the distribution of virtual machines across the remaining cluster nodes.
+> The node placement is not preserved during platform and AKS upgrades and will change over time. A failed physical node also impacts the distribution of virtual machines across the remaining cluster nodes.
> [!NOTE]
> Do not run more than four target cluster creations at the same time if the physical cluster is already 50% full, as that can lead to temporary resource contention.
> When scaling up target cluster node pools by large numbers, take into account available physical resources, as AKS does not verify resource availability for parallel running creation/scaling processes.
> Always ensure enough reserve to allow for upgrades and failover. Especially in very large environments, these operations, when run in parallel, can lead to rapid resource exhaustion.
-If in doubt, contact your local Microsoft office for assistance or post in the [Azure Local community forum](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
+If in doubt, contact your local Microsoft office for assistance or [post in the community forum](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
AKS-Arc/tutorial-kubernetes-app-update.md (1 addition, 1 deletion)
@@ -31,7 +31,7 @@ This tutorial, part six of seven, describes how to update the sample Azure Vote
In previous tutorials you learned how to:
* Package an application into a container image and upload the image to Azure Container Registry.
-* Create a Kubernetes cluster on Azure Local and deploy the application to the cluster.
+* Create a Kubernetes cluster and deploy the application to the cluster.
* Clone an application repository that includes the application source code and a pre-created Docker Compose file you can use in this tutorial.
Verify that you created a clone of the repo, and changed directories into the cloned directory. If you haven't completed these steps, start with [Tutorial 1 - Create container images](tutorial-kubernetes-prepare-application.md).