
Commit ef657dd

More updates
1 parent 892c331 commit ef657dd


3 files changed: +22 -22 lines changed


AKS-Arc/concepts-security.md

Lines changed: 6 additions & 6 deletions
@@ -3,10 +3,10 @@ title: Security concepts in AKS enabled by Azure Arc
 description: Learn about securing the infrastructure and applications on a Kubernetes cluster in AKS enabled by Arc.
 author: sethmanheim
 ms.topic: conceptual
-ms.date: 10/21/2024
+ms.date: 03/31/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
-ms.reviewer: lahirisl
+ms.reviewer: leslielin
 
 # Intent: As an IT Pro, I want to learn how to improve the security of the applications and infrastructure in AKS enabled by Azure Arc.
 # Keyword: security concepts infrastructure security
@@ -23,19 +23,19 @@ Security in AKS enabled by Azure Arc involves securing the infrastructure and th
 
 AKS enabled by Arc applies various security measures to secure its infrastructure. The following diagram highlights these measures:
 
-:::image type="content" source="media/concepts-security/security-infrastructure.png" alt-text="Illustration showing the infrastructure security of Azure Kubernetes Service on Azure Local." lightbox="media/concepts-security/security-infrastructure.png":::
+:::image type="content" source="media/concepts-security/security-infrastructure.png" alt-text="Illustration showing the infrastructure security of Azure Kubernetes Service." lightbox="media/concepts-security/security-infrastructure.png":::
 
-The following table describes the security-hardening aspects of AKS on Azure Local that are shown in the previous diagram. For conceptual background information on the infrastructure for an AKS deployment, see [Clusters and workloads](./kubernetes-concepts.md).
+The following table describes the security-hardening aspects of AKS on Windows Server that are shown in the previous diagram. For conceptual background information on the infrastructure for an AKS deployment, see [Clusters and workloads](./kubernetes-concepts.md).
 
 | Security aspect | Description |
 | ------ | --------|
 | 1 | Because the AKS host has access to all of the workload (target) clusters, this cluster can be a single point of compromise. However, access to the AKS host is carefully controlled as the management cluster's purpose is limited to provisioning workload clusters and collecting aggregated cluster metrics. |
 | 2 | To reduce deployment cost and complexity, workload clusters share the underlying Windows Server. However, depending on the security needs, admins can choose to deploy a workload cluster on a dedicated Windows Server. When workload clusters share the underlying Windows Server, each cluster is deployed as a virtual machine, which ensures strong isolation guarantees between the workload clusters. |
 | 3 | Customer workloads are deployed as containers and share the same virtual machine. The containers are process-isolated from one another, which is a weaker form of isolation compared to strong isolation guarantees offered by virtual machines. |
 | 4 | Containers communicate with each other over an overlay network. Admins can configure Calico policies to define networking isolation rules between containers. Calico policy support on AKS Arc is only for Linux containers, and is supported as-is. |
-| 5 | Communication between built-in Kubernetes components of AKS on Azure Local, including communication between the API server and the container host, is encrypted via certificates. AKS offers an out-of-the-box certificate provisioning, renewal, and revocation for built-in certificates. |
+| 5 | Communication between built-in Kubernetes components of AKS on Windows Server, including communication between the API server and the container host, is encrypted via certificates. AKS offers out-of-the-box certificate provisioning, renewal, and revocation for built-in certificates. |
 | 6 | Communication with the API server from Windows client machines is secured using Microsoft Entra credentials for users. |
-| 7 | For every release, Microsoft provides the VHDs for AKS VMs on Azure Local and applies the appropriate security patches when needed. |
+| 7 | For every release, Microsoft provides the VHDs for AKS VMs on Windows Server and applies the appropriate security patches when needed. |
 
 ## Application security
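Security aspect 4 in the table above mentions Calico policies as the mechanism for defining isolation rules between containers. As a minimal, hypothetical sketch (not part of this commit), a standard Kubernetes NetworkPolicy of the kind Calico enforces on Linux node pools might look like the following; the namespace and pod labels are illustrative:

```yaml
# Hypothetical sketch: allow ingress to pods labeled app=backend only
# from pods labeled app=frontend in the same namespace; all other
# ingress to the backend pods is denied. Names and labels are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Because aspect 3 notes that containers on a shared VM are only process-isolated, network rules like this are the primary isolation control between co-located workloads.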
AKS-Arc/concepts-support.md

Lines changed: 15 additions & 15 deletions
@@ -3,12 +3,12 @@ title: Tested resource limits, VM sizes, and regions for AKS enabled by Azure Ar
 description: Resource limits, VM sizes, regions for Azure Kubernetes Service (AKS) enabled by Azure Arc.
 author: sethmanheim
 ms.topic: conceptual
-ms.date: 02/20/2025
+ms.date: 03/31/2025
 ms.author: sethm
 ms.lastreviewed: 02/03/2022
 ms.reviewer: mamezgeb
 ms.custom: references_regions
-#intent: As an IT Pro, I need to understand and also leverage how resource limits, VM sizes, and regions work together for AKS on Azure Local or Windows Server.
+#intent: As an IT Pro, I need to understand and also leverage how resource limits, VM sizes, and regions work together for AKS on Windows Server.
 #keyword: Resource limits VM sizes
 
 ---
@@ -17,11 +17,11 @@ ms.custom: references_regions
 
 [!INCLUDE [applies-to-azure stack-hci-and-windows-server-skus](includes/aks-hci-applies-to-skus/aks-hybrid-applies-to-azure-stack-hci-windows-server-sku.md)]
 
-This article provides information about tested configurations, resource limits, VM sizes, and regions for Azure Kubernetes Service (AKS) enabled by Azure Arc. The tests used the latest release of AKS on Azure Local.
+This article provides information about tested configurations, resource limits, VM sizes, and regions for Azure Kubernetes Service (AKS) enabled by Azure Arc. The tests used the latest release of AKS enabled by Azure Arc.
 
 ## Maximum specifications
 
-AKS enabled by Arc deployments have been validated with the following configurations, including the specified maximums. Keep in mind that exceeding these maximums is at your own risk and might lead to unexpected behaviors and failures. This article provides some guidance on how to avoid common configuration mistakes and can help you create a larger configuration. If in doubt, contact your local Microsoft office for assistance or submit a question in the [Azure Local community](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
+AKS enabled by Arc deployments have been validated with the following configurations, including the specified maximums. Keep in mind that exceeding these maximums is at your own risk and might lead to unexpected behaviors and failures. This article provides some guidance on how to avoid common configuration mistakes and can help you create a larger configuration. If in doubt, contact your local Microsoft office for assistance or [submit a question to the community](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
 
 | Resource | Maximum |
 | ---------------------------- | --------|
@@ -38,23 +38,23 @@ The recommended limits were tested with the default virtual machine (VM) sizes,
 |Target Cluster Linux worker node| **Standard_K8S3_v1**|
 |Target Cluster Windows worker node| **Standard_K8S3_v1**|
 
-The hardware configuration of each physical node in the Azure Local cluster is as follows:
+The hardware configuration of each physical node in the AKS Arc cluster is as follows:
 
 - Chassis: Dell PowerEdge R650 Server or similar.
 - RAM: RDIMM, 3200 MT/s, Dual Rank, total of 256 GB.
 - CPU: Two (2) Intel Xeon Silver 4316 2.3G, 20C/40T, 10.4 GT/s, 30M Cache, Turbo, HT (150 W) DDR4-2666.
 - Disk: 8x HDDs (2 TB or larger) and 2x 1.6 TB NVMe to support S2D storage configurations.
 - Network: Four (4) 100-Gbit NICs (Mellanox or Intel).
 
-Microsoft engineering tested AKS enabled by Arc using the above configuration. For single node. 2 node, 4 node and 8 node Windows failover clusters. If you have a requirement to exceed the tested configuration, see [Scaling AKS on Azure Local](#scaling-aks-on-azure-local).
+Microsoft engineering tested AKS enabled by Arc using the above configuration, for single node, 2 node, 4 node, and 8 node Windows failover clusters. If you have a requirement to exceed the tested configuration, see [Scaling AKS](#scaling-aks).
 
 > [!IMPORTANT]
 > When you upgrade a deployment of AKS, extra resources are temporarily consumed.
 > Each virtual machine is upgraded in a rolling update flow, starting with the control plane nodes. For each node in the Kubernetes cluster, a new node VM is created. The old node VM is restricted in order to prevent workloads from being deployed to it. The restricted VM is then drained of all containers to distribute the containers to other VMs in the system. The drained VM is then removed from the cluster, shut down, and replaced by the new, updated VM. This process repeats until all VMs are updated.
 
 ## Available VM sizes
 
-The following VM sizes for control plane nodes, Linux worker nodes, and Windows worker nodes are available for AKS on Azure Local. While VM sizes such as **Standard_K8S2_v1** and **Standard_K8S_v1** are supported for testing and low resource requirement deployments, use these sizes with care and apply stringent testing as they may result in unexpected failures due to out of memory conditions.
+The following VM sizes for control plane nodes, Linux worker nodes, and Windows worker nodes are available for AKS on Windows Server. While VM sizes such as **Standard_K8S2_v1** and **Standard_K8S_v1** are supported for testing and low resource requirement deployments, use these sizes with care and apply stringent testing as they may result in unexpected failures due to out of memory conditions.
 
 | VM Size | CPU | Memory (GB) | GPU type | GPU count |
 | -------------- | ----| ------------| -------- | --------- |
@@ -86,9 +86,9 @@ AKS enabled by Arc is supported in the following Azure regions:
 - Southeast Asia
 - West Europe
 
-## Scaling AKS on Azure Local
+## Scaling AKS
 
-Scaling an AKS deployment on Azure Local involves planning ahead by knowing your workloads and target cluster utilization. Additionally, consider hardware resources in your underlying infrastructure such as total CPU cores, total memory, storage, IP Addresses and so on.
+Scaling an AKS deployment involves planning ahead by knowing your workloads and target cluster utilization. Additionally, consider hardware resources in your underlying infrastructure, such as total CPU cores, total memory, storage, IP addresses, and so on.
 
 The following examples assume that only AKS-based workloads are deployed on the underlying infrastructure. Deploying non-AKS workloads such as stand-alone or clustered virtual machines, or database servers, reduces the resources available to AKS, which you must take into account.
 
@@ -151,16 +151,16 @@ Other considerations:
 
 The following scaling example is based on these general assumptions/use cases:
 
-- You want to be able to completely tolerate the loss of one physical node in the Azure Local cluster.
+- You want to be able to completely tolerate the loss of one physical node in the Kubernetes cluster.
 - You want to support upgrading target clusters to newer versions.
 - You want to allow for high availability of the target cluster control plane nodes and load balancer nodes.
-- You want to reserve a part of the overall Azure Local capacity for these cases.
+- You want to reserve a part of the overall Windows Server capacity for these cases.
 
 #### Suggestions
 
 - For optimal performance, make sure to set at least 15 percent (100/8=12.5) of cluster capacity aside to allow all resources from one physical node to be redistributed to the other seven (7) nodes. This configuration ensures that you have some reserve available to do an upgrade or other AKS day two operations.
 
-- If you want to grow beyond the 200-VM limit for a maximum hardware sized eight (8) node Azure Local cluster, increase the size of the AKS host VM. Doubling in size results in roughly double the number of VMs it can manage. In an 8-node Azure Local cluster, you can get to 8,192 (8x1024) VMs based on the Azure Local recommended resource limits documented in the [Maximum supported hardware specifications](/azure-stack/hci/concepts/system-requirements#maximum-supported-hardware-specifications). You should reserve approximately 30% of capacity, which leaves you with a theoretical limit of 5,734 VMs across all nodes.
+- If you want to grow beyond the 200-VM limit for a maximum hardware sized eight (8) node cluster, increase the size of the AKS host VM. Doubling in size results in roughly double the number of VMs it can manage. In an 8-node Kubernetes cluster, you can get to 8,192 (8x1024) VMs based on the recommended resource limits documented in the [Maximum supported hardware specifications](/azure/azure-local/concepts/system-requirements#maximum-supported-hardware-specifications). You should reserve approximately 30% of capacity, which leaves you with a theoretical limit of 5,734 VMs across all nodes.
 
 - **Standard_D32s_v3**, for the AKS host with 32 cores and 128 GB - can support a maximum of 1,600 nodes.
 
@@ -171,17 +171,17 @@ The following scaling example is based on these general assumptions/use cases:
 - To run 200 worker nodes in one target cluster, you can use the default control plane and load balancer size. Depending on the number of pods per node, you can go up at least one size on the control plane and use **Standard_D8s_v3**.
 - Depending on the number of Kubernetes services hosted in each target cluster, you might have to increase the size of the load balancer VM as well at target cluster creation to ensure that services can be reached with high performance and traffic is routed accordingly.
 
-The deployment of AKS enabled by Arc distributes the worker nodes for each node pool in a target cluster across the available Azure Local nodes using the Azure Local placement logic.
+The deployment of AKS enabled by Arc distributes the worker nodes for each node pool in a target cluster across the available nodes using placement logic.
 
 > [!IMPORTANT]
-> The node placement is not preserved during platform and AKS upgrades and will change over time. A failed physical node will also impact the distribution of virtual machines across the remaining cluster nodes.
+> The node placement is not preserved during platform and AKS upgrades and will change over time. A failed physical node also impacts the distribution of virtual machines across the remaining cluster nodes.
 
 > [!NOTE]
 > Do not run more than four target cluster creations at the same time if the physical cluster is already 50% full, as that can lead to temporary resource contention.
 > When scaling up target cluster node pools by large numbers, take into account available physical resources, as AKS does not verify resource availability for parallel running creation/scaling processes.
 > Always ensure enough reserve to allow for upgrades and failover. Especially in very large environments, these operations, when run in parallel, can lead to rapid resource exhaustion.
 
-If in doubt, contact your local Microsoft office for assistance or post in the [Azure Local community forum](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
+If in doubt, contact your local Microsoft office for assistance or [post in the community forum](https://feedback.azure.com/d365community/search/?q=Azure+Kubernetes).
 
 ## Next steps
 
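To make the reserve arithmetic in the scaling suggestions above concrete: in an eight (8) node cluster, one node represents 100/8 = 12.5 percent of total capacity, so reserving at least 15 percent lets the remaining seven nodes absorb a full node failure with some headroom for upgrades. Likewise, 8 nodes x 1,024 VMs per node = 8,192 VMs, and holding back approximately 30 percent of that capacity leaves 8,192 x 0.7 ≈ 5,734 VMs, the theoretical limit cited above.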
AKS-Arc/tutorial-kubernetes-app-update.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ This tutorial, part six of seven, describes how to update the sample Azure Vote
 In previous tutorials you learned how to:
 
 * Package an application into a container image and upload the image to Azure Container Registry.
-* Create a Kubernetes cluster on Azure Local and deploy the application to the cluster.
+* Create a Kubernetes cluster and deploy the application to the cluster.
 * Clone an application repository that includes the application source code and a pre-created Docker Compose file you can use in this tutorial.
 
 Verify that you created a clone of the repo, and changed directories into the cloned directory. If you haven't completed these steps, start with [Tutorial 1 - Create container images](tutorial-kubernetes-prepare-application.md).
