AKS-Arc/aks-edge-concept-clusters-nodes.md (4 additions, 4 deletions)
@@ -4,7 +4,7 @@ description: Learn about clusters and nodes running on AKS Edge Essentials.
 author: sethmanheim
 ms.author: sethm
 ms.topic: concept-article
-ms.date: 07/11/2024
+ms.date: 07/03/2025
 ms.custom: template-concept
 ---
@@ -18,7 +18,7 @@ When you create an AKS Edge Essentials deployment, AKS Edge Essentials creates a
-Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on four-point comprehensive premises:
+Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on a four-point comprehensive premise:
 
 - Servicing updates
 - Read-only root filesystem
@@ -27,7 +27,7 @@ Deployments can only create one Linux VM on a given host machine. This Linux VM
 Running a Windows node is optional and you can create a Windows node if you need to deploy Windows containers. This node runs as a Windows virtual machine based on [Windows 10 IoT Enterprise LTSC 2019](/lifecycle/products/windows-10-iot-enterprise-ltsc-2019). The Windows VM brings all the security features and capabilities of Windows 10.
 
-You can define the amount of CPU and memory resources that you'd like to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.
+You can define the amount of CPU and memory resources that you want to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.
 
 Finally, AKS Edge Essentials doesn't offer dynamic creation of virtual machines. If a node VM goes down, you have to recreate it. That said, if you have a full deployment with multiple control plane nodes and worker nodes, if a VM goes down, Kubernetes moves workloads to an active node.
@@ -47,7 +47,7 @@ After you set up your machines, you can deploy AKS Edge Essentials in the follow
-Once you've created your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.
+Once you create your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.
AKS-Arc/certificates-overview.md (4 additions, 4 deletions)
@@ -3,10 +3,10 @@ title: Overview of certificate management in AKS on Windows Server
 description: Learn how to manage certificates for secure communication between in-cluster components in AKS by provisioning and managing certificates in AKS on Windows Server.
 author: sethmanheim
 ms.topic: concept-article
-ms.date: 01/10/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 04/01/2023
-ms.reviewer: sulahiri
+ms.reviewer: leslielin
 
 # Intent: As an IT Pro, I want to learn how to use certificates to secure communication between in-cluster components on my AKS deployment.
 # Keyword: control plane nodes secure communication certificate revocation
@@ -105,9 +105,9 @@ A `notBefore` time can be specified to revoke only certificates that are issued
 > [!NOTE]
 > Revocation of `kubelet` server certificates is currently not available.
 
-If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described below, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.
+If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described as follows, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.
AKS-Arc/concepts-security-access-identity.md (6 additions, 7 deletions)
@@ -3,7 +3,7 @@ title: Access and identity options for Azure Kubernetes Service (AKS) Arc
 description: Learn about options in access and identity management on a Kubernetes cluster in AKS on Azure Local.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 07/30/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 07/30/2024
 ms.reviewer: leslielin
@@ -41,7 +41,7 @@ For more information, see [Using Kubernetes RBAC authorization](https://kubernet
 #### Roles
 
-Before assigning permissions to users with Kubernetes RBAC, you define user permissions as a *role*. Grant permissions within a Kubernetes namespace using roles.
+Before assigning permissions to users with Kubernetes RBAC, you define user permissions as a role. Grant permissions within a Kubernetes namespace using roles.
 
 Kubernetes roles grant permissions; they don't deny permissions. To grant permissions across the entire cluster or to cluster resources outside a given namespace, you can use *ClusterRoles*.
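For reference (not part of this diff), a minimal sketch of such a Role might look like the following; the role name and namespace are illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader      # illustrative name, not from this PR
  namespace: dev        # a Role is always scoped to a single namespace
rules:
- apiGroups: [""]       # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]   # grants read access; roles only grant, never deny
```

A ClusterRole uses the same `rules` structure but has no `namespace` in its metadata, so the grant applies cluster-wide.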
@@ -51,7 +51,7 @@ A ClusterRole grants and applies permissions to resources across the entire clus
 ### RoleBindings and ClusterRoleBindings
 
-Once you define roles to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Microsoft Entra ID](#microsoft-entra-integration), RoleBindings grant permissions to Microsoft Entra users to perform actions within the cluster. See [Control access using Microsoft Entra ID and Kubernetes RBAC](kubernetes-rbac-local.md)
+Once you define roles to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Microsoft Entra ID](#microsoft-entra-integration), RoleBindings grant permissions to Microsoft Entra users to perform actions within the cluster. See [Control access using Microsoft Entra ID and Kubernetes RBAC](kubernetes-rbac-local.md).
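For reference (not part of this diff), a minimal RoleBinding sketch that assigns the Role above to a group; the binding name and group object ID are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding   # illustrative name
  namespace: dev             # permissions apply only within this namespace
subjects:
- kind: Group
  name: "aaaabbbb-0000-1111-2222-ccccddddeeee"  # hypothetical Microsoft Entra group object ID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                 # reference a ClusterRole here when creating a ClusterRoleBinding
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```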
 
 #### RoleBindings
@@ -82,7 +82,7 @@ Azure Role-based Access Control (RBAC) is an authorization system built on [Azur
 With Azure RBAC, you create a *role definition* that outlines the permissions to be applied. You then assign a user or group this role definition via a *role assignment* for a particular *scope*. The scope can be an individual resource, a resource group, or across the subscription.
 
-For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview).
 
 There are two required levels of access to fully operate an AKS Arc cluster:
@@ -113,15 +113,15 @@ In this scenario, you use Azure RBAC mechanisms and APIs to assign users built-i
 With this feature, you not only give users permissions to the AKS resource across subscriptions, but you also configure the role and permissions for inside each of those clusters controlling Kubernetes API access. There are four built-in roles available for this data plane action, each with its own scope of permissions, [as described in the built-in roles](#built-in-roles) section.
 
 > [!IMPORTANT]
-> You must enable Azure RBAC for Kubernetes authorization before doing role assignment. For more details and step by step guidance, see [Use Azure RBAC for Kubernetes authorization](azure-rbac-local.md).
+> You must enable Azure RBAC for Kubernetes authorization before doing role assignment. For more details and step-by-step guidance, see [Use Azure RBAC for Kubernetes authorization](azure-rbac-local.md).
 
-Enhance your AKS cluster security with Microsoft Entra integration. Built on enterprise identity management experience, Microsoft Entra ID is a multitenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.
+Microsoft Entra integration can help to enhance your AKS cluster security. Built on enterprise identity management experience, Microsoft Entra ID is a multitenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.
 
 | Admin login using client certificate |[Azure Kubernetes Service Arc Cluster Admin Role](/azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-service-arc-cluster-admin-role). This role allows `az aksarc get-credentials` to be used with the `--admin` flag, which downloads a non-Microsoft Entra cluster admin certificate into the user's **.kube/config**. This is the only purpose of the Azure Kubernetes Admin role. | n/a | If you're permanently blocked by not having access to a valid Microsoft Entra group with access to your cluster. |
AKS-Arc/container-storage-interface-disks.md (17 additions, 16 deletions)
@@ -3,7 +3,7 @@ title: Use Container Storage Interface (CSI) disk drivers in AKS enabled by Azur
 description: Learn how to use Container Storage Interface (CSI) drivers to manage disks in AKS enabled by Arc.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 03/14/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 01/14/2022
 ms.reviewer: abha
@@ -23,19 +23,19 @@ This article describes how to use Container Storage Interface (CSI) built-in sto
 ## Dynamically create disk persistent volumes using built-in storage class
 
-A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information on how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information about how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
 
-In AKS Arc, the **default** storage class is created by default and uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.
+In AKS Arc, the default storage class uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.
 
-To leverage this storage class, create a [PVC](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use this storage class, create a [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
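For reference (not part of this diff), a minimal PVC-plus-pod sketch against the default storage class; the names, image, and 10Gi size are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vhdx-pvc            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default # the built-in AKS Arc storage class
  resources:
    requests:
      storage: 10Gi         # to expand later, edit this value; the class allows expansion
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox          # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data  # the VHDX-backed volume is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: vhdx-pvc   # requests storage through the PVC above
```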
 ## Create custom storage class for disks
 
 The default storage class is suitable for most common scenarios. However, in some cases, you may want to create your own storage class that stores PVs at a particular location mapped to a specific performance tier.
 
 If you have Linux workloads (pods), you must create a custom storage class with the parameter `fsType: ext4`. This requirement applies to Kubernetes versions 1.19 and 1.20 or later. The following example shows a custom storage class definition with `fsType` parameter defined:
 
-```YAML
+```yaml
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
@@ -56,7 +56,7 @@ volumeBindingMode: Immediate
 allowVolumeExpansion: true
 ```
 
-If you create a custom storage class, you can specify the location where you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe or a cost-optimized volume backed by HDDs.
+If you create a custom storage class, you can specify the location in which you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe, or a cost-optimized volume backed by HDDs.
 
 Creating a custom storage class is a two-step process:
@@ -75,7 +75,8 @@ Creating a custom storage class is a two-step process:
 ```azurecli
 $storagepathID = az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group --query "id" -o tsv
 ```
-2. Create a new custom storage class using the new storage path.
+
+1. Create a new custom storage class using the new storage path.
 
 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage path ID` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:
@@ -100,9 +101,9 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```
 
-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
 
-```console
+```azurecli
 $ kubectl apply -f sc-aks-hci-disk-custom.yaml
 storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```
@@ -121,7 +122,7 @@ Creating a custom storage class is a two-step process:
-2. Create a new custom storage class using the new storage path.
+1. Create a new custom storage class using the new storage path.
 
 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage container name` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:
@@ -146,15 +147,15 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```
 
-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
-
-```console
-$ kubectl apply -f sc-aks-hci-disk-custom.yaml
-storageclass.storage.k8s.io/aks-hci-disk-custom created
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+
+```azurecli
+$ kubectl apply -f sc-aks-hci-disk-custom.yaml
+storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```
 
 ---
 
 ## Next steps
 
-- [Use the file Container Storage Interface drivers](container-storage-interface-files.md)
+[Use the Container Storage Interface file drivers](container-storage-interface-files.md)