AKS-Arc/aks-edge-concept-clusters-nodes.md (4 additions, 4 deletions)

@@ -4,7 +4,7 @@ description: Learn about clusters and nodes running on AKS Edge Essentials.
author: sethmanheim
ms.author: sethm
ms.topic: concept-article
-ms.date: 07/11/2024
+ms.date: 07/03/2025
ms.custom: template-concept
---

@@ -18,7 +18,7 @@ When you create an AKS Edge Essentials deployment, AKS Edge Essentials creates a
-Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on four-point comprehensive premises:
+Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on a four-point comprehensive premise:

- Servicing updates
- Read-only root filesystem

@@ -27,7 +27,7 @@ Deployments can only create one Linux VM on a given host machine. This Linux VM
Running a Windows node is optional and you can create a Windows node if you need to deploy Windows containers. This node runs as a Windows virtual machine based on [Windows 10 IoT Enterprise LTSC 2019](/lifecycle/products/windows-10-iot-enterprise-ltsc-2019). The Windows VM brings all the security features and capabilities of Windows 10.

-You can define the amount of CPU and memory resources that you'd like to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.
+You can define the amount of CPU and memory resources that you want to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.

Finally, AKS Edge Essentials doesn't offer dynamic creation of virtual machines. If a node VM goes down, you have to recreate it. That said, if you have a full deployment with multiple control plane nodes and worker nodes, if a VM goes down, Kubernetes moves workloads to an active node.

@@ -47,7 +47,7 @@ After you set up your machines, you can deploy AKS Edge Essentials in the follow
-Once you've created your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.
+Once you create your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.

AKS-Arc/certificates-overview.md (4 additions, 4 deletions)

@@ -3,10 +3,10 @@ title: Overview of certificate management in AKS on Windows Server
description: Learn how to manage certificates for secure communication between in-cluster components in AKS by provisioning and managing certificates in AKS on Windows Server.
author: sethmanheim
ms.topic: concept-article
-ms.date: 01/10/2024
+ms.date: 07/03/2025
ms.author: sethm
ms.lastreviewed: 04/01/2023
-ms.reviewer: sulahiri
+ms.reviewer: leslielin

# Intent: As an IT Pro, I want to learn how to use certificates to secure communication between in-cluster components on my AKS deployment.
# Keyword: control plane nodes secure communication certificate revocation

@@ -105,9 +105,9 @@ A `notBefore` time can be specified to revoke only certificates that are issued
> [!NOTE]
> Revocation of `kubelet` server certificates is currently not available.

-If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described below, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.
+If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described as follows, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.

AKS-Arc/container-storage-interface-disks.md (17 additions, 16 deletions)

@@ -3,7 +3,7 @@ title: Use Container Storage Interface (CSI) disk drivers in AKS enabled by Azur
description: Learn how to use Container Storage Interface (CSI) drivers to manage disks in AKS enabled by Arc.
author: sethmanheim
ms.topic: how-to
-ms.date: 03/14/2024
+ms.date: 07/03/2025
ms.author: sethm
ms.lastreviewed: 01/14/2022
ms.reviewer: abha

@@ -23,19 +23,19 @@ This article describes how to use Container Storage Interface (CSI) built-in sto
## Dynamically create disk persistent volumes using built-in storage class

-A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information on how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information about how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).

-In AKS Arc, the **default** storage class is created by default and uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.
+In AKS Arc, the default storage class uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.

-To leverage this storage class, create a [PVC](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use this storage class, create a [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
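
As a rough sketch of that flow (the claim, pod, and image names here are illustrative and not taken from the changed article), a PVC that requests a VHDX-backed volume from the built-in default storage class, plus a pod that mounts it, could look like this:

```yaml
# Hypothetical example: a PVC that provisions a 5Gi volume from the built-in
# "default" storage class, and a pod that mounts the resulting volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default      # the built-in AKS Arc storage class
  resources:
    requests:
      storage: 5Gi               # to expand later, raise this value and reapply
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                 # illustrative name
spec:
  containers:
    - name: app
      image: nginx               # placeholder image
      volumeMounts:
        - mountPath: /data
          name: data-volume
  volumes:
    - name: data-volume
      persistentVolumeClaim:
        claimName: demo-pvc      # binds the pod to the PVC above
```

Because the storage class marks persistent volumes as expandable, growing the volume later only requires raising `spec.resources.requests.storage` on the PVC and reapplying it.
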
## Create custom storage class for disks

The default storage class is suitable for most common scenarios. However, in some cases, you may want to create your own storage class that stores PVs at a particular location mapped to a specific performance tier.

If you have Linux workloads (pods), you must create a custom storage class with the parameter `fsType: ext4`. This requirement applies to Kubernetes versions 1.19 and 1.20 or later. The following example shows a custom storage class definition with `fsType` parameter defined:

-```YAML
+```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:

@@ -56,7 +56,7 @@ volumeBindingMode: Immediate
allowVolumeExpansion: true
```

-If you create a custom storage class, you can specify the location where you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe or a cost-optimized volume backed by HDDs.
+If you create a custom storage class, you can specify the location in which you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe, or a cost-optimized volume backed by HDDs.

Creating a custom storage class is a two-step process:

@@ -75,7 +75,8 @@ Creating a custom storage class is a two-step process:
```azurecli
$storagepathID = az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group --query "id" -o tsv
```
-2. Create a new custom storage class using the new storage path.
+
+1. Create a new custom storage class using the new storage path.

1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage path ID` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:
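
The full manifest isn't shown in this diff; as a rough sketch only (the provisioner name `disk.csi.akshci.com` and the exact parameter set are assumptions, and the bracketed values are placeholders for the values you query), it has roughly this shape:

```yaml
# Hypothetical sketch of sc-aks-hci-disk-custom.yaml; not the article's exact manifest.
# Replace the bracketed placeholders with the storage path ID from the previous step
# and the group/hostname values reported by `kubectl get storageclass default -o yaml`.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aks-hci-disk-custom
provisioner: disk.csi.akshci.com              # assumed AKS Arc disk CSI provisioner name
parameters:
  container: <storage path ID>                # from the previous step
  group: <group from default storage class>
  hostname: <hostname from default storage class>
  fsType: ext4                                # include for Linux workloads, as noted earlier
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Applying this file in the next step registers the class so that PVCs can reference `aks-hci-disk-custom` by name.
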
@@ -100,9 +101,9 @@ Creating a custom storage class is a two-step process:
volumeBindingMode: Immediate
```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:

-```console
+```azurecli
$ kubectl apply -f sc-aks-hci-disk-custom.yaml
storageclass.storage.k8s.io/aks-hci-disk-custom created
```

@@ -121,7 +122,7 @@ Creating a custom storage class is a two-step process:
-2. Create a new custom storage class using the new storage path.
+1. Create a new custom storage class using the new storage path.

1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage container name` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:

@@ -146,15 +147,15 @@ Creating a custom storage class is a two-step process:
volumeBindingMode: Immediate
```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
-
-```console
-$ kubectl apply -f sc-aks-hci-disk-custom.yaml
-storageclass.storage.k8s.io/aks-hci-disk-custom created
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+
+```azurecli
+$ kubectl apply -f sc-aks-hci-disk-custom.yaml
+storageclass.storage.k8s.io/aks-hci-disk-custom created
```

---

## Next steps

-- [Use the file Container Storage Interface drivers](container-storage-interface-files.md)
+[Use the Container Storage Interface file drivers](container-storage-interface-files.md)

AKS-Arc/deploy-load-balancer-cli.md (21 additions, 21 deletions)

@@ -62,37 +62,37 @@ If you don't have [Graph permission Application.Read.All](/graph/permissions-ref
1. Register the `Microsoft.KubernetesRuntime RP` if you haven't already done so. Note that you only need to register once per Azure subscription. You can also register resource providers using the Azure portal. For more information about how to register resource providers and required permissions, see [how to register a resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider).

-```azurecli
-az provider register -n Microsoft.KubernetesRuntime
-```
+```azurecli
+az provider register -n Microsoft.KubernetesRuntime
+```

-You can check if the resource provider has been registered successfully by running the following command.
+You can check if the resource provider has been registered successfully by running the following command.

-```azurecli
-az provider show -n Microsoft.KubernetesRuntime -o table
-```
+```azurecli
+az provider show -n Microsoft.KubernetesRuntime -o table
+```

1. To install the Arc extension for MetalLB, obtain the AppID of the MetalLB extension resource provider, and then run the extension create command. You must run the following commands once per Arc Kubernetes cluster.

-Obtain the Application ID of the Arc extension by running [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). In order to run the following command, you must be a `user` member of your Azure tenant. For more information about user and guest membership, see [default user permissions in Microsoft Entra ID](/entra/fundamentals/users-default-permissions).
+Obtain the Application ID of the Arc extension by running [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). In order to run the following command, you must be a `user` member of your Azure tenant. For more information about user and guest membership, see [default user permissions in Microsoft Entra ID](/entra/fundamentals/users-default-permissions).

-```azurecli
-$objID = az ad sp list --filter "appId eq '00001111-aaaa-2222-bbbb-3333cccc4444'" --query "[].id" --output tsv
-```
+```azurecli
+$objID = az ad sp list --filter "appId eq '00001111-aaaa-2222-bbbb-3333cccc4444'" --query "[].id" --output tsv
+```

-Once you have the `objID`, you can install the MetalLB Arc extension on your Kubernetes cluster. To run the following command, you must have the [**Kubernetes extension contributor**](/azure/role-based-access-control/built-in-roles/containers#kubernetes-extension-contributor) role.
+Once you have the `objID`, you can install the MetalLB Arc extension on your Kubernetes cluster. To run the following command, you must have the [**Kubernetes extension contributor**](/azure/role-based-access-control/built-in-roles/containers#kubernetes-extension-contributor) role.

AKS-Arc/deploy-windows-application.md (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: Deploy Windows .NET applications
description: Learn how to deploy a Windows .NET application to your Kubernetes cluster using a custom image stored in Azure Container Registry in AKS on Windows Server.

AKS-Arc/includes/csi-in-aks-hybrid-overview.md (2 additions, 2 deletions)

@@ -3,15 +3,15 @@ author: sethmanheim
ms.author: sethm
ms.service: azure-stack
ms.topic: include
-ms.date: 02/29/2024
+ms.date: 07/03/2025
ms.reviewer: abha
ms.lastreviewed: 10/18/2022

# Overview of CSI file and driver functionality in AKS enabled by Azure Arc

---

-The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By using CSI, AKS enabled by Arc can write, deploy, and iterate plug-ins to expose new storage systems. Using CSI can also improve existing ones in Kubernetes without having to touch the core Kubernetes code and then wait for its release cycles.
+The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By using CSI, AKS enabled by Arc can write, deploy, and iterate plug-ins to expose new storage systems. CSI can also improve existing ones in Kubernetes without having to touch the core Kubernetes code and then wait for its release cycles.

The disk and file CSI drivers used by AKS Arc are [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant drivers.

-> For information about managing node pools in AKS on Azure Local 22H2, see [Manage node pools](manage-node-pools-22h2.md).
+> For information about managing node pools in AKS on Windows Server, see [Manage node pools](manage-node-pools-22h2.md).

-In AKS enabled by Azure Arc, nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. This article shows you how to create and manage node pools for an AKS cluster.
+In AKS on Azure Local, nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. This article shows you how to create and manage node pools for an AKS cluster.