
Commit 43ee941 ("More articles")

1 parent: 3f6471e

8 files changed: 54 additions & 53 deletions

AKS-Arc/aks-edge-concept-clusters-nodes.md

Lines changed: 4 additions & 4 deletions

@@ -4,7 +4,7 @@ description: Learn about clusters and nodes running on AKS Edge Essentials.
 author: sethmanheim
 ms.author: sethm
 ms.topic: concept-article
-ms.date: 07/11/2024
+ms.date: 07/03/2025
 ms.custom: template-concept
 ---

@@ -18,7 +18,7 @@ When you create an AKS Edge Essentials deployment, AKS Edge Essentials creates a

 ![Screenshot showing the the VMs in AKS Edge.](./media/aks-edge/aks-edge-vm.png)

-Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on four-point comprehensive premises:
+Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on a four-point comprehensive premise:

 - Servicing updates
 - Read-only root filesystem

@@ -27,7 +27,7 @@ Deployments can only create one Linux VM on a given host machine. This Linux VM

 Running a Windows node is optional and you can create a Windows node if you need to deploy Windows containers. This node runs as a Windows virtual machine based on [Windows 10 IoT Enterprise LTSC 2019](/lifecycle/products/windows-10-iot-enterprise-ltsc-2019). The Windows VM brings all the security features and capabilities of Windows 10.

-You can define the amount of CPU and memory resources that you'd like to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.
+You can define the amount of CPU and memory resources that you want to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.

 Finally, AKS Edge Essentials doesn't offer dynamic creation of virtual machines. If a node VM goes down, you have to recreate it. That said, if you have a full deployment with multiple control plane nodes and worker nodes, if a VM goes down, Kubernetes moves workloads to an active node.

@@ -47,7 +47,7 @@ After you set up your machines, you can deploy AKS Edge Essentials in the follow

 ![Diagram showing AKS Edge Essentials deployment scenarios.](./media/aks-edge/aks-edge-deployment-options.jpg)

-Once you've created your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.
+Once you create your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.

 ## Next steps
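
The static CPU and memory allocation described in the changed paragraph is configured in the deployment JSON that AKS Edge Essentials consumes. A minimal sketch of the relevant fragment, assuming the `New-AksEdgeDeployment` flow; the schema field names shown here are illustrative and can vary by release:

```json
{
  "SchemaVersion": "1.9",
  "Machines": [
    {
      "LinuxNode": {
        "CpuCount": 4,
        "MemoryInMB": 4096,
        "DataSizeInGB": 20
      },
      "WindowsNode": {
        "CpuCount": 2,
        "MemoryInMB": 4096
      }
    }
  ]
}
```

Passing a file like this to `New-AksEdgeDeployment -JsonConfigFilePath .\aksedge-config.json` fixes the VM allocations up front, which is the static model the paragraph describes.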

AKS-Arc/certificates-overview.md

Lines changed: 4 additions & 4 deletions

@@ -3,10 +3,10 @@ title: Overview of certificate management in AKS on Windows Server
 description: Learn how to manage certificates for secure communication between in-cluster components in AKS by provisioning and managing certificates in AKS on Windows Server.
 author: sethmanheim
 ms.topic: concept-article
-ms.date: 01/10/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 04/01/2023
-ms.reviewer: sulahiri
+ms.reviewer: leslielin

 # Intent: As an IT Pro, I want to learn how to use certificates to secure communication between in-cluster components on my AKS deployment.
 # Keyword: control plane nodes secure communication certificate revocation

@@ -105,9 +105,9 @@ A `notBefore` time can be specified to revoke only certificates that are issued
 > [!NOTE]
 > Revocation of `kubelet` server certificates is currently not available.

-If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described below, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.
+If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described as follows, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.

-```console
+```yaml
 apiVersion: certificates.microsoft.com/v1
 kind: RenewRevocation
 metadata:
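
The `RenewRevocation` manifest above is cut off at the hunk boundary. As a hedged usage sketch, assuming a completed manifest saved locally (the file name is illustrative), the flow pairs the manifest with the repair command from the changed paragraph:

```powershell
# Apply the completed RenewRevocation manifest (file name is illustrative)
kubectl apply -f renew-revocation.yaml

# If certificates were revoked by serial number, bring the cluster back
# to a working state, as the paragraph above describes
Repair-AksHciClusterCerts
```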

AKS-Arc/container-storage-interface-disks.md

Lines changed: 17 additions & 16 deletions

@@ -3,7 +3,7 @@ title: Use Container Storage Interface (CSI) disk drivers in AKS enabled by Azur
 description: Learn how to use Container Storage Interface (CSI) drivers to manage disks in AKS enabled by Arc.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 03/14/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 01/14/2022
 ms.reviewer: abha

@@ -23,19 +23,19 @@ This article describes how to use Container Storage Interface (CSI) built-in sto

 ## Dynamically create disk persistent volumes using built-in storage class

-A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information on how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information about how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).

-In AKS Arc, the **default** storage class is created by default and uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.
+In AKS Arc, the default storage class uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.

-To leverage this storage class, create a [PVC](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use this storage class, create a [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.

 ## Create custom storage class for disks

 The default storage class is suitable for most common scenarios. However, in some cases, you may want to create your own storage class that stores PVs at a particular location mapped to a specific performance tier.

 If you have Linux workloads (pods), you must create a custom storage class with the parameter `fsType: ext4`. This requirement applies to Kubernetes versions 1.19 and 1.20 or later. The following example shows a custom storage class definition with `fsType` parameter defined:

-```YAML
+```yaml
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:

@@ -56,7 +56,7 @@ volumeBindingMode: Immediate
 allowVolumeExpansion: true
 ```

-If you create a custom storage class, you can specify the location where you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe or a cost-optimized volume backed by HDDs.
+If you create a custom storage class, you can specify the location in which you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe, or a cost-optimized volume backed by HDDs.

 Creating a custom storage class is a two-step process:

@@ -75,7 +75,8 @@ Creating a custom storage class is a two-step process:
 ```azurecli
 $storagepathID = az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group --query "id" -o tsv
 ```
-2. Create a new custom storage class using the new storage path.
+
+1. Create a new custom storage class using the new storage path.

 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage path ID` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:

@@ -100,9 +101,9 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:

-```console
+```azurecli
 $ kubectl apply -f sc-aks-hci-disk-custom.yaml
 storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```

@@ -121,7 +122,7 @@ Creating a custom storage class is a two-step process:
 Get-AksHciStorageContainer -Name "customStorageContainer"
 ```

-2. Create a new custom storage class using the new storage path.
+1. Create a new custom storage class using the new storage path.

 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage container name` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:

@@ -146,15 +147,15 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
-
-```console
-$ kubectl apply -f sc-aks-hci-disk-custom.yaml
-storageclass.storage.k8s.io/aks-hci-disk-custom created
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+
+```azurecli
+$ kubectl apply -f sc-aks-hci-disk-custom.yaml
+storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```

 ---

 ## Next steps

-- [Use the file Container Storage Interface drivers](container-storage-interface-files.md)
+[Use the Container Storage Interface file drivers](container-storage-interface-files.md)
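
The PVC paragraph changed in this file pairs naturally with a concrete manifest. A minimal sketch of a claim against the default storage class and a pod that mounts it; the names, image, and sizes are illustrative, not taken from the repo:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default   # the built-in VHDX-backed class described in the diff
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: mcr.microsoft.com/oss/nginx/nginx:1.25   # illustrative image
      volumeMounts:
        - mountPath: /var/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```

Because the default class allows volume expansion, resizing later is a matter of editing `spec.resources.requests.storage` on the claim, as the changed paragraph notes.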

AKS-Arc/deploy-load-balancer-cli.md

Lines changed: 21 additions & 21 deletions

@@ -62,37 +62,37 @@ If you don't have [Graph permission Application.Read.All](/graph/permissions-ref

 1. Register the `Microsoft.KubernetesRuntime RP` if you haven't already done so. Note that you only need to register once per Azure subscription. You can also register resource providers using the Azure portal. For more information about how to register resource providers and required permissions, see [how to register a resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider).

-```azurecli
-az provider register -n Microsoft.KubernetesRuntime
-```
+   ```azurecli
+   az provider register -n Microsoft.KubernetesRuntime
+   ```

-You can check if the resource provider has been registered successfully by running the following command.
+   You can check if the resource provider has been registered successfully by running the following command.

-```azurecli
-az provider show -n Microsoft.KubernetesRuntime -o table
-```
+   ```azurecli
+   az provider show -n Microsoft.KubernetesRuntime -o table
+   ```

-Expected output:
+   Expected output:

-```output
-Namespace                    RegistrationPolicy    RegistrationState
----------------------------  --------------------  -------------------
-Microsoft.KubernetesRuntime  RegistrationRequired  Registered
-```
+   ```output
+   Namespace                    RegistrationPolicy    RegistrationState
+   ---------------------------  --------------------  -------------------
+   Microsoft.KubernetesRuntime  RegistrationRequired  Registered
+   ```

 1. To install the Arc extension for MetalLB, obtain the AppID of the MetalLB extension resource provider, and then run the extension create command. You must run the following commands once per Arc Kubernetes cluster.

-Obtain the Application ID of the Arc extension by running [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). In order to run the following command, you must be a `user` member of your Azure tenant. For more information about user and guest membership, see [default user permissions in Microsoft Entra ID](/entra/fundamentals/users-default-permissions).
+   Obtain the Application ID of the Arc extension by running [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). In order to run the following command, you must be a `user` member of your Azure tenant. For more information about user and guest membership, see [default user permissions in Microsoft Entra ID](/entra/fundamentals/users-default-permissions).

-```azurecli
-$objID = az ad sp list --filter "appId eq '00001111-aaaa-2222-bbbb-3333cccc4444'" --query "[].id" --output tsv
-```
+   ```azurecli
+   $objID = az ad sp list --filter "appId eq '00001111-aaaa-2222-bbbb-3333cccc4444'" --query "[].id" --output tsv
+   ```

-Once you have the `objID`, you can install the MetalLB Arc extension on your Kubernetes cluster. To run the following command, you must have the [**Kubernetes extension contributor**](/azure/role-based-access-control/built-in-roles/containers#kubernetes-extension-contributor) role.
+   Once you have the `objID`, you can install the MetalLB Arc extension on your Kubernetes cluster. To run the following command, you must have the [**Kubernetes extension contributor**](/azure/role-based-access-control/built-in-roles/containers#kubernetes-extension-contributor) role.

-```azurecli
-az k8s-extension create --cluster-name $clusterName -g $rgName --cluster-type connectedClusters --extension-type microsoft.arcnetworking --config k8sRuntimeFpaObjectId=$objID -n arcnetworking
-```
+   ```azurecli
+   az k8s-extension create --cluster-name $clusterName -g $rgName --cluster-type connectedClusters --extension-type microsoft.arcnetworking --config k8sRuntimeFpaObjectId=$objID -n arcnetworking
+   ```

 ## Deploy MetalLB load balancer on your Kubernetes cluster
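
To complement the extension-install step in this diff, a quick status check can confirm the rollout. A sketch that reuses the diff's own variable names and the `arcnetworking` extension name chosen there:

```azurecli
az k8s-extension show --cluster-name $clusterName -g $rgName --cluster-type connectedClusters --name arcnetworking --query "provisioningState" -o tsv
```

A result of `Succeeded` indicates the extension is in place before you move on to deploying the MetalLB load balancer itself.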

AKS-Arc/deploy-windows-application.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ title: Deploy Windows .NET applications
 description: Learn how to deploy a Windows.NET application to your Kubernetes cluster using a custom image stored in Azure Container Registry in AKS on Windows Server.
 author: sethmanheim
 ms.topic: tutorial
-ms.date: 06/26/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
 ms.reviewer: abha

AKS-Arc/includes/csi-in-aks-hybrid-overview.md

Lines changed: 2 additions & 2 deletions

@@ -3,15 +3,15 @@ author: sethmanheim
 ms.author: sethm
 ms.service: azure-stack
 ms.topic: include
-ms.date: 02/29/2024
+ms.date: 07/03/2025
 ms.reviewer: abha
 ms.lastreviewed: 10/18/2022

 # Overview of CSI file and driver functionality in AKS enabled by Azure Arc

 ---

-The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By using CSI, AKS enabled by Arc can write, deploy, and iterate plug-ins to expose new storage systems. Using CSI can also improve existing ones in Kubernetes without having to touch the core Kubernetes code and then wait for its release cycles.
+The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By using CSI, AKS enabled by Arc can write, deploy, and iterate plug-ins to expose new storage systems. CSI can also improve existing ones in Kubernetes without having to touch the core Kubernetes code and then wait for its release cycles.

 The disk and file CSI drivers used by AKS Arc are [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant drivers.
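
As context for the CSI paragraph above: CSI drivers register themselves with the cluster, so you can list the ones present with a standard kubectl query (no AKS-specific tooling assumed):

```console
kubectl get csidrivers
```

On an AKS Arc cluster, the output typically includes the disk and file drivers this include refers to.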

AKS-Arc/manage-node-pools.md

Lines changed: 3 additions & 3 deletions

@@ -3,7 +3,7 @@ title: Manage node pools for an AKS cluster
 description: Learn how to manage multiple node pools in AKS on Azure Local.
 ms.topic: how-to
 ms.custom: devx-track-azurecli
-ms.date: 06/03/2024
+ms.date: 07/03/2025
 author: sethmanheim
 ms.author: sethm
 ms.reviewer: rbaziwane

@@ -16,9 +16,9 @@ ms.lastreviewed: 06/03/2024
 [!INCLUDE [hci-applies-to-23h2](includes/hci-applies-to-23h2.md)]

 > [!NOTE]
-> For information about managing node pools in AKS on Azure Local 22H2, see [Manage node pools](manage-node-pools-22h2.md).
+> For information about managing node pools in AKS on Windows Server, see [Manage node pools](manage-node-pools-22h2.md).

-In AKS enabled by Azure Arc, nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. This article shows you how to create and manage node pools for an AKS cluster.
+In AKS on Azure Local, nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. This article shows you how to create and manage node pools for an AKS cluster.

 ## Create a Kubernetes cluster
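
The node-pool paragraph changed here maps to a single CLI call per pool. A hedged sketch using the `az aksarc` extension; all values are placeholders, and the exact parameter set should be confirmed with `az aksarc nodepool add --help`:

```azurecli
az aksarc nodepool add --name "nodepool2" --cluster-name "myCluster" --resource-group "myResourceGroup" --node-count 2 --os-type Windows
```

Each pool created this way groups VMs of one configuration, which is what lets you mix Linux and Windows capacity in the same cluster.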

AKS-Arc/ssh-connection.md

Lines changed: 2 additions & 2 deletions

@@ -65,5 +65,5 @@ After you use SSH to connect to the node, you can run `net user administrator *`

 ## Next steps

-- [Known issues](known-issues.yml).
-- [Windows Admin Center known issues](/azure-stack/aks-hci/known-issues-windows-admin-center).
+- [Known issues](known-issues.yml)
+- [Windows Admin Center known issues](/azure-stack/aks-hci/known-issues-windows-admin-center)
