
Commit 7fc20d4

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-stack-docs-pr (branch live)
2 parents 30afa71 + 0d173d4 commit 7fc20d4

17 files changed (+99 −98 lines)

AKS-Arc/aks-edge-concept-clusters-nodes.md

Lines changed: 4 additions & 4 deletions
@@ -4,7 +4,7 @@ description: Learn about clusters and nodes running on AKS Edge Essentials.
 author: sethmanheim
 ms.author: sethm
 ms.topic: concept-article
-ms.date: 07/11/2024
+ms.date: 07/03/2025
 ms.custom: template-concept
 ---

@@ -18,7 +18,7 @@ When you create an AKS Edge Essentials deployment, AKS Edge Essentials creates a
 ![Screenshot showing the VMs in AKS Edge.](./media/aks-edge/aks-edge-vm.png)

-Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on four-point comprehensive premises:
+Deployments can only create one Linux VM on a given host machine. This Linux VM can act as both the control plane node and as a worker node based on your deployment needs. This curated VM is based on [CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/2.0/SECURITY.md). The Linux virtual machine is built on a four-point comprehensive premise:

 - Servicing updates
 - Read-only root filesystem
@@ -27,7 +27,7 @@ Deployments can only create one Linux VM on a given host machine. This Linux VM
 Running a Windows node is optional and you can create a Windows node if you need to deploy Windows containers. This node runs as a Windows virtual machine based on [Windows 10 IoT Enterprise LTSC 2019](/lifecycle/products/windows-10-iot-enterprise-ltsc-2019). The Windows VM brings all the security features and capabilities of Windows 10.

-You can define the amount of CPU and memory resources that you'd like to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.
+You can define the amount of CPU and memory resources that you want to allocate for each of the VMs. This static allocation enables you to control how resources are used and ensures that applications running on the host have the required resources.

 Finally, AKS Edge Essentials doesn't offer dynamic creation of virtual machines. If a node VM goes down, you have to recreate it. That said, if you have a full deployment with multiple control plane nodes and worker nodes, if a VM goes down, Kubernetes moves workloads to an active node.

@@ -47,7 +47,7 @@ After you set up your machines, you can deploy AKS Edge Essentials in the follow
 ![Diagram showing AKS Edge Essentials deployment scenarios.](./media/aks-edge/aks-edge-deployment-options.jpg)

-Once you've created your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.
+Once you create your cluster, you can deploy your applications and connect your cluster to Arc, to enable Arc extensions such as Azure Monitor and Azure Policy. You can also choose to use GitOps to manage your deployments.

 ## Next steps

AKS-Arc/certificates-overview.md

Lines changed: 4 additions & 4 deletions
@@ -3,10 +3,10 @@ title: Overview of certificate management in AKS on Windows Server
 description: Learn how to manage certificates for secure communication between in-cluster components in AKS by provisioning and managing certificates in AKS on Windows Server.
 author: sethmanheim
 ms.topic: concept-article
-ms.date: 01/10/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 04/01/2023
-ms.reviewer: sulahiri
+ms.reviewer: leslielin

 # Intent: As an IT Pro, I want to learn how to use certificates to secure communication between in-cluster components on my AKS deployment.
 # Keyword: control plane nodes secure communication certificate revocation
@@ -105,9 +105,9 @@ A `notBefore` time can be specified to revoke only certificates that are issued
 > [!NOTE]
 > Revocation of `kubelet` server certificates is currently not available.

-If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described below, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.
+If you use a serial number when you perform a revocation, you can use the `Repair-AksHciClusterCerts` PowerShell command, described as follows, to get your cluster into a working state. If you use any of the other fields listed earlier, make sure to specify a `notBefore` time.

-```console
+```yaml
 apiVersion: certificates.microsoft.com/v1
 kind: RenewRevocation
 metadata:

AKS-Arc/concepts-security-access-identity.md

Lines changed: 6 additions & 7 deletions
@@ -3,7 +3,7 @@ title: Access and identity options for Azure Kubernetes Service (AKS) Arc
 description: Learn about options in access and identity management on a Kubernetes cluster in AKS on Azure Local.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 07/30/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 07/30/2024
 ms.reviewer: leslielin
@@ -41,7 +41,7 @@ For more information, see [Using Kubernetes RBAC authorization](https://kubernet
 #### Roles

-Before assigning permissions to users with Kubernetes RBAC, you define user permissions as a *role*. Grant permissions within a Kubernetes namespace using roles.
+Before assigning permissions to users with Kubernetes RBAC, you define user permissions as a role. Grant permissions within a Kubernetes namespace using roles.

 Kubernetes roles grant permissions; they don't deny permissions. To grant permissions across the entire cluster or to cluster resources outside a given namespace, you can use *ClusterRoles*.
@@ -51,7 +51,7 @@ A ClusterRole grants and applies permissions to resources across the entire clus
 ### RoleBindings and ClusterRoleBindings

-Once you define roles to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Microsoft Entra ID](#microsoft-entra-integration), RoleBindings grant permissions to Microsoft Entra users to perform actions within the cluster. See [Control access using Microsoft Entra ID and Kubernetes RBAC](kubernetes-rbac-local.md)
+Once you define roles to grant permissions to resources, you assign those Kubernetes RBAC permissions with a *RoleBinding*. If your AKS cluster [integrates with Microsoft Entra ID](#microsoft-entra-integration), RoleBindings grant permissions to Microsoft Entra users to perform actions within the cluster. See [Control access using Microsoft Entra ID and Kubernetes RBAC](kubernetes-rbac-local.md).

 #### RoleBindings
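The Role and RoleBinding concepts covered in this article can be sketched as standard Kubernetes manifests. This is a minimal illustration, not part of the diffed article; the namespace, object names, and subject are hypothetical:

```yaml
# Grant read-only access to pods in the "apps" namespace (hypothetical names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: apps
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a user. With Microsoft Entra integration, the subject
# would instead reference a Microsoft Entra user or group object ID.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: apps
  name: read-pods
subjects:
- kind: User
  name: "jane@example.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole/ClusterRoleBinding pair follows the same shape without the `namespace` field.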
@@ -82,7 +82,7 @@ Azure Role-based Access Control (RBAC) is an authorization system built on [Azur
 With Azure RBAC, you create a *role definition* that outlines the permissions to be applied. You then assign a user or group this role definition via a *role assignment* for a particular *scope*. The scope can be an individual resource, a resource group, or across the subscription.

-For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview).

 There are two required levels of access to fully operate an AKS Arc cluster:
@@ -113,15 +113,15 @@ In this scenario, you use Azure RBAC mechanisms and APIs to assign users built-i
 With this feature, you not only give users permissions to the AKS resource across subscriptions, but you also configure the role and permissions for inside each of those clusters controlling Kubernetes API access. There are four built-in roles available for this data plane action, each with its own scope of permissions, [as described in the built-in roles](#built-in-roles) section.

 > [!IMPORTANT]
-> You must enable Azure RBAC for Kubernetes authorization before doing role assignment. For more details and step by step guidance, see [Use Azure RBAC for Kubernetes authorization](azure-rbac-local.md).
+> You must enable Azure RBAC for Kubernetes authorization before doing role assignment. For more details and step-by-step guidance, see [Use Azure RBAC for Kubernetes authorization](azure-rbac-local.md).

 ### Built-in roles

 [!INCLUDE [built-in-roles](includes/built-in-roles.md)]

 ## Microsoft Entra integration

-Enhance your AKS cluster security with Microsoft Entra integration. Built on enterprise identity management experience, Microsoft Entra ID is a multitenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.
+Microsoft Entra integration can help to enhance your AKS cluster security. Built on enterprise identity management experience, Microsoft Entra ID is a multitenant, cloud-based directory and identity management service that combines core directory services, application access management, and identity protection. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security.

 :::image type="content" source="media/concepts-security-access-identity/entra-integration.png" alt-text="Flowchart showing Entra integration." lightbox="media/concepts-security-access-identity/entra-integration.png":::
@@ -143,7 +143,6 @@ The following table contains a summary of how users can authenticate to Kubernet
 3. Run `kubectl` commands.
    - The first command can trigger browser-based authentication to authenticate to the Kubernetes cluster, as described in the following table.
-
 | Description | Role grant required | Cluster admin Microsoft Entra groups | When to use |
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
 | Admin login using client certificate | [Azure Kubernetes Service Arc Cluster Admin Role](/azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-service-arc-cluster-admin-role). This role allows `az aksarc get-credentials` to be used with the `--admin` flag, which downloads a non-Microsoft Entra cluster admin certificate into the user's **.kube/config**. This is the only purpose of the Azure Kubernetes Admin role. | n/a | If you're permanently blocked by not having access to a valid Microsoft Entra group with access to your cluster. |

AKS-Arc/container-storage-interface-disks.md

Lines changed: 17 additions & 16 deletions
@@ -3,7 +3,7 @@ title: Use Container Storage Interface (CSI) disk drivers in AKS enabled by Azur
 description: Learn how to use Container Storage Interface (CSI) drivers to manage disks in AKS enabled by Arc.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 03/14/2024
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 01/14/2022
 ms.reviewer: abha
@@ -23,19 +23,19 @@ This article describes how to use Container Storage Interface (CSI) built-in sto
 ## Dynamically create disk persistent volumes using built-in storage class

-A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information on how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+A *storage class* is used to define how a unit of storage is dynamically created with a persistent volume. For more information about how to use storage classes, see [Kubernetes storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).

-In AKS Arc, the **default** storage class is created by default and uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.
+In AKS Arc, the default storage class uses CSI to create VHDX-backed volumes. The reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size.

-To leverage this storage class, create a [PVC](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.
+To use this storage class, create a [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) and a respective pod that references and uses it. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, the PVC is specified to request the desired storage.

 ## Create custom storage class for disks

 The default storage class is suitable for most common scenarios. However, in some cases, you may want to create your own storage class that stores PVs at a particular location mapped to a specific performance tier.

 If you have Linux workloads (pods), you must create a custom storage class with the parameter `fsType: ext4`. This requirement applies to Kubernetes versions 1.19 and 1.20 or later. The following example shows a custom storage class definition with `fsType` parameter defined:

-```YAML
+```yaml
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
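The PVC-and-pod pattern that the article describes could look like the following manifests. This is a minimal sketch, not part of the diffed article; the claim name, size, image, and pod details are hypothetical, and `default` is the built-in storage class name the article mentions:

```yaml
# Hypothetical PVC that requests a 5Gi VHDX-backed volume from the
# built-in "default" storage class. To expand the volume later, edit
# spec.resources.requests.storage on this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 5Gi
---
# Hypothetical pod that mounts the claim at /data.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/oss/nginx/nginx:1.25  # placeholder image
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc
```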
@@ -56,7 +56,7 @@ volumeBindingMode: Immediate
 allowVolumeExpansion: true
 ```

-If you create a custom storage class, you can specify the location where you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe or a cost-optimized volume backed by HDDs.
+If you create a custom storage class, you can specify the location in which you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe, or a cost-optimized volume backed by HDDs.

 Creating a custom storage class is a two-step process:
@@ -75,7 +75,8 @@ Creating a custom storage class is a two-step process:
 ```azurecli
 $storagepathID = az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group --query "id" -o tsv
 ```
-2. Create a new custom storage class using the new storage path.
+
+1. Create a new custom storage class using the new storage path.

 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage path ID` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:
@@ -100,9 +101,9 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:

-```console
+```azurecli
 $ kubectl apply -f sc-aks-hci-disk-custom.yaml
 storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```
@@ -121,7 +122,7 @@ Creating a custom storage class is a two-step process:
 Get-AksHciStorageContainer -Name "customStorageContainer"
 ```

-2. Create a new custom storage class using the new storage path.
+1. Create a new custom storage class using the new storage path.

 1. Create a file named **sc-aks-hci-disk-custom.yaml**, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the `storage container name` created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:
@@ -146,15 +147,15 @@ Creating a custom storage class is a two-step process:
 volumeBindingMode: Immediate
 ```

-2. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
-
-```console
-$ kubectl apply -f sc-aks-hci-disk-custom.yaml
-storageclass.storage.k8s.io/aks-hci-disk-custom created
+1. Create the storage class with the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply/) command and specify your **sc-aks-hci-disk-custom.yaml** file:
+
+```azurecli
+$ kubectl apply -f sc-aks-hci-disk-custom.yaml
+storageclass.storage.k8s.io/aks-hci-disk-custom created
 ```

 ---

 ## Next steps

-- [Use the file Container Storage Interface drivers](container-storage-interface-files.md)
+[Use the Container Storage Interface file drivers](container-storage-interface-files.md)

AKS-Arc/create-kubernetes-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: Quickstart to create a local Kubernetes cluster using Windows Admin Cente
 description: Learn how to create a local Kubernetes cluster using Windows Admin Center
 author: sethmanheim
 ms.topic: quickstart
-ms.date: 12/27/2023
+ms.date: 07/03/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
 ms.reviewer: dawhite
