Commit 9531bff

Merge pull request #18213 from MicrosoftDocs/main
Sync release-local-2506 with main

2 parents: 807c4b3 + 938a130

File tree

9 files changed: +38 additions, −79 deletions


AKS-Arc/concepts-storage.md

Lines changed: 18 additions & 1 deletion
@@ -3,7 +3,7 @@ title: Concepts - Storage options for applications in AKS enabled by Azure Arc
 description: Storage options for applications in AKS enabled by Azure Arc.
 author: sethmanheim
 ms.topic: conceptual
-ms.date: 06/24/2024
+ms.date: 06/16/2025
 ms.author: sethm
 ms.lastreviewed: 1/14/2022
 ms.reviewer: abha
@@ -112,6 +112,23 @@ volumeMounts:
     name: k-dir
 ```
+
+## Secure pod access to mounted volumes
+
+For your applications to run correctly, pods should run as a defined user or group and not as *root*. The `securityContext` for a pod or container lets you define settings such as *fsGroup* to assume the appropriate permissions on the mounted volumes.
+
+**fsGroup** is a field within the `securityContext` of a Kubernetes pod specification. It defines a supplemental group ID that Kubernetes assigns to all processes in the pod, and applies recursively to the files in mounted volumes. This ensures that the pod has the correct group-level access to shared storage volumes.
+
+When a volume is mounted, Kubernetes changes the ownership of the volume's contents to match the **fsGroup** value. This is particularly useful when containers run as non-root users and need write access to shared volumes.
+
+The following example YAML shows the **fsGroup** value:
+
+```yaml
+securityContext:
+  fsGroup: 2000
+```
+
+In this example, all files in mounted volumes are accessible by GID 2000.
+
 ## Next steps
 
 - [Use the AKS on Azure Local disk Container Storage Interface (CSI) drivers](./container-storage-interface-disks.md).
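The new `fsGroup` section shows only a two-line fragment. For context, here is a minimal sketch of a complete pod spec using that setting; the pod name, image, `runAsUser` value, and volume are illustrative assumptions, not part of this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo            # illustrative name, not from the commit
spec:
  securityContext:
    runAsUser: 1000             # run container processes as a non-root user (assumed UID)
    fsGroup: 2000               # files in mounted volumes become group-accessible to GID 2000
  containers:
  - name: app
    image: busybox:1.36         # illustrative image
    command: ["sh", "-c", "id && ls -ln /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}                # any CSI-backed volume would behave the same way
```

With this sketch, `ls -ln /data` inside the container would show the mount owned by group 2000, so the non-root process can write to it.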

AKS-Arc/container-storage-interface-files.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@ title: Use Container Storage Interface (CSI) file drivers in AKS enabled by Azur
 description: Learn how to use Container Storage Interface (CSI) drivers to manage files in AKS Arc.
 author: sethmanheim
 ms.topic: how-to
-ms.date: 08/20/2024
+ms.date: 06/16/2025
 ms.author: sethm
 ms.lastreviewed: 01/14/2022
 ms.reviewer: abha
@@ -31,7 +31,7 @@ If multiple nodes need concurrent access to the same storage volumes in AKS Arc,
 
 ### [AKS on Azure Local](#tab/local)
 
-1. Make sure the SMB driver is deployed. The SMB CSI driver is installed by default when you create a Kubernetes cluster using the Azure portal or the `az aksarc create` command. If you create a Kubernetes cluster by using `--disable-smb-driver`, you must enable the SMB driver on this cluster using the `az aksarc update` command:
+1. Make sure the SMB driver is deployed. The SMB CSI driver is installed by default when you create a Kubernetes cluster using the `az aksarc create` command. If you create a Kubernetes cluster by using the Azure portal, Azure Resource Manager (ARM) template, or Terraform, or by using the `az aksarc create` command with `--disable-smb-driver`, you must enable the SMB driver on this cluster using the `az aksarc update` command:
 
    ```azurecli
   az aksarc update -n $aksclustername -g $resource_group --enable-smb-driver
   ```
@@ -78,7 +78,7 @@ If multiple nodes need concurrent access to the same storage volumes in AKS Arc,
 
 ### [AKS on Azure Local](#tab/local)
 
-1. Make sure the NFS driver is deployed. The NFS CSI driver is installed by default when you create a Kubernetes cluster using the Azure portal or the `az aksarc create` command. If you create a Kubernetes cluster by using `--disable-nfs-driver`, you must enable the the NFS driver on this cluster using the `az aksarc update` command:
+1. Make sure the NFS driver is deployed. The NFS CSI driver is installed by default when you create a Kubernetes cluster using the `az aksarc create` command. If you create a Kubernetes cluster by using the Azure portal, Azure Resource Manager (ARM) template, or Terraform, or by using the `az aksarc create` command with `--disable-nfs-driver`, you must enable the NFS driver on this cluster using the `az aksarc update` command:
 
   ```azurecli
  az aksarc update -n $aksclustername -g $resource_group --enable-nfs-driver
  ```
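Once the SMB or NFS driver is enabled, workloads consume it through a storage class. As a hedged illustration (the storage class name `smb-csi` and the claim name are assumptions, not from this commit; list the real classes on your cluster with `kubectl get storageclass`), a claim requesting multi-node shared access might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-pvc                 # illustrative name
spec:
  accessModes:
  - ReadWriteMany               # SMB/NFS file volumes allow concurrent access from multiple nodes
  resources:
    requests:
      storage: 5Gi
  storageClassName: smb-csi     # assumed class name; verify with kubectl get storageclass
```

Pods on different nodes can then mount `smb-pvc` simultaneously, which is the scenario the SMB/NFS CSI drivers exist to serve.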

azure-local/known-issues.md

Lines changed: 10 additions & 64 deletions
Large diffs are not rendered by default.

azure-local/manage/manage-secrets-rotation.md

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@ description: This article describes how to manage internal secret rotation on Az
 author: alkohli
 ms.author: alkohli
 ms.topic: how-to
-ms.date: 04/09/2025
+ms.date: 06/16/2025
 ms.service: azure-local
 ---
 
@@ -190,14 +190,14 @@ The exact steps for secret rotation are different depending on the software vers
    Start-SecretRotation
    ```
 
-### Azure Local instance running 2408.2 to 2405.3
+### Azure Local instance running 2408.2 or earlier
 
 1. Sign in to one of the Azure Local nodes using deployment user credentials.
 1. Update the CA Certificate password in ECE store. Run the following PowerShell command:
 
    ```PowerShell
    $SecureSecretText = ConvertTo-SecureString -String "<Replace with a strong password>" -AsPlainText -Force
-   $CACertCred = New-Object -Type PSCredential -ArgumentList "CACertificateCred,$SecureSecretText"
+   $CACertCred = New-Object -Type PSCredential -ArgumentList "CACertificateCred", $SecureSecretText
    Set-ECEServiceSecret -ContainerName CACertificateCred -Credential $CACertCred
    ```
203203
(Binary image file: −59.5 KB; image diffs are not rendered.)

azure-local/overview.md

Lines changed: 0 additions & 1 deletion
@@ -77,7 +77,6 @@ Customers often choose Azure Local in the following scenarios.
 | Highly performant SQL Server | Azure Local provides an additional layer of resiliency to highly available, mission-critical Always On availability groups-based deployments of SQL Server. This approach also offers extra benefits associated with the single-vendor approach, including simplified support and performance optimizations built into the underlying platform. To learn more, see [Deploy SQL Server on Azure Local](./deploy/sql-server-23h2.md). |
 | Trusted enterprise virtualization | Azure Local satisfies the trusted enterprise virtualization requirements through its built-in support for Virtualization-based Security (VBS). VBS relies on Hyper-V to implement the mechanism referred to as virtual secure mode, which forms a dedicated, isolated memory region within its guest VMs. By using programming techniques, it's possible to perform designated, security-sensitive operations in this dedicated memory region while blocking access to it from the host OS. This considerably limits potential vulnerability to kernel-based exploits. To learn more, see [About Trusted Launch for Azure Local VMs enabled by Arc](./manage/trusted-launch-vm-overview.md). |
 | Scale-out storage | Storage Spaces Direct is a core technology of Azure Local that uses industry-standard servers with locally attached drives to offer high availability, performance, and scalability. Using Storage Spaces Direct results in significant cost reductions compared with competing offers based on storage area network (SAN) or network-attached storage (NAS) technologies. These benefits result from an innovative design and a wide range of enhancements, such as persistent read/write cache drives, mirror-accelerated parity, nested resiliency, and deduplication. |
-| Disaster recovery for virtualized workloads | A stretched cluster of Azure Local (functionality only available in Azure Stack HCI OS, version 22H2) provides automatic failover of virtualized workloads to a secondary site following a primary site failure. Synchronous replication ensures crash consistency of VM disks. |
 | Data center consolidation and modernization | Refreshing and consolidating aging virtualization hosts with Azure Local can improve scalability and make your environment easier to manage and secure. It's also an opportunity to retire legacy SAN storage to reduce footprint and total cost of ownership. Operations and systems administration are simplified with unified tools and interfaces and a single point of support. |
 | Branch office and edge | For branch office and edge workloads, you can minimize infrastructure costs by deploying two-node clusters with inexpensive witness options, such as a cloud witness. Another factor that contributes to the lower cost of two-node clusters is support for switchless networking, which relies on crossover cable between cluster nodes instead of more expensive high-speed switches. Customers can also centrally view remote Azure Local deployments in the Azure portal. To learn more, see [Deploy branch office and edge on Azure Local](deploy/branch-office-edge.md). |

azure-local/upgrade/install-solution-upgrade.md

Lines changed: 1 addition & 4 deletions
@@ -3,7 +3,7 @@ title: Install solution upgrade on Azure Local
 description: Learn how to install the solution upgrade on your Azure Local instance.
 author: alkohli
 ms.topic: how-to
-ms.date: 06/11/2025
+ms.date: 06/13/2025
 ms.author: alkohli
 ms.reviewer: alkohli
 ms.service: azure-local
@@ -28,9 +28,6 @@ Throughout this article, we refer to OS version 23H2 as the *new* version and ve
 Before you install the solution upgrade, make sure that you:
 
 - Validate the system using the Environment Checker as per the instructions in [Assess solution upgrade readiness](./validate-solution-upgrade-readiness.md#run-the-validation).
-- Verify that latest `AzureEdgeLifecycleManager` extension on each machine is installed as per the instructions in [Check the Azure Arc extension](./validate-solution-upgrade-readiness.md#remediation-9-check-the-azure-arc-lifecycle-extension).
-
-  :::image type="content" source="media/install-solution-upgrade/verify-lcmextension-installed.png" alt-text="Screenshot of Extensions page showing AzureEdgeLifeCycleManager extension install on an Azure Local machine." lightbox="./media/install-solution-upgrade/verify-lcmextension-installed.png":::
 - Have failover cluster name between 3 to 15 characters.
 - Create an Active Directory Lifecycle Manager (LCM) user account that's a member of the local Administrator group. For instructions, see [Prepare Active Directory for Azure Local deployment](../deploy/deployment-prep-active-directory.md).
 - Have IPv4 network range that matches your host IP address subnet with six, contiguous IP addresses available for new Azure Arc services. Work with your network administrator to ensure that the IP addresses aren't in use and meet the outbound connectivity requirement.

azure-local/upgrade/validate-solution-upgrade-readiness.md

Lines changed: 3 additions & 3 deletions
@@ -150,9 +150,9 @@ Use the following commands for each machine to install the required features. If
 $windowsFeature = @(
     "Failover-Clustering",
-    "FileServerVSSAgent",
-    "FSRM-Infrastructure",
-    "Microsoft-Windows-GroupPolicy-ServerAdminTools-Update",
+    "FS-VSS-Agent",
+    "FS-Resource-Manager",
+    "GPMC",
     "NetworkATC",
     "NetworkController",
     "RSAT-AD-Powershell",
