Commit 8b1a72c

Sync release-local-2506 with main

2 parents f661319 + ebe8043 commit 8b1a72c

File tree

182 files changed: +369 −201 lines changed

AKS-Arc/TOC.yml

Lines changed: 2 additions & 0 deletions

@@ -62,6 +62,8 @@
  href: create-clusters-terraform.md
- name: Azure Resource Manager template
  href: resource-manager-quickstart.md
+- name: REST API
+  href: aks-create-clusters-api.md
- name: Networking
  items:
  - name: Create logical networks

AKS-Arc/aks-create-clusters-api.md

Lines changed: 159 additions & 0 deletions
@@ -0,0 +1,159 @@

---
title: Create Kubernetes clusters using REST APIs
description: Learn how to create Kubernetes clusters in Azure Local using the REST API for the Hybrid Container Service.
ms.topic: how-to
author: rcheeran
ms.date: 06/19/2025
ms.author: rcheeran
ms.lastreviewed: 06/19/2025
ms.reviewer: rjaini
---

# Create Kubernetes clusters using the REST API

[!INCLUDE [hci-applies-to-23h2](includes/hci-applies-to-23h2.md)]

This article describes how to create Kubernetes clusters on Azure Local using the REST API. The Azure resource type for [AKS Arc provisioned clusters](/azure/templates/microsoft.hybridcontainerservice/provisionedclusterinstances?pivots=deployment-language-arm-template) is **Microsoft.HybridContainerService/provisionedClusterInstances**. This resource type is an extension of the [connected clusters](/azure/templates/microsoft.kubernetes/connectedclusters?pivots=deployment-language-arm-template) resource type, **Microsoft.Kubernetes/connectedClusters**. Because of this dependency, you must create a connected cluster resource before you can create an AKS Arc resource.

## Before you begin

Before you begin, make sure you have the following details from your on-premises infrastructure administrator:

- **Azure subscription ID**: The Azure subscription ID that Azure Local uses for deployment and registration.
- **Custom location ID**: The Azure Resource Manager ID of the custom location. The custom location is configured during the Azure Local cluster deployment, and your infrastructure admin should give you its Resource Manager ID. This parameter is required to create Kubernetes clusters. If the infrastructure admin provides a custom location name and resource group name instead, you can get the Resource Manager ID using the following command:

  ```azurecli
  az customlocation show --name "<custom location name>" --resource-group <azure resource group> --query "id" -o tsv
  ```

- **Network ID**: The Azure Resource Manager ID of the Azure Local logical network you created [following these steps](aks-networks.md). Your admin should give you the ID of the logical network. This parameter is required to create Kubernetes clusters. If you know the resource group in which the logical network was created, you can get the Resource Manager ID using the following command:

  ```azurecli
  az stack-hci-vm network lnet show --name "<lnet name>" --resource-group <azure resource group> --query "id" -o tsv
  ```

- **SSH key pair**: Create an SSH key pair in Azure and store the private key file for troubleshooting and log collection purposes. For detailed instructions, see [Create and store SSH keys with the Azure CLI](/azure/virtual-machines/ssh-keys-azure-cli), or with the [Azure portal](/azure/virtual-machines/ssh-keys-portal).
- **Microsoft Entra group**: To connect to the Kubernetes cluster from anywhere, create a Microsoft Entra group and add members to it. All members of the Microsoft Entra group have cluster administrator access to the cluster. Make sure to add yourself as a member; if you don't, you can't access the Kubernetes cluster using **kubectl**. For more information about creating Microsoft Entra groups and adding users, see [Manage Microsoft Entra groups and group membership](/entra/fundamentals/how-to-manage-groups).
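If it's easier to generate the key pair locally rather than in Azure, a plain `ssh-keygen` call is a workable sketch; the file name and comment below are placeholders of my choosing, and the contents of the `.pub` file are what you later pass as the `keyData` value:

```shell
# Generate a 4096-bit RSA key pair with an empty passphrase (placeholder path).
# Keep the private key (./aks-arc-ssh-key) for troubleshooting and log collection.
ssh-keygen -t rsa -b 4096 -f ./aks-arc-ssh-key -N "" -C "aks-arc-admin"

# The public key goes into the "keyData" field of the provisioned cluster request.
cat ./aks-arc-ssh-key.pub
```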
## Step 1: Create a connected cluster resource

See the API definition for [connected clusters](/rest/api/hybridkubernetes/connected-cluster/create) and create a **PUT** request with the `kind` property set to `ProvisionedCluster`. The following sample **PUT** request creates a connected cluster resource using the REST API:

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Kubernetes/connectedClusters/{connectedClusterName}?api-version=2024-01-01
Content-Type: application/json
Authorization: Bearer <access_token>

{
  "location": "<region>",
  "identity": {
    "type": "SystemAssigned"
  },
  "kind": "ProvisionedCluster",
  "properties": {
    "agentPublicKeyCertificate": "",
    "azureHybridBenefit": "NotApplicable",
    "distribution": "AKS",
    "distributionVersion": "1.0",
    "aadProfile": {
      "enableAzureRBAC": true,
      "adminGroupObjectIDs": [
        "<entra-group-id>"
      ],
      "tenantID": "<tenant-id>"
    }
  }
}
```

Replace all placeholder values with your actual details. For more information, see the [connected clusters API documentation](/rest/api/hybridkubernetes/connected-cluster/create).
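As a sketch of driving this call from a shell (assuming you're signed in with `az login`; the resource names and the `connected-cluster-body.json` file are placeholders of my choosing, not part of the API), you can fetch a bearer token with the Azure CLI and send the request with curl:

```shell
# Placeholder values -- substitute your own.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="my-resource-group"
CLUSTER_NAME="my-aks-arc-cluster"

# Write the request body shown above to a local file.
cat > connected-cluster-body.json <<'EOF'
{
  "location": "<region>",
  "identity": { "type": "SystemAssigned" },
  "kind": "ProvisionedCluster",
  "properties": {
    "agentPublicKeyCertificate": "",
    "azureHybridBenefit": "NotApplicable",
    "distribution": "AKS",
    "distributionVersion": "1.0",
    "aadProfile": {
      "enableAzureRBAC": true,
      "adminGroupObjectIDs": ["<entra-group-id>"],
      "tenantID": "<tenant-id>"
    }
  }
}
EOF

# Only call the API when a token is available (requires 'az login' first).
if TOKEN=$(az account get-access-token --query accessToken -o tsv 2>/dev/null); then
  curl -X PUT \
    "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Kubernetes/connectedClusters/${CLUSTER_NAME}?api-version=2024-01-01" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${TOKEN}" \
    -d @connected-cluster-body.json
fi
```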
## Step 2: Create a provisioned cluster resource

See the API definition for [provisioned clusters](/rest/api/hybridcontainer/provisioned-cluster-instances/create-or-update). In this **PUT** call, pass the Azure Resource Manager identifier created in the previous step as the URI parameter. The following example HTTP **PUT** request creates a provisioned cluster resource with only the required parameters:

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HybridContainerService/provisionedClusterInstances/{clusterName}?api-version=2024-01-01-preview
Content-Type: application/json
Authorization: Bearer <access_token>

{
  "extendedLocation": {
    "type": "CustomLocation",
    "name": "<ARM ID of Custom Location>"
  },
  "properties": {
    "controlPlane": {
      "count": 1,
      "vmSize": "Standard_A4_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "default-nodepool-1",
        "count": 1,
        "vmSize": "Standard_A4_v2",
        "osType": "Linux"
      }
    ],
    "linuxProfile": {
      "ssh": {
        "publicKeys": [
          {
            "keyData": "<SSH public key>"
          }
        ]
      }
    },
    "cloudProviderProfile": {
      "infraNetworkProfile": {
        "vnetSubnetIds": [
          "<ARM ID of logical network>"
        ]
      }
    }
  }
}
```

Replace the placeholder values with your actual details. For more information, see the [provisioned clusters API documentation](/rest/api/hybridcontainer/provisioned-cluster-instances/create-or-update).
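The same curl pattern works as a shell sketch for the provisioned cluster call (placeholder names, requires `az login`). Creation is asynchronous, so as an assumption based on the API docs, a follow-up GET can read `properties.provisioningState` back:

```shell
# Placeholder values -- substitute your own.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="my-resource-group"
CLUSTER_NAME="my-aks-arc-cluster"
URI="https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.HybridContainerService/provisionedClusterInstances/${CLUSTER_NAME}?api-version=2024-01-01-preview"

# Request body from the article, saved to a placeholder file.
cat > provisioned-cluster-body.json <<'EOF'
{
  "extendedLocation": {
    "type": "CustomLocation",
    "name": "<ARM ID of Custom Location>"
  },
  "properties": {
    "controlPlane": { "count": 1, "vmSize": "Standard_A4_v2" },
    "agentPoolProfiles": [
      { "name": "default-nodepool-1", "count": 1, "vmSize": "Standard_A4_v2", "osType": "Linux" }
    ],
    "linuxProfile": {
      "ssh": { "publicKeys": [ { "keyData": "<SSH public key>" } ] }
    },
    "cloudProviderProfile": {
      "infraNetworkProfile": { "vnetSubnetIds": [ "<ARM ID of logical network>" ] }
    }
  }
}
EOF

# Only call the API when a token is available (requires 'az login' first).
if TOKEN=$(az account get-access-token --query accessToken -o tsv 2>/dev/null); then
  curl -X PUT "$URI" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${TOKEN}" \
    -d @provisioned-cluster-body.json

  # Check provisioning state; repeat this GET until it reports "Succeeded".
  curl -s "$URI" -H "Authorization: Bearer ${TOKEN}" |
    python3 -c 'import json,sys; print(json.load(sys.stdin).get("properties", {}).get("provisioningState"))'
fi
```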
## Connect to the Kubernetes cluster

Now you can connect to your Kubernetes cluster by running the `az connectedk8s proxy` command from your development machine. Make sure you sign in to Azure before running this command. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command.

This command downloads the **kubeconfig** of your Kubernetes cluster to your development machine and opens a proxy connection channel to your on-premises Kubernetes cluster. The channel stays open for as long as the command runs, so let it run for as long as you want to access your cluster. If the command times out, close the CLI window, open a fresh one, and run the command again.

You must have Contributor permissions on the resource group that hosts the Kubernetes cluster to successfully run the following command:

```azurecli
az connectedk8s proxy --name $aksclustername --resource-group $resource_group --file .\aks-arc-kube-config
```

Expected output:

```output
Proxy is listening on port 47011
Merged "aks-workload" as current context in .\aks-arc-kube-config
Start sending kubectl requests on 'aks-workload' context using
kubeconfig at .\aks-arc-kube-config
Press Ctrl+C to close proxy.
```

Keep this session running and connect to your Kubernetes cluster from a different terminal or command prompt. Verify that you can connect to your Kubernetes cluster by running the `kubectl get` command. This command returns a list of the cluster nodes:

```azurecli
kubectl get node -A --kubeconfig .\aks-arc-kube-config
```

The following example output shows the node you created in the previous steps. Make sure the node status is **Ready**:

```output
NAME              STATUS   ROLES                  AGE   VERSION
moc-l0ttdmaioew   Ready    control-plane,master   34m   v1.24.11
moc-ls38tngowsl   Ready    <none>                 32m   v1.24.11
```
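To script the two-terminal flow above in one shell session, you can run the proxy in the background and point kubectl at the downloaded kubeconfig. This is a sketch under the assumption that `$aksclustername` and `$resource_group` are already set and you're signed in with `az login`; it does nothing when they aren't:

```shell
if [ -n "${aksclustername:-}" ] && [ -n "${resource_group:-}" ]; then
  # Start the proxy in the background; it writes the kubeconfig and keeps the channel open.
  az connectedk8s proxy --name "$aksclustername" --resource-group "$resource_group" \
    --file ./aks-arc-kube-config &
  PROXY_PID=$!

  # Give the proxy time to establish the channel, then list the nodes.
  sleep 30
  kubectl get node -A --kubeconfig ./aks-arc-kube-config

  # Close the proxy channel when finished.
  kill "$PROXY_PID"
fi
```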
## Next steps

[AKS Arc overview](overview.md)

azure-local/known-issues.md

Lines changed: 5 additions & 4 deletions

@@ -71,6 +71,7 @@ The following table lists the known issues in this release:
|Feature |Issue |Workaround |
|---------|---------|---------|
| Update <!--54889342--> | A critical VM operational status not OK alert is shown in the Azure portal under **Update readiness** and in the **Alerts** pane after the update has completed successfully. Additionally, the alert appears when running the `Get-HealthFault` cmdlet. | No action is required on your part. This alert will resolve automatically in a few days. |
+| Deployment <!-- 33153622-->| Updating Azure Arc extensions manually from the Azure Local machine page via the Azure portal results in issues during deployment. The extensions that shouldn't be updated manually are `AzureEdgeDeviceManagement`, `AzureEdgeLifecycleManager`, and `AzureEdgeAKVBackupForWindows`. | Installing extensions manually from the Azure Local machine page isn't supported. |

## Known issues from previous releases

@@ -221,7 +222,6 @@ The following table lists the fixed issues in this release:
| Upgrade <!--30251075--> | Added a check to validate enough free memory to start an Azure Arc resource bridge VM. | |
| Security <!--XXXX--> | Mitigation for security vulnerability CVE-2024-21302 was implemented. See the [Guidance for blocking rollback of Virtualization-based Security (VBS) related security updates](https://support.microsoft.com/topic/guidance-for-blocking-rollback-of-virtualization-based-security-vbs-related-security-updates-b2e7ebf4-f64d-4884-a390-38d63171b8d3) | |
| Deployment | During Azure Local deployment via portal, **Validate selected machines** fails with this error message: `Mandatory extension [Lcm controller] installed version [30.2503.0.907] is not equal to the required version [30.2411.2.789] for Arc machine [Name of the machine]. Please create EdgeDevice resource again for this machine to fix the issue.` | Reinstall the correct version of `AzureEdgeLifecycleManager` extension. Follow these steps: <br> 1. Select the machine and then select **Install extensions**. <br> <br>![Screenshot of extension installation on Azure Local machines.](media/known-issues/select-machine-2.png)<br> <br> 2. Repeat this step for each machine you intend to cluster. It takes roughly 15 minutes for the installation to complete. <br> 3. Verify that the `AzureEdgeLifecycleManager` extension version is 30.2411.2.789. <br><br> ![Screenshot of extension version installed on Azure Local machines that can be validated.](media/known-issues/select-machine-1.png) <br><br> 4. After the extensions are installed on all the machines in the list, select **Add machines** to refresh the list. <br> 5. Select **Validate selected machines**. The validation should succeed. |
-| Deployment <!--31699269--> | During the Azure Local deployment and update on OEM-licensed devices, `ConfigureSecurityBaseline` fails at the **Apply security settings on servers** step. | This issue is now fixed. |

## Known issues in this release

@@ -288,7 +288,6 @@ The following table lists the known issues from previous releases:

|Feature |Issue |Workaround |
|---------|---------|---------|
-| Deployment <!--31699269-->| This issue affects deployment and update on OEM-licensed devices. During deployment, you might see this error at **Apply security settings on servers**: <br></br>`Type 'ConfigureSecurityBaseline' of Role 'AzureStackOSConfig' raised an exception: [ConfigureSecurityBaseline] ConfigureSecurityBaseline failed on <server name> with exception: -> Failed to apply OSConfiguration enforcement for ASHCIApplianceSecurityBaselineConfig on <server name>`. | If you haven’t started the update, see [Azure Local OEM license devices](https://github.com/Azure/AzureLocal-Supportability/blob/main/TSG/Security/TSG-Azure-Local-HCI-OEM-license-devices.md) to apply the preventive steps before updating to Azure Local 2411.3. <br></br> If you’ve encountered the issue, use the same instructions to validate and apply the mitigation. |
| Azure Local VM management | The Mochostagent service might appear to be running but can get stuck without updating logs for over a month. You can identify this issue by checking the service logs in `C:\programdata\mochostagent\logs` to see if logs are being updated. | Run the following command to restart the mochostagent service: `restart-service mochostagent`. |
| Update | When viewing the readiness check results for an Azure Local instance via the Azure Update Manager, there might be multiple readiness checks with the same name. |There's no known workaround in this release. Select **View details** to view specific information about the readiness check. |
| Update | There's an intermittent issue in this release when the Azure portal incorrectly reports the update status as **Failed to update** or **In progress** though the update is complete. |[Connect to your Azure Local instance](./update/update-via-powershell-23h2.md#connect-to-your-azure-local) via a remote PowerShell session. To confirm the update status, run the following PowerShell cmdlets: <br><br> `$Update = get-solutionupdate`\| `? version -eq "<version string>"`<br><br>Replace the version string with the version you're running. For example, "10.2405.0.23". <br><br>`$Update.state`<br><br>If the update status is **Installed**, no further action is required on your part. Azure portal refreshes the status correctly within 24 hours. <br> To refresh the status sooner, follow these steps on one of the nodes. <br>Restart the Cloud Management cluster group.<br>`Stop-ClusterGroup "Cloud Management"`<br>`Start-ClusterGroup "Cloud Management"`|
@@ -330,11 +329,13 @@ The following issues are fixed in this release:

## Known issues in this release

-The following table lists the known issues in this release:
+There's no known issue in this release. Any previously known issues have been fixed in subsequent releases.
+
+<!--The following table lists the known issues in this release:

|Feature |Issue |Workaround |
|---------|---------|---------|
-| Deployment <!--31699269-->| This issue affects deployment and update on OEM-licensed devices. During deployment, you might see this error at **Apply security settings on servers**: <br></br>`Type 'ConfigureSecurityBaseline' of Role 'AzureStackOSConfig' raised an exception: [ConfigureSecurityBaseline] ConfigureSecurityBaseline failed on <server name> with exception: -> Failed to apply OSConfiguration enforcement for ASHCIApplianceSecurityBaselineConfig on <server name>`. | If you haven’t started the update, see [Azure Local OEM license devices](https://github.com/Azure/AzureLocal-Supportability/blob/main/TSG/Security/TSG-Azure-Local-HCI-OEM-license-devices.md) to apply the preventive steps before updating to Azure Local 2411.3. <br></br> If you’ve encountered the issue, use the same instructions to validate and apply the mitigation. |
+| Deployment <!--31699269--| This issue affects deployment and update on OEM-licensed devices. During deployment, you might see this error at **Apply security settings on servers**: <br></br>`Type 'ConfigureSecurityBaseline' of Role 'AzureStackOSConfig' raised an exception: [ConfigureSecurityBaseline] ConfigureSecurityBaseline failed on <server name> with exception: -> Failed to apply OSConfiguration enforcement for ASHCIApplianceSecurityBaselineConfig on <server name>`. | If you haven’t started the update, see [Azure Local OEM license devices](https://github.com/Azure/AzureLocal-Supportability/blob/main/TSG/Security/TSG-Azure-Local-HCI-OEM-license-devices.md) to apply the preventive steps before updating to Azure Local 2411.3. <br></br> If you’ve encountered the issue, use the same instructions to validate and apply the mitigation. |-->


## Known issues from previous releases

azure-local/manage/arc-extension-management.md

Lines changed: 10 additions & 3 deletions

@@ -6,7 +6,7 @@ ms.author: robess
ms.topic: how-to
ms.custom: devx-track-azurecli, devx-track-azurepowershell
ms.reviewer: arduppal
-ms.date: 10/22/2024
+ms.date: 06/12/2025
---

# Azure Arc extension management on Azure Local

@@ -385,9 +385,9 @@ New-AzStackHciExtension `

### Manual extension upgrade via the Azure portal

-The manual extension upgrade works like the [Automatic extension upgrade](/azure/azure-arc/servers/manage-automatic-vm-extension-upgrade?tabs=azure-portal#how-does-automatic-extension-upgrade-work). On an Azure Local Arc-enabled cluster, when you manually upgrade an extension, Azure saves the version you've selected. Azure then attempts to upgrade the extension on all nodes in the cluster to that version.
+The manual extension upgrade works like the [Automatic extension upgrade](/azure/azure-arc/servers/manage-automatic-vm-extension-upgrade?tabs=azure-portal#how-does-automatic-extension-upgrade-work). On an Azure Local Arc-enabled cluster, when you manually upgrade an extension, Azure saves the version you've selected. Azure then attempts to upgrade the extension on all nodes in the cluster to that version. Make sure that [extensions are supported for manual upgrade](#extensions-not-supported-for-manual-upgrade).

-On some servers, if the extension upgrade fails the platform attempts to upgrade to the selected version during the next [Azure Local cloud sync](../faq.yml).
+On some servers, if the extension upgrade fails, the platform attempts to upgrade to the selected version during the next [Azure Local cloud sync](../faq.yml).

Use the manual workflow in these scenarios:

@@ -404,6 +404,13 @@ To manually upgrade an extension, follow these steps:

3. Choose the latest version and select **Save**.

+#### Extensions not supported for manual upgrade
+
+Updating Azure Arc extensions manually from the Azure Local machine page via the Azure portal may result in issues during deployment. The extensions that shouldn't be updated manually are `AzureEdgeDeviceManagement`, `AzureEdgeLifecycleManager`, and `AzureEdgeAKVBackupForWindows`, as shown in the figure.
+
+:::image type="content" source="media/arc-extension-management/arc-extension-installation.png" alt-text="Screenshot of extensions that shouldn't be manually updated." lightbox="media/arc-extension-management/arc-extension-installation.png":::
+
### Disable automatic extension upgrade

You can disable automatic upgrades for certain extensions in the Azure portal. To disable automatic upgrades, navigate to the **Extensions** page and perform these steps:

azure-stack/operator/azure-stack-mysql-resource-provider-databases.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
title: Create MySQL databases in Azure Stack Hub
description: Learn how to create and manage MySQL databases provisioned using the MySQL Adapter Resource Provider in Azure Stack Hub.
author: sethmanheim
-ms.topic: article
+ms.topic: how-to
ms.date: 1/17/2025
ms.author: sethm
ms.lastreviewed: 10/16/2019

azure-stack/operator/azure-stack-mysql-resource-provider-deploy.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
title: Deploy MySQL resource provider on Azure Stack Hub
description: Learn how to deploy the MySQL resource provider adapter and MySQL databases as a service on Azure Stack Hub.
author: sethmanheim
-ms.topic: article
+ms.topic: install-set-up-deploy
ms.date: 09/02/2021
ms.author: sethm
ms.reviewer: jiadu

azure-stack/operator/azure-stack-mysql-resource-provider-hosting-servers.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
title: Add MySQL hosting servers in Azure Stack Hub
description: Learn how to add MySQL hosting servers for provisioning through the MySQL Adapter Resource Provider.
author: sethmanheim
-ms.topic: article
+ms.topic: how-to
ms.date: 08/23/2022
ms.author: sethm
ms.reviewer: xiaofmao

azure-stack/operator/azure-stack-mysql-resource-provider-remove.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
title: Remove the MySQL resource provider in Azure Stack Hub
description: Learn how to remove the MySQL resource provider from your Azure Stack Hub deployment.
author: sethmanheim
-ms.topic: article
+ms.topic: how-to
ms.date: 01/17/2025
ms.author: sethm
ms.lastreviewed: 09/26/2021
