azure-local/concepts/billing.md (+9 -1)
@@ -5,7 +5,7 @@ author: alkohli
ms.author: alkohli
ms.topic: conceptual
ms.service: azure-local
-ms.date: 01/28/2025
+ms.date: 04/29/2025
---
# Azure Local billing and payment
@@ -37,6 +37,14 @@ To determine how many cores are present in a system, Azure Local periodically ch
To manually upload core data to Azure, use the **`Sync-AzureStackHCI`** cmdlet.
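As an aside (not part of the file change above), here's a minimal PowerShell sketch of triggering that manual sync. It assumes an elevated session on one of the Azure Local machines, where the built-in AzureStackHCI module is available:

```powershell
# Minimal sketch: push core/usage data to Azure on demand instead of waiting
# for the periodic sync. Assumes an elevated session on an Azure Local machine.
Import-Module AzureStackHCI
Sync-AzureStackHCI

# Optionally confirm registration and connection status afterwards.
Get-AzureStackHCI
```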

+## Billing changes for 12.2504.1001.20 and later
+
+For deployments running solution version 12.2504.1001.20 and later, the usage record originates directly from the Azure Local resource in Azure.
+
+If you shut down or decommission your system without deleting the Azure Local resource in Azure, billing continues until that resource has been in a disconnected state for more than 31 days.
+
+To avoid unexpected charges, you must delete the Azure Local resource in Azure when you decide to decommission the cluster. If you need to deploy the service again, you must go through the Azure Local deployment process. For more information, see [Deploy Azure Local](../deploy/deploy-via-portal.md).
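As an illustration (not part of the file change above), a hedged sketch of removing the Azure Local resource with Az PowerShell. The resource group and resource names are placeholders, and the generic `Remove-AzResource` cmdlet is used rather than any service-specific cmdlet:

```powershell
# Hedged sketch: delete the Azure Local cluster resource so usage billing stops.
# Names below are placeholders; verify you have the right resource before deleting it.
Connect-AzAccount
$cluster = Get-AzResource -ResourceGroupName "rg-azurelocal" `
    -ResourceType "Microsoft.AzureStackHCI/clusters" -Name "my-azurelocal-instance"
Remove-AzResource -ResourceId $cluster.ResourceId -Force
```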
## FAQ
- If I already have an Azure subscription, can I use it for Azure Local? **Yes**
> Azure Local, version 23H2 (releases 2311.2 to 2509) will reach end of support on October 31, 2025. After this date, you won't receive monthly security and quality updates. Support requests will only be available for patching to a supported release.
azure-local/includes/hci-vm-prerequisites.md (+1 -1)
@@ -6,7 +6,7 @@ ms.topic: include
ms.date: 04/10/2025
---
-- Access to an Azure subscription with the appropriate RBAC role and permissions assigned. For more information, see [RBAC roles for Azure Local VM management](../manage/assign-vm-rbac-roles.md#about-builtin-rbac-roles).
+- Access to an Azure subscription with the appropriate RBAC role and permissions assigned. For more information, see [RBAC roles for Azure Local VM management](../manage/assign-vm-rbac-roles.md#about-built-in-rbac-roles).
- Access to a resource group where you want to provision the VM.
- Access to one or more VM images on your Azure Local. These VM images could be created by one of the following procedures:
  - [VM image starting from an image in Azure Marketplace](../manage/virtual-machine-image-azure-marketplace.md).
azure-local/known-issues.md (+13 -5)
@@ -3,7 +3,7 @@ title: Release notes with fixed and known issues in Azure Local
description: Read about the known issues and fixed issues in Azure Local.
author: alkohli
ms.topic: conceptual
-ms.date: 04/28/2025
+ms.date: 04/29/2025
ms.author: alkohli
ms.reviewer: alkohli
---
@@ -23,10 +23,16 @@ For more information about new features in this release, see [What's new for Azu
## Known issues for version 2504

-This software release maps to software version number **2504.1001.19**.
+For the 2504 release of Azure Local, Microsoft released two security updates: one for existing deployments and another for new deployments. The following table provides information about different deployment types, their corresponding security updates, and OS builds:
+
+| Deployment type | Security update | OS build |
+|---------|---------|---------|
+| New deployments | 12.2504.1001.20 | 26100.3775 |
> [!IMPORTANT]
-> The new deployments of this software use the **2504.1001.19** build. You can also update from 2503.
+> The new deployments of this software use the **12.2504.1001.20** build. You can also update an existing deployment from 2503 using **11.2504.1001.19**.
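As a side note (not part of the file change above), a sketch of checking which solution version an existing deployment currently runs before updating. It assumes a PowerShell session on an Azure Local node where the update cmdlet referenced elsewhere in these notes (`Get-SolutionUpdate`) and the companion `Get-SolutionUpdateEnvironment` are available:

```powershell
# Sketch: check the installed solution version and any available updates from a node.
Get-SolutionUpdateEnvironment | Select-Object State, CurrentVersion
Get-SolutionUpdate | Select-Object Version, State
```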
Release notes for this version include the issues fixed in this release, known issues in this release, and release note issues carried over from previous versions.
@@ -56,7 +62,9 @@ The following table lists the known issues in this release:
|Feature |Issue |Workaround |
|---------|---------|---------|
-| Add server <br> Repair server <!--32447442-->| The `Add-server` and `Repair-server` cmdlets fail with the error: <br> `Cluster Build ID matches node to add's Build ID`. | Use the OS image of the same solution version as that running on the existing cluster. To get the OS image, contact Microsoft Support or OEM Support. |
+| Add server <br> Repair server <!--32447442-->| The `Add-server` and `Repair-server` cmdlets fail with the error: <br> `Cluster Build ID matches node to add's Build ID`. | Use the OS image of the same solution version as that running on the existing cluster. To get the OS image, identify and download the image version from this [Release table](https://github.com/Azure-Samples/AzureLocal/blob/main/os-image/os-image-tracking-table.md). |
+| Add server <br> Repair server <!--32483959-->| If you run the `Add-server` or `Repair-server` cmdlet with a customized storage adapter IP configured in your Azure Local instance, the operation might fail with the error: <br> `Type 'ConfigureAzureStackHostStorageAdpaterIPAddressesInAddRepairNode' of Role 'HostNetwork' raised an exception: Connecting to remote server <MACHINE> failed with the following error message : Access is denied.` | Contact Microsoft Support if you experience this issue. |
+| Security management <!--57340784-->| The **Security defaults**, **Application control**, and **Data protections** pages show *Unknown* for all security settings. | This issue affects only the security compliance report; the states of the security settings themselves are unaffected. Use PowerShell to verify the compliance status of the security settings. <br><br>For more information, see [Manage secure baseline via PowerShell cmdlets](./manage/manage-secure-baseline.md#powershell-cmdlet-properties). |
## Known issues from previous releases
@@ -233,7 +241,7 @@ The following table lists the known issues from previous releases:
| Azure Local VM management | The Mochostagent service might appear to be running but can get stuck without updating logs for over a month. You can identify this issue by checking the service logs in `C:\programdata\mochostagent\logs` to see if logs are being updated. | Run the following command to restart the mochostagent service: `restart-service mochostagent`. |
| Update | When viewing the readiness check results for an Azure Local instance via the Azure Update Manager, there might be multiple readiness checks with the same name. |There's no known workaround in this release. Select **View details** to view specific information about the readiness check. |
| Update | There's an intermittent issue in this release when the Azure portal incorrectly reports the update status as **Failed to update** or **In progress** though the update is complete. |[Connect to your Azure Local instance](./update/update-via-powershell-23h2.md#connect-to-your-azure-local) via a remote PowerShell session. To confirm the update status, run the following PowerShell cmdlets: <br><br> `$Update = get-solutionupdate`\|`? version -eq "<version string>"`<br><br>Replace the version string with the version you're running. For example, "10.2405.0.23". <br><br>`$Update.state`<br><br>If the update status is **Installed**, no further action is required on your part. Azure portal refreshes the status correctly within 24 hours. <br> To refresh the status sooner, follow these steps on one of the nodes. <br>Restart the Cloud Management cluster group.<br>`Stop-ClusterGroup "Cloud Management"`<br>`Start-ClusterGroup "Cloud Management"`|
-| Update <!--28299865--> |During an initial MOC update, a failure occurs due to the target MOC version not being found in the catalog cache. The follow-up updates and retries show MOC in the target version, without the update succeeding, and as a result the Azure Arc rsource bridge update fails.<br><br>To validate this issue, collect the update logs using [Troubleshoot solution updates for Azure Local](./update/update-troubleshooting-23h2.md#collect-update-logs). The log files should show a similar error message (current version might differ in the error message):<br><br>`[ERROR: { "errorCode": "InvalidEntityError", "errorResponse": "{\n\"message\": \"the cloud fabric (MOC) is currently at version v0.13.1. A minimum version of 0.15.0 is required for compatibility\"\n}" }]`|Follow these steps to mitigate the issue:<br><br>1. To find the MOC agent version, run the following command: `'C:\Program Files\AksHci\wssdcloudagent.exe' version`.<br><br>2. Use the output of the command to find the MOC version from the table below that matches the agent version, and set `$initialMocVersion` to that MOC version. Set the `$targetMocVersion` by finding the Azure Local build you're updating to and get the matching MOC version from the following table. Use these values in the mitigation script provided below:<br><br><table><tr><td><b>Build</b></td><td><b>MOC version</b></td><td><b>Agent version</b></td></tr><tr><td>2311.2</td><td>1.0.24.10106</td><td>v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024</td></tr><tr><td>2402</td><td>1.0.25.10203</td><td>v0.14.0, v0.13.1, 02/02/2024</td></tr><tr><td>2402.1</td><td>1.0.25.10302</td><td>v0.14.0, v0.13.1, 03/02/2024</td></tr><tr><td>2402.2</td><td>1.1.1.10314</td><td>v0.16.0-1-g04bf0dec, v0.15.1, 03/14/2024</td></tr><tr><td>2405/2402.3</td><td>1.3.0.10418</td><td>v0.17.1, v0.16.5, 04/18/2024</td></tr></table><br><br>For example, if the agent version is v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024, then `$initialMocVersion = "1.0.24.10106"` and if you're updating to 2405.0.23, then `$targetMocVersion = "1.3.0.10418"`.<br><br>3. Run the following PowerShell commands on the first node:<br><br>`$initialMocVersion = "<initial version determined from step 2>"`<br>`$targetMocVersion = "<target version determined from step 2>"`<br><br># Import MOC module twice<br>`import-module moc`<br>`import-module moc`<br>`$verbosePreference = "Continue"`<br><br># Clear the SFS catalog cache<br>`Remove-Item (Get-MocConfig).manifestCache`<br><br># Set version to the current MOC version prior to update, and set state as update failed<br>`Set-MocConfigValue -name "version" -value $initialMocVersion`<br>`Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)`<br><br># Rerun the MOC update to desired version<br>`Update-Moc -version $targetMocVersion`<br><br>4. Resume the update. |
+| Update <!--28299865--> |During an initial MOC update, a failure occurs due to the target MOC version not being found in the catalog cache. The follow-up updates and retries show MOC in the target version, without the update succeeding, and as a result the Azure Arc resource bridge update fails.<br><br>To validate this issue, collect the update logs using [Troubleshoot solution updates for Azure Local](./update/update-troubleshooting-23h2.md#collect-update-logs). The log files should show a similar error message (current version might differ in the error message):<br><br>`[ERROR: { "errorCode": "InvalidEntityError", "errorResponse": "{\n\"message\": \"the cloud fabric (MOC) is currently at version v0.13.1. A minimum version of 0.15.0 is required for compatibility\"\n}" }]`|Follow these steps to mitigate the issue:<br><br>1. To find the MOC agent version, run the following command: `'C:\Program Files\AksHci\wssdcloudagent.exe' version`.<br><br>2. Use the output of the command to find the MOC version from the table below that matches the agent version, and set `$initialMocVersion` to that MOC version. Set the `$targetMocVersion` by finding the Azure Local build you're updating to and get the matching MOC version from the following table. Use these values in the mitigation script provided below:<br><br><table><tr><td><b>Build</b></td><td><b>MOC version</b></td><td><b>Agent version</b></td></tr><tr><td>2311.2</td><td>1.0.24.10106</td><td>v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024</td></tr><tr><td>2402</td><td>1.0.25.10203</td><td>v0.14.0, v0.13.1, 02/02/2024</td></tr><tr><td>2402.1</td><td>1.0.25.10302</td><td>v0.14.0, v0.13.1, 03/02/2024</td></tr><tr><td>2402.2</td><td>1.1.1.10314</td><td>v0.16.0-1-g04bf0dec, v0.15.1, 03/14/2024</td></tr><tr><td>2405/2402.3</td><td>1.3.0.10418</td><td>v0.17.1, v0.16.5, 04/18/2024</td></tr></table><br><br>For example, if the agent version is v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024, then `$initialMocVersion = "1.0.24.10106"` and if you're updating to 2405.0.23, then `$targetMocVersion = "1.3.0.10418"`.<br><br>3. Run the following PowerShell commands on the first node:<br><br>`$initialMocVersion = "<initial version determined from step 2>"`<br>`$targetMocVersion = "<target version determined from step 2>"`<br><br># Import MOC module twice<br>`import-module moc`<br>`import-module moc`<br>`$verbosePreference = "Continue"`<br><br># Clear the SFS catalog cache<br>`Remove-Item (Get-MocConfig).manifestCache`<br><br># Set version to the current MOC version prior to update, and set state as update failed<br>`Set-MocConfigValue -name "version" -value $initialMocVersion`<br>`Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)`<br><br># Rerun the MOC update to desired version<br>`Update-Moc -version $targetMocVersion`<br><br>4. Resume the update. |
| Deployment<!--27312671--> | In some instances, during the registration of Azure Local machines, this error might be seen in the debug logs: *Encountered internal server error*. One of the mandatory extensions for device deployment might not be installed. |Follow these steps to mitigate the issue: <br><br> `$Settings = @{ "CloudName" = $Cloud; "RegionName" = $Region; "DeviceType" = "AzureEdge" }` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeTelemetryAndDiagnostics" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -Settings $Settings -ExtensionType "TelemetryAndDiagnostics" -EnableAutomaticUpgrade` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeDeviceManagement" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.Edge" -ExtensionType "DeviceManagementExtension"`<br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeLifecycleManager" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Orchestration" -ExtensionType "LcmController"` <br><br>`New-AzConnectedMachineExtension -Name "AzureEdgeRemoteSupport" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -ExtensionType "EdgeRemoteSupport" -EnableAutomaticUpgrade` |
| AKS on Azure Local |AKS cluster creation fails with the `Error: Invalid AKS network resource id`. This issue can occur when the associated logical network name has an underscore. |Underscores aren't supported in logical network names. Make sure to not use underscore in the names for logical networks deployed on your Azure Local. |
| Deployment <!--26852606-->|In this release, there's a remote task failure on a multi-node deployment that results in the following exception:<br>```ECE RemoteTask orchestration failure with ASRR1N42R01U31 (node pingable - True): A WebException occurred while sending a RestRequest. WebException.Status: ConnectFailure on [https://<URL>](https://<URL>).```|The mitigation is to restart the ECE agent on the affected node. On your machine, open a PowerShell session and run the following command:<br>```Restart-Service ECEAgent```. |
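As an aside for the Mochostagent row above (not part of the file change), a small PowerShell sketch of the suggested check and restart. It inspects the newest log file under the documented path and restarts the service if nothing has been written for roughly a month:

```powershell
# Sketch: detect a stuck mochostagent by log staleness, then restart the service.
$latest = Get-ChildItem 'C:\programdata\mochostagent\logs' -File |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
if (-not $latest -or $latest.LastWriteTime -lt (Get-Date).AddDays(-30)) {
    Restart-Service mochostagent
}
```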