
Commit b1d13d3

Merge pull request #16620 from MicrosoftDocs/main
12/17/2024 PM Publish
2 parents: 368950d + abfa00c

22 files changed, +253 -43 lines changed

AKS-Hybrid/TOC.yml

Lines changed: 5 additions & 5 deletions
@@ -167,13 +167,13 @@
     - name: Resources
       items:
       - name: Azure Local
-        href: /azure-stack/hci/index
-      - name: Azure hybrid cloud
-        href: /hybrid
+        href: /azure-local/index
+      - name: Azure Adaptive Cloud
+        href: /azure/adaptive-cloud
       - name: Azure Arc Jumpstart
         href: https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s/aks_stack_hci/
-      - name: Azure roadmap
-        href: https://azure.microsoft.com/roadmap/
+      - name: AKS enabled by Arc roadmap
+        href: https://github.com/Azure/aksArc/projects
       - name: AKS Edge Essentials
         items:
         - name: Overview
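
A quick way to spot-check that the renamed entries point at live targets is to request each new href. This is only a sketch: the learn.microsoft.com prefix assumed for the site-relative paths is not stated in the diff and may need adjusting.

# Spot-check the new TOC link targets from this diff (sketch; the
# learn.microsoft.com prefix for site-relative hrefs is an assumption).
$targets = @(
    "https://learn.microsoft.com/azure-local/index",
    "https://learn.microsoft.com/azure/adaptive-cloud",
    "https://azurearcjumpstart.com/azure_arc_jumpstart/azure_arc_k8s/aks_stack_hci/",
    "https://github.com/Azure/aksArc/projects"
)
foreach ($url in $targets) {
    try {
        $status = (Invoke-WebRequest -Uri $url -Method Head -UseBasicParsing).StatusCode
        Write-Output "$url -> $status"
    }
    catch {
        Write-Output "$url -> failed: $($_.Exception.Message)"
    }
}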

AKS-Hybrid/breadcrumb/toc.yml

Lines changed: 4 additions & 4 deletions
@@ -1,12 +1,12 @@
-- name: Azure hybrid and multicloud documentation
+- name: Adaptive Cloud documentation
   tocHref: /azure/adaptive-cloud/
   topicHref: /azure/adaptive-cloud/index
   items:
   - name: AKS enabled by Azure Arc
-    tocHref: /azure/aks/hybrid/
-    topicHref: /azure/aks/hybrid/index
+    tocHref: /azure/aks/aksarc/
+    topicHref: /azure/aks/aksarc/index
 ### items:
 ### For contextual TOC
   - name: AKS enabled by Azure Arc
     tocHref: /azure/azure-arc/kubernetes/
-    topicHref: /azure/aks/hybrid/index
+    topicHref: /azure/aks/aksarc/index
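
To confirm that every breadcrumb href now uses the /azure/aks/aksarc/ path, a plain text scan of the edited file is enough; the relative path below assumes the command runs from the repository root.

# List tocHref/topicHref values in the updated breadcrumb file so the
# /azure/aks/aksarc/ paths are easy to eyeball (path assumed from repo root).
Select-String -Path .\AKS-Hybrid\breadcrumb\toc.yml -Pattern 'tocHref:|topicHref:' |
    ForEach-Object { $_.Line.Trim() }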

AKS-Hybrid/docfx.json

Lines changed: 4 additions & 4 deletions
@@ -53,7 +53,7 @@
     "feedback_help_link_url": "https://github.com/Azure/aksArc/issues",
     "feedback_help_link_type": "get-product-support",
     "feedback_product_url": "https://feedback.azure.com/d365community/forum/a2f8da29-b853-ec11-8f8e-0022481f2fe7",
-    "breadcrumb_path": "/azure/aks/hybrid/breadcrumb/toc.json",
+    "breadcrumb_path": "/azure/aks/aksarc/breadcrumb/toc.json",
     "extendBreadcrumb": false,
     "manager":"femila",
     "ms.service": "azure-stack",
@@ -66,8 +66,8 @@
       "./**/*.md": "azure-kubernetes-service-hybrid"
     },
     "titleSuffix": {
-      "**/*.md": "AKS hybrid",
-      "aks-hci/**/*.md": "AKS hybrid"
+      "**/*.md": "AKS enabled by Azure Arc",
+      "azure-local/**/*.md": "AKS enabled by Azure Arc"
     },
     "feedback_product_url": {
       "aks-hci/**/*.md": "https://feedback.azure.com/d365community/forum/a2f8da29-b853-ec11-8f8e-0022481f2fe7"
@@ -76,6 +76,6 @@
     }
   },
   "template": [],
-  "dest": "AKS-Hybrid"
+  "dest": "AKS-Arc"
 }
}
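
Three separate properties change in this file (breadcrumb_path, titleSuffix, and dest), so reading them back after the edit is a reasonable check. The nesting used below (build, globalMetadata, fileMetadata) is an assumption about the full docfx.json layout, not something this diff shows.

# Read back the values changed in this diff (sketch; property nesting is assumed).
$docfx = Get-Content -Raw .\AKS-Hybrid\docfx.json | ConvertFrom-Json
$docfx.build.dest                               # expected: AKS-Arc
$docfx.build.globalMetadata.breadcrumb_path     # expected: /azure/aks/aksarc/breadcrumb/toc.json
$docfx.build.fileMetadata.titleSuffix           # expected: "AKS enabled by Azure Arc" mappings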

azure-local/TOC.yml

Lines changed: 6 additions & 2 deletions
@@ -17,7 +17,9 @@ items:
       items:
       - name: Known issues 2411
         items:
-        - name: 2411 - Current
+        - name: 2411.1 - Current
+          href: known-issues-2411-1.md
+        - name: 2411
           href: known-issues-2411.md
       - name: Known issues 2408
         items:
@@ -65,7 +67,9 @@ items:
           href: known-issues-2311.md
       - name: Security updates
         items:
-        - name: November 2024 - Current
+        - name: December 2024 - Current
+          href: security-update/security-update-dec-2024.md
+        - name: November 2024
           href: security-update/security-update-nov-2024.md
         - name: October 2024
           href: security-update/security-update-oct-2024.md
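
The two new entries reference topics that must ship alongside the TOC; a minimal existence check, with paths assumed relative to the repository root, could look like this.

# Verify the topics referenced by the new TOC entries are present
# (sketch; paths assume the azure-local folder sits at the repo root).
$newTopics = @(
    "azure-local/known-issues-2411-1.md",
    "azure-local/security-update/security-update-dec-2024.md"
)
foreach ($topic in $newTopics) {
    if (Test-Path $topic) { "OK:      $topic" } else { "MISSING: $topic" }
}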

azure-local/concepts/sdn-multisite-overview.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.topic: conceptual
 author: alkohli
 ms.subservice:
 zone_pivot_groups: windows-os
-ms.date: 11/25/2024
+ms.date: 12/17/2024
 ---

 # What is SDN Multisite?

azure-local/known-issues-2408-1.md

Lines changed: 0 additions & 5 deletions
@@ -69,11 +69,6 @@ The following table lists the known issues from previous releases:

Unchanged rows (old lines 69-71):
| Deployment<!--27312671--> | In some instances, during the registration of Azure Local machines, this error might be seen in the debug logs: *Encountered internal server error*. One of the mandatory extensions for device deployment might not be installed. |Follow these steps to mitigate the issue: <br><br> `$Settings = @{ "CloudName" = $Cloud; "RegionName" = $Region; "DeviceType" = "AzureEdge" }` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeTelemetryAndDiagnostics" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -Settings $Settings -ExtensionType "TelemetryAndDiagnostics" -EnableAutomaticUpgrade` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeDeviceManagement" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.Edge" -ExtensionType "DeviceManagementExtension"`<br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeLifecycleManager" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Orchestration" -ExtensionType "LcmController"` <br><br>`New-AzConnectedMachineExtension -Name "AzureEdgeRemoteSupport" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -ExtensionType "EdgeRemoteSupport" -EnableAutomaticUpgrade` |
| Update | There's an intermittent issue in this release when the Azure portal incorrectly reports the update status as **Failed to update** or **In progress** though the update is complete. |[Connect to your Azure Local instance](./update/update-via-powershell-23h2.md#connect-to-your-azure-local) via a remote PowerShell session. To confirm the update status, run the following PowerShell cmdlets: <br><br> `$Update = get-solutionupdate`\| `? version -eq "<version string>"`<br><br>Replace the version string with the version you're running. For example, "10.2405.0.23". <br><br>`$Update.state`<br><br>If the update status is **Installed**, no further action is required on your part. Azure portal refreshes the status correctly within 24 hours. <br> To refresh the status sooner, follow these steps on one of the cluster nodes. <br>Restart the Cloud Management cluster group.<br>`Stop-ClusterGroup "Cloud Management"`<br>`Start-ClusterGroup "Cloud Management"`|
| Update <!--28299865--> |During an initial MOC update, a failure occurs due to the target MOC version not being found in the catalog cache. The follow-up updates and retries show MOC in the target version, without the update succeeding, and as a result the Arc Resource Bridge update fails.<br><br>To validate this issue, collect the update logs using [Troubleshoot solution updates for Azure Local, version 23H2](./update/update-troubleshooting-23h2.md#collect-update-logs). The log files should show a similar error message (current version might differ in the error message):<br><br>`[ERROR: { "errorCode": "InvalidEntityError", "errorResponse": "{\n\"message\": \"the cloud fabric (MOC) is currently at version v0.13.1. A minimum version of 0.15.0 is required for compatibility\"\n}" }]`|Follow these steps to mitigate the issue:<br><br>1. To find the MOC agent version, run the following command: `'C:\Program Files\AksHci\wssdcloudagent.exe' version`.<br><br>2. Use the output of the command to find the MOC version from the table below that matches the agent version, and set `$initialMocVersion` to that MOC version. Set the `$targetMocVersion` by finding the Azure Local build you're updating to and get the matching MOC version from the following table. Use these values in the mitigation script provided below:<br><br><table><tr><td><b>Build</b></td><td><b>MOC version</b></td><td><b>Agent version</b></td></tr><tr><td>2311.2</td><td>1.0.24.10106</td><td>v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024</td></tr><tr><td>2402</td><td>1.0.25.10203</td><td>v0.14.0, v0.13.1, 02/02/2024</td></tr><tr><td>2402.1</td><td>1.0.25.10302</td><td>v0.14.0, v0.13.1, 03/02/2024</td></tr><tr><td>2402.2</td><td>1.1.1.10314</td><td>v0.16.0-1-g04bf0dec, v0.15.1, 03/14/2024</td></tr><tr><td>2405/2402.3</td><td>1.3.0.10418</td><td>v0.17.1, v0.16.5, 04/18/2024</td></tr></table><br><br>For example, if the agent version is v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024, then `$initialMocVersion = "1.0.24.10106"` and if you are updating to 2405.0.23, then `$targetMocVersion = "1.3.0.10418"`.<br><br>3. Run the following PowerShell commands on the first node:<br><br>`$initialMocVersion = "<initial version determined from step 2>"`<br>`$targetMocVersion = "<target version determined from step 2>"`<br><br># Import MOC module twice<br>`import-module moc`<br>`import-module moc`<br>`$verbosePreference = "Continue"`<br><br># Clear the SFS catalog cache<br>`Remove-Item (Get-MocConfig).manifestCache`<br><br># Set version to the current MOC version prior to update, and set state as update failed<br>`Set-MocConfigValue -name "version" -value $initialMocVersion`<br>`Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)`<br><br># Rerun the MOC update to desired version<br>`Update-Moc -version $targetMocVersion`<br><br>4. Resume the update. |

Removed rows (old lines 72-76):
| AKS on Azure Local |AKS cluster creation fails with the `Error: Invalid AKS network resource id`. This issue can occur when the associated logical network name has an underscore. |Underscores aren't supported in logical network names. Make sure to not use underscore in the names for logical networks deployed on your Azure Local instance. |
| Update | When viewing the readiness check results for an Azure Local instance via the Azure Update Manager, there might be multiple readiness checks with the same name. |There's no known workaround in this release. Select **View details** to view specific information about the readiness check. |
| Deployment<!--27312671--> | In some instances, during the registration of Azure Local machines, this error might be seen in the debug logs: *Encountered internal server error*. One of the mandatory extensions for device deployment might not be installed. |Follow these steps to mitigate the issue: <br><br> `$Settings = @{ "CloudName" = $Cloud; "RegionName" = $Region; "DeviceType" = "AzureEdge" }` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeTelemetryAndDiagnostics" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -Settings $Settings -ExtensionType "TelemetryAndDiagnostics" -EnableAutomaticUpgrade` <br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeDeviceManagement" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.Edge" -ExtensionType "DeviceManagementExtension"`<br><br> `New-AzConnectedMachineExtension -Name "AzureEdgeLifecycleManager" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Orchestration" -ExtensionType "LcmController"` <br><br>`New-AzConnectedMachineExtension -Name "AzureEdgeRemoteSupport" -ResourceGroupName $ResourceGroup -MachineName $env:COMPUTERNAME -Location $Region -Publisher "Microsoft.AzureStack.Observability" -ExtensionType "EdgeRemoteSupport" -EnableAutomaticUpgrade` |
| Update | There's an intermittent issue in this release when the Azure portal incorrectly reports the update status as **Failed to update** or **In progress** though the update is complete. |[Connect to your Azure Local](./update/update-via-powershell-23h2.md#connect-to-your-azure-local) via a remote PowerShell session. To confirm the update status, run the following PowerShell cmdlets: <br><br> `$Update = get-solutionupdate` \| `? version -eq "<version string>"`<br><br>Replace the version string with the version you're running. For example, "10.2405.0.23". <br><br>`$Update.state`<br><br>If the update status is **Installed**, no further action is required on your part. Azure portal refreshes the status correctly within 24 hours. <br> To refresh the status sooner, follow these steps on one of the cluster nodes. <br>Restart the Cloud Management cluster group.<br>`Stop-ClusterGroup "Cloud Management"`<br>`Start-ClusterGroup "Cloud Management"`|
| Update <!--28299865--> |During an initial MOC update, a failure occurs due to the target MOC version not being found in the catalog cache. The follow-up updates and retries show MOC in the target version, without the update succeeding, and as a result the Arc Resource Bridge update fails.<br><br>To validate this issue, collect the update logs using [Troubleshoot solution updates for Azure Local, version 23H2](./update/update-troubleshooting-23h2.md#collect-update-logs). The log files should show a similar error message (current version might differ in the error message):<br><br>`[ERROR: { "errorCode": "InvalidEntityError", "errorResponse": "{\n\"message\": \"the cloud fabric (MOC) is currently at version v0.13.1. A minimum version of 0.15.0 is required for compatibility\"\n}" }]`|Follow these steps to mitigate the issue:<br><br>1. To find the MOC agent version, run the following command: `'C:\Program Files\AksHci\wssdcloudagent.exe' version`.<br><br>2. Use the output of the command to find the MOC version from the table below that matches the agent version, and set `$initialMocVersion` to that MOC version. Set the `$targetMocVersion` by finding the Azure Local build you're updating to and get the matching MOC version from the following table. Use these values in the mitigation script provided below:<br><br><table><tr><td><b>Build</b></td><td><b>MOC version</b></td><td><b>Agent version</b></td></tr><tr><td>2311.2</td><td>1.0.24.10106</td><td>v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024</td></tr><tr><td>2402</td><td>1.0.25.10203</td><td>v0.14.0, v0.13.1, 02/02/2024</td></tr><tr><td>2402.1</td><td>1.0.25.10302</td><td>v0.14.0, v0.13.1, 03/02/2024</td></tr><tr><td>2402.2</td><td>1.1.1.10314</td><td>v0.16.0-1-g04bf0dec, v0.15.1, 03/14/2024</td></tr><tr><td>2405/2402.3</td><td>1.3.0.10418</td><td>v0.17.1, v0.16.5, 04/18/2024</td></tr></table><br><br>For example, if the agent version is v0.13.0-6-gf13a73f7, v0.11.0-alpha.38,01/06/2024, then `$initialMocVersion = "1.0.24.10106"` and if you are updating to 2405.0.23, then `$targetMocVersion = "1.3.0.10418"`.<br><br>3. Run the following PowerShell commands on the first node:<br><br>`$initialMocVersion = "<initial version determined from step 2>"`<br>`$targetMocVersion = "<target version determined from step 2>"`<br><br># Import MOC module twice<br>`import-module moc`<br>`import-module moc`<br>`$verbosePreference = "Continue"`<br><br># Clear the SFS catalog cache<br>`Remove-Item (Get-MocConfig).manifestCache`<br><br># Set version to the current MOC version prior to update, and set state as update failed<br>`Set-MocConfigValue -name "version" -value $initialMocVersion`<br>`Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)`<br><br># Rerun the MOC update to desired version<br>`Update-Moc -version $targetMocVersion`<br><br>4. Resume the update. |

Unchanged rows (old lines 77-79, now 72-74):
| AKS on Azure Local |AKS cluster creation fails with the `Error: Invalid AKS network resource id`. This issue can occur when the associated logical network name has an underscore. |Underscores aren't supported in logical network names. Make sure to not use underscore in the names for logical networks deployed on your Azure Local. |
| Repair server <!--27053788--> |In rare instances, the `Repair-Server` operation fails with the `HealthServiceWaitForDriveFW` error. In these cases, the old drives from the repaired node aren't removed and new disks are stuck in the maintenance mode. |To prevent this issue, make sure that you DO NOT drain the node either via the Windows Admin Center or using the `Suspend-ClusterNode -Drain` PowerShell cmdlet before you start `Repair-Server`. <br> If the issue occurs, contact Microsoft Support for next steps. |
| Repair server <!--27152397--> |This issue is seen when the single node Azure Local instance is updated from 2311 to 2402 and then the `Repair-Server` is performed. The repair operation fails. |Before you repair the single node, follow these steps:<br>1. Run version 2402 for the *ADPrepTool*. Follow the steps in [Prepare Active Directory](./deploy/deployment-prep-active-directory.md). This action is quick and adds the required permissions to the Organizational Unit (OU).<br>2. Move the computer object from **Computers** segment to the root OU. Run the following command:<br>`Get-ADComputer <HOSTNAME>` \| `Move-ADObject -TargetPath "<OU path>"`|
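
The MOC mitigation in the Update <!--28299865--> row above is spread across a single table cell; gathered into one script, those same steps look roughly like the sketch below. The version strings are the placeholders from that row and must be filled in per its step 2.

# Consolidated sketch of the MOC catalog-cache mitigation described in the
# "Update <!--28299865-->" row above. Run on the first node; replace the
# version placeholders using the build/agent-version table in that row.
$initialMocVersion = "<initial version determined from step 2>"
$targetMocVersion  = "<target version determined from step 2>"

# Import the MOC module twice, as the row instructs
Import-Module moc
Import-Module moc
$VerbosePreference = "Continue"

# Clear the SFS catalog cache
Remove-Item (Get-MocConfig).manifestCache

# Set the recorded version back to the pre-update MOC version and mark the
# install state as failed so the update can be rerun
Set-MocConfigValue -name "version" -value $initialMocVersion
Set-MocConfigValue -name "installState" -value ([InstallState]::UpdateFailed)

# Rerun the MOC update to the desired version, then resume the solution update
Update-Moc -version $targetMocVersion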
