articles/databox-online/azure-stack-edge-gpu-2403-release-notes.md (9 additions, 16 deletions)
@@ -51,31 +51,24 @@ You can update to the latest version using the following update paths:
The 2403 release has the following new features and enhancements:

- CAT-1 STIG security fixes for the Mariner guest OS for AKS on Azure Stack Edge.
- Deprecated support for AKS telemetry on AKS on Azure Stack Edge.
- Zone-label support for two-node Kubernetes clusters.
- Hyper-V VM management: memory usage monitoring on Azure Stack Edge host.

## Issues fixed in this release

| No. | Feature | Issue |
| --- | --- | --- |
|**1.**| Clustering | Two-node cold boot of the server causes high availability VM cluster resources to come up as offline. Changed ColdStartSetting to AlwaysStart. |
|**2.**| Marketplace image support | Fixed a bug allowing Windows Marketplace images on Azure Stack Edge A and TMA. |
|**3.**| Network connectivity | Fixed VM NIC link flapping after an Azure Stack Edge host power off/on, which can cause the VM to lose its DHCP IP address. |
|**4.**| Network connectivity | Due to proxy ARP configurations in some customer environments, the **IP address in use** check returns a false positive even though no endpoint in the network is using the IP. The fix skips the ARP-based VM **IP address in use** check if the IP address is allocated from an internal network managed by Azure Stack Edge. |
|**5.**| Network connectivity | A VM NIC change operation times out after 3 hours, which blocks other VM update operations. On Microsoft Kubernetes clusters, pods that depend on Persistent Volumes (PVs) get stuck. The issue occurs when multiple NICs within a VM are transferred from a VLAN virtual network to a non-VLAN virtual network. The transfer involves asynchronous calls that are processed in the same thread; if one NIC transfer starts while the other is still processing, a deadlock can occur because the API activity ID is shared within the thread. The fix generates a unique API activity ID for each call, even within one thread, so the parallel calls are independent and don't affect each other. After the fix, the VM NIC change operation times out quickly and the VM update isn't blocked. |
|**6.**| Kubernetes | Overall two-node Kubernetes resiliency improvements, such as increased memory for the AKS workload cluster control plane, increased limits for etcd, and multi-replica and hard anti-affinity support for CoreDNS and Azure Disk CSI controller pods, to improve VM failover times. |
|**7.**| Compute Diagnostic and Update | Resiliency fixes. |
|**8.**| Security | CAT-1 STIG security fixes for the Mariner guest OS for AKS on Azure Stack Edge. |

<!--!## Known issues in this release

| No. | Feature | Issue | Workaround/comments |
| --- | --- | --- | --- |
@@ -86,7 +79,7 @@ Field IcM - monitor "Hyper-V Virtual Machine Management" memory usage on ASE hos

| No. | Feature | Issue | Workaround/comments |
| --- | --- | --- | --- |
|**1.**| Azure Storage Explorer | The Blob storage endpoint certificate that's autogenerated by the Azure Stack Edge device may not work properly with Azure Storage Explorer. | Replace the Blob storage endpoint certificate. For detailed steps, see [Bring your own certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates). |
|**2.**| Network connectivity | On a two-node Azure Stack Edge Pro 2 cluster with a teamed virtual switch for Port 1 and Port 2, if a Port 1 or Port 2 link is down, it can take up to 5 seconds to resume network connectivity on the remaining active port. If a Kubernetes cluster uses this teamed virtual switch for management traffic, pod communication may be disrupted for up to 5 seconds. ||
|**3.**| Virtual machine | After the host or Kubernetes node pool VM is shut down, there's a chance that kubelet in the node pool VM fails to start due to a CPU static policy error. The node pool VM shows a **Not ready** status, and pods won't be scheduled on this VM. | Enter a support session, ssh into the node pool VM, and then follow the steps in [Changing the CPU Manager Policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#changing-the-cpu-manager-policy) to remediate the kubelet service; a minimal sketch of that procedure follows this table. |
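
For reference, the workaround for known issue 3 generally amounts to clearing the stale CPU manager state so that kubelet can start again. The following is a minimal sketch of that procedure, assuming the default kubelet state directory (`/var/lib/kubelet`) and a systemd-managed kubelet service inside the node pool VM; treat the linked Kubernetes article as the authoritative steps.

```bash
# Minimal sketch: run inside the node pool VM after you ssh in from a support session.
# Assumes the default kubelet state path and that kubelet runs as a systemd service.

# Stop the kubelet service.
sudo systemctl stop kubelet

# Remove the stale CPU manager state file; kubelet regenerates it on the next start.
sudo rm -f /var/lib/kubelet/cpu_manager_state

# Start kubelet again and confirm that it's running.
sudo systemctl start kubelet
sudo systemctl status kubelet --no-pager
```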