fix: re-adding the ws2019 stuff #8073
Azure Pipelines / Agentbaker GPU E2E: failed Mar 11, 2026 in 18m 52s
Build #20260311.89 had test failures
- Failed: 2 (8.33%)
- Passed: 22 (91.67%)
- Other: 0 (0.00%)
- Total: 24
Annotations
Check failure on line 965 in Build log
azure-pipelines / Agentbaker GPU E2E
Build log #L965
Script failed with exit code: 1
Check failure on line 1 in Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
azure-pipelines / Agentbaker GPU E2E
Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
Failed
Raw output
=== RUN Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
=== PAUSE Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
=== CONT Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
test_helpers.go:360: [18.718s] TAGS {Name:Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG ImageName:2404gen2containerd OS:ubuntu Arch:amd64 NetworkIsolated:false NonAnonymousACR:false GPU:true WASM:false BootstrapTokenFallback:false KubeletCustomConfig:false Scriptless:false VHDCaching:false MockAzureChinaCloud:false VMSeriesCoverageTest:false}
test_helpers.go:199: [18.719s] → running scenario...
test_helpers.go:231: [19.046s] → preparing AKS node...
vmss.go:268: [24.773s] → creating VMSS uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig...
vmss.go:189: [26.702s] VMSS portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig/overview
vmss.go:195: [26.702s] Managed cluster portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.ContainerService/managedClusters/abe2e-kubenet-v4-e1f58/overview
vmss.go:301: [35.567s] VM will be automatically deleted after the test finishes; to preserve it for debugging, set KEEP_VMSS=true or pause the test with a breakpoint before it finishes or fails
vmss.go:305: [35.567s] SSH Instructions: (may take a few minutes for the VM to be ready for SSH)
========================
az network bastion ssh --target-resource-id "/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig/virtualMachines/0" --name "abe2e-kubenet-v4-e1f58-bastion" --resource-group MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3 --auth-type ssh-key --username azureuser --ssh-key /tmp/private-key-3584708471
bastionssh.go:304: [378.845s] Attempt 1/5 establishing SSH over bastion to 10.224.0.109
bastionssh.go:323: [410.112s] Attempt 1/5 SSH handshake failed: ssh: handshake failed: failed to get reader: context deadline exceeded
bastionssh.go:304: [420.172s] Attempt 2/5 establishing SSH over bastion to 10.224.0.109
bastionssh.go:323: [451.077s] Attempt 2/5 SSH handshake failed: ssh: handshake failed: failed to get reader: context deadline exceeded
bastionssh.go:304: [461.135s] Attempt 3/5 establishing SSH over bastion to 10.224.0.109
vmss.go:355: [462.245s] VM reached running state
vmss.go:325: [462.245s] ✓ creating VMSS uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig done (437.5s)
kube.go:149: [462.245s] → waiting for node uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig to be ready...
kube.go:181: [462.333s] node uhol-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig000000 is ready. Taints: [{"key":"node.kubernetes.io/network-unavailable","effect":"NoSchedule","timeAdded":"2026-03-11T02:32:42Z"}] Conditions: [{"type":"NetworkUnavailable","status":"True","lastHeartbeatTime":"2026-03-11T02:32:42Z","lastTransitionTime":"2026-03-11T02:32:42Z","reason":"NodeInitialization","message":"Waiting for cloud routes"},{"type":"FilesystemCorruptionProblem","status":"False","lastHeartbeatTime":"2026-03-11T02:34:08Z","lastTransitionTime":"2026-03-11T02:34:07Z","reason":"FilesystemIsOK","message":"Filesystem is healthy"},{"type":"VMEventScheduled","status":"False","lastHeartbeatTime":"2026-03-11T02:34:08Z","lastTransitionTime":"2026-03-11T02:34:07Z","reason":"NoVMEventScheduled","message":"VM has no scheduled event"},{"type":"FrequentDockerRestart","status":"False","lastHeartbeatTime":"2026-03-11T02:34:08Z","lastTr
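In this first failure the SSH handshake over bastion fails twice with `context deadline exceeded` before the VM reaches the running state, which usually just means the VM is still booting. When replaying the `az network bastion ssh` command from the log by hand, a retry wrapper avoids giving up too early. A minimal sketch (the attempt count and delay are illustrative, mirroring the harness's 5-attempt loop; the actual command is the one printed in the log above):

```python
import subprocess
import time

def retry(cmd, attempts=5, delay=10):
    """Run cmd (an argv list), retrying on non-zero exit.

    Returns True on the first successful run, False if all attempts fail.
    """
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return True
        print(f"attempt {attempt}/{attempts} failed (exit {result.returncode})")
        if attempt < attempts:
            time.sleep(delay)
    return False

# Example (arguments elided; paste the full command from the log above):
# retry(["az", "network", "bastion", "ssh", "--target-resource-id", "...",
#        "--auth-type", "ssh-key", "--username", "azureuser"])
```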
Check failure on line 1 in Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
azure-pipelines / Agentbaker GPU E2E
Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
Failed
Raw output
=== RUN Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
=== PAUSE Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
=== CONT Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
test_helpers.go:360: [7.865s] TAGS {Name:Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag ImageName:2204gen2containerd OS:ubuntu Arch:amd64 NetworkIsolated:false NonAnonymousACR:false GPU:true WASM:false BootstrapTokenFallback:false KubeletCustomConfig:false Scriptless:false VHDCaching:false MockAzureChinaCloud:false VMSeriesCoverageTest:false}
test_helpers.go:199: [14.308s] → running scenario...
test_helpers.go:231: [19.045s] → preparing AKS node...
vmss.go:268: [24.693s] → creating VMSS b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou...
vmss.go:189: [26.560s] VMSS portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou/overview
vmss.go:195: [26.560s] Managed cluster portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.ContainerService/managedClusters/abe2e-kubenet-v4-e1f58/overview
vmss.go:301: [34.497s] VM will be automatically deleted after the test finishes; to preserve it for debugging, set KEEP_VMSS=true or pause the test with a breakpoint before it finishes or fails
vmss.go:305: [34.497s] SSH Instructions: (may take a few minutes for the VM to be ready for SSH)
========================
az network bastion ssh --target-resource-id "/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou/virtualMachines/0" --name "abe2e-kubenet-v4-e1f58-bastion" --resource-group MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3 --auth-type ssh-key --username azureuser --ssh-key /tmp/private-key-3584708471
bastionssh.go:304: [317.642s] Attempt 1/5 establishing SSH over bastion to 10.224.0.104
vmss.go:355: [318.804s] VM reached running state
vmss.go:325: [318.804s] ✓ creating VMSS b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou done (294.1s)
kube.go:149: [318.805s] → waiting for node b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou to be ready...
kube.go:181: [318.859s] node b8od-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou000000 is ready. Taints: [{"key":"node.kubernetes.io/network-unavailable","effect":"NoSchedule","timeAdded":"2026-03-11T02:31:22Z"}] Conditions: [{"type":"NetworkUnavailable","status":"True","lastHeartbeatTime":"2026-03-11T02:31:22Z","lastTransitionTime":"2026-03-11T02:31:22Z","reason":"NodeInitialization","message":"Waiting for cloud routes"},{"type":"GPUClockThrottling","status":"True","lastHeartbeatTime":"2026-03-11T02:31:19Z","lastTransitionTime":"2026-03-11T02:30:48Z","reason":"GPUClockThrottlingIsPresent"},{"type":"KernelDeadlock","status":"False","lastHeartbeatTime":"2026-03-11T02:31:19Z","lastTransitionTime":"2026-03-11T02:30:48Z","reason":"KernelHasNoDeadlock","message":"kernel has no deadlock"},{"type":"FrequentUnregisterNetDevice","status":"False","lastHeartbeatTime":"2026-03-11T02:31:19Z","lastTransitionTime":"2026-03-11T02:30:48Z","reason":"NoFrequentUnregisterNetDevice","message":"node is functioning properly"},{"type":"KubeletProblem","status":"False","lastHeartbeatTime":"2026-03-11T02:31:19Z","lastTransitionTime":"2026-03-11T02:31:18Z","reason":"KubeletIsUp","message":"kubelet service is up"},{"type":"GPUMissing","status":"False","lastHeartbeatTime":"2026-03-11T02:31:19Z","lastTransitionTime":"2026-03-11T02:30:48Z
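Both failing tests report the node as ready in the API but still carrying the `node.kubernetes.io/network-unavailable` taint with `NetworkUnavailable=True` ("Waiting for cloud routes"), which prevents the NVIDIA device-plugin pod from being scheduled. A quick way to pull out the blocking taints and conditions from a node object (as returned by `kubectl get node <name> -o json`) is sketched below; the sample data is abbreviated from the log above:

```python
def blocking_taints(node):
    """Return the NoSchedule/NoExecute taints from a Node object dict."""
    return [t for t in node.get("spec", {}).get("taints", [])
            if t.get("effect") in ("NoSchedule", "NoExecute")]

def condition(node, cond_type):
    """Return the status condition of the given type, or None."""
    for c in node.get("status", {}).get("conditions", []):
        if c.get("type") == cond_type:
            return c
    return None

# Abbreviated sample built from the node status in the log above:
node = {
    "spec": {"taints": [{"key": "node.kubernetes.io/network-unavailable",
                         "effect": "NoSchedule"}]},
    "status": {"conditions": [{"type": "NetworkUnavailable", "status": "True",
                               "reason": "NodeInitialization",
                               "message": "Waiting for cloud routes"}]},
}
print(blocking_taints(node))
print(condition(node, "NetworkUnavailable")["message"])  # Waiting for cloud routes
```

The same checks against a live cluster would confirm whether the taint clears once the cloud controller installs routes, or whether the node stays unschedulable for the duration of the test.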