Commit e27c834
Open

chore(deps): update oss/v2/kubernetes/kubelet-sysext docker tag to v1.35.1 #7899

Azure Pipelines / Agentbaker GPU E2E failed Mar 11, 2026 in 17m 44s

Build #20260311.156 had test failures

Details

Tests

  • Failed: 2 (10.53%)
  • Passed: 17 (89.47%)
  • Other: 0 (0.00%)
  • Total: 19

Annotations

Check failure on line 1117 in Build log

azure-pipelines / Agentbaker GPU E2E

Build log #L1117

Script failed with exit code: 1

Check failure on line 1 in Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG


Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG

Failed
Raw output
=== RUN   Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
=== PAUSE Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
=== CONT  Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG
    test_helpers.go:360: [16.899s] TAGS {Name:Test_Ubuntu2404_NvidiaDevicePluginRunning_MIG ImageName:2404gen2containerd OS:ubuntu Arch:amd64 NetworkIsolated:false NonAnonymousACR:false GPU:true WASM:false BootstrapTokenFallback:false KubeletCustomConfig:false Scriptless:false VHDCaching:false MockAzureChinaCloud:false VMSeriesCoverageTest:false}
    test_helpers.go:199: [16.900s] → running scenario...
    test_helpers.go:231: [20.688s] → preparing AKS node...
    vmss.go:268: [28.603s] → creating VMSS utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig...
    vmss.go:189: [29.314s] VMSS portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig/overview
    vmss.go:195: [29.314s] Managed cluster portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.ContainerService/managedClusters/abe2e-kubenet-v4-e1f58/overview
    vmss.go:301: [37.154s] VM will be automatically deleted after the test finishes, to preserve it for debugging purposes set KEEP_VMSS=true or pause the test with a breakpoint before the test finishes or failed
    vmss.go:305: [37.154s] SSH Instructions: (may take a few minutes for the VM to be ready for SSH)
        ========================
        az network bastion ssh --target-resource-id "/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig/virtualMachines/0" --name "abe2e-kubenet-v4-e1f58-bastion" --resource-group MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3 --auth-type ssh-key --username azureuser --ssh-key /tmp/private-key-1474000605
        
    bastionssh.go:304: [380.824s] Attempt 1/5 establishing SSH over bastion to 10.224.0.100
    bastionssh.go:323: [413.299s] Attempt 1/5 SSH handshake failed: ssh: handshake failed: failed to get reader: context deadline exceeded
    bastionssh.go:304: [423.355s] Attempt 2/5 establishing SSH over bastion to 10.224.0.100
    bastionssh.go:323: [425.090s] Attempt 2/5 SSH handshake failed: ssh: handshake failed: failed to get reader: received close frame: status = StatusNormalClosure and reason = ""
    bastionssh.go:304: [435.150s] Attempt 3/5 establishing SSH over bastion to 10.224.0.100
    bastionssh.go:323: [466.024s] Attempt 3/5 SSH handshake failed: ssh: handshake failed: failed to get reader: context deadline exceeded
    bastionssh.go:304: [476.078s] Attempt 4/5 establishing SSH over bastion to 10.224.0.100
    vmss.go:355: [477.270s] VM reached running state
    vmss.go:325: [477.270s] ✓ creating VMSS utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig done (448.7s)
    kube.go:149: [477.270s] → waiting for node utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig to be ready...
    kube.go:181: [477.341s] node utmt-2026-03-11-ubuntu2404nvidiadevicepluginrunningmig000000 is ready. Taints: null Conditions: [{"type":"NetworkUnavailable","status":"False","lastHeartbeatTime":"2026-03-11T10:42:23Z","lastTransitionTime":"2026-03-11T10:42:23Z","reason":"RouteCreated","message":"RouteController created a route"},{"type":"FrequentKubeletRestart","status":"False","lastHeartbeatTime":"2026-03-11T10:42:21Z","lastTransitionTime":"2026-03-11T10:42:20Z","reason":"NoFrequentKubeletRestart","message":"kubelet is functioning properly"},{"type":"FrequentDockerRestart","status":"False","lastHeartbeatTime":"2026-03-11T10:42:21Z","lastTransitionTime":

Check failure on line 1 in Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag


Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag

Failed
Raw output
=== RUN   Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
=== PAUSE Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
=== CONT  Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag
    test_helpers.go:360: [9.721s] TAGS {Name:Test_Ubuntu2204_NvidiaDevicePluginRunning_WithoutVMSSTag ImageName:2204gen2containerd OS:ubuntu Arch:amd64 NetworkIsolated:false NonAnonymousACR:false GPU:true WASM:false BootstrapTokenFallback:false KubeletCustomConfig:false Scriptless:false VHDCaching:false MockAzureChinaCloud:false VMSeriesCoverageTest:false}
    test_helpers.go:199: [15.308s] → running scenario...
    test_helpers.go:231: [20.688s] → preparing AKS node...
    vmss.go:268: [28.572s] → creating VMSS yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou...
    vmss.go:189: [30.006s] VMSS portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou/overview
    vmss.go:195: [30.016s] Managed cluster portal link: https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.ContainerService/managedClusters/abe2e-kubenet-v4-e1f58/overview
    vmss.go:301: [40.026s] VM will be automatically deleted after the test finishes, to preserve it for debugging purposes set KEEP_VMSS=true or pause the test with a breakpoint before the test finishes or failed
    vmss.go:305: [40.026s] SSH Instructions: (may take a few minutes for the VM to be ready for SSH)
        ========================
        az network bastion ssh --target-resource-id "/subscriptions/8ecadfc9-d1a3-4ea4-b844-0d9f87e4d7c8/resourceGroups/MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3/providers/Microsoft.Compute/virtualMachineScaleSets/yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou/virtualMachines/0" --name "abe2e-kubenet-v4-e1f58-bastion" --resource-group MC_abe2e-westus3_abe2e-kubenet-v4-e1f58_westus3 --auth-type ssh-key --username azureuser --ssh-key /tmp/private-key-1474000605
        
    bastionssh.go:304: [353.276s] Attempt 1/5 establishing SSH over bastion to 10.224.0.11
    vmss.go:355: [354.297s] VM reached running state
    vmss.go:325: [354.297s] ✓ creating VMSS yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou done (325.7s)
    kube.go:149: [354.297s] → waiting for node yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou to be ready...
    kube.go:181: [354.337s] node yt2p-2026-03-11-ubuntu2204nvidiadevicepluginrunningwithou000000 is ready. Taints: [{"key":"node.kubernetes.io/network-unavailable","effect":"NoSchedule","timeAdded":"2026-03-11T10:40:14Z"}] Conditions: [{"type":"NetworkUnavailable","status":"True","lastHeartbeatTime":"2026-03-11T10:40:14Z","lastTransitionTime":"2026-03-11T10:40:14Z","reason":"NodeInitialization","message":"Waiting for cloud routes"},{"type":"XIDError","status":"False","lastHeartbeatTime":"2026-03-11T10:40:09Z","lastTransitionTime":"2026-03-11T10:39:38Z","reason":"XIDErrorIsNotPresent","message":"XID error status is good"},{"type":"GPUClockThrottling","status":"True","lastHeartbeatTime":"2026-03-11T10:40:09Z","lastTransitionTime":"2026-03-11T10:39:38Z","reason":"GPUClockThrottlingIsPresent"},{"type":"NVIDIAGRIDStatusInvalid","status":"False","lastHeartbeatTime":"2026-03-11T10:40:09Z","lastTransitionTime":"2026-03-11T10:39:38Z","reason":"NVIDIAGRIDStatusValid","message":"NVIDIA Grid Status Valid"},{"type":"FilesystemCorruptionProblem","status":"False","lastHeartbeatTime":"2026-03-11T10:40:09Z","lastTransitionTime":"2026-03-11T10:39:38Z","reason":"FilesystemIsOK","message":"Filesystem is healthy"},{"type":"FrequentKubeletRestart","status":"False","lastHeartbeatTime":"2026-03-11T10:40:09Z","lastTransitionTime":"2026-03-11T1