
Upgrade Linux Kernel for flatcar-4081 from 6.6.65 to 6.6.66 #2535

Closed
flatcar-infra wants to merge 1 commit into flatcar-4081 from linux-6.6.66-flatcar-4081

Conversation

@flatcar-infra commented Dec 15, 2024

@tormath1 (Contributor) commented:

Closing in favor of 6.6.67 upgrade.

tormath1 closed this Dec 20, 2024
tormath1 deleted the linux-6.6.66-flatcar-4081 branch December 20, 2024 08:34
@github-actions commented:

Test report for 4081.2.1 / amd64 arm64

Platforms tested: qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    Error: oem.go:199: Couldn't reboot machine: machine "53877905-d8ab-4c9b-88ce-f8c38f8d0308" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --
    harness.go:602: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache) on machine 53877905-d8ab-4c9b-88ce-f8c38f8d0308 console
ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    Error: oem.go:199: Couldn't reboot machine: machine "1b4cb3e9-d9fd-4c3e-b229-1ed7a07c698e" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --
    harness.go:602: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache) on machine 1b4cb3e9-d9fd-4c3e-b229-1ed7a07c698e console

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.partition_on_boot_disk 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    Error: execution.go:140: Couldn't reboot machine: machine "dd6b1fe6-dcc3-4317-8321-c24f552f1793" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    Error: tpm.go:353: could not reboot machine: machine "28eac556-3030-4672-b19e-75ec8f97ff27" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --
    harness.go:602: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache) on machine 28eac556-3030-4672-b19e-75ec8f97ff27 console

ok cl.tpm.root-cryptenroll-pcr-noupdate 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    Error: tpm.go:353: could not reboot machine: machine "a9c02310-2041-418c-b476-1f1814e1ff02" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --

ok cl.tpm.root-cryptenroll-pcr-withupdate 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    Error: cluster.go:125: Warning: keyslot operation could fail as it requires more than available memory.
    cluster.go:125: New TPM2 token enrolled as key slot 1.
    cluster.go:125: Wiped slot 2.
    tpm.go:366: could not reboot machine: machine "ce15bf32-3828-44f7-bdda-cfac8a889e93" failed basic checks: some systemd units failed:
    ● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache
    status:
    journal: -- No entries --
    harness.go:602: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache) on machine ce15bf32-3828-44f7-bdda-cfac8a889e93 console

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network-openbsd-nc 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1, 2, 3, 4, 5); qemu_update-arm64 (1, 2, 3, 4, 5)

not ok extra-test.[first_dual].cl.update.oem ❌ Failed: qemu_update-amd64 (1, 2, 3, 4, 5); qemu_update-arm64 (1, 2, 3, 4, 5)

                Diagnostic output (identical for qemu_update-amd64 and qemu_update-arm64, runs 1 through 5; each run logged the same error twice)
    Error: cluster.go:125: Created symlink /etc/systemd/system/locksmithd.service → /dev/null.
    update.go:328: Triggering update_engine
    update.go:344: waiting for UPDATE_STATUS_UPDATED_NEED_REBOOT: time limit exceeded
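The repeated "time limit exceeded" failures above come from the test harness polling update_engine for the UPDATE_STATUS_UPDATED_NEED_REBOOT state until a deadline passes. A minimal sketch of that kind of bounded poll (function name, timeout value, and injectable clock are hypothetical illustration, not kola's actual code):

```python
import time

def wait_for_status(get_status, want, timeout=600.0, poll=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns `want`; raise once `timeout`
    seconds elapse, mirroring the "time limit exceeded" error above."""
    deadline = clock() + timeout
    while clock() < deadline:
        if get_status() == want:
            return True
        sleep(poll)
    raise TimeoutError(f"waiting for {want}: time limit exceeded")
```

Injecting `clock` and `sleep` keeps the deadline logic testable without real waiting; in the failing runs the status presumably never reached the wanted state, so the deadline branch fired.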

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1, 2, 3, 4, 5); qemu_update-arm64 (1, 2, 3, 4, 5)

ok kubeadm.v1.29.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    Error: cluster.go:125: I0114 09:05:57.628749    1776 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.29
    cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.12
    cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.12
    cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.12
    cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.29.12
    cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
    cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9
    cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.10-0
    cluster.go:125: I0114 09:06:11.122208    2016 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.29
    cluster.go:125: [init] Using Kubernetes version: v1.29.12
    cluster.go:125: [preflight] Running pre-flight checks
    cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster
    cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection
    cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    cluster.go:125: W0114 09:06:11.517594    2016 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
    cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki"
    cluster.go:125: [certs] Generating "ca" certificate and key
    cluster.go:125: [certs] Generating "apiserver" certificate and key
    cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.44]
    cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key
    cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key
    cluster.go:125: [certs] Generating "front-proxy-client" certificate and key
    cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation
    cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation
    cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation
    cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
    cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
    cluster.go:125: [certs] Generating "sa" key and public key
    cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file
    cluster.go:125: [kubeconfig] Writing "super-admin.conf" kubeconfig file
    cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file
    cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file
    cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver"
    cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager"
    cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler"
    cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    cluster.go:125: [kubelet-start] Starting the kubelet
    cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s
    cluster.go:125: [apiclient] All control plane components are healthy after 6.003770 seconds
    cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs
    cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
    cluster.go:125: [bootstrap-token] Using token: 7odcee.322c1ogwtz9bwfmj
    cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    cluster.go:125: [addons] Applied essential addon: CoreDNS
    cluster.go:125: [addons] Applied essential addon: kube-proxy
    cluster.go:125:
    cluster.go:125: Your Kubernetes control-plane has initialized successfully!
    cluster.go:125:
    cluster.go:125: To start using your cluster, you need to run the following as a regular user:
    cluster.go:125:
    cluster.go:125:   mkdir -p $HOME/.kube
    cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    cluster.go:125:
    cluster.go:125: Alternatively, if you are the root user, you can run:
    cluster.go:125:
    cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf
    cluster.go:125:
    cluster.go:125: You should now deploy a pod network to the cluster.
    cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
    cluster.go:125:
    cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:
    cluster.go:125:
    cluster.go:125: kubeadm join 10.0.0.44:6443 --token 7odcee.322c1ogwtz9bwfmj \
    cluster.go:125:  --discovery-token-ca-cert-hash sha256:1e9697fc55298e88dc1d1ff16d9e43d1a91632c211e70779c7866a70cfccae17
    cluster.go:125: namespace/tigera-operator created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
    cluster.go:125: serviceaccount/tigera-operator created
    cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created
    cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
    cluster.go:125: deployment.apps/tigera-operator created
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met
    cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met
    cluster.go:125: installation.operator.tigera.io/default created
    cluster.go:125: apiserver.operator.tigera.io/default created
    cluster.go:125: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    cluster.go:125: WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/core/.kube/config
    cluster.go:125: WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/core/.kube/config
    cluster.go:125: WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/core/.kube/config
    cluster.go:125: WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/core/.kube/config
    cluster.go:125: jq: error (at <stdin>:121): Cannot iterate over null (null)
    cluster.go:125: jq: error (at <stdin>:121): Cannot iterate over null (null)
    cluster.go:125: jq: error (at <stdin>:121): Cannot iterate over null (null)
    harness.go:602: Found systemd unit failed to start (etcd-member.service - etcd (System Application Container)) on machine a8195bc9-a39c-4899-9f3d-03ca91cc72cd console

ok kubeadm.v1.29.2.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    packages/sys-block/open-iscsi (29.39s)
    cluster.go:125: Unable to find image 'ghcr.io/flatcar/targetcli-fb:latest' locally
    cluster.go:125: latest: Pulling from flatcar/targetcli-fb
    cluster.go:125: a2318d6c47ec: Pulling fs layer
    cluster.go:125: 3d3086a1439f: Pulling fs layer
    cluster.go:125: 3d3086a1439f: Verifying Checksum
    cluster.go:125: 3d3086a1439f: Download complete
    cluster.go:125: a2318d6c47ec: Verifying Checksum
    cluster.go:125: a2318d6c47ec: Download complete
    cluster.go:125: a2318d6c47ec: Pull complete
    cluster.go:125: 3d3086a1439f: Pull complete
    cluster.go:125: Digest: sha256:b6cd65db981974e8b74938617218dd023775b969f9a059ced21e6ce6fa4763c1
    cluster.go:125: Status: Downloaded newer image for ghcr.io/flatcar/targetcli-fb:latest
    cluster.go:125: mke2fs 1.47.1 (20-May-2024)
    cluster.go:125: Created symlink /etc/systemd/system/remote-fs.target.wants/iscsi.service → /usr/lib/systemd/system/iscsi.service.
    cluster.go:145: "sudo /check" failed: output no /dev/sda device after reboot, status Process exited with status 1

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)
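For triage, a report in the format above can be reduced to just the cases that recorded at least one failure. A small sketch (the `report.txt` sample below is a stand-in for a saved copy of the report, not a file the CI produces):

```shell
# Sample lines in the format of the report above (stand-in for a saved copy).
cat > report.txt <<'EOF'
ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)
ok cl.tpm.root-cryptenroll 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)
not ok extra-test.[first_dual].cl.update.oem ❌ Failed: qemu_update-amd64 (1, 2, 3, 4, 5)
EOF

# Keep only the test names that recorded a failure, covering both
# "ok <name> ... ❌ Failed" (flaky, passed on rerun) and "not ok <name>" lines.
grep '❌ Failed' report.txt | sed -E 's/^(not ok|ok) ([^ ]+).*/\2/'
```

This distinguishes flaky tests (marked `ok` but with a `❌ Failed` run) from hard failures (`not ok`), which is exactly the split between the ldconfig.service reboots and the `cl.update.oem` timeout above.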
