
Migration failing from ESXi to Harvester 1.4.3 #4491

@rajivreddy

Description


ESXi version: VMware ESXi 7.0.3 build-24411414
Harvester version: 1.4.3

Kubernetes version: v1.29.9+rke2r1

kubectl get providers -A
NAMESPACE           NAME     TYPE        STATUS   READY   CONNECTED   INVENTORY   URL                            AGE
konveyor-forklift   host     openshift   Ready    True    True        True                                       7d18h
konveyor-forklift   vmware   vsphere     Ready    True    True        True        https://vxr04.example.com/sdk   4d21h
kubectl get storagemaps.forklift.konveyor.io -A
NAMESPACE           NAME               READY   AGE
konveyor-forklift   esxi-storage-map   True    19h
kubectl get networkmaps.forklift.konveyor.io -A
NAMESPACE           NAME                 READY   AGE
konveyor-forklift   vmware-network-map   True    20h
kubectl get plan -A
NAMESPACE           NAME                  READY   EXECUTING   SUCCEEDED   FAILED   AGE
konveyor-forklift   esxi-migration-plan   True                            True     19h
kubectl get migrations -A
NAMESPACE           NAME                       READY   RUNNING   SUCCEEDED   FAILED   AGE
konveyor-forklift   esxi-migration-execution   True                          True     21m
kubectl get migrations esxi-migration-execution -n konveyor-forklift -o yaml | yq .status
completed: "2026-01-28T04:24:45Z"
conditions:
  - category: Required
    lastTransitionTime: "2026-01-28T04:22:43Z"
    message: The migration is ready.
    status: "True"
    type: Ready
  - category: Advisory
    durable: true
    lastTransitionTime: "2026-01-28T04:24:45Z"
    message: The migration has FAILED.
    status: "True"
    type: Failed
observedGeneration: 1
started: "2026-01-28T04:22:50Z"
vms:
  - completed: "2026-01-28T04:24:45Z"
    conditions:
      - category: Advisory
        durable: true
        lastTransitionTime: "2026-01-28T04:24:35Z"
        message: The VM migration has FAILED.
        status: "True"
        type: Failed
    error:
      phase: ConvertGuest
      reasons:
        - Guest conversion failed. See pod logs for details.
    id: "373"
    luks: {}
    name: Alvin-k8s-cluster1-node2
    newName: alvin-k8s-cluster1-node2
    phase: Completed
    pipeline:
      - completed: "2026-01-28T04:22:56Z"
        description: Initialize migration.
        name: Initialize
        phase: Completed
        progress:
          completed: 0
          total: 1
        started: "2026-01-28T04:22:49Z"
      - annotations:
          unit: MB
        completed: "2026-01-28T04:23:00Z"
        description: Allocate disks.
        name: DiskAllocation
        phase: Completed
        progress:
          completed: 51200
          total: 51200
        reason: 'Pending; target PVC esxi-migration-plan-373-cknsh Pending and [prime-bf57118c-1e7f-4f37-abe9-e401e17648d6] : Successfully provisioned volume pvc-d5548eeb-2a27-4cfa-a8bd-5c606cbb3b45'
        started: "2026-01-28T04:22:56Z"
        tasks:
          - annotations:
              unit: MB
            completed: "2026-01-28T04:23:00Z"
            name: '[LAB_AFF250_NFSDS] Alvin-k8s-cluster1-node2/Alvin-k8s-cluster1-node2.vmdk'
            phase: Completed
            progress:
              completed: 51200
              total: 51200
            reason: Transfer completed.
            started: "2026-01-28T04:23:00Z"
      - completed: "2026-01-28T04:24:35Z"
        description: Convert image to kubevirt.
        error:
          phase: Running
          reasons:
            - Guest conversion failed. See pod logs for details.
        name: ImageConversion
        phase: Running
        progress:
          completed: 0
          total: 1
        started: "2026-01-28T04:23:00Z"
      - annotations:
          unit: MB
        description: Copy disks.
        name: DiskTransferV2v
        phase: Pending
        progress:
          completed: 0
          total: 51200
        tasks:
          - annotations:
              unit: MB
            name: '[LAB_AFF250_NFSDS] Alvin-k8s-cluster1-node2/Alvin-k8s-cluster1-node2.vmdk'
            progress:
              completed: 0
              total: 51200
      - description: Create VM.
        name: VirtualMachineCreation
        phase: Pending
        progress:
          completed: 0
          total: 1
    restorePowerState: "Off"
    started: "2026-01-28T04:22:49Z"
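The pipeline fails at the ImageConversion step with "Guest conversion failed. See pod logs for details." A minimal sketch for collecting those logs, assuming the conversion pod is named after the migration plan (the prefix is inferred from the PVC name `esxi-migration-plan-373-...` in the pipeline status; actual pod names may differ):

```shell
# Hypothetical: dump logs from the virt-v2v conversion pod(s) for the plan.
NS=konveyor-forklift
kubectl get pods -n "$NS" -o name \
  | grep -i 'esxi-migration-plan' \
  | while read -r pod; do
      # strip the "pod/" prefix for the log file name
      kubectl logs -n "$NS" "$pod" --all-containers > "${pod##*/}.log"
    done
```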

Logs

libguestfs: trace: v2v: disk_create "/tmp/libguestfswUotpE/overlay1.qcow2" "qcow2" -1 "backingfile:/var/tmp/.guestfs-107/appliance.d/root"
libguestfs: trace: v2v: disk_format "/var/tmp/.guestfs-107/appliance.d/root"
libguestfs: command: run: qemu-img --help | grep -sqE -- '\binfo\b.*-U\b'
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/.guestfs-107/appliance.d/root
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "children": [\n        {\n            "name": "file",\n            "info": {\n                "children": [\n                ],\n                "virtual-size": 4294967296,\n                "filename": "/var/tmp/.guestfs-107/appliance.d/root",\n                "format": "file",\n                "actual-size": 271974400,\n                "format-specific": {\n                    "type": "file",\n                    "data": {\n                    }\n                },\n                "dirty-flag": false\n            }\n        }\n    ],\n    "virtual-size": 4294967296,\n    "filename": "/var/tmp/.guestfs-107/appliance.d/root",\n    "format": "raw",\n    "actual-size": 271974400,\n    "dirty-flag": false\n}\n\n
libguestfs: trace: v2v: disk_format = "raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-107/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfswUotpE/overlay1.qcow2
Formatting '/tmp/libguestfswUotpE/overlay1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=4294967296 backing_file=/var/tmp/.guestfs-107/appliance.d/root backing_fmt=raw lazy_refcounts=off refcount_bits=16
libguestfs: trace: v2v: disk_create = 0
libguestfs: trace: v2v: get_sockdir
libguestfs: trace: v2v: get_sockdir = "/tmp"
libguestfs: set_socket_create_context: context_new failed: cri-containerd.apparmor.d (enforce)\n: Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: create libvirt XML
libguestfs: command: run: passt --help
Usage: passt [OPTION]...

  -d, --debug\t\tBe verbose
      --trace\t\tBe extra verbose, implies --debug
  --stats DELAY  \tDisplay events statistics
    minimum DELAY seconds between updates
  -q, --quiet\t\tDon't print informational messages
  -f, --foreground\tDon't run in background
    default: run in background
  -l, --log-file PATH\tLog (only) to given file
  --log-size BYTES\tMaximum size of log file
    default: 1 MiB
  --runas UID|UID:GID \tRun as given UID, GID, which can be
    numeric, or login and group names
    default: drop to user "nobody"
  -h, --help\t\tDisplay this help message and exit
  --version\t\tShow version and exit
  -s, --socket, --socket-path PATH\tUNIX domain socket path
    default: probe free path starting from /tmp/passt_1.socket
  --vhost-user\t\tEnable vhost-user mode
    UNIX domain socket is provided by -s option
  --print-capabilities\tprint back-end capabilities in JSON format,
    only meaningful for vhost-user mode
  --repair-path PATH\tpath for passt-repair(1)
    default: append '.repair' to UNIX domain path
  --migrate-exit\tDEPRECATED:
\t\t\tsource quits after migration
    default: source keeps running after migration
  --migrate-no-linger\tDEPRECATED:
\t\t\tclose sockets on migration
    default: keep sockets open, ignore events
  -F, --fd FD\t\tUse FD as pre-opened connected socket
  -p, --pcap FILE\tLog tap-facing traffic to pcap file
  -P, --pid FILE\tWrite own PID to the given file
  -m, --mtu MTU\tAssign MTU via DHCP/NDP
    a zero value disables assignment
    default: 65520: maximum 802.3 MTU minus 802.3 header
                    length, rounded to 32 bits (IPv4 words)
  -a, --address ADDR\tAssign IPv4 or IPv6 address ADDR
    can be specified zero to two times (for IPv4 and IPv6)
    default: use addresses from interface with default route
  -n, --netmask MASK\tAssign IPv4 MASK, dot-decimal or bits
    default: netmask from matching address on the host
  -M, --mac-addr ADDR\tUse source MAC address ADDR
    default: 9a:55:9a:55:9a:55 (locally administered)
  -g, --gateway ADDR\tPass IPv4 or IPv6 address as gateway
    default: gateway from interface with default route
  -i, --interface NAME\tInterface for addresses and routes
    default: from --outbound-if4 and --outbound-if6, if any
             otherwise interface with first default route
  -o, --outbound ADDR\tBind to address as outbound source
    can be specified zero to two times (for IPv4 and IPv6)
    default: use source address from routing tables
  --outbound-if4 NAME\tBind to outbound interface for IPv4
    default: use interface from default route
  --outbound-if6 NAME\tBind to outbound interface for IPv6
    default: use interface from default route
  -D, --dns ADDR\tUse IPv4 or IPv6 address as DNS
    can be specified multiple times
    a single, empty option disables DNS information
    default: use addresses from /etc/resolv.conf
  -S, --search LIST\tSpace-separated list, search domains
    a single, empty option disables the DNS search list
  -H, --hostname NAME \tHostname to configure client with
  --fqdn NAME\t\tFQDN to configure client with
    default: use search list from /etc/resolv.conf
  --no-dhcp-dns\tNo DNS list in DHCP/DHCPv6/NDP
  --no-dhcp-search\tNo list in DHCP/DHCPv6/NDP
  --map-host-loopback ADDR\tTranslate ADDR to refer to host
    can be specified zero to two times (for IPv4 and IPv6)
    default: gateway address
  --map-guest-addr ADDR\tTranslate ADDR to guest's address
    can be specified zero to two times (for IPv4 and IPv6)
    default: none
  --dns-forward ADDR\tForward DNS queries sent to ADDR
    can be specified zero to two times (for IPv4 and IPv6)
    default: don't forward DNS queries
  --dns-host ADDR\tHost nameserver to direct queries to
    can be specified zero to two times (for IPv4 and IPv6)
    default: first nameserver from host's /etc/resolv.conf
  --no-tcp\t\tDisable TCP protocol handler
  --no-udp\t\tDisable UDP protocol handler
  --no-icmp\t\tDisable ICMP/ICMPv6 protocol handler
  --no-dhcp\t\tDisable DHCP server
  --no-ndp\t\tDisable NDP responses
  --no-dhcpv6\t\tDisable DHCPv6 server
  --no-ra\t\tDisable router advertisements
  --freebind\t\tBind to any address for forwarding
  --no-map-gw\t\tDon't map gateway address to host
  -4, --ipv4-only\tEnable IPv4 operation only
  -6, --ipv6-only\tEnable IPv6 operation only
  -1, --one-off\tQuit after handling one single client
  -t, --tcp-ports SPEC\tTCP port forwarding to guest
    can be specified multiple times
    SPEC can be:
      'none': don't forward any ports
      'all': forward all unbound, non-ephemeral ports
      a comma-separated list, optionally ranged with '-'
        and optional target ports after ':', with optional
        address specification suffixed by '/' and optional
        interface prefixed by '%'. Ranges can be reduced by
        excluding ports or ranges prefixed by '~'
        Examples:
        -t 22\t\tForward local port 22 to 22 on guest
        -t 22:23\tForward local port 22 to 23 on guest
        -t 22,25\tForward ports 22, 25 to ports 22, 25
        -t 22-80  \tForward ports 22 to 80
        -t 22-80:32-90\tForward ports 22 to 80 to
\t\t\tcorresponding port numbers plus 10
        -t 192.0.2.1/5\tBind port 5 of 192.0.2.1 to guest
        -t 5-25,~10-20\tForward ports 5 to 9, and 21 to 25
        -t ~25\t\tForward all ports except for 25
    default: none
  -u, --udp-ports SPEC\tUDP port forwarding to guest
    SPEC is as described for TCP above
    default: none
libguestfs: trace: v2v: get_cachedir
libguestfs: trace: v2v: get_cachedir = "/var/tmp"
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-lmot1e4n4q6t4spa</name>\n  <memory unit="MiB">2560</memory>\n  <currentMemory unit="MiB">2560</currentMemory>\n  <cpu mode="maximum">\n    <feature policy="disable" name="la57"/>\n  </cpu>\n  <vcpu>8</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n    <timer name="hpet" present="no"/>\n  </clock>\n  <os>\n    <type machine="q35">hvm</type>\n    <kernel>/var/tmp/.guestfs-107/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-107/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=UUID=7a4c23e1-f3da-469e-9816-bd31d71f738e selinux=0 guestfs_verbose=1 guestfs_network=1 TERM=linux guestfs_identifier=v2v</cmdline>\n    <bios useserial="yes"/>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="network">\n      <source protocol="nbd">\n        <host transport="unix" socket="/tmp/v2v.s5UQ5e/in0"/>\n      </source>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe" discard="unmap"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfswUotpE/overlay1.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfs8B4jxg/console.sock"/>\n      <target 
port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfs8B4jxg/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <interface type="user">\n      <backend type="passt"/>\n      <model type="virtio"/>\n      <ip family="ipv4" address="169.254.2.15" prefix="16"/>\n    </interface>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: trace: v2v: get_cachedir
libguestfs: trace: v2v: get_cachedir = "/var/tmp"
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-107
libguestfs: /var/tmp/.guestfs-107:
libguestfs: total 12
libguestfs: drwxr-xr-x 3 qemu qemu ? 4096 Jan 28 04:24 .
libguestfs: drwxrwxrwt 1 root root ? 4096 Jan 28 04:24 ..
libguestfs: drwxr-xr-x 2 qemu qemu ? 4096 Jan 28 04:24 appliance.d
libguestfs: -rw-r--r-- 1 qemu qemu ?    0 Jan 28 04:24 lock
libguestfs:
libguestfs: /var/tmp/.guestfs-107/appliance.d:
libguestfs: total 287808
libguestfs: drwxr-xr-x 2 qemu qemu ?       4096 Jan 28 04:24 .
libguestfs: drwxr-xr-x 3 qemu qemu ?       4096 Jan 28 04:24 ..
libguestfs: -rw-r--r-- 1 qemu qemu ?    7590912 Jan 28 04:24 initrd
libguestfs: -rwxr-xr-x 1 qemu qemu ?   15136808 Jan 28 04:24 kernel
libguestfs: -rw-r--r-- 1 qemu qemu ? 4294967296 Jan 28 04:24 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfs8B4jxg
libguestfs: total 8
libguestfs: drwx------ 2 qemu qemu ? 4096 Jan 28 04:24 .
libguestfs: drwxrwxrwt 1 root root ? 4096 Jan 28 04:24 ..
libguestfs: srwxr-xr-x 1 qemu qemu ?    0 Jan 28 04:24 console.sock
libguestfs: srwxr-xr-x 1 qemu qemu ?    0 Jan 28 04:24 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: trace: v2v: launch = -1 (error)
virt-v2v: error: libguestfs error: could not create appliance through libvirt. Original error from libvirt: internal error: Child process (passt --one-off --socket /home/qemu/.cache/libvirt/qemu/run/passt/1-guestfs-lmot1e4n4q6t-net0.socket --pid /home/qemu/.cache/libvirt/qemu/run/passt/1-guestfs-lmot1e4n4q6t-net0-passt.pid --address 169.254.2.15 --netmask 16) unexpected exit status 1: No interfaces with usable IPv6 routes
IPv6: no external interface as template, use local mode
UNIX domain socket bound at /home/qemu/.cache/libvirt/qemu/run/passt/1-guestfs-lmot1e4n4q6t-net0.socket
Couldn't create user namespace: Operation not permitted
 [code=1 int1=-1]
rm -rf -- '/tmp/v2vnbdkit.SC3CtX'
nbdkit: debug: cow: cleanup
nbdkit: debug: retry: cleanup
nbdkit: debug: curl: cleanup
nbdkit: debug: curl: unload plugin
nbdkit: debug: retry: unload filter
nbdkit: debug: cow: unload filter
rm -rf -- '/tmp/v2v.s5UQ5e'
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x55f510ff5270 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfswUotpE
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfs8B4jxg
Error executing v2v command: exit status 1
Failed to execute virt-v2v command exit status 1
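The decisive lines are near the end: passt exits with "No interfaces with usable IPv6 routes" (harmless, it falls back to local mode) and then "Couldn't create user namespace: Operation not permitted", which is fatal. libvirt launches passt as an unprivileged user, and passt tries to sandbox itself in a user namespace, which the node kernel or the pod's security profile is denying. A hedged diagnostic sketch to run on the Harvester node or inside the conversion pod (sysctl names vary by kernel and distribution):

```shell
# Check whether unprivileged user namespace creation is allowed.
# 0 here means user namespaces are disabled entirely.
cat /proc/sys/user/max_user_namespaces 2>/dev/null || true
# Debian/Ubuntu-patched kernels have an extra toggle (may not exist):
sysctl kernel.unprivileged_userns_clone 2>/dev/null || true
# Functional probe: succeeds only if an unprivileged userns can be created,
# which is exactly what passt attempts before libvirt gives up.
unshare --user --map-root-user true 2>/dev/null \
  && echo "userns creation works" \
  || echo "userns creation blocked"
```

If the probe is blocked, the fix is on the host or pod side (sysctl, AppArmor/seccomp profile), not in the Forklift plan itself.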
