Can't restore VM in Proxmox #62

@paprikkafox

Description

Environment:

  • 3 HV nodes with Linstor installed in HA mode (drbd-reactor)
  • Each node has 3 disks in pool 'ssd_zpool1' (2 servers with 3 SSDs and 1 server with 2 SSDs + 1 HDD)
  • 2 servers have an 'hdd_zpool1' pool with 8 TB HDDs and 1 server has the same pool with 2 TB HDDs
  • Each pool is based on ZFS thin volumes (see the pool-creation sketch after the table below)
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node       ┊ Driver   ┊ PoolName   ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ SRVDMPVE01 ┊ DISKLESS ┊            ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ SRVDMPVE02 ┊ DISKLESS ┊            ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ SRVDMPVE03 ┊ DISKLESS ┊            ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ hdd_zpool1           ┊ SRVDMPVE01 ┊ ZFS      ┊ hdd_zpool1 ┊     6.11 TiB ┊      7.27 TiB ┊ True         ┊ Ok    ┊            ┊
┊ hdd_zpool1           ┊ SRVDMPVE02 ┊ ZFS      ┊ hdd_zpool1 ┊     1.76 TiB ┊      1.81 TiB ┊ True         ┊ Ok    ┊            ┊
┊ hdd_zpool1           ┊ SRVDMPVE03 ┊ ZFS      ┊ hdd_zpool1 ┊     6.11 TiB ┊      7.27 TiB ┊ True         ┊ Ok    ┊            ┊
┊ ssd_zpool1           ┊ SRVDMPVE01 ┊ ZFS      ┊ ssd_zpool1 ┊     1.75 TiB ┊      2.72 TiB ┊ True         ┊ Ok    ┊            ┊
┊ ssd_zpool1           ┊ SRVDMPVE02 ┊ ZFS      ┊ ssd_zpool1 ┊     1.74 TiB ┊      2.72 TiB ┊ True         ┊ Ok    ┊            ┊
┊ ssd_zpool1           ┊ SRVDMPVE03 ┊ ZFS      ┊ ssd_zpool1 ┊     1.74 TiB ┊      2.72 TiB ┊ True         ┊ Ok    ┊            ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
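For reference, storage pools like these are normally registered per node with the Linstor client. A minimal sketch, assuming the zfsthin provider (the Driver column above shows 'ZFS', so the thick 'zfs' provider may actually be in use; the node and zpool names are taken from the table):

# Sketch: register a ZFS-backed storage pool on each node
# (swap 'zfsthin' for 'zfs' if the pools are thick-provisioned)
linstor storage-pool create zfsthin SRVDMPVE01 ssd_zpool1 ssd_zpool1
linstor storage-pool create zfsthin SRVDMPVE02 ssd_zpool1 ssd_zpool1
linstor storage-pool create zfsthin SRVDMPVE03 ssd_zpool1 ssd_zpool1
# ...and the same for hdd_zpool1 on each node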

Software Versions:

  • Proxmox Virtual Environment 7.4-3
  • Linstor stack - 1.18.0; GIT-hash: 9a2f939169b360ed3daa3fa2623dc3baa22cb509

Proxmox Plugin config:

drbd: net-vz-data
        resourcegroup hot_data
        content rootdir,images
        controller MULTIPLE_IPS_OF_CONTROLLERS
        preferlocal true
        statuscache 5
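For completeness, the 'hot_data' resource group referenced above would have been created along these lines. This is a sketch only; the placement count of 3 and the ssd_zpool1 storage pool are assumptions, not copied from my setup:

# Sketch: resource group behind the 'resourcegroup hot_data' line above
# (--place-count 3 and --storage-pool ssd_zpool1 are assumptions)
linstor resource-group create hot_data --storage-pool ssd_zpool1 --place-count 3
linstor volume-group create hot_data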

Problem:

When I create and then try to restore a backup of a VM that has TPM 2.0 and EFI storage enabled, I get an error about mismatched disk sizes for the EFI disk (used to store EFI vars):

vma: vma_reader_register_bs for stream drive-efidisk0 failed - unexpected size 5242880 != 540672

Full restore log:

restore vma archive: zstd -q -d -c /mnt/pve/net-share-01/dump/vzdump-qemu-101-2023_06_05-11_08_41.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1491351.fifo - /var/tmp/vzdumptmp1491351
CFG: size: 781 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 8589934592 devname: drive-scsi0
DEV: dev_id=3 size: 5242880 devname: drive-tpmstate0-backup
CTIME: Mon Jun  5 11:08:43 2023
new volume ID is 'net-vz-data:vm-101-disk-1'
new volume ID is 'net-vz-data:vm-101-disk-2'
new volume ID is 'net-vz-data:vm-101-disk-3'
map 'drive-efidisk0' to '/dev/drbd/by-res/vm-101-disk-1/0' (write zeros = 1)
map 'drive-scsi0' to '/dev/drbd/by-res/vm-101-disk-2/0' (write zeros = 1)
map 'drive-tpmstate0-backup' to '/dev/drbd/by-res/vm-101-disk-3/0' (write zeros = 1)
vma: vma_reader_register_bs for stream drive-efidisk0 failed - unexpected size 5242880 != 540672
/bin/bash: line 1: 1491353 Broken pipe             zstd -q -d -c /mnt/pve/net-share-01/dump/vzdump-qemu-101-2023_06_05-11_08_41.vma.zst
     1491354 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp1491351.fifo - /var/tmp/vzdumptmp1491351
temporary volume 'net-vz-data:vm-101-disk-2' sucessfuly removed
temporary volume 'net-vz-data:vm-101-disk-1' sucessfuly removed
temporary volume 'net-vz-data:vm-101-disk-3' sucessfuly removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 101 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/net-share-01/dump/vzdump-qemu-101-2023_06_05-11_08_41.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1491351.fifo - /var/tmp/vzdumptmp1491351' failed: exit code 133
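The restore fails where vma compares the size it recorded for drive-efidisk0 (540672 bytes) with the size of the block device the plugin handed back. A rough way to compare the two sides while the devices still exist (a diagnostic sketch; the resource name matches the mapping lines in the log, since the temporary volumes above were removed after the failure):

EXPECTED=540672                           # drive-efidisk0 size from the vma header above
DEV=/dev/drbd/by-res/vm-101-disk-1/0      # device that drive-efidisk0 was mapped to
ACTUAL=$(blockdev --getsize64 "$DEV")     # real size of the DRBD device
echo "expected=$EXPECTED actual=$ACTUAL"
linstor volume list | grep vm-101-disk-1  # Linstor's view of the allocated size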

I think the problem is somehow tied to ZFS thin provisioning and the related functionality in Linstor. Please tell me if I'm doing something wrong.
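If it is useful, the ZFS side can be checked as well. A sketch of what I would look at on a node holding a replica; the <resource>_00000 dataset naming is my assumption about how Linstor names its zvols:

# Sketch: inspect the backing zvols in the pool (run on a node with a replica)
zfs list -r -t volume -o name,volsize,refreservation,volblocksize ssd_zpool1
# single volume, assuming Linstor's <resource>_00000 naming
zfs get -H volsize ssd_zpool1/vm-101-disk-1_00000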
