
backup.sh seems to mess up zfs pool (multiple datasets) #7

@pefribeiro

Hi,

Thanks for the nice guide. I've followed most of the steps exactly as described, up to the point where I got the initial backup from my FreeNAS installation to an Intel NUC running FreeBSD 11.0-RELEASE-p9. Note that I have more than one dataset, but only one volume.

I then downloaded your backup.sh script, and after running it a few times the remote system eventually becomes unable to gracefully export the pool. For example:

local@freenas:~ % echo "master@2017-06-11T23:13:19Z" > .sunflower-last-sent 
local@freenas:~ % ./backup.sh snapback
Estimated Size: 47.6M
46.8MiB 0:00:04 [11.6MiB/s] [==============================================================================================================================>   ] 98%            
local@freenas:~ % ./backup.sh snapback
Estimated Size: 30.5K
74.8KiB 0:00:00 [82.2KiB/s] [=================================================================================================================================] 245%            
cannot export 'master': pool is busy
geli: Cannot destroy device da0.eli (error=16).
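For context, here is my understanding of what the sending side does. This is a hypothetical sketch: the state-file name and snapshot-name format are taken from the transcript above, but everything else (the `snapback` host alias, the exact send flags) is my assumption, not backup.sh's actual code.

```shell
# Hypothetical send side (NOT the real backup.sh): incremental zfs send
# from the last recorded snapshot to a fresh one.
last=$(cat .sunflower-last-sent)               # e.g. master@2017-06-11T23:13:19Z
now="master@$(date -u +%Y-%m-%dT%H:%M:%SZ)"    # new snapshot name
zfs snapshot -r "$now"
zfs send -R -i "$last" "$now" | pv | ssh snapback ./zfs-receive.sh
echo "$now" > .sunflower-last-sent             # record what was sent
```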

I've tried changing geli's command not to 'auto-detach', following this report, but even then I still get the same issue.

Following a reboot of the target system, the pool then shows as 'UNAVAIL':

root@sunflower:~ # zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
master      -      -      -         -      -      -      -  UNAVAIL  -

root@sunflower:~ # zpool status
  pool: master
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
	replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

	NAME                   STATE     READ WRITE CKSUM
	master                 UNAVAIL      0     0     0
	  1568430436390657639  UNAVAIL      0     0     0  was /dev/da0.eli

I then manually copy the key to the target system and ask geli to attach the volume.

It then comes back online:

root@sunflower:~ # zpool status
  pool: master
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	master      ONLINE       0     0     0
	  da0.eli   ONLINE       0     0     0
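For reference, the manual recovery described above amounts to something like the following. The device name and key path are taken from the outputs and the script in this issue; whether `zpool online` or a fresh `zpool import` is needed depends on the pool state, so treat this as a sketch.

```shell
# Re-attach the geli layer so /dev/da0.eli exists again
geli attach -pk /tmp/k /dev/da0
# If the pool is still imported (but UNAVAIL), onlining the device
# should suffice:
zpool online master da0.eli
# Otherwise, import it fresh:
# zpool import master
```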

However, if I then mount the volume with `zfs mount master`, I can only see the top-level folders corresponding to each of the original datasets, but none of the files or folders saved within them, which suggests to me that something is seriously messed up.

I then do `zfs umount master` followed by `zpool export master` and `zpool import master`. At this point the contents of the datasets are visible again at the mount point.
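A possible explanation for the 'empty' mountpoints (my assumption, not verified): after an import with `-N`, `zfs mount master` mounts only the root dataset, while the child datasets stay unmounted, so their mountpoint directories look empty until everything is mounted.

```shell
zpool import -N master   # -N: import the pool without mounting any datasets
zfs mount master         # mounts ONLY the root dataset; children stay unmounted
zfs mount -a             # mounts all datasets, so the child data reappears
# a plain 'zpool import master' mounts everything in one step
```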

I am not sure what causes the pool to become 'unexportable' in the first place. I'm fairly new to ZFS, so I would appreciate some guidance as to whether I'm doing something wrong with multiple datasets and your backup.sh script. For reference, here is my current `zfs-receive.sh` script:

#!/bin/sh

geli attach -pk /tmp/k /dev/da0    # attach the encrypted device
zpool export master 2>/dev/null    # export in case it is already imported
zpool import -N master             # import without mounting any datasets
zfs receive -Fu master             # -F: rollback if needed; -u: do not mount
zpool export master
geli detach /dev/da0

Thanks
