A plugin for OpenMediaVault that provides ZFS pool, filesystem, volume, and snapshot management through the OMV web interface.
- Pool management — create, expand, import, export, and delete ZFS pools with support for basic, mirror, RAIDZ1, RAIDZ2, and RAIDZ3 topologies
- Filesystems — create and delete ZFS filesystems with optional custom mountpoints; nested filesystems supported
- Volumes (zvols) — create thick or thin-provisioned block device volumes
- Snapshots — create, roll back, and delete snapshots; clone filesystems from snapshots
- Properties — view and modify ZFS properties on pools, filesystems, volumes, and snapshots
- Scrub — initiate pool integrity scrubs from the UI
- Discover — synchronise the OMV fstab database with the live ZFS state:
  - Add new — register any ZFS datasets not yet known to OMV
  - Add new + delete missing — full sync in both directions
  - Delete missing — remove OMV fstab entries for datasets that no longer exist
- ARC statistics — dashboard widget showing ZFS ARC hit ratio and cache size
- Encryption — enable, load, unload, and change encryption keys on datasets
```
<zpool>  ::= <vdev> [<log>] [<cache>] [<spare>]
<vdev>   ::= <basic> | <mirror> | <raidz1> | <raidz2> | <raidz3>
<basic>  ::= "disk"
<mirror> ::= "disk" "disk" ["disk" ...]                   (≥ 2 disks)
<raidz1> ::= "disk" "disk" "disk" ["disk" ...]            (≥ 3 disks)
<raidz2> ::= "disk" "disk" "disk" "disk" [...]            (≥ 4 disks)
<raidz3> ::= "disk" "disk" "disk" "disk" "disk" [...]     (≥ 5 disks)
<log>    ::= "internal" | <basic> | <mirror>
<cache>  ::= "internal" | <basic>
<spare>  ::= <basic> ["disk" ...]
```
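The minimum disk counts in the grammar above can be enforced with a small validation helper. This is an illustrative sketch, not plugin code; the function names `min_disks` and `check_layout` are hypothetical:

```shell
#!/bin/bash
# Minimum disk count implied by the grammar for each vdev type.
min_disks() {
  case "$1" in
    basic)  echo 1 ;;
    mirror) echo 2 ;;
    raidz1) echo 3 ;;
    raidz2) echo 4 ;;
    raidz3) echo 5 ;;
    *)      echo "unknown topology: $1" >&2; return 1 ;;
  esac
}

# Refuse a pool layout that violates the minimum.
check_layout() {  # usage: check_layout <topology> <disk>...
  local topo="$1"; shift
  local need
  need=$(min_disks "$topo") || return 1
  if [ "$#" -lt "$need" ]; then
    echo "$topo requires at least $need disks, got $#" >&2
    return 1
  fi
  echo "ok: $topo with $# disks"
}
```

For example, `check_layout mirror sda sdb` succeeds while `check_layout raidz1 sda sdb` fails, matching the ≥ 3 disk rule for raidz1.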
`tests/test-rpc.sh` is an end-to-end integration test that exercises the plugin's RPC methods against a real ZFS pool.
- Run as root
- One or more block devices that can be wiped (use loop devices, LVM logical volumes, or spare disks)
- `python3` available on PATH (standard on all OMV installations)
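If no spare disks are available, disposable loop devices can back the test run. A sketch using sparse image files (the `/tmp` paths are examples; attaching requires root):

```shell
# Create two disposable 1 GiB sparse images to back loop devices
truncate -s 1G /tmp/zfs-test-0.img /tmp/zfs-test-1.img

# Then, as root, attach them and pass the printed devices to the test:
#   losetup -f --show /tmp/zfs-test-0.img   # prints e.g. /dev/loop0
#   losetup -f --show /tmp/zfs-test-1.img   # prints e.g. /dev/loop1

# Confirm the sparse image has the requested apparent size
stat -c %s /tmp/zfs-test-0.img   # 1073741824
```

Detach with `losetup -d` and delete the images when finished.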
```shell
sudo tests/test-rpc.sh /dev/sdX [/dev/sdY ...]
```

The pool topology is chosen automatically based on how many devices are supplied:
| Devices | Pool type |
|---|---|
| 1 | basic |
| 2 | mirror |
| 3–4 | raidz1 |
| 5+ | raidz2 |
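The device-count-to-topology mapping in the table above amounts to a simple case statement. A sketch (the helper name `pick_topology` is hypothetical, not taken from the test script):

```shell
#!/bin/bash
# Map the number of supplied devices to the pool type from the table above.
pick_topology() {  # usage: pick_topology <device-count>
  case "$1" in
    1)   echo basic ;;
    2)   echo mirror ;;
    3|4) echo raidz1 ;;
    *)   echo raidz2 ;;   # 5 or more devices
  esac
}
```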
The pool and all datasets are destroyed on exit regardless of whether the tests pass or fail.
Calls that require no pool and verify the plugin can communicate with the engine:
- `getStats` — reads ARC hit/miss counters and cache size from `/proc`
- `listCompressionTypes` — returns the list of available compression algorithms
- `getEmptyCandidates` — returns unused, unpartitioned block devices
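The ARC hit ratio can be derived from the `hits` and `misses` kstats that `/proc/spl/kstat/zfs/arcstats` exposes. A sketch with sample data inlined so it runs without ZFS loaded (the computation, not the plugin's actual parser):

```shell
#!/bin/bash
# Compute the ARC hit ratio from arcstats-style input on stdin.
# Each kstat line has the form: <name> <type> <value>
arc_hit_ratio() {
  awk '$1 == "hits"   { h = $3 }
       $1 == "misses" { m = $3 }
       END { printf "%.1f\n", 100 * h / (h + m) }'
}

# Sample input standing in for /proc/spl/kstat/zfs/arcstats:
arc_hit_ratio <<'EOF'
hits                            4    900
misses                          4    100
EOF
```

With the sample counters above (900 hits, 100 misses) the ratio is 90.0%; on a live system the function would read the real arcstats file instead.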
- `addPool` — creates a pool on the supplied devices using the selected topology; verified independently with `zpool list`
- `listPools` / `listPoolsBg` — enumerate pools and their status, size, and mountpoint; the background variant is also exercised
- `getObjectDetails` — retrieves raw `zpool status` and `zpool get all` output for the pool
- `getProperties` — reads all ZFS filesystem properties (compression, quota, atime, etc.) on the pool root dataset
- `setProperties` — sets `compression=lz4` on the pool root dataset via `zfs set`
- `scrubPool` — starts a pool scrub
- `addObject` (filesystem) — creates `fs1` with the default mountpoint
- `addObject` (filesystem, custom mountpoint) — creates `fs2` at a custom path
- `addObject` (nested filesystem) — creates `fs1/child` to verify nested dataset creation
- `getObjectDetails` — reads `zfs get all` for a filesystem
- `getProperties` — reads all properties for a filesystem, verified to include `compression`
- `setProperties` — sets `compression=lz4` and `atime=off` on the filesystem
- `addObject` (snapshot) — snapshots `fs1` as `fs1@snap1`
- `getAllSnapshots` / `getAllSnapshotsBg` — lists all snapshots; verified to include the new snapshot
- `getObjectDetails` — reads `zfs get all` for the snapshot
- `rollbackSnapshot` — rolls back `fs1` to `snap1`; a marker file written after the snapshot was taken is verified to be absent after rollback
- `deleteObject` (snapshot) — removes `fs1@snap1`
- `addObject` (snapshot) — creates `fs1@snap2` as the clone source
- `addObject` (clone) — clones `fs1@snap2` into `clone1`
- `deleteObject` (clone) — destroys the clone filesystem
- `deleteObject` (snapshot) — removes `fs1@snap2`
- `addObject` (volume, thick) — creates a 100 MiB thick-provisioned zvol
- `addObject` (volume, thin) — creates a 100 MiB thin-provisioned (sparse) zvol
- `getObjectDetails` — verifies `volsize` appears in the output
- `getProperties` — reads zvol properties
- `deleteObject` ×2 — removes both volumes
- A filesystem is created directly via the ZFS CLI, bypassing the plugin
- `doDiscoverBg` (addMissing=true) — the plugin scans for unregistered datasets and adds the missing fstab entry
- The OMV FsTab database is queried to confirm the entry was created
- The CLI-created filesystem from the previous step is destroyed via the ZFS CLI, leaving a stale OMV fstab entry behind
- `doDiscoverBg` (deleteStale=true) — the plugin removes fstab entries for datasets that no longer exist
- The OMV FsTab database is queried to confirm the entry was removed
- A filesystem is registered via the plugin and then destroyed via the CLI, creating a stale entry
- `doDiscoverBg` (addMissing=true, deleteStale=false) — verifies the operation completes without error even when stale entries are present (regression test for a bug where `zfs get` was called on non-existent datasets)
- The stale entry is cleaned up with a subsequent `deleteStale` call
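At its core, the two Discover directions are a set difference between the datasets ZFS reports and the datasets OMV already knows about. A sketch using `comm(1)` on sorted lists (the function names are hypothetical; the plugin actually queries OMV's configuration database):

```shell
#!/bin/bash
# $1 = newline-separated datasets reported live by ZFS
# $2 = newline-separated datasets registered in the OMV fstab database

# "Add new": live datasets not yet known to OMV
discover_add_new()        { comm -23 <(sort <<<"$1") <(sort <<<"$2"); }

# "Delete missing": OMV entries whose dataset no longer exists
discover_delete_missing() { comm -13 <(sort <<<"$1") <(sort <<<"$2"); }

live=$'tank/fs1\ntank/fs2'
known=$'tank/fs2\ntank/stale'
discover_add_new        "$live" "$known"   # tank/fs1
discover_delete_missing "$live" "$known"   # tank/stale
```

"Add new + delete missing" applies both differences in one pass.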
- Child datasets are removed to allow a clean export
- `exportPool` — exports the pool; `listPools` is queried to confirm the pool is no longer visible
- `importPool` — imports the pool by name; `listPools` is queried to confirm it is visible again
- `deleteObjectBg` (Pool) — destroys the pool; verified with `zpool list`