The nvmetcp plugin allows an OpenMediaVault (OMV) system to act as an NVMe-over-TCP target, using the Linux kernel's built-in nvmet subsystem.
It provides a web UI to configure ports, subsystems, namespaces, and host access, and generates /etc/nvmet/config.json automatically via SaltStack.
- Install the plugin via OMV-Extras → Plugins → openmediavault-nvmetcp
- Go to Services → NVMe/TCP → Settings
  - Enable the plugin
  - Leave auto-associate ON (default)
- Go to the Ports tab → Add
  - Port ID: 1
  - IP: 0.0.0.0
  - TCP Port: 4420
- Go to the Subsystems tab → Add
  - Name: fastssd
  - Namespace → Device: /dev/sdb (or a ZFS zvol, LV, etc.)
  - (Hosts are optional if "Allow any host" is ON)
- Click **Apply** in OMV to deploy the config
- On a Linux host:

  sudo modprobe nvme-tcp
  sudo nvme discover -t tcp -a <OMV_IP> -s 4420
  sudo nvme connect -t tcp -a <OMV_IP> -s 4420 -n nqn.2025-10.io.omv:fastssd
  lsblk   # will show /dev/nvme0n1

You're now serving a raw NVMe device over the network at kernel-level speeds. For a persistent setup, see the initiator examples below.
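If the initiator can't see anything, it is worth confirming the target side first. A minimal check on the OMV box (sketch; the paths are the kernel's standard nvmet configfs layout, not plugin-specific files):

```bash
# Live nvmet state as seen by the kernel (ports, subsystems, namespaces)
ls /sys/kernel/config/nvmet/ports/
ls /sys/kernel/config/nvmet/subsystems/

# Confirm the TCP listener is bound (4420 in this example)
ss -ltn | grep 4420
```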
| Requirement | Notes |
|---|---|
| OMV 7+ | Available via the omv-extras repo |
| Linux kernel with nvmet + nvmet-tcp | Included by default in Debian 12+ |
| NVMe-TCP capable initiator | Linux nvme-cli, Proxmox, ESXi 8, Windows 11, etc. |
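To confirm the kernel requirement on the OMV host, a quick sanity check (sketch):

```bash
# Both target modules should load cleanly on a Debian 12+ kernel
sudo modprobe nvmet
sudo modprobe nvmet-tcp
lsmod | grep nvmet
```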
| Tab | Purpose |
|---|---|
| Settings | Global options: enable/disable, auto-associate, org domain/date for NQNs |
| Ports | Configure NVMe-TCP listener endpoints |
| Subsystems | Create subsystems, namespaces, allowed hosts |
| Associations (optional) | Manually link subsystems → ports (only if auto-associate is disabled) |
All changes mark the configuration dirty and require clicking Apply in OMV.
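After clicking Apply, SaltStack regenerates /etc/nvmet/config.json and reloads the target. A quick way to confirm the deployment landed (sketch; nvmet.service is the unit named in the Settings table below):

```bash
# Did the target service come back up cleanly?
systemctl status nvmet.service

# Inspect the configuration the plugin generated
cat /etc/nvmet/config.json
```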
| Field | Description |
|---|---|
| Enable | Master on/off switch for nvmet.service |
| Auto-associate | If enabled, all ports are auto-linked to all subsystems |
| Organization domain | Reverse DNS used for generated NQNs (e.g. io.omv) |
| Organization date | Included in the NQN (default 2014-08) |
| Input | Result |
|---|---|
| Name: fastssd | nqn.2025-10.io.omv:fastssd |
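The generated NQN is simply the organization date, the organization domain, and the subsystem name joined together. A sketch of the pattern (the shell variables are illustrative, not plugin settings):

```bash
ORG_DATE="2025-10"     # Organization date from Settings
ORG_DOMAIN="io.omv"    # Organization domain from Settings
NAME="fastssd"         # Subsystem name

# Pattern: nqn.<org-date>.<org-domain>:<name>
echo "nqn.${ORG_DATE}.${ORG_DOMAIN}:${NAME}"
# -> nqn.2025-10.io.omv:fastssd
```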
Defines TCP listeners where initiators connect.
| Field | Description |
|---|---|
| Port ID | Integer ID (1, 2, 3…) |
| IP Address | Listening address (0.0.0.0 allowed) |
| TCP Port | Usually 4420 |
| Queue Count | Number of I/O queues |
| Queue Size | Queue depth |

Multiple ports are supported, and each can bind to a different interface.
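Under the hood, each port is an entry in the kernel's nvmet configfs tree. A rough manual equivalent of the port created in the quick start, for illustration only (the plugin manages this for you from config.json):

```bash
# Run as root on the target; mirrors Port ID 1, listening on 0.0.0.0:4420
mkdir /sys/kernel/config/nvmet/ports/1
echo tcp     > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo ipv4    > /sys/kernel/config/nvmet/ports/1/addr_adrfam
echo 0.0.0.0 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo 4420    > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
```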
A subsystem is similar to an iSCSI target.
| Field | Description |
|---|---|
| Name | Base name for the generated NQN (if no NQN is specified) |
| NQN | Optional; auto-generated if blank |
| Model | Model name reported to initiators |
| Serial | Must be unique per subsystem |
| Allow any host | If off, hosts must be explicitly added |
| Field | Description |
|---|---|
| NSID | Namespace ID (integer, usually 1) |
| Device | /dev/sdX, /dev/nvme0n1, /dev/mapper/vg-lv, etc. |
| Size override | Optional (blank = full device) |
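Likewise, a subsystem and its namespaces map to nvmet configfs entries. A rough manual equivalent of the quick-start subsystem, for illustration only (the plugin generates this from the web UI):

```bash
# Run as root on the target
NQN=nqn.2025-10.io.omv:fastssd

# Create the subsystem; "1" mirrors the "Allow any host" toggle
mkdir /sys/kernel/config/nvmet/subsystems/$NQN
echo 1 > /sys/kernel/config/nvmet/subsystems/$NQN/attr_allow_any_host

# Namespace 1 backed by /dev/sdb
mkdir /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1
echo /dev/sdb > /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1/device_path
echo 1        > /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1/enable
```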
Shown only if Auto-associate is disabled.
| Port | Subsystem |
|---|---|
| 1 | fastssd |
| 2 | backupzvol |
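In the kernel's configfs model, an association is just a symlink from a port to a subsystem. An illustrative equivalent of the "1 / fastssd" row above (the plugin creates these links for you):

```bash
# Expose subsystem fastssd on port 1
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2025-10.io.omv:fastssd \
      /sys/kernel/config/nvmet/ports/1/subsystems/
```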
Example generated /etc/nvmet/config.json:

{
"ports": {
"1": {
"addr_trtype": "tcp",
"addr_traddr": "192.168.10.100",
"addr_trsvcid": "4420",
"addr_adrfam": "ipv4"
}
},
"subsystems": {
"nqn.2025-10.io.omv:fastssd": {
"allow_any_host": false,
"namespaces": [
{
"nsid": 1,
"device": "/dev/sdb"
}
],
"hosts": [
"nqn.2014-08.org.nvme:host1"
]
}
},
"associations": [
["1", "nqn.2025-10.io.omv:fastssd"]
]
}

Connecting from a Linux initiator:

sudo modprobe nvme-tcp
sudo nvme discover -t tcp -a 192.168.10.100 -s 4420
sudo nvme connect -t tcp -a 192.168.10.100 -s 4420 \
  -n nqn.2025-10.io.omv:fastssd

Device appears as /dev/nvme0n1.
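Because allow_any_host is false in this example, the initiator must present the host NQN listed under "hosts". A minimal check on the Linux initiator (sketch; the NQN is just the value from the config above):

```bash
# What host NQN does this initiator present to targets?
cat /etc/nvme/hostnqn

# If needed, make it match the allowed host in the subsystem config
echo "nqn.2014-08.org.nvme:host1" | sudo tee /etc/nvme/hostnqn
```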
| Issue | Fix |
|---|---|
| nvmet.service fails to start | Check the kernel modules: `lsmod \| grep nvmet`, and load them with `modprobe nvmet-tcp` if missing |
| No device detected on initiator | Ensure a namespace and an association exist |
| "Host not allowed" error | Add the host NQN to the subsystem |
| Windows doesn't see the device | Requires a 4K block size and a single namespace |
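For target-side troubleshooting, these checks cover most of the issues above (sketch):

```bash
# Service state and recent logs
systemctl status nvmet.service
journalctl -u nvmet.service -b

# Kernel target modules present?
lsmod | grep -E 'nvmet|nvmet_tcp'

# Does the live configfs state match the web UI configuration?
ls -R /sys/kernel/config/nvmet/
```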
- Use a dedicated NIC / VLAN for NVMe traffic
- Prefer ZFS zvols or LVM LVs as backing devices
- Keep NQNs stable (changing them breaks initiators)
- Keep serial numbers unique
GPLv3, the same license as the OpenMediaVault core plugins.
Below are practical examples for common initiators. Replace placeholders like <TARGET_IP>, <PORT>, and <SUBSYSTEM_NQN> with your actual values.
# 1) Ensure the kernel initiator is available
sudo modprobe nvme-tcp
# 2) (Optional) Set/verify host NQN (appears on target as an allowed host)
sudo cat /etc/nvme/hostnqn || sudo nvme gen-hostnqn | sudo tee /etc/nvme/hostnqn
# 3) Discover the target
sudo nvme discover -t tcp -a <TARGET_IP> -s <PORT>
# 4) Connect to a specific subsystem
sudo nvme connect -t tcp -a <TARGET_IP> -s <PORT> -n <SUBSYSTEM_NQN>
# 5) Verify block device
lsblk
# -> /dev/nvme0n1 (or similar)

Disconnect when done:

sudo nvme disconnect -n <SUBSYSTEM_NQN>

Create a per-subsystem systemd unit so it reconnects on boot.
/etc/systemd/system/[email protected]
[Unit]
Description=NVMe/TCP Connect to %i
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/nvme connect -t tcp -a <TARGET_IP> -s <PORT> -n %i
ExecStop=/usr/bin/nvme disconnect -n %i
[Install]
WantedBy=multi-user.target

Enable for a given NQN:
sudo systemctl daemon-reload
sudo systemctl enable --now nvme-connect@<SUBSYSTEM_NQN>.service

nvme-stas provides stafd (finder) and stacd (connector) for NVMe/TCP. Configure a Central or Direct Discovery Controller (CDC/DDC) and let the client auto-(re)connect.
/etc/stas/stas.conf (minimal static discovery example; file names and syntax can vary between nvme-stas versions, so check the sample configs shipped in /etc/stas/)
# stacd: connect I/O controllers discovered from a Discovery Controller
[stacd]
# Add one or more discovery entries (IP + TCP port)
discovery-controller = [
{ transport = "tcp", traddr = "<TARGET_IP>", trsvcid = "<PORT>", adrfam = "ipv4" }
]
# Optional: set host NQN explicitly
# hostnqn = "/etc/nvme/hostnqn"

Activate:
sudo systemctl enable --now stafd.service stacd.service
journalctl -u stacd -u stafd -f

Proxmox uses the Linux initiator, so the steps are the same as in Section 1. Recommended specifics:
# Load initiator at boot
echo nvme-tcp | sudo tee /etc/modules-load.d/nvme-tcp.conf
# Optional: increase queue depth or tune sysctls as needed
# Discover & connect (as root on the PVE host)
nvme discover -t tcp -a <TARGET_IP> -s <PORT>
nvme connect -t tcp -a <TARGET_IP> -s <PORT> -n <SUBSYSTEM_NQN>
# Verify new device and add as a PV/LVM or ZFS member if appropriate
lsblk

For persistent connections across reboots, use the systemd unit method or STAS.
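To consume the connected namespace as Proxmox storage, one common pattern is LVM on top of the NVMe device. A sketch, assuming the device appears as /dev/nvme0n1 and using placeholder storage/VG names:

```bash
# Create an LVM volume group on the NVMe/TCP device (destroys existing data!)
pvcreate /dev/nvme0n1
vgcreate nvme_vg /dev/nvme0n1

# Register it as a Proxmox storage backend ("nvme-tcp-lvm" is an arbitrary ID)
pvesm add lvm nvme-tcp-lvm --vgname nvme_vg --content images,rootdir
```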
On ESXi 8.x, NVMe-over-TCP is supported. Use esxcli:
# 1) Discover
esxcli nvme discover -t tcp -a <TARGET_IP> -s <PORT>
# 2) Add the controller (bind to subsystem NQN)
esxcli nvme controller add -t tcp -a <TARGET_IP> -s <PORT> -n <SUBSYSTEM_NQN>
# 3) Verify
esxcli nvme controller list
esxcli storage core adapter list
esxcli storage core device list

To remove:
esxcli nvme controller remove -A <ControllerID>

In the vSphere Client, you can also add an NVMe/TCP adapter and then set up Discovery and Controllers via the UI.
Windows includes an NVMe/TCP initiator in recent builds. Administrative steps generally involve:
- Ensure your NIC and OS build support NVMe/TCP.
- Add a Discovery Controller and then Connect to Subsystem (via OS UI or vendor-provided tools).
- Use a 4K block-size namespace (Windows is picky about sector size).
- Single namespace per subsystem is the most compatible configuration.
Note: Windows tooling and UI can vary by build and vendor. If you use a vendor-provided NVMe/TCP utility, follow its "Add Discovery Controller" → "Connect Subsystem" workflow with:

- Transport: TCP
- TrAddr: <TARGET_IP>
- TrSvcId (Port): <PORT> (default 4420)
- NQN: <SUBSYSTEM_NQN>
For HA targets (multiple ports or paths), enable device-mapper multipathing:
sudo apt-get install multipath-tools
sudo systemctl enable --now multipathd
sudo multipath -ll

Ensure you connect to multiple target IP:port paths for the same NQN. The kernel will expose a single multipath device (e.g., /dev/mapper/nvmeXnY) if eligible.
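A minimal /etc/multipath.conf to start from, assuming the distribution defaults are otherwise acceptable (sketch; tune for your environment):

```bash
# Write a minimal multipath configuration and reload the daemon
sudo tee /etc/multipath.conf >/dev/null <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF

sudo systemctl restart multipathd
sudo multipath -ll
```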
# List current NVMe connections / controllers / namespaces
nvme list
nvme list-subsys
nvme list-ctrl
# Show logs
dmesg | grep -i nvme
journalctl -k | grep -i nvme
# Disconnect all controllers for a specific subsystem
nvme disconnect -n <SUBSYSTEM_NQN>