This project provides a single Rust binary that bridges TCP clients outside of an OpenStack-style host into processes running inside Linux network namespaces by combining TCP and Unix Domain Socket (UDS) forwarding. It also supports pure TCP→TCP and UDP→UDP proxying so you can centralize all dataplane forwarding inside one supervisor. The goal is to replace ad hoc `socat` command lines with a supervised, configurable service that can run multiple forwarders at once while safely managing the lifecycle of the UDS files.
The forwarder binary supervises a list of forward tasks described via CLI flags or a TOML configuration file. Each task looks like:
- `listen`: optional TCP address (e.g. `0.0.0.0:2222`). When set, the process listens on the host and accepts external connections. If `uds` is also provided we bridge into a namespace via UDS; otherwise, if `target` is set we build a direct TCP proxy.
- `udp_listen`: optional UDP address for stateless forwarding (e.g. `0.0.0.0:5353`).
- `namespace` or `setns_path`: optional network namespace name or an absolute `/var/run/netns/<ns>` path. When provided, the task enters that namespace via `setns()` before binding a Unix socket.
- `uds`: Unix socket path used for host/namespace communication.
- `target`: final TCP address (inside the namespace for UDS bridging, or on the host for TCP proxies). Each client gets its own TCP connection to this target.
- `udp_target`: destination `host:port` for UDP proxies. Each client receives its own relay socket with idle eviction.
Depending on which fields are populated, the binary can act as:
- Namespace endpoint – enter the namespace, expose a UDS, and connect to the VM target for each inbound UDS session.
- Host proxy – listen on a host TCP port, dial a UDS, and stream bytes to the namespace endpoint.
- Combined pipeline – perform both actions in one task if you want the binary to enter the namespace and export a TCP listener while still running on the host (useful for testing).
All data movement is handled by `tokio::io::copy_bidirectional`, giving full-duplex forwarding similar to `socat`.
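As a rough illustration, the host-proxy hot path could look like the sketch below. The `bridge_tcp_to_uds` helper and its error handling are illustrative, not the project's exact code; the shape of the loop (accept, dial the UDS, then let `copy_bidirectional` shuttle bytes until either side closes) follows the description above.

```rust
use tokio::io::copy_bidirectional;
use tokio::net::{TcpListener, UnixStream};

// Hypothetical sketch: accept host TCP clients and bridge each one into the
// namespace-side Unix socket with full-duplex copying.
async fn bridge_tcp_to_uds(listen: &str, uds_path: &str) -> std::io::Result<()> {
    let listener = TcpListener::bind(listen).await?;
    loop {
        let (mut client, peer) = listener.accept().await?;
        let uds_path = uds_path.to_owned();
        tokio::spawn(async move {
            match UnixStream::connect(&uds_path).await {
                Ok(mut upstream) => {
                    // copy_bidirectional drives both directions until EOF or error.
                    if let Err(e) = copy_bidirectional(&mut client, &mut upstream).await {
                        eprintln!("session {peer}: forwarding error: {e}");
                    }
                }
                Err(e) => eprintln!("session {peer}: UDS connect failed: {e}"),
            }
        });
    }
}
```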
```mermaid
flowchart LR
    Client["External TCP client"]
    TCPListener["Host TCP listener<br/>listen=0.0.0.0:2222"]
    HostTask["Host proxy task<br/>(copy TCP⇄UDS)"]
    UDS["/run/qdhcp/ssh.sock"]
    NamespaceTask["Namespace endpoint task<br/>(setns + Tokio runtime)"]
    Target["Namespace TCP target<br/>192.168.31.201:22"]
    DirectTCPListen["Direct TCP listener<br/>listen=0.0.0.0:8443"]
    DirectTCPTarget["Direct TCP target<br/>10.0.0.23:443"]
    UDPClient["UDP client"]
    UDPListen["UDP listener<br/>udp_listen=0.0.0.0:5353"]
    UDPRelay["Per-client UDP relay<br/>(creates upstream socket)"]
    UDPTarget["UDP target<br/>192.168.31.10:53"]
    Client -->|SSH/any TCP| TCPListener --> HostTask -->|Unix stream| UDS --> NamespaceTask -->|TCP dial| Target
    Client -->|HTTPS/etc| DirectTCPListen --> DirectTCPTarget
    UDPClient --> UDPListen --> UDPRelay --> UDPTarget
```
- UDS bridge (default) – set `namespace`/`setns_path` + `uds` + `target` (and optionally `listen`). This is the classic host ⇄ namespace workflow documented above.
- Direct TCP proxy – set `listen` + `target` without `uds`. Each spec spawns a pure TCP listener on the host that dials the `target` endpoint and shuttles bytes with zero Unix sockets involved.
- Direct UDP proxy – set `udp_listen` + `udp_target` (and optionally `udp_idle_timeout`). The proxy allocates one upstream socket per client, forwards datagrams bidirectionally, and reclaims idle sessions automatically (see the relay sketch below).
All three modes can run simultaneously in a single `pfwd` process, letting you describe your entire dataplane from one config file.
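For the UDP mode, a minimal sketch of the per-client relay idea is below. The `udp_proxy` helper, buffer sizes, and locking strategy are illustrative assumptions, not the project's actual implementation; the point is one upstream socket per client address plus idle eviction of the return path.

```rust
use std::{collections::HashMap, net::SocketAddr, sync::{Arc, Mutex}, time::Duration};
use tokio::{net::UdpSocket, time::timeout};

// Hypothetical sketch of the per-client UDP relay with idle eviction.
async fn udp_proxy(listen: &str, target: &str, idle: Duration) -> std::io::Result<()> {
    let front = Arc::new(UdpSocket::bind(listen).await?);
    let sessions: Arc<Mutex<HashMap<SocketAddr, Arc<UdpSocket>>>> = Arc::default();
    let mut buf = [0u8; 65_535];
    loop {
        let (n, client) = front.recv_from(&mut buf).await?;
        let existing = sessions.lock().unwrap().get(&client).cloned();
        let upstream = match existing {
            Some(s) => s,
            None => {
                // New client: dedicated upstream socket plus a return-path task.
                let s = Arc::new(UdpSocket::bind("0.0.0.0:0").await?);
                s.connect(target).await?;
                sessions.lock().unwrap().insert(client, s.clone());
                let (front, s2, sessions) = (front.clone(), s.clone(), sessions.clone());
                tokio::spawn(async move {
                    let mut back = [0u8; 65_535];
                    // Relay target -> client until the session goes idle, then evict it.
                    while let Ok(Ok(n)) = timeout(idle, s2.recv(&mut back)).await {
                        let _ = front.send_to(&back[..n], client).await;
                    }
                    sessions.lock().unwrap().remove(&client);
                });
                s
            }
        };
        upstream.send(&buf[..n]).await?;
    }
}
```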
The binary uses clap for structured arguments:
```bash
pfwd --config forward.toml \
  --forward listen=0.0.0.0:2222,uds=/run/qdhcp/ssh.sock \
  --forward namespace=qdhcp-27a7...,uds=/run/qdhcp/ssh.sock,target=192.168.31.201:22 \
  --forward listen=0.0.0.0:8443,target=10.0.0.23:443 \
  --forward udp_listen=0.0.0.0:5353,udp_target=192.168.31.10:53,udp_idle_timeout=600
```
Key options:
- `--config <PATH>`: load defaults from TOML (optional).
- `--forward key=value`: repeatable and compatible with config entries. Keys mirror the TOML fields (`listen`, `udp_listen`, `namespace`, `uds`, `target`, `udp_target`, `mode`, `owner`, `backlog`, `label`, `udp_idle_timeout`).
- Flags override file values, so operators can hot-fix a forwarder without editing files.
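A sketch of how this surface could be declared with clap's derive API; the struct and field names here are illustrative, not the binary's exact types.

```rust
use std::path::PathBuf;
use clap::Parser;

/// Illustrative CLI surface: a repeatable --forward key=value flag
/// plus an optional TOML file whose entries the flags override.
#[derive(Parser, Debug)]
#[command(name = "pfwd")]
struct Cli {
    /// Optional TOML file providing defaults and [[forward]] entries.
    #[arg(long)]
    config: Option<PathBuf>,

    /// Repeatable forward spec, e.g. listen=0.0.0.0:2222,uds=/run/qdhcp/ssh.sock
    #[arg(long = "forward", value_name = "KEY=VALUE[,KEY=VALUE...]")]
    forward: Vec<String>,
}

fn main() {
    let cli = Cli::parse();
    // Specs from the config file would be parsed first; --forward entries are
    // layered on top so operators can hot-fix without editing files.
    for raw in &cli.forward {
        for pair in raw.split(',') {
            let (key, value) = pair.split_once('=').expect("expected key=value");
            println!("{key} = {value}");
        }
    }
}
```

The same specs can equally live in the `[[forward]]` tables of the TOML file shown next.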
```toml
[defaults]
log_level = "info"
mode = 384               # octal 0o600 for the UDS
owner = "root:root"
uds_dir = "/run/qdhcp"
udp_idle_timeout_secs = 600

[[forward]]
namespace = "qdhcp-27a7..."
uds = "/run/qdhcp/ssh.sock"
target = "192.168.31.201:22"

[[forward]]
listen = "0.0.0.0:2222"
uds = "/run/qdhcp/ssh.sock"

[[forward]]
listen = "0.0.0.0:8443"
target = "10.0.0.23:443"

[[forward]]
udp_listen = "0.0.0.0:5353"
udp_target = "192.168.31.10:53"
```

Each `[[forward]]` table maps to one async task. Validation rules:
- `uds` is mandatory whenever the spec references a namespace endpoint or a host-side UDS proxy.
- `listen` may be paired with `uds` (host ⇄ UDS) or `target` (direct TCP). Providing `listen` without either will be rejected.
- `target` is required when acting inside a namespace or when running a direct TCP proxy.
- `udp_listen` requires `udp_target` (and vice versa). You can override idle eviction with `udp_idle_timeout_secs` (seconds, must be > 0).
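A sketch of how these rules could be enforced, assuming a `ForwardSpec` struct with optional fields; the names are assumptions rather than the project's real types.

```rust
// Illustrative validation of a parsed spec; field names are assumptions.
#[derive(Debug, Default)]
struct ForwardSpec {
    listen: Option<String>,
    uds: Option<String>,
    target: Option<String>,
    namespace: Option<String>,
    udp_listen: Option<String>,
    udp_target: Option<String>,
    udp_idle_timeout_secs: Option<u64>,
}

fn validate(spec: &ForwardSpec) -> Result<(), String> {
    // A namespace endpoint needs both the UDS it serves and the TCP target it dials.
    if spec.namespace.is_some() && (spec.uds.is_none() || spec.target.is_none()) {
        return Err("namespace specs require uds and target".into());
    }
    // listen must be paired with either a UDS (bridge) or a target (direct TCP).
    if spec.listen.is_some() && spec.uds.is_none() && spec.target.is_none() {
        return Err("listen requires uds or target".into());
    }
    // UDP fields come as a pair, and the idle timeout must be positive.
    if spec.udp_listen.is_some() != spec.udp_target.is_some() {
        return Err("udp_listen and udp_target must be set together".into());
    }
    if let Some(0) = spec.udp_idle_timeout_secs {
        return Err("udp_idle_timeout_secs must be > 0".into());
    }
    Ok(())
}
```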
- Parent directories: ensure `std::fs::create_dir_all` for the parent path and apply the desired permissions.
- Stale sockets: if the UDS path exists, `lstat` it. If it is a socket, unlink it before binding; otherwise abort with a clear error.
- Ownership/mode: after binding, use `nix::unistd::fchown`/`fchmod` on the listener fd to guarantee the requested UID/GID/mode even with different `umask` settings.
- Cleanup: install a guard that removes the socket on graceful shutdown (`SIGINT`, `SIGTERM`) or abnormal drop.
- Health checks: the TCP-side task retries UDS connections with capped exponential backoff when `ENOENT` is returned, logging actionable messages.
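A sketch of the socket-path housekeeping from the list above. The `prepare_uds` helper name is hypothetical, and for brevity it uses the path-based `std` chown/permissions calls where the text prescribes fd-based `nix::unistd::fchown`/`fchmod`; the cleanup guard and backoff logic are elided.

```rust
use std::{fs, io, os::unix::fs::{chown, FileTypeExt, PermissionsExt}, path::Path};
use tokio::net::UnixListener;

// Hypothetical helper, called from inside the Tokio runtime: make sure the
// parent exists, clear a stale socket, bind, then pin ownership and mode.
fn prepare_uds(path: &Path, uid: u32, gid: u32, mode: u32) -> io::Result<UnixListener> {
    // Parent directories: create them up front so bind() cannot fail on ENOENT.
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }
    // Stale sockets: symlink_metadata() is an lstat; only unlink real sockets.
    match fs::symlink_metadata(path) {
        Ok(meta) if meta.file_type().is_socket() => fs::remove_file(path)?,
        Ok(_) => return Err(io::Error::new(io::ErrorKind::Other, "path exists and is not a socket")),
        Err(e) if e.kind() == io::ErrorKind::NotFound => {}
        Err(e) => return Err(e),
    }
    let listener = UnixListener::bind(path)?;
    // Ownership/mode: pin the requested values regardless of the process umask.
    chown(path, Some(uid), Some(gid))?;
    fs::set_permissions(path, fs::Permissions::from_mode(mode))?;
    Ok(listener)
}
```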
- Parse config and CLI; build a vector of `ForwardSpec` structs.
- For each spec, spawn an async task:
  - If `namespace` is provided, call `setns()` (via `nix::sched::setns`) before opening sockets.
  - If `listen` is provided with `uds`, bind a `TcpListener`, accept clients, connect to `uds`, and shuttle bytes.
  - If `listen` is provided without `uds`, run a pure TCP proxy that dials `target` directly.
  - If `udp_listen` is provided, bind a `UdpSocket`, maintain per-client relay sockets to `udp_target`, and reap idle sessions based on `udp_idle_timeout`.
  - Always log per-session UUIDs for traceability.
- Hook `tokio::signal::ctrl_c` to trigger a graceful drain and cleanup.
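A condensed, illustrative sketch of that supervisor loop; `ForwardSpec` and `run_spec` here are minimal stand-ins for the real types and task bodies.

```rust
use tokio::signal;

// Stand-in spec; the real struct carries all the fields described above.
#[derive(Debug, Clone)]
struct ForwardSpec { label: String }

async fn run_spec(spec: ForwardSpec) -> std::io::Result<()> {
    // Placeholder: the real body dispatches on which fields are set
    // (UDS bridge, direct TCP proxy, or UDP relay).
    println!("starting forwarder {}", spec.label);
    Ok(())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let specs = vec![ForwardSpec { label: "demo".into() }]; // parsed from config + CLI
    let mut tasks = Vec::new();
    for spec in specs {
        // One task per spec: a failure in one forwarder is logged,
        // not allowed to take the whole process down.
        tasks.push(tokio::spawn(async move {
            if let Err(e) = run_spec(spec).await {
                eprintln!("forwarder failed: {e}");
            }
        }));
    }
    // Ctrl-C / SIGINT triggers a graceful drain before UDS cleanup runs.
    signal::ctrl_c().await?;
    for t in tasks {
        t.abort();
    }
    Ok(())
}
```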
- The binary is single-process. Each `ForwardSpec` that combines `listen` with `namespace`/`setns_path` launches two tasks inside the same process: the host proxy task keeps running in the root namespace, while the namespace endpoint task runs on a dedicated blocking thread that calls `setns()` before starting its own Tokio runtime (sketched after this list).
- `setns()` only affects the calling thread, so the root namespace remains available for other specs and for host-level logging/control. You do not need two binaries; one invocation can service both ends as long as the UDS path is visible to both namespaces.
- `ip netns identify <pid>` may print nothing because the main thread never leaves the root namespace. Inspect per-thread namespaces instead:

  ```bash
  pid=$(pgrep pfwd)
  for task in /proc/$pid/task/*; do
    tid=${task##*/}
    echo "tid=$tid $(readlink "$task/ns/net")"
  done
  ```

  Threads reporting `net:[4026531993]` (example inode) remain in the root namespace; the namespace worker shows a different inode such as `net:[4026532851]`.
- To map that inode back to an `ip netns` name, check the inode of the bind-mounted file under `/var/run/netns/<name>` (it is an nsfs mount, not a symlink, so use `stat` rather than `readlink`):

  ```bash
  stat -c 'net:[%i]' /var/run/netns/qdhcp-27a7862a-e1f6-4520-ac9e-061bb8b649c4
  ```

  When the inode matches the one printed for the worker thread, you know the task entered that namespace successfully.
- `lsns -t net` gives a system-wide view of namespace inodes and the PIDs bound to them. This is helpful when validating that only the expected `pfwd` thread joined a given namespace.
- Direct TCP/UDP proxy specs never call `setns`, so they stay entirely in the root namespace and coexist safely with the namespace tasks described above.
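A sketch of that namespace-endpoint bootstrap, assuming a recent `nix` crate where `setns()` accepts any `AsFd`; the essential detail is that the current-thread runtime is built only after the thread has joined the namespace, so every socket the endpoint task opens lives inside it.

```rust
use std::{fs::File, path::Path, thread};
use nix::sched::{setns, CloneFlags};
use tokio::runtime;

// Sketch: join the target network namespace on a dedicated thread, then run a
// single-threaded Tokio runtime there. setns() only retargets this thread, so
// the rest of the process stays in the root namespace.
fn spawn_namespace_worker(ns_path: &Path) -> std::io::Result<thread::JoinHandle<()>> {
    let ns = File::open(ns_path)?; // e.g. /var/run/netns/<name>
    Ok(thread::spawn(move || {
        setns(&ns, CloneFlags::CLONE_NEWNET).expect("setns failed");
        // Build the runtime *after* setns so all sockets are created in-namespace.
        let rt = runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("runtime build failed");
        rt.block_on(async {
            // ... bind the UDS, accept sessions, dial the in-namespace target ...
        });
    }))
}
```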