VMD module is available on zbus over the following channel
| module | object | version |
|--------|--------|---------|
| vmd|[vmd](#interface)| 0.0.1|

## Home Directory

vmd keeps data in the following locations:

| directory | path |
|-----------|------|
| root |`/var/cache/modules/vmd`|
| config |`{root}/config/` — one JSON file per VM |
| logs |`{root}/logs/` — stdout/stderr per VM |
| cloud-init |`{root}/cloud-init/` — fat32 images per VM |
| sockets |`/var/run/cloud-hypervisor/` — unix API socket per VM |
## Introduction
The vmd module manages all virtual machine processes. It provides the interface to create, inspect, pause, resume, and delete virtual machines. It monitors VMs and re-spawns them if they crash. Internally it uses [cloud-hypervisor](https://www.cloudhypervisor.org/) to run VM processes.

It also provides the interface to configure VM log streamers via zinit-managed `tailstream` services.
### zinit unit
`vmd` must run after the boot process and networking are ready. Since it doesn't keep state on disk (config is regenerated by the provision engine on boot), no dependency on `storaged` is needed.
```yaml
exec: vmd --broker unix:///var/run/redis.sock
after:
  - networkd
```
## Architecture

```
VMModule interface (pkg/vm.go)
        |
        v
Module (pkg/vm/manager.go)
  |
  +-- Run()
  |     +-- cloudinit.CreateImage() → fat32 disk image
  |     +-- Machine.Save() → JSON config
  |     +-- Machine.Run() → cloud-hypervisor process
  |           +-- startFs() × N → virtiofsd-rs daemons (virtio-fs shares)
  |           +-- exec cloud-hypervisor via busybox setsid
  |           +-- waitAndAdjOom() → OOM protection (-200)
  |           +-- startCloudConsole → cloud-console process (serial PTY)
  |
  +-- Monitor() goroutine
  |     +-- health check every 10s → restart crashed VMs (up to 4 times)
  |
  +-- Inspect() → cloud-hypervisor REST API (unix socket)
  +-- Lock() → pause/resume via CH API
  +-- Metrics() → /sys/class/net/.../statistics/
  +-- StreamCreate/StreamDelete() → zinit service + tailstream
```
## VM Types
### Container VM vs Full VM
The module supports two boot modes determined by the flist content:

- **Container VM** (flist without `/image.raw`): The flist is mounted as a read-write overlay using a btrfs subvolume. A cloud-container kernel + initrd are injected. The root filesystem is shared via virtio-fs with tag `vroot`. Kernel args are set to `root=vroot rootfstype=virtiofs`.
- **Full VM** (flist with `/image.raw`): The disk image is written to the first ZMount. The VM boots directly from disk using `hypervisor-fw` firmware. No virtio-fs root is needed.
### Networking

Each interface is configured via cloud-init with static IP addresses, routes, and gateways. A `cloud-console` process is launched for the private network interface, providing serial console access over the network.
### Storage
Disks are attached via virtio block devices (`--disk` flag):
- Boot disk (full VM mode): first disk, read-write

Shared directories use virtio-fs (`--fs` flag). Each share runs a dedicated `virtiofsd-rs` daemon. In container mode, disks and shared dirs are mounted via cloud-init fstab entries.
### GPU Passthrough
PCI devices can be passed through to VMs via VFIO (`--device` flag). The module checks device exclusivity before launch — no two VMs can share the same PCI device.
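The exclusivity rule can be sketched as a registry keyed by PCI address; `deviceRegistry` and its methods are illustrative, not the module's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// deviceRegistry enforces the rule above: a PCI address may be
// claimed by at most one VM at a time (illustrative sketch).
type deviceRegistry struct {
	mu     sync.Mutex
	owners map[string]string // PCI address -> VM name
}

func newDeviceRegistry() *deviceRegistry {
	return &deviceRegistry{owners: make(map[string]string)}
}

// claim reserves a PCI device for vm, failing if another VM holds it.
func (r *deviceRegistry) claim(pci, vm string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if owner, ok := r.owners[pci]; ok && owner != vm {
		return fmt.Errorf("device %s already in use by %s", pci, owner)
	}
	r.owners[pci] = vm
	return nil
}

func main() {
	reg := newDeviceRegistry()
	fmt.Println(reg.claim("0000:03:00.0", "vm1")) // <nil>
	fmt.Println(reg.claim("0000:03:00.0", "vm2")) // already-in-use error
}
```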
## VM Lifecycle
### Creation (`Run`)
1. Validate config (name, CPU 1-max, memory >= 250 MB)

### Metrics

Network metrics are read from `/sys/class/net/{tap}/statistics/` for each tap device. Traffic is segregated into private (`t-*` taps) and public (`p-*` taps) categories, reporting rx/tx bytes and packets per VM.
## Legacy Support
The module includes a legacy monitor for old Firecracker-based VMs. It scans `/proc` for `firecracker` processes and cleans up their bind-mounts and directories when they exit. This runs in the background until no Firecracker processes or directories remain.
## Interface
```go
// VMModule defines the virtual machine module interface
```