Use virtiofsd for sharing file system data with host
Switch over to using virtiofsd for sharing file system data with the
host. virtiofs is a file system designed for the needs of virtual
machines and environments, in contrast to 9P fs, which we currently use
for sharing data with the host but which is first and foremost a
network file system.
9P is problematic, if for no other reason than that it lacks proper
support for the "open-unlink-fstat idiom", in which files are unlinked
and later referenced via file descriptor (see #83). virtiofs does not
have this problem.
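For illustration, this is roughly the pattern that trips up 9P (a
minimal Rust sketch; the file path is made up):

    use std::fs::{self, File};

    fn main() -> std::io::Result<()> {
        // Create a file and keep the descriptor around.
        let file = File::create("/tmp/scratch")?;
        // Unlink it while the descriptor is still open.
        fs::remove_file("/tmp/scratch")?;
        // Reference it via the descriptor (fstat under the hood). On
        // our 9P setup this fails (see #83); on virtiofs it works.
        let metadata = file.metadata()?;
        println!("size: {}", metadata.len());
        Ok(())
    }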
This change replaces usage of 9P with that of virtiofs. In order to
work, virtiofs needs a user space server: virtiofsd. Contrary to what
is the case for 9P, it is not currently integrated into Qemu itself, so
we have to manage it separately (and require the user to install it).
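For reference, this is roughly how the pieces hook together (socket
path, shared directory, and memory size here are made up; the actual
wiring lives in the vmtest code):

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Start the virtiofsd daemon for the directory to share.
        let _virtiofsd = Command::new("virtiofsd")
            .args([
                "--socket-path=/tmp/vmtest-virtiofsd.sock",
                "--shared-dir", ".",
            ])
            .spawn()?;

        // Point Qemu at the daemon's socket. vhost-user requires the
        // guest memory to be shared, hence the memfd backend.
        let _qemu = Command::new("qemu-system-x86_64")
            .args([
                "-chardev", "socket,id=char0,path=/tmp/vmtest-virtiofsd.sock",
                "-device", "vhost-user-fs-pci,chardev=char0,tag=vmtest",
                "-object", "memory-backend-memfd,id=mem,size=4G,share=on",
                "-numa", "node,memdev=mem",
                // ...kernel image, command line, etc.
            ])
            .spawn()?;
        Ok(())
    }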
I benchmarked both the current master as well as this version with a
bare-bones custom kernel:
Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test'
Time (mean ± σ): 1.316 s ± 0.087 s [User: 0.462 s, System: 1.104 s]
Range (min … max): 1.232 s … 1.463 s 10 runs
Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test'
Time (mean ± σ): 1.244 s ± 0.011 s [User: 0.307 s, System: 0.358 s]
Range (min … max): 1.227 s … 1.260 s 10 runs
So it seems there is a ~0.07 s speed up, on average (and significantly
less system time being used). This is great, but I suspect a more
pronounced speed advantage will be visible when working with large
files, where virtiofs is said to significantly outperform 9P (typically
>2x from what I understand, but I have not done any benchmarks of that
nature).
A few other notes:
- we solely rely on guest-level read-only mounts to enforce read-only
state (see the example mount after this list). The virtiofsd
recommended way is to use read-only bind mounts [0], but doing so would
require root.
- we are not using DAX, because it is still incomplete and apparently
requires building Qemu (?) from source. In any event, it should not
change anything functionally and would solely be a performance
improvement.
- interestingly, there may be the option of just consuming the
virtiofsd crate as a library and not requiring any shelling out. That
would be *much* nicer, but the current APIs make this somewhat
cumbersome. I'd think we'd pretty much have to reimplement their entire
main() functionality [1]. I consider this way out of scope for this
first version.
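As mentioned in the first note above, the read-only enforcement boils
down to a guest-side mount along these lines (the tag name is made up):

    mount -t virtiofs vmtest /mnt/vmtest -o ro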
I have adjusted the configs, but because I don't have Docker handy I
can't really create those kernels. CI seems incapable of producing the
artifacts without doing a full-blown release dance. No idea what empty
is about, really. I suspect the test failures we see are because the
kernel lacks virtiofs support?
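For the kernel side, the options that should matter are roughly the
following (assuming a standard virtio-over-PCI setup):

    CONFIG_VIRTIO=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_FUSE_FS=y
    CONFIG_VIRTIO_FS=y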
Some additional resources worth keeping around:
- https://virtio-fs.gitlab.io/howto-boot.html
- https://virtio-fs.gitlab.io/howto-qemu.html
[0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq
[1] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/main.rs?ref_type=heads#L1242

Closes: #16
Closes: #83
Signed-off-by: Daniel Müller <[email protected]>