Misc cleanups: move mmap files and replace cfg unix/windows with target_family #314
Conversation
The interfaces don't change in this PR; it's easy to check this by patching the vm-memory dependency in a crate with this PR's branch:

```toml
[patch.crates-io]
vm-memory = { git = "https://github.com/epilys/vm-memory.git", branch = "rework-xen-interface" }
```
Happy to see someone else interested in making the xen and unix worlds compatible! :D
fwiw, I also started some work towards this in #312, by generifying some of the code in mmap.rs to stop relying on exactly one of mmap_*.rs being defined - does that roughly match your vision of how to go about this?
Oops, completely missed your PR (because I had started this branch locally before you posted it and only picked it up again recently). I wouldn't make the memory region type generic; we would want it to be opaque so that it is the same type under any hypervisor. The actual memory region kind would be a "hidden" implementation detail and depend on how the library consumer/user initializes the vm-memory API at runtime. At least, that's what my thoughts were.
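For illustration, a minimal self-contained sketch of that "opaque region type, backend chosen at runtime" idea; every name here is an invented stand-in, not a vm-memory type:

```rust
// All names here are invented stand-ins, not vm-memory items. The point is
// that the public region type is opaque: the backend is chosen once, at
// runtime, and every region has the same type afterwards.
struct UnixMmap { len: usize }   // stand-in for a plain mmap-backed region
struct XenMmap { len: usize }    // stand-in for a Xen-backed region

enum RegionKind {
    Mmap(UnixMmap),
    Xen(XenMmap),
}

/// The only type consumers see; the variant is a hidden implementation detail.
struct GuestRegion {
    kind: RegionKind,
}

impl GuestRegion {
    fn len(&self) -> usize {
        match &self.kind {
            RegionKind::Mmap(m) => m.len,
            RegionKind::Xen(x) => x.len,
        }
    }
}
```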
(preamble: no matter what we do long term, this PR makes sense, so if you do
Mh, interesting! I was actually thinking about it exactly the other way - e.g. I'd even love to separate out the Xen stuff from VolatileSlice, and instead add a second type (VolatileSliceXen or something that is just a thin wrapper/decorator for VolatileSlice that does the xen mmaps on read/write) and add an associated type to

re: making the memory region type generic: ideally, I would like vm-memory's API to be such that, theoretically, something like Xen support can be easily implemented outside of the core code, and for that goal, generifying things like GuestMemoryMmap (GuestRegionCollection in my PR) still makes sense imo (and the generification of VolatileSlice would also be in line with this goal). And if downstream users that want to support multiple backends are written generically against
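A rough, self-contained sketch of the wrapper/decorator idea mentioned above; every type here is a stand-in, not the real vm-memory VolatileSlice API:

```rust
// Sketch only: the decorator establishes the Xen mapping around each access
// and then delegates the actual copy to the plain slice it wraps.
struct VolatileSlice<'a> { bytes: &'a [u8] }

struct XenMapping;
struct XenGuard;                              // would unmap on Drop for real

impl XenMapping {
    fn map(&self) -> XenGuard { XenGuard }    // hypothetical "map before access"
}

struct VolatileSliceXen<'a> {
    inner: VolatileSlice<'a>,
    mapping: XenMapping,
}

impl<'a> VolatileSliceXen<'a> {
    fn read(&self, buf: &mut [u8], offset: usize) -> usize {
        let _guard = self.mapping.map();      // map for the duration of the access
        let src = &self.inner.bytes[offset..];
        let n = buf.len().min(src.len());
        buf[..n].copy_from_slice(&src[..n]);  // the plain-slice part of the work
        n
    }
}
```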
By that you mean that vm-memory will hand out trait objects that hide the implementation details?
vm-memory will hand out objects that implement specific traits (e.g.
Right, I think we're talking about the same thing essentially. 🤔 Interface would look like this:
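The concrete snippet isn't shown here; purely as an illustration of the general shape being discussed (a runtime-chosen backend behind a trait object, with invented names rather than the actual vm-memory API):

```rust
// Invented names, not the actual vm-memory API: a constructor picks the
// backend at runtime and hands back an object that only exposes the trait.
trait GuestMemoryOps {
    fn read(&self, addr: u64, buf: &mut [u8]);
}

struct MmapBacked;
struct XenBacked;

impl GuestMemoryOps for MmapBacked {
    fn read(&self, _addr: u64, _buf: &mut [u8]) { /* plain mmap access */ }
}
impl GuestMemoryOps for XenBacked {
    fn read(&self, _addr: u64, _buf: &mut [u8]) { /* map the Xen region, then access */ }
}

enum Backend { Mmap, Xen }

fn build_guest_memory(backend: Backend) -> Box<dyn GuestMemoryOps> {
    match backend {
        Backend::Mmap => Box::new(MmapBacked),
        Backend::Xen => Box::new(XenBacked),
    }
}
```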
Mh, yeah, I think generally we are aligned - why trait objects instead of static dispatch though? Or rather, why do the type erasure between Xen and Mmap backends in vm-memory instead of downstream? (I admittedly also am not sure our traits are dyn compatible/object safe, because they have associated types 🤔)
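For reference, a minimal example of the dyn-compatibility concern, using a simplified made-up trait rather than the real GuestMemory trait:

```rust
// A trait object must spell out every associated type, which defeats the
// point of erasing the concrete memory/region type.
trait MemoryLike {
    type Region;
    fn region(&self, index: usize) -> Option<&Self::Region>;
}

// error[E0191]: the value of the associated type `Region` must be specified
// fn use_erased(_m: &dyn MemoryLike) {}

// Compiles, but the region type is nailed down again:
fn use_erased(_m: &dyn MemoryLike<Region = ()>) {}
```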
The reasoning is to have one build that works on any (compile-time) supported hypervisor, configured at runtime. We're investigating whether this use case is reasonable to achieve upstream.
Right, but why isn't that possible with static dispatch? E.g.

```rust
fn main() {
    /* parse arguments, determine runtime configuration */
    if xen {
        run(setup_xen_memory());
    } else {
        run(setup_mmap_memory());
    }
}

fn setup_xen_memory() -> impl GuestMemory {
    /* actual type something like GuestRegionCollection<XenRegion> */
    todo!()
}

fn setup_mmap_memory() -> impl GuestMemory {
    /* actual type something like GuestRegionCollection<MmapRegion> */
    todo!()
}

fn run<G: GuestMemory>(_mem: G) {
    todo!()
}
```
It's possible of course, but adding a new hypervisor means you'd have to change downstream code to handle it, right?
Wouldn't you have to do that either way? At the very least, downstreams would need to be recompiled against the new vm-memory version. But say today vm-memory was written in terms of type-erased trait objects and then it gains support for a new hypervisor (say, Xen). Then, how would a downstream actually make use of vm-memory's new Xen support without any code changes? There would still need to be something downstream that drives whatever the vm-memory API hands out.

Besides, adding support for new hypervisors is an incredibly rare event 😅 - how much does it make sense to optimize for this, especially if the savings for downstream is just a couple lines of initialization code?

Now, I'm not saying we cannot have a utility API that offers a sort of builder pattern for handing out
When you're referring to the downstream code, do you mean the consumers of vm-memory, e.g. a vhost-device binary? The aim is to get automatic support from the vm-memory and vhost-user crates without the binary having to specify anything more than what features it wants enabled.
yea
Just to clarify, by "features" you don't mean "cargo features", right? Because that would put the choice back at compile time, which I thought was precisely what @epilys did not want. Could you maybe sketch out roughly how you would like to drive the vm-memory API? I'm probably just missing something, but I don't see how vhost-user binary crates could automatically support a new hypervisor without code changes just because vm-memory adds support for it.
Is this a deserialization thing? The vhost backend gets a description of the memory regions, and just passes that description on to vm-memory to reconstruct its view of the regions, and this should be transparent to the backend binary? E.g. if the binary suddenly receives descriptions for regions for/from a new hypervisor, it just forwards them to vm-memory all the same, and vm-memory then constructs an opaque object to give back to the vhost binary?
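A self-contained sketch of that "just forward the description" idea; the description struct and its fields are invented here and do not reflect the actual vhost-user message layout:

```rust
// The backend binary never interprets the description itself; it only
// forwards it to the memory library, which decides what to build.
struct RegionDescription {
    guest_addr: u64,
    size: u64,
    mmap_offset: u64,
    // a new hypervisor would carry its extra information in the description
    // (e.g. Xen grant/foreign-mapping details) without the binary caring
}

struct GuestRegion;                       // opaque to the binary

impl GuestRegion {
    fn from_description(_d: &RegionDescription) -> Self {
        // the library picks which kind of mapping to construct here
        GuestRegion
    }
}

fn import_regions(descs: &[RegionDescription]) -> Vec<GuestRegion> {
    descs.iter().map(GuestRegion::from_description).collect()
}
```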
Currently the cargo features select which mode to support; the idea of this change is to enable multiple modes to be supported in a single build.
It's the vhost-user crate which will create the slice in response to the feature negotiation and the memory messages. I'm happy for the logic to live in there. The point is that the details of memory mapping should be invisible to the binaries themselves.
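For context, today's compile-time selection looks roughly like this in a consumer's manifest, assuming vm-memory's "backend-mmap" and "xen" features; the version requirement is purely illustrative:

```toml
[dependencies]
# plain mmap backend:
vm-memory = { version = "*", features = ["backend-mmap"] }
# ...or the Xen mode instead:
# vm-memory = { version = "*", features = ["backend-mmap", "xen"] }
```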
Okay, I think I got it now. I was missing the part with the shim layer in vhost-user that'll translate vhost-user messages into vm-memory objects. So in that case let me revise my answer to your earlier question...
... to instead be "consumers of vm-memory that have to construct guest memory mappings", i.e. not the binaries, but only the vhost-user library crate. So the shim layer in vhost-user can then construct GuestMemoryXen or GuestMemoryMmap based on which vhost messages it receives, cram those into a Box or an enum, and pass that off to the binary - new backends would only need to be added to the shim in vhost-user, but the binaries will be none the wiser.
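A compact sketch of that shim idea, with stub types standing in for the real GuestMemoryMmap and the hypothetical GuestMemoryXen:

```rust
// The shim in the vhost-user crate builds the concrete type from the messages
// it received; the binary only ever sees the erased enum.
struct GuestMemoryMmapStub;
struct GuestMemoryXenStub;

enum AnyGuestMemory {
    Mmap(GuestMemoryMmapStub),
    Xen(GuestMemoryXenStub),
    // a new hypervisor only adds a variant here, inside the library
}

enum MemoryMessage { PlainMmap, Xen }     // stand-in for the negotiated setup

fn shim_build_memory(msg: MemoryMessage) -> AnyGuestMemory {
    match msg {
        MemoryMessage::PlainMmap => AnyGuestMemory::Mmap(GuestMemoryMmapStub),
        MemoryMessage::Xen => AnyGuestMemory::Xen(GuestMemoryXenStub),
    }
}
```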
Force-pushed from 2a3367e to 96b6807.
No functional changes, other than the module paths of mmap related types. Signed-off-by: Manos Pitsidianakis <[email protected]>
cfg flags "unix" and "windows" are now aliases to target_family = "unix" and target_family = "windows" respectively for backwards compatibility, so use the non-legacy way of targeting those OS families. Signed-off-by: Manos Pitsidianakis <[email protected]>
Other modules with child modules in this crate use the <module_name>/mod.rs structure except for mmap, fix the discrepancy. No functional changes. Signed-off-by: Manos Pitsidianakis <[email protected]>
Force-pushed from 96b6807 to 592031e.
Perform some cleanups/refactorings that should not result in functional changes.
The motivation for consolidating the mmap code is to enable future work on unifying the xen/non-xen interfaces, so that vm-memory can be compiled with any combination of features.
cfg flags "unix" and "windows" are now aliases to target_family = "unix"
and target_family = "windows" respectively for backwards compatibility,
so use the non-legacy way of targeting those OS families.
Signed-off-by: Manos Pitsidianakis [email protected]
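For reference, the cfg spelling change in isolation: rustc treats cfg(unix) and cfg(windows) as shorthand for the target_family forms, so the two attributes below select the same code on a Unix target; the function bodies are only illustrative.

```rust
// Both attributes compile the item only when the target family is "unix";
// the commit switches the crate to the explicit spelling.
#[cfg(unix)]                        // legacy shorthand
fn host_page_size_legacy() -> usize {
    4096 // illustrative value only
}

#[cfg(target_family = "unix")]      // explicit form used after this change
fn host_page_size() -> usize {
    4096 // illustrative value only
}
```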