
Conversation

@XanClic commented Aug 29, 2025

Summary of the PR

This PR contains fixes for fragmented guest memory, i.e. situations where a consecutive guest memory slice does not translate into a consecutive slice in our userspace address space. Currently, that is not really an issue, but with virtual memory (where such discontinuities can occur on any page boundary), it will be.

(See also PR #327).

Specifically:

  • Add GuestMemory::get_slices(), which returns an iterator over slices instead of just a single one (see the usage sketch after this list)
  • Fix Bytes::read() and Bytes::write() to correctly work for fragmented memory (i.e. multiple try_access() closure calls)
  • Have Bytes::load() and Bytes::store() use try_access() instead of to_region_addr(), so they can at least detect fragmentation and return an error. (Their address argument being naturally aligned should prevent fragmentation from being an actual problem for these accesses.)
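
As a rough illustration of the first bullet, a consumer that previously relied on a single contiguous slice could walk the fragments instead. This is only a sketch: the exact `get_slices()` parameter list and the error types are assumptions based on the doc comment quoted in the review further down.

```rust
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryError};

// Sketch only: read a possibly fragmented guest range by walking the
// individual fragments instead of requiring one contiguous mapping.
fn read_fragmented<M: GuestMemory>(
    mem: &M,
    addr: GuestAddress,
    buf: &mut [u8],
) -> Result<(), GuestMemoryError> {
    let mut filled = 0;
    for slice in mem.get_slices(addr, buf.len()) {
        let slice = slice?; // errors are reported on individual items
        let len = slice.len();
        // VolatileSlice::copy_to() copies this fragment into the destination.
        slice.copy_to(&mut buf[filled..filled + len]);
        filled += len;
    }
    Ok(())
}
```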

Requirements

Before submitting your PR, please make sure you addressed the following
requirements:

  • All commits in this PR have Signed-Off-By trailers (with
    git commit -s), and the commit message has max 60 characters for the
    summary and max 75 characters for each description line.
  • All added/changed functionality has a corresponding unit/integration
    test.
    • Note that this was not possible for patches 2 and 3, as explained in their respective commit messages.
  • All added/changed public-facing functionality has entries in the "Upcoming
    Release" section of CHANGELOG.md (if no such section exists, please create one).
  • Any newly added unsafe code is properly documented.

bonzini previously approved these changes Aug 29, 2025

With virtual memory, seemingly consecutive I/O virtual memory regions
may actually be fragmented across multiple pages in our userspace
mapping.  Existing `descriptor_utils::Reader::new()` (and `Writer`)
implementations (e.g. in virtiofsd or vm-virtio/virtio-queue) use
`GuestMemory::get_slice()` to turn guest memory address ranges into
valid slices in our address space; but with this fragmentation, it is
easily possible that a range no longer corresponds to a single slice.

To fix this, add a `get_slices()` method that iterates over potentially
multiple slices instead of a single one.  We should probably also
deprecate `get_slice()`, but I’m hesitant to do it in the same
commit/PR.

(We could also try to use `try_access()` as an existing internal
iterator instead of this new external iterator, which would require
adding lifetimes to `try_access()` so the region and thus slices derived
from it could be moved outside of the closure.  However, that will not
work for virtual memory that we are going to introduce later: It will
have a dirty bitmap that is independent of the one in guest memory
regions, so its `try_access()` function will need to dirty it after the
access.  Therefore, the access must happen in that closure and the
reference to the region must not be moved outside.)

Signed-off-by: Hanna Czenczek <[email protected]>
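
For context, here is a hedged sketch of the `descriptor_utils`-style use case the commit message describes: collecting every fragment of a guest range rather than assuming `get_slice()` can return one contiguous mapping. The `get_slices()` parameters and the item error type are assumptions, not the PR's actual signature.

```rust
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryError};

// Sketch only: gather all fragments of a descriptor's address range, roughly
// what a Reader/Writer implementation would do before doing vectored I/O.
fn collect_fragments<M: GuestMemory>(
    mem: &M,
    addr: GuestAddress,
    len: usize,
) -> Result<usize, GuestMemoryError> {
    // Each item is a Result; on success the fragment lengths sum to `len`.
    let slices: Vec<_> = mem.get_slices(addr, len).collect::<Result<_, _>>()?;
    Ok(slices.iter().map(|s| s.len()).sum())
}
```
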
read() and write() must not ignore the `count` parameter: The mappings
passed into the `try_access()` closure are only valid for up to `count`
bytes, not more.

(Note: We cannot really have a test case for this, as right now, memory
fragmentation will only happen exactly at memory region boundaries.  In
this case, `region.write()`/`region.read()` will only access the region
up until its end, even if the passed slice is longer, and so silently
ignore the length mismatch.  This change is necessary for when page
boundaries result in different mappings within a single region, i.e. the
region does not end at the fragmentation point, and calling
`region.write()`/`region.read()` would write/read across the boundary.
Because we don’t have IOMMU support yet, this can’t be tested.)

Signed-off-by: Hanna Czenczek <[email protected]>
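
A hedged sketch of the bug class this commit addresses (simplified, not the literal diff): the blanket `Bytes::write()` implementation for `T: GuestMemory` must clamp the buffer it hands to `region.write()` to the `count` bytes the current mapping is valid for.

```rust
use vm_memory::{Bytes, GuestAddress, GuestMemory, GuestMemoryError};

// Sketch only; names and structure are simplified from the blanket impl.
fn write_clamped<M: GuestMemory>(
    mem: &M,
    buf: &[u8],
    addr: GuestAddress,
) -> Result<usize, GuestMemoryError> {
    mem.try_access(buf.len(), addr, |offset, count, region_addr, region| {
        // Wrong:   region.write(&buf[offset..], region_addr)
        //          -- may run past the end of the mapping once fragmentation
        //             can occur in the middle of a region.
        // Correct: only pass the `count` bytes this invocation may access.
        region.write(&buf[offset..offset + count], region_addr)
    })
}
```
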
When we switch to a (potentially) virtual memory model, we want to
compact the interface, especially removing references to memory regions
because virtual memory is not just split into regions, but pages first.

The one memory-region-referencing part we are going to keep is
`try_access()` because that method is nicely structured around the
fragmentation we will have to accept when it comes to paged memory.

`to_region_addr()` in contrast does not even take a length argument, so
for virtual memory, using the returned region and address is unsafe if
doing so crosses page boundaries.

Therefore, switch `Bytes::load()` and `store()` from using
`to_region_addr()` to `try_access()`.

(Note: We cannot really have a test case for this, as right now, memory
fragmentation will only happen exactly at memory region boundaries.  In
this case, `region.load()` and `region.store()` would have already
returned errors.  This change is necessary for when page boundaries
result in different mappings within a single region, but because we
don’t have IOMMU support yet, this can’t be tested.)

Signed-off-by: Hanna Czenczek <[email protected]>
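
For reference, a hedged sketch of what the `load()` side of this change looks like; the `store()` counterpart is quoted verbatim in the review below, and this mirrors its structure rather than reproducing the actual diff (import paths and details are assumptions).

```rust
use std::mem::size_of;
use std::sync::atomic::Ordering;
use vm_memory::{AtomicAccess, Bytes, GuestAddress, GuestMemory, GuestMemoryError};

// Sketch only: an aligned atomic load must be served by a single mapping, so
// any fragmentation within the `size_of::<O>()` range becomes a PartialBuffer
// error instead of silently reading across a boundary.
fn load_sketch<M: GuestMemory, O: AtomicAccess>(
    mem: &M,
    addr: GuestAddress,
    order: Ordering,
) -> Result<O, GuestMemoryError> {
    let expected = size_of::<O>();
    let mut result = None;

    let completed = mem.try_access(expected, addr, |offset, len, region_addr, region| {
        assert_eq!(offset, 0);
        if len < expected {
            return Err(GuestMemoryError::PartialBuffer {
                expected,
                completed: 0,
            });
        }
        result = Some(region.load(region_addr, order)?);
        Ok(expected)
    })?;

    if completed < expected {
        Err(GuestMemoryError::PartialBuffer { expected, completed })
    } else {
        Ok(result.unwrap())
    }
}
```
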
@XanClic (Author) commented Aug 29, 2025

Sorry, messed up the safety formatting: replaced `// Safe: ` with `// SAFETY:` (on its own line) to make clippy happy.
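
For readers unfamiliar with the lint in question: clippy's `undocumented_unsafe_blocks` expects a `// SAFETY:` line comment directly above each `unsafe` block. An illustrative example (not code from this PR):

```rust
/// Illustrative only; not code from this PR.
///
/// # Safety
///
/// `ptr` must be valid for reads of `len` bytes for the lifetime `'a`.
unsafe fn view_bytes<'a>(ptr: *const u8, len: usize) -> &'a [u8] {
    // SAFETY: the caller upholds the contract documented on this function,
    // so the pointer/length pair describes valid, live memory.
    unsafe { std::slice::from_raw_parts(ptr, len) }
}
```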

@bonzini (Member) commented Aug 29, 2025

@XanClic Oops, some missing safety comments

@roypat (Member) left a comment

I think maybe we can keep .get_slice() as a sort of utility function for when someone wants to get a contiguous slice, and where receiving something that crosses a region boundary would be an error condition.

/// The iterator’s items are wrapped in [`Result`], i.e. errors are reported on individual
/// items. If there is no such error, the cumulative length of all items will be equal to
/// `count`. If `count` is 0, an empty iterator will be returned.
fn get_slices<'a>(

Review comment (Member):

Can we reimplement try_access in terms of get_slices, to avoid the duplication of the iteration implementation? Or even deprecate try_access in favor of get_slices, since it seems to me to be the more powerful of the two?

match unsafe { self.do_next() } {
    Some(Ok(slice)) => Some(Ok(slice)),
    other => {
        // On error (or end), reset to 0 so iteration remains stopped

Review comment (Member):

Could implement FusedIterator, since after returning None once we never return anything other than None again
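
A minimal sketch of that suggestion, assuming the `Iterator` impl uses the same bounds as the struct definition quoted just below:

```rust
use std::iter::FusedIterator;

// Marker-trait impl only (would live next to the iterator inside the crate):
// next() resets the remaining count on error/end, so once it has returned
// None it keeps returning None.
impl<'a, M: GuestMemory + ?Sized> FusedIterator for GuestMemorySliceIterator<'a, M> {}
```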

///
/// Returned by [`GuestMemory::get_slices()`].
#[derive(Debug)]
pub struct GuestMemorySliceIterator<'a, M: GuestMemory + ?Sized> {

Review comment (Member):

Maybe M: GuestAddressSpace, and then we could derive Clone? Although not really sure if that's useful

@@ -23,6 +23,7 @@
and `GuestRegionMmap::from_range` to be separate from the error type returned by `GuestRegionCollection` functions.
Change return type of `GuestRegionMmap::new` from `Result` to `Option`.
- [[#324](https://github.com/rust-vmm/vm-memory/pull/324)] `GuestMemoryRegion::bitmap()` now returns a `BitmapSlice`. Accessing the full bitmap is now possible only if the type of the memory region is known, for example with `MmapRegion::bitmap()`.
- [[#339](https://github.com/rust-vmm/vm-memory/pull/339)] Fix `Bytes::read()` and `Bytes::write()` not to ignore `try_access()`'s `count` parameter

Review comment (Member):

This should probably go into a Fixed section. Also, let's specify that this only applies to the blanket impl provided for T: GuestMemory

Comment on lines +682 to +706
let expected = size_of::<O>();

let completed = self.try_access(
    expected,
    addr,
    |offset, len, region_addr, region| -> Result<usize> {
        assert_eq!(offset, 0);
        if len < expected {
            return Err(Error::PartialBuffer {
                expected,
                completed: 0,
            });
        }
        region.store(val, region_addr, order).map(|()| expected)
    },
)?;

if completed < expected {
    Err(Error::PartialBuffer {
        expected,
        completed,
    })
} else {
    Ok(())
}

Review comment (Member):

I think this one (and the one below) would be a bit simpler in terms of get_slices maybe? Shouldn't that be something like

let iter = self.get_slices(addr, size_of::<O>());
let vslice = iter.next()?;
if iter.next().is_some() {
	return Err(PartialBuffer {0})
}
vslice.store(val)

Review comment (Member):

heh, or just self.get_slice(addr, size_of::<O>())?.store(val)

@@ -24,6 +24,7 @@
Change return type of `GuestRegionMmap::new` from `Result` to `Option`.
- [[#324](https://github.com/rust-vmm/vm-memory/pull/324)] `GuestMemoryRegion::bitmap()` now returns a `BitmapSlice`. Accessing the full bitmap is now possible only if the type of the memory region is known, for example with `MmapRegion::bitmap()`.
- [[#339](https://github.com/rust-vmm/vm-memory/pull/339)] Fix `Bytes::read()` and `Bytes::write()` not to ignore `try_access()`'s `count` parameter
- [[#339](https://github.com/rust-vmm/vm-memory/pull/339)] Implement `Bytes::load()` and `Bytes::store()` with `try_access()` instead of `to_region_addr()`

Review comment (Member):

This should also specify that it's only relevant for the blanket impl
