3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -9,6 +9,7 @@
is now a type alias for `GuestRegionContainer<GuestRegionMmap>`).
- \[[#338](https://github.com/rust-vmm/vm-memory/pull/338)\] Make `GuestMemoryAtomic` always implement `Clone`.
- \[[#338](https://github.com/rust-vmm/vm-memory/pull/338)\] Make `GuestAddressSpace` a subtrait of `Clone`.
- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Add `GuestMemory::get_slices()`

### Changed

@@ -22,6 +23,8 @@
and `GuestRegionMmap::from_range` to be separate from the error type returned by `GuestRegionCollection` functions.
Change return type of `GuestRegionMmap::new` from `Result` to `Option`.
- \[[#324](https://github.com/rust-vmm/vm-memory/pull/324)\] `GuestMemoryRegion::bitmap()` now returns a `BitmapSlice`. Accessing the full bitmap is now possible only if the type of the memory region is known, for example with `MmapRegion::bitmap()`.
- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Fix `Bytes::read()` and `Bytes::write()` not to ignore `try_access()`'s `count` parameter
Member: This should probably go into a Fixed section. Also, let's specify that this only applies to the blanket impl provided for `T: GuestMemory`.

Author: OK, right!

- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Implement `Bytes::load()` and `Bytes::store()` with `try_access()` instead of `to_region_addr()`
Member: This should also specify that it's only relevant for the blanket impl.


### Removed

4 changes: 3 additions & 1 deletion src/bitmap/mod.rs
@@ -266,7 +266,9 @@ pub(crate) mod tests {
let dirty_len = size_of_val(&val);

let (region, region_addr) = m.to_region_addr(dirty_addr).unwrap();
- let slice = m.get_slice(dirty_addr, dirty_len).unwrap();
let mut slices = m.get_slices(dirty_addr, dirty_len);
let slice = slices.next().unwrap().unwrap();
assert!(slices.next().is_none());

assert!(range_is_clean(&region.bitmap(), 0, region.len() as usize));
assert!(range_is_clean(slice.bitmap(), 0, dirty_len));
157 changes: 145 additions & 12 deletions src/guest_memory.rs
@@ -44,6 +44,7 @@
use std::convert::From;
use std::fs::File;
use std::io;
use std::mem::size_of;
use std::ops::{BitAnd, BitOr, Deref};
use std::rc::Rc;
use std::sync::atomic::Ordering;
@@ -455,6 +456,93 @@ pub trait GuestMemory {
.ok_or(Error::InvalidGuestAddress(addr))
.and_then(|(r, addr)| r.get_slice(addr, count))
}

/// Returns an iterator over [`VolatileSlice`](struct.VolatileSlice.html)s, together covering
/// `count` bytes starting at `addr`.
///
/// Iterating in this way is necessary because the given address range may be fragmented across
/// multiple [`GuestMemoryRegion`]s.
///
/// The iterator’s items are wrapped in [`Result`], i.e. errors are reported on individual
/// items. If there is no such error, the cumulative length of all items will be equal to
/// `count`. If `count` is 0, an empty iterator will be returned.
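///
/// # Example
///
/// A sketch of typical usage (illustrative only; `mem` stands in for any
/// [`GuestMemory`] implementation):
///
/// ```ignore
/// let mut total = 0;
/// for slice in mem.get_slices(GuestAddress(0x1000), 0x2000) {
///     let slice = slice.expect("range should be entirely backed by guest memory");
///     // Each item is a contiguous `VolatileSlice` within a single region.
///     total += slice.len();
/// }
/// assert_eq!(total, 0x2000);
/// ```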
fn get_slices<'a>(
Member: Can we reimplement try_access in terms of get_slices, to avoid the duplication of the iteration implementation? Or even deprecate try_access in favor of get_slices, since it seems to me to be the more powerful of the two?

Author: Maybe it can be deprecated, I don’t have a stake in that game. :)

try_access() is slightly different in that it passes the GuestMemoryRegion plus the address in that region, not the slice, so I don’t think it can be implemented via get_slices(). However, at least some (if not most) try_access() users only really use that region to generate a slice, so those could indeed be replaced by get_slices().

So we can definitely try and see whether the try_access() calls across rust-vmm (vm-memory, vm-virtio, vhost) could all be replaced with get_slices(), and if so, indeed deprecate it.

Member: I gave it a try for vm-memory, and it turns out the problem is less about operating on VolatileSlice instead of GuestRegion, and more that the iterator doesn't give us the offset for free, plus the fact that the iterator yields Result. But nothing that isn't solved using try_fold (compile-tested only):

```diff
diff --git a/src/bitmap/mod.rs b/src/bitmap/mod.rs
index 802ebef1ca72..c92b7e618cfd 100644
--- a/src/bitmap/mod.rs
+++ b/src/bitmap/mod.rs
@@ -288,16 +288,12 @@ pub(crate) mod tests {
 
         // Finally, let's invoke the generic tests for `Bytes`.
         let check_range_closure = |m: &M, start: usize, len: usize, clean: bool| -> bool {
-            let mut check_result = true;
-            m.try_access(len, GuestAddress(start as u64), |_, size, reg_addr, reg| {
-                if !check_range(&reg.bitmap(), reg_addr.0 as usize, size, clean) {
-                    check_result = false;
-                }
-                Ok(size)
-            })
-            .unwrap();
-
-            check_result
+            m.get_slices(GuestAddress(start as u64), len)
+                .try_fold(true, |check_result, slice| {
+                    let slice = slice?;
+                    Ok(check_result && check_range(slice.bitmap(), 0, slice.len(), clean))
+                })
+                .unwrap()
         };
 
         test_bytes(
diff --git a/src/guest_memory.rs b/src/guest_memory.rs
index b9d3b62e5227..09fd38d898c2 100644
--- a/src/guest_memory.rs
+++ b/src/guest_memory.rs
@@ -354,9 +354,12 @@ pub trait GuestMemory {
 
     /// Check whether the range [base, base + len) is valid.
     fn check_range(&self, base: GuestAddress, len: usize) -> bool {
-        match self.try_access(len, base, |_, count, _, _| -> Result<usize> { Ok(count) }) {
+        match self
+            .get_slices(base, len)
+            .try_fold(0, |acc, r| r.map(|slice| acc + slice.len()))
+        {
             Ok(count) => count == len,
-            _ => false,
+            Err(_) => false,
         }
     }
 
@@ -549,23 +552,13 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
     type E = Error;
 
     fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
-        self.try_access(
-            buf.len(),
-            addr,
-            |offset, count, caddr, region| -> Result<usize> {
-                region.write(&buf[offset..(offset + count)], caddr)
-            },
-        )
+        self.get_slices(addr, buf.len())
+            .try_fold(0, |acc, slice| Ok(acc + slice?.write(&buf[acc..], 0)?))
     }
 
     fn read(&self, buf: &mut [u8], addr: GuestAddress) -> Result<usize> {
-        self.try_access(
-            buf.len(),
-            addr,
-            |offset, count, caddr, region| -> Result<usize> {
-                region.read(&mut buf[offset..(offset + count)], caddr)
-            },
-        )
+        self.get_slices(addr, buf.len())
+            .try_fold(0, |acc, slice| Ok(acc + slice?.read(&mut buf[acc..], 0)?))
     }
 
     /// # Examples
@@ -629,8 +622,9 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
     where
         F: ReadVolatile,
     {
-        self.try_access(count, addr, |_, len, caddr, region| -> Result<usize> {
-            region.read_volatile_from(caddr, src, len)
+        self.get_slices(addr, count).try_fold(0, |acc, r| {
+            let slice = r?;
+            Ok(acc + slice.read_volatile_from(0, src, slice.len())?)
         })
     }
 
@@ -657,10 +651,12 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
     where
         F: WriteVolatile,
     {
-        self.try_access(count, addr, |_, len, caddr, region| -> Result<usize> {
+        self.get_slices(addr, count).try_fold(0, |acc, r| {
+            let slice = r?;
             // For a non-RAM region, reading could have side effects, so we
             // must use write_all().
-            region.write_all_volatile_to(caddr, dst, len).map(|()| len)
+            slice.write_all_volatile_to(0, dst, slice.len())?;
+            Ok(acc + slice.len())
         })
     }

Author:

> but more that the iterator doesn't give us the offset for free

If you think it helps, we could of course add that, e.g. as part of Iterator::Item, though I’m happy with the try_fold() solution, too.
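A hypothetical adapter along those lines (illustrative only, not part of this PR; `with_offsets` is a made-up helper) could pair each slice with its running offset:

```rust
/// Illustrative sketch: wrap the iterator so that each item also carries the
/// cumulative offset of its slice within the overall access.
fn with_offsets<'a, M: GuestMemory + ?Sized>(
    slices: GuestMemorySliceIterator<'a, M>,
) -> impl Iterator<Item = Result<(usize, VolatileSlice<'a, MS<'a, M>>)>> {
    slices.scan(0usize, |offset, item| {
        let start = *offset;
        Some(item.map(|slice| {
            // Advance the running offset by the length of each yielded slice.
            *offset += slice.len();
            (start, slice)
        }))
    })
}
```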

&'a self,
addr: GuestAddress,
count: usize,
) -> GuestMemorySliceIterator<'a, Self> {
GuestMemorySliceIterator {
mem: self,
addr,
count,
}
}
}

/// Iterates over [`VolatileSlice`]s that together form a guest memory area.
///
/// Returned by [`GuestMemory::get_slices()`].
#[derive(Debug)]
pub struct GuestMemorySliceIterator<'a, M: GuestMemory + ?Sized> {
/// Underlying memory
mem: &'a M,
/// Next address in the guest memory area
addr: GuestAddress,
/// Remaining bytes in the guest memory area
count: usize,
}

impl<'a, M: GuestMemory + ?Sized> GuestMemorySliceIterator<'a, M> {
/// Helper function for [`<Self as Iterator>::next()`].
///
/// Get the next slice (i.e. the one starting from `self.addr` with a length up to
/// `self.count`) and update the internal state.
///
/// # Safety
///
/// This function does not reset `self.count` to 0 in case of error, i.e. it will not stop
/// iterating. Actual behavior after an error is ill-defined, so the caller must check the
/// return value and, in case of an error, reset `self.count` to 0.
///
/// (This is why this function exists: so that this resetting can be done in a single
/// central location.)
unsafe fn do_next(&mut self) -> Option<Result<VolatileSlice<'a, MS<'a, M>>>> {
if self.count == 0 {
return None;
}

let Some((region, start)) = self.mem.to_region_addr(self.addr) else {
return Some(Err(Error::InvalidGuestAddress(self.addr)));
};

let cap = region.len() - start.raw_value();
let len = std::cmp::min(cap, self.count as GuestUsize);

self.count -= len as usize;
self.addr = match self.addr.overflowing_add(len as GuestUsize) {
(x @ GuestAddress(0), _) | (x, false) => x,
(_, true) => return Some(Err(Error::GuestAddressOverflow)),
};

Some(region.get_slice(start, len as usize))
}
}

impl<'a, M: GuestMemory + ?Sized> Iterator for GuestMemorySliceIterator<'a, M> {
type Item = Result<VolatileSlice<'a, MS<'a, M>>>;

fn next(&mut self) -> Option<Self::Item> {
// SAFETY:
// We reset `self.count` to 0 on error
match unsafe { self.do_next() } {
Some(Ok(slice)) => Some(Ok(slice)),
other => {
// On error (or end), reset to 0 so iteration remains stopped
Member: Could implement FusedIterator, since after a single None we never return not-None ever again.

Author: Right!
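A minimal sketch of that marker impl (assuming the count-reset behavior below is kept):

```rust
use std::iter::FusedIterator;

// Sound because `next()` keeps returning `None` once `self.count` is 0, which
// holds both at the natural end of iteration and after an error.
impl<'a, M: GuestMemory + ?Sized> FusedIterator for GuestMemorySliceIterator<'a, M> {}
```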

self.count = 0;
other
}
}
}
}

impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
@@ -464,8 +552,8 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
self.try_access(
buf.len(),
addr,
- |offset, _count, caddr, region| -> Result<usize> {
- region.write(&buf[offset..], caddr)
|offset, count, caddr, region| -> Result<usize> {
region.write(&buf[offset..(offset + count)], caddr)
},
)
}
@@ -474,8 +562,8 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
self.try_access(
buf.len(),
addr,
- |offset, _count, caddr, region| -> Result<usize> {
- region.read(&mut buf[offset..], caddr)
|offset, count, caddr, region| -> Result<usize> {
region.read(&mut buf[offset..(offset + count)], caddr)
},
)
}
@@ -591,17 +679,62 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
}

fn store<O: AtomicAccess>(&self, val: O, addr: GuestAddress, order: Ordering) -> Result<()> {
- // `find_region` should really do what `to_region_addr` is doing right now, except
- // it should keep returning a `Result`.
- self.to_region_addr(addr)
- .ok_or(Error::InvalidGuestAddress(addr))
- .and_then(|(region, region_addr)| region.store(val, region_addr, order))
let expected = size_of::<O>();

let completed = self.try_access(
expected,
addr,
|offset, len, region_addr, region| -> Result<usize> {
assert_eq!(offset, 0);
if len < expected {
return Err(Error::PartialBuffer {
expected,
completed: 0,
});
}
region.store(val, region_addr, order).map(|()| expected)
},
)?;

if completed < expected {
Err(Error::PartialBuffer {
expected,
completed,
})
} else {
Ok(())
}
Member, on lines +682 to +706: I think this one (and the one below) would be a bit simpler in terms of get_slices maybe? Shouldn't that be something like

```rust
let mut iter = self.get_slices(addr, size_of::<O>());
let vslice = iter.next()?;
if iter.next().is_some() {
    return Err(PartialBuffer { 0 })
}
vslice.store(val)
```

Member: heh, or just `self.get_slice(addr, size_of::<O>())?.store(val)`

Author: Ah, OK. If VolatileSlice::store() does the same as GuestMemoryRegion::store(), that will be much better indeed.

Member: It does! In fact, GuestMemoryRegion::store() just defers to VolatileSlice::store().

Btw, this function is exactly what made me go "mh, maybe there is a use for get_slice() after all": atomic loads/stores that cross region boundaries cannot work, and that is not an implementation TODO that could somehow be solved later (e.g. if the guest gives us a virtqueue where an atomic access crosses region boundaries, the best we can do is fail the device activation).
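A minimal sketch of that simplification for the blanket impl (assuming `VolatileSlice`'s `Bytes<usize>` accessors, and that its error type converts into `guest_memory::Error` — that conversion is an assumption here):

```rust
fn store<O: AtomicAccess>(&self, val: O, addr: GuestAddress, order: Ordering) -> Result<()> {
    // get_slice() fails unless the whole range lies within a single region,
    // which is the desired behavior: atomic accesses cannot span two regions.
    self.get_slice(addr, size_of::<O>())?
        .store(val, 0, order)
        .map_err(Into::into) // assumes From<VolatileMemoryError> for Error
}

fn load<O: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> Result<O> {
    self.get_slice(addr, size_of::<O>())?
        .load(0, order)
        .map_err(Into::into) // assumes From<VolatileMemoryError> for Error
}
```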

}

fn load<O: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> Result<O> {
- self.to_region_addr(addr)
- .ok_or(Error::InvalidGuestAddress(addr))
- .and_then(|(region, region_addr)| region.load(region_addr, order))
let expected = size_of::<O>();
let mut result = None::<O>;

let completed = self.try_access(
expected,
addr,
|offset, len, region_addr, region| -> Result<usize> {
assert_eq!(offset, 0);
if len < expected {
return Err(Error::PartialBuffer {
expected,
completed: 0,
});
}
result = Some(region.load(region_addr, order)?);
Ok(expected)
},
)?;

if completed < expected {
Err(Error::PartialBuffer {
expected,
completed,
})
} else {
// Must be set because `completed == expected`
Ok(result.unwrap())
}
}
}

50 changes: 50 additions & 0 deletions src/mmap/mod.rs
@@ -624,6 +624,56 @@ mod tests {
assert!(guest_mem.get_slice(GuestAddress(0xc00), 0x100).is_err());
}

#[test]
fn test_guest_memory_get_slices() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x800);
let start_addr3 = GuestAddress(0xc00);
let guest_mem = GuestMemoryMmap::from_ranges(&[
(start_addr1, 0x400),
(start_addr2, 0x400),
(start_addr3, 0x400),
])
.unwrap();

// Same cases as `test_guest_memory_get_slice()`, just with `get_slices()`.
let slice_size = 0x200;
let mut slices = guest_mem.get_slices(GuestAddress(0x100), slice_size);
let slice = slices.next().unwrap().unwrap();
assert!(slices.next().is_none());
assert_eq!(slice.len(), slice_size);

let slice_size = 0x400;
let mut slices = guest_mem.get_slices(GuestAddress(0x800), slice_size);
let slice = slices.next().unwrap().unwrap();
assert!(slices.next().is_none());
assert_eq!(slice.len(), slice_size);

// Empty iterator.
assert!(guest_mem
.get_slices(GuestAddress(0x900), 0)
.next()
.is_none());

// Error cases, wrong size or base address.
let mut slices = guest_mem.get_slices(GuestAddress(0), 0x500);
assert_eq!(slices.next().unwrap().unwrap().len(), 0x400);
assert!(slices.next().unwrap().is_err());
assert!(slices.next().is_none());
let mut slices = guest_mem.get_slices(GuestAddress(0x600), 0x100);
assert!(slices.next().unwrap().is_err());
assert!(slices.next().is_none());
let mut slices = guest_mem.get_slices(GuestAddress(0x1000), 0x100);
assert!(slices.next().unwrap().is_err());
assert!(slices.next().is_none());

// Test fragmented case
let mut slices = guest_mem.get_slices(GuestAddress(0xa00), 0x400);
assert_eq!(slices.next().unwrap().unwrap().len(), 0x200);
assert_eq!(slices.next().unwrap().unwrap().len(), 0x200);
assert!(slices.next().is_none());
}

#[test]
fn test_atomic_accesses() {
let region = GuestRegionMmap::from_range(GuestAddress(0), 0x1000, None).unwrap();