1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -24,6 +24,7 @@
Change return type of `GuestRegionMmap::new` from `Result` to `Option`.
- \[[#324](https://github.com/rust-vmm/vm-memory/pull/324)\] `GuestMemoryRegion::bitmap()` now returns a `BitmapSlice`. Accessing the full bitmap is now possible only if the type of the memory region is known, for example with `MmapRegion::bitmap()`.
- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Fix `Bytes::read()` and `Bytes::write()` not to ignore `try_access()`'s `count` parameter
Member: This should probably go into a Fixed section. Also, let's specify that this only applies to the blanket impl provided for `T: GuestMemory`.

Author: OK, right!

- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Implement `Bytes::load()` and `Bytes::store()` with `try_access()` instead of `to_region_addr()`
Member: This should also specify that it's only relevant for the blanket impl.
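
For reference, the blanket impl both review comments refer to is the one this PR modifies in `src/guest_memory.rs` (its signature is visible in the hunk header further down):

```rust
// Blanket impl: any type implementing GuestMemory gets Bytes<GuestAddress>
// (read/write/load/store and friends) for free.
impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
    // ...
}
```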


### Removed

62 changes: 54 additions & 8 deletions src/guest_memory.rs
@@ -44,6 +44,7 @@
 use std::convert::From;
 use std::fs::File;
 use std::io;
+use std::mem::size_of;
 use std::ops::{BitAnd, BitOr, Deref};
 use std::rc::Rc;
 use std::sync::atomic::Ordering;
@@ -678,17 +679,62 @@ impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
     }

     fn store<O: AtomicAccess>(&self, val: O, addr: GuestAddress, order: Ordering) -> Result<()> {
-        // `find_region` should really do what `to_region_addr` is doing right now, except
-        // it should keep returning a `Result`.
-        self.to_region_addr(addr)
-            .ok_or(Error::InvalidGuestAddress(addr))
-            .and_then(|(region, region_addr)| region.store(val, region_addr, order))
+        let expected = size_of::<O>();
+
+        let completed = self.try_access(
+            expected,
+            addr,
+            |offset, len, region_addr, region| -> Result<usize> {
+                assert_eq!(offset, 0);
+                if len < expected {
+                    return Err(Error::PartialBuffer {
+                        expected,
+                        completed: 0,
+                    });
+                }
+                region.store(val, region_addr, order).map(|()| expected)
+            },
+        )?;
+
+        if completed < expected {
+            Err(Error::PartialBuffer {
+                expected,
+                completed,
+            })
+        } else {
+            Ok(())
+        }
Comment on lines +682 to +706
Member: I think this one (and the one below) would be a bit simpler in terms of `get_slices` maybe? Shouldn't that be something like

    let iter = self.get_slices(addr, size_of::<O>());
    let vslice = iter.next()?;
    if iter.next().is_some() {
        return Err(PartialBuffer {0})
    }
    vslice.store(val)

Member: heh, or just `self.get_slice(addr, size_of::<O>())?.store(val)`
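
Spelled out, that one-liner would reduce both blanket methods to a single `get_slice` call. A minimal sketch of the reviewer's suggestion (not the code this PR actually adds; it assumes `VolatileSlice`'s `Bytes<usize>` impl, where `store`/`load` take a `usize` offset and an `Ordering`, plus the usual error conversion from `volatile_memory::Error`):

```rust
// Sketch of the get_slice-based alternative discussed above. get_slice
// fails unless [addr, addr + size_of::<O>()) lies in a single region,
// which is exactly the constraint an atomic access must satisfy anyway.
fn store<O: AtomicAccess>(&self, val: O, addr: GuestAddress, order: Ordering) -> Result<()> {
    // Offset 0 within a slice that already starts at `addr`.
    self.get_slice(addr, size_of::<O>())?
        .store(val, 0, order)
        .map_err(Into::into)
}

fn load<O: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> Result<O> {
    self.get_slice(addr, size_of::<O>())?
        .load(0, order)
        .map_err(Into::into)
}
```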

Author: Ah, OK. If `VolatileSlice::store()` does the same as `GuestMemoryRegion::store()`, that will be much better indeed.

Member: It does! In fact, `GuestMemoryRegion::store()` just defers to `VolatileSlice::store()`.

Btw, this function is exactly what made me go "mh, maybe there is a use for `get_slice()` after all". Atomic loads/stores that cross region boundaries cannot work, and that is not an implementation TODO that can somehow be solved (e.g. if the guest gives us a virt queue where an atomic goes across region boundaries, the best we can do is fail the device activation).

Author: That’s indeed true and a very good point; for atomic accesses, the reason why they shouldn’t cross boundaries is that they should be naturally aligned, which automatically prevents them from crossing e.g. page boundaries.

Then again, specifically for atomic accesses, I think users outside of vm-memory shouldn’t need to implement them and should instead use what `Bytes` provides.

I still think it could be beneficial to force users to deal with an iterator even when they only expect a single slice, because it might encourage them to write a comment on why this is OK. Making `get_slice()` unsafe would achieve the same effect, but the thing is, it isn’t really unsafe, so that would technically be wrong. But maybe I’m overreaching and shouldn’t dictate how vm-memory users have to document their code…
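
To make the alignment argument concrete, here is a small self-contained check (illustrative only; the helper name is made up): an N-byte access that is naturally aligned (`addr % N == 0`) always lies within a single N-aligned block, so it can never straddle a page or region boundary whose alignment is a multiple of N.

```rust
/// Hypothetical helper: does an access of `size` bytes starting at `addr`
/// cross a boundary with alignment `boundary` (e.g. a 4 KiB page)?
fn crosses_boundary(addr: u64, size: u64, boundary: u64) -> bool {
    (addr / boundary) != ((addr + size - 1) / boundary)
}

fn main() {
    // Naturally aligned 8-byte accesses never cross a 4 KiB boundary...
    for addr in (0..4 * 4096).step_by(8) {
        assert!(!crosses_boundary(addr, 8, 4096));
    }
    // ...but a misaligned one can.
    assert!(crosses_boundary(4092, 8, 4096));
}
```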

     }

     fn load<O: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> Result<O> {
-        self.to_region_addr(addr)
-            .ok_or(Error::InvalidGuestAddress(addr))
-            .and_then(|(region, region_addr)| region.load(region_addr, order))
+        let expected = size_of::<O>();
+        let mut result = None::<O>;
+
+        let completed = self.try_access(
+            expected,
+            addr,
+            |offset, len, region_addr, region| -> Result<usize> {
+                assert_eq!(offset, 0);
+                if len < expected {
+                    return Err(Error::PartialBuffer {
+                        expected,
+                        completed: 0,
+                    });
+                }
+                result = Some(region.load(region_addr, order)?);
+                Ok(expected)
+            },
+        )?;
+
+        if completed < expected {
+            Err(Error::PartialBuffer {
+                expected,
+                completed,
+            })
+        } else {
+            // Must be set because `completed == expected`
+            Ok(result.unwrap())
+        }
     }
 }
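
For context, a quick usage sketch of these blanket `Bytes` methods (hypothetical layout; assumes the `backend-mmap` feature): an aligned access that fits in one region succeeds, while one that would straddle a region boundary now reports `PartialBuffer` instead of tearing the atomic.

```rust
use std::sync::atomic::Ordering;
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

fn main() {
    // Two adjacent 4 KiB regions (made-up layout for illustration).
    let mem = GuestMemoryMmap::<()>::from_ranges(&[
        (GuestAddress(0x0000), 0x1000),
        (GuestAddress(0x1000), 0x1000),
    ])
    .unwrap();

    // A 4-byte store/load ending exactly at the region boundary is fine.
    mem.store(0xdead_beef_u32, GuestAddress(0x0ffc), Ordering::SeqCst)
        .unwrap();
    let v: u32 = mem.load(GuestAddress(0x0ffc), Ordering::SeqCst).unwrap();
    assert_eq!(v, 0xdead_beef);

    // An 8-byte access at the same address would cross into the second
    // region, so the try_access-based implementation refuses it.
    assert!(mem
        .store(0u64, GuestAddress(0x0ffc), Ordering::SeqCst)
        .is_err());
}
```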
