
Conversation

ShadowCurse
Contributor

Changes

Add a buffer to the Entropy struct that is reused for descriptor chain processing. This removes the runtime overhead of recreating the buffer for each request.
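For illustration, here is a minimal, self-contained sketch of the reuse pattern. The `EntropyLike` and `Request` types below are hypothetical stand-ins, not the actual Firecracker code (which, per the review diff further down, reuses the device's `IoVecBufferMut` rather than a plain `Vec<u8>`):

```rust
// Sketch of reusing one preallocated buffer across requests
// instead of building a fresh buffer per request.
struct Request {
    len: usize,
}

struct EntropyLike {
    // Lives in the device struct and is reused for every request.
    buffer: Vec<u8>,
}

impl EntropyLike {
    fn new() -> Self {
        Self { buffer: Vec::new() }
    }

    fn handle_request(&mut self, req: &Request) -> usize {
        // Reuse the allocation: clear the contents but keep the capacity.
        self.buffer.clear();
        self.buffer.resize(req.len, 0);
        // ... fill `self.buffer` with random bytes and write it back to the guest ...
        self.buffer.len()
    }
}

fn main() {
    let mut dev = EntropyLike::new();
    for len in [16usize, 64, 32] {
        let written = dev.handle_request(&Request { len });
        println!("served {written} bytes");
    }
}
```

The point of the pattern is that the buffer's backing storage lives in the device struct and is reused across requests, instead of being constructed from scratch for every descriptor chain.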

Reason

Less overhead in the rng device.

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following the
Developer Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • If a specific issue led to this PR, this PR closes the issue.
  • The description of changes is clear and encompassing.
  • Any required documentation changes (code and docs) are included in this
    PR.
  • API changes follow the Runbook for Firecracker API changes.
  • User-facing changes are mentioned in CHANGELOG.md.
  • All added/changed functionality is tested.
  • New TODOs link to an issue.
  • Commits meet
    contribution quality standards.

  • This functionality cannot be added in rust-vmm.

@ShadowCurse force-pushed the rng_reuse branch 2 times, most recently from e67de4f to 432793b on October 10, 2024 at 11:35

codecov bot commented Oct 10, 2024

Codecov Report

Attention: Patch coverage is 93.33333% with 1 line in your changes missing coverage. Please review.

Project coverage is 83.96%. Comparing base (32332d9) to head (41dfa50).
Report is 1 commit behind head on main.

Files with missing lines                   Patch %   Lines
src/vmm/src/devices/virtio/rng/device.rs   93.33%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4846      +/-   ##
==========================================
- Coverage   83.96%   83.96%   -0.01%     
==========================================
  Files         250      250              
  Lines       27758    27756       -2     
==========================================
- Hits        23306    23304       -2     
  Misses       4452     4452              
Flag Coverage Δ
5.10-c5n.metal 84.58% <93.33%> (-0.01%) ⬇️
5.10-m5n.metal 84.56% <93.33%> (-0.01%) ⬇️
5.10-m6a.metal 83.85% <93.33%> (-0.01%) ⬇️
5.10-m6g.metal 80.51% <93.33%> (-0.01%) ⬇️
5.10-m6i.metal 84.56% <93.33%> (-0.01%) ⬇️
5.10-m7g.metal 80.51% <93.33%> (-0.01%) ⬇️
6.1-c5n.metal 84.58% <93.33%> (-0.01%) ⬇️
6.1-m5n.metal 84.56% <93.33%> (-0.01%) ⬇️
6.1-m6a.metal 83.85% <93.33%> (-0.01%) ⬇️
6.1-m6g.metal 80.50% <93.33%> (-0.01%) ⬇️
6.1-m6i.metal 84.56% <93.33%> (-0.01%) ⬇️
6.1-m7g.metal 80.51% <93.33%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@ShadowCurse marked this pull request as ready for review October 10, 2024 11:48
bchalios previously approved these changes Oct 10, 2024
@ShadowCurse self-assigned this Oct 10, 2024
@ShadowCurse added the Status: Awaiting review, Type: Enhancement, and Type: Performance labels Oct 10, 2024
Contributor

@roypat left a comment


Could we avoid the lifetime issues like this instead of doing Self:: everywhere? :o

diff --git a/src/vmm/src/devices/virtio/rng/device.rs b/src/vmm/src/devices/virtio/rng/device.rs
index f9bb3c828..622268e54 100644
--- a/src/vmm/src/devices/virtio/rng/device.rs
+++ b/src/vmm/src/devices/virtio/rng/device.rs
@@ -91,13 +91,13 @@ impl Entropy {
             .map_err(DeviceError::FailedSignalingIrq)
     }
 
-    fn rate_limit_request(rate_limiter: &mut RateLimiter, bytes: u64) -> bool {
-        if !rate_limiter.consume(1, TokenType::Ops) {
+    fn rate_limit_request(&mut self, bytes: u64) -> bool {
+        if !self.rate_limiter.consume(1, TokenType::Ops) {
             return false;
         }
 
-        if !rate_limiter.consume(bytes, TokenType::Bytes) {
-            rate_limiter.manual_replenish(1, TokenType::Ops);
+        if !self.rate_limiter.consume(bytes, TokenType::Bytes) {
+            self.rate_limiter.manual_replenish(1, TokenType::Ops);
             return false;
         }
 
@@ -109,29 +109,29 @@ impl Entropy {
         rate_limiter.manual_replenish(bytes, TokenType::Bytes);
     }
 
-    fn handle_one(iovec: &mut IoVecBufferMut) -> Result<u32, EntropyError> {
+    fn handle_one(&mut self) -> Result<u32, EntropyError> {
         // If guest provided us with an empty buffer just return directly
-        if iovec.len() == 0 {
+        if self.buffer.len() == 0 {
             return Ok(0);
         }
 
-        let mut rand_bytes = vec![0; iovec.len() as usize];
+        let mut rand_bytes = vec![0; self.buffer.len() as usize];
         rand::fill(&mut rand_bytes).map_err(|err| {
             METRICS.host_rng_fails.inc();
             err
         })?;
 
         // It is ok to unwrap here. We are writing `iovec.len()` bytes at offset 0.
-        iovec.write_all_volatile_at(&rand_bytes, 0).unwrap();
-        Ok(iovec.len())
+        self.buffer.write_all_volatile_at(&rand_bytes, 0).unwrap();
+        Ok(self.buffer.len())
     }
 
     fn process_entropy_queue(&mut self) {
-        // This is safe since we checked in the event handler that the device is activated.
-        let mem = self.device_state.mem().unwrap();
 
         let mut used_any = false;
         while let Some(desc) = self.queues[RNG_QUEUE].pop() {
+            // This is safe since we checked in the event handler that the device is activated.
+            let mem = self.device_state.mem().unwrap();
             let index = desc.index;
             METRICS.entropy_event_count.inc();
 
@@ -148,8 +148,7 @@ impl Entropy {
                     // Check for available rate limiting budget.
                     // If not enough budget is available, leave the request descriptor in the queue
                     // to handle once we do have budget.
-                    if !Self::rate_limit_request(
-                        &mut self.rate_limiter,
+                    if !self.rate_limit_request(
                         u64::from(self.buffer.len()),
                     ) {
                         debug!("entropy: throttling entropy queue");
@@ -158,7 +157,7 @@ impl Entropy {
                         break;
                     }
 
-                    Self::handle_one(&mut self.buffer).unwrap_or_else(|err| {
+                    self.handle_one().unwrap_or_else(|err| {
                         error!("entropy: {err}");
                         METRICS.entropy_event_fails.inc();
                         0
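
For readers less familiar with the borrow-checker trade-off behind this suggestion, here is a minimal sketch; the `Dev`, `Limiter`, and `Buffer` types are hypothetical stand-ins, not the real device code. An associated function that takes individual fields (the `Self::` style) lets the caller keep other borrows of `self` alive at the same time, while a `&mut self` method exclusively borrows the whole struct:

```rust
// Sketch of the two call styles discussed in the review comment.
struct Limiter {
    budget: u64,
}

struct Buffer {
    data: Vec<u8>,
}

struct Dev {
    rate_limiter: Limiter,
    buffer: Buffer,
}

impl Dev {
    // Style 1: associated function with explicit field borrows.
    // A caller may borrow `self.rate_limiter` and `self.buffer` simultaneously,
    // because the compiler sees two disjoint field borrows.
    fn rate_limit_request(rate_limiter: &mut Limiter, bytes: u64) -> bool {
        if rate_limiter.budget < bytes {
            return false;
        }
        rate_limiter.budget -= bytes;
        true
    }

    // Style 2: a `&mut self` method, as suggested above.
    // Simpler call sites, but it exclusively borrows *all* of `self`, so no
    // other reference derived from `self` may be live across the call.
    fn rate_limit_request_self(&mut self, bytes: u64) -> bool {
        Self::rate_limit_request(&mut self.rate_limiter, bytes)
    }

    fn process(&mut self) {
        // `len` is a plain copy, so nothing borrowed from `self` is live here.
        let len = self.buffer.data.len() as u64;
        if self.rate_limit_request_self(len) {
            self.buffer.data.clear();
        }
    }
}

fn main() {
    let mut dev = Dev {
        rate_limiter: Limiter { budget: 128 },
        buffer: Buffer { data: vec![0; 64] },
    };
    dev.process();
    println!("remaining budget: {}", dev.rate_limiter.budget);
}
```

With the `&mut self` style, no other reference derived from `self` may be held across the call, which is presumably why the diff above also moves the `let mem = self.device_state.mem().unwrap()` lookup inside the loop.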

Add a buffer to the Entropy struct which will be reused
for descriptor chain processing. This will remove runtime
overhead of recreating this buffer for each request.

Signed-off-by: Egor Lazarchuk <[email protected]>
@ShadowCurse merged commit c00d5ed into firecracker-microvm:main Oct 10, 2024
6 of 7 checks passed
@ShadowCurse deleted the rng_reuse branch October 10, 2024 16:11

Labels

Status: Awaiting review, Type: Enhancement, Type: Performance
