fix(test): cleanup snapshot files better #5126
Conversation
The restore_from_snapshot function did not integrate well with UFFD-based snapshot restore: even if a UFFD path was specified, it still created a copy of the snapshot memory file inside the chroot, even though the UFFD handler had already been set up in spawn_pf_handler. Fix this, and while we're at it, also remove the need to pass the UFFD handler and snapshot file explicitly when using UFFD-based restore: spawn_pf_handler sets the uffd_handler field of the microvm object, and can easily be made to also record the snapshot from which page faults are being served.

Signed-off-by: Patrick Roy <[email protected]>
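The core of the fix described above can be sketched as follows. This is an illustrative simplification, not Firecracker's actual test-framework code: the function signature and helper names are assumptions, and only the branching logic (skip the chroot copy when a UFFD handler serves the pages) mirrors the commit message.

```python
import shutil
from pathlib import Path


def restore_from_snapshot(chroot: Path, mem_path: Path, uffd_handler=None):
    """Sketch: only copy the snapshot memory file into the chroot when doing
    a regular (mmap-based) restore. With a UFFD handler attached, the handler
    already serves page faults from the original memory file, so a copy inside
    the jail would never be touched."""
    if uffd_handler is not None:
        # UFFD restore: the page-fault handler owns the memory file; nothing
        # needs to exist inside the chroot.
        return mem_path
    # Regular restore: Firecracker will mmap this file, so it must be
    # reachable from inside the jail.
    jailed_mem = chroot / mem_path.name
    shutil.copy(mem_path, jailed_mem)
    return jailed_mem
```

With this shape, callers doing a UFFD restore no longer leak an unused memory-file copy into every chroot.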
Codecov Report: all modified and coverable lines are covered by tests ✅

Coverage Diff (main vs. #5126):

              main    #5126
    Coverage  83.09%  83.09%
    Files        250     250
    Lines      26920   26920
    Hits       22368   22368
    Misses      4552    4552
Force-pushed 565afda to eca928f
Force-pushed eca928f to 698edb2
If build_n_from_snapshot is asked to build many snapshots incrementally, it doesn't clean up after itself properly: each iteration ends by creating a new snapshot of the VM, and this snapshot is passed to iteration n+1. There, a copy is created inside the new VM's chroot, and iteration n+1 ends by creating a new snapshot for iteration n+2 and deleting the copy of the snapshot inside the chroot. However, we never delete the snapshot created in iteration n, and so with each iteration more snapshots accumulate.

Fix this by having the function delete the snapshot created in iteration n after iteration n+2 finishes successfully. The idea is that in case of failure, we will still have the snapshot created in iteration n+1 (the one which caused the failure in n+2), and also the snapshot created in iteration n (the last snapshot known to have successfully gone through a test iteration, namely iteration n+1).

Signed-off-by: Patrick Roy <[email protected]>
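The deferred-cleanup idea above amounts to keeping a rolling window of the two most recent snapshots. A minimal sketch, assuming hypothetical make_snapshot/delete_snapshot callbacks rather than the real test-framework helpers:

```python
def build_n_from_snapshot(n: int, make_snapshot, delete_snapshot):
    """Sketch of the deferred cleanup: once iteration i+1 has succeeded, the
    snapshot from iteration i-1 is no longer needed. Keeping the two newest
    snapshots means that on a failure in iteration i+1 we still have both the
    snapshot that triggered the failure (from i) and the last known-good one
    (from i-1)."""
    live = []  # snapshots not yet deleted, oldest first
    for i in range(n):
        live.append(make_snapshot(i))
        if len(live) > 2:
            # The grandparent snapshot has now survived a full test
            # iteration beyond its own; it is safe to delete.
            delete_snapshot(live.pop(0))
    return live  # at most the two most recent snapshots remain
```

Running five iterations this way deletes the snapshots from iterations 0, 1, and 2, and leaves only those from iterations 3 and 4 on disk.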
Force-pushed 698edb2 to b5652f5
force pushed to fix a typo and remove a leftover commented-out debug statement from testing
It turns out that some of our snapshot tests are not exactly exemplary at cleaning up snapshot memory files after they're done. This is partly because Microvm.restore_from_snapshot was largely oblivious to the fact that a UFFD-restored snapshot doesn't need to be copied into the jail again for an mmap that will never happen, and partly due to me messing up the build_n_from_snapshot function and not cleaning up snapshots in "incremental" mode.
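One way to catch this class of leak in the future is an explicit post-test check that snapshot memory files did not accumulate. A hedged sketch, not part of this PR: the `*.mem` glob pattern and the `keep` bound are illustrative assumptions, not Firecracker's actual file layout or policy.

```python
from pathlib import Path


def assert_no_leftover_snapshots(directory: Path, keep: int = 2):
    """Sketch of a cleanliness check to run after a snapshot test: fail if
    more than `keep` snapshot memory files remain in `directory`. With the
    deferred cleanup described above, at most the two newest snapshots
    should survive a successful run."""
    leftovers = sorted(directory.glob("*.mem"))
    assert len(leftovers) <= keep, f"leaked snapshot files: {leftovers}"
```

Such a check turns a silent disk-space leak into an immediate test failure at the point where the leak is introduced.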
License Acceptance
By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md
.PR Checklist
- Ran tools/devtool checkstyle to verify that the PR passes the automated style checks.
- Described the changes and how they are solving the problem in a clear and encompassing way.
- Any relevant issues are referenced in the PR.
- User-facing changes are mentioned in CHANGELOG.md.
- API changes follow the Runbook for Firecracker API changes.
- New functionality is covered by integration tests.
- Any new TODOs link to an issue.
- Any relevant changes are upstreamed to rust-vmm.