Tests for S3VersionStore::store_version_from_reader#404
Conversation
…producer task errors out, otherwise we may attempt to begin or continue uploads after sending abort_multipart_upload, which would be weird
malcolmgreaves
left a comment
Great PR w/ 1 requested change! Easy to follow. Great test cases and excellent in-line documentation. One requested change before merging -- please use workspace versions for hyper, hyper-util, s3s, and s3s-fs.
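The requested change could look something like the sketch below, assuming the workspace root `Cargo.toml` already declares these dependencies under `[workspace.dependencies]`. The version numbers here are illustrative, not the ones the PR actually pins:

```toml
# Workspace root Cargo.toml (versions shown are placeholders):
[workspace.dependencies]
hyper = "1"
hyper-util = "0.1"
s3s = "0.10"
s3s-fs = "0.10"

# Member crate's Cargo.toml: inherit the versions from the workspace
# instead of re-declaring them locally.
[dev-dependencies]
hyper = { workspace = true }
hyper-util = { workspace = true }
s3s = { workspace = true }
s3s-fs = { workspace = true }
```

With `workspace = true`, every crate in the workspace resolves to the same version, so a future bump only has to happen in one place.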
…ror that didn't originate from the aws sdk
…t multipart upload shenanigans
…3 without multipart upload shenanigans" because it fails at runtime without a size. This reverts commit 7690d04.
…ge in the direct streaming attempt
…t; use file size to determine part size for multipart uploads for files > 100MB
Co-authored-by: Malcolm Greaves <malcolmgreaves@users.noreply.github.com>
…ires updating some dependencies
I'm tackling the S3 implementation one method at a time, and there are many more methods to go, so it will be a while before we're in a state where we can actually try it out. But I have unit tests [in a separate PR](#404) (since they required some hefty dependency updates).

- Implement `S3VersionStore::store_version_from_reader`:
  - Now requires specifying the file size up front, which all callers can easily do.
  - Uploads files <= 100MB in one shot (as per AWS recommendations).
  - Determines a file part size dynamically based on the file size for files > 100MB.
  - Does not write to disk.
  - Uploads up to 16 file parts concurrently.
  - Cancels the multipart upload if anything goes wrong.
- Add a new `OxenError::AwsS3Error` variant.
- Updated the AWS crates.
- Added instructions for Claude to stop creating functions for random tiny bits of code just to call them exactly once, and to stop deleting relevant comments. 🤞🏻

---------

Co-authored-by: Malcolm Greaves <malcolmgreaves@users.noreply.github.com>
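The dynamic part-size selection described above could be sketched roughly like this. This is a hypothetical standalone sketch, not the PR's actual code: the 100MB one-shot cutoff comes from the PR description, while the 5MiB minimum part size and 10,000-part cap are S3's documented multipart limits.

```rust
// Sketch of choosing an upload strategy from the total file size.
// Constants: 100MB one-shot cutoff (per the PR); 5MiB / 10,000 parts
// are S3's multipart limits. Exact values in the real code may differ.

const ONE_SHOT_LIMIT: u64 = 100 * 1024 * 1024; // <= 100MB: single upload
const MIN_PART_SIZE: u64 = 5 * 1024 * 1024;    // S3 minimum part size
const MAX_PARTS: u64 = 10_000;                 // S3 maximum part count

/// Returns None for a one-shot upload, or Some(part_size) for multipart.
fn choose_part_size(file_size: u64) -> Option<u64> {
    if file_size <= ONE_SHOT_LIMIT {
        return None;
    }
    // Smallest part size that keeps the part count within MAX_PARTS,
    // floored at the 5MiB minimum.
    let size = file_size.div_ceil(MAX_PARTS);
    Some(size.max(MIN_PART_SIZE))
}

fn main() {
    // Small files skip multipart entirely.
    assert_eq!(choose_part_size(50 * 1024 * 1024), None);
    // Just over the cutoff: the 5MiB floor applies.
    assert_eq!(choose_part_size(200 * 1024 * 1024), Some(MIN_PART_SIZE));
    // Very large files need bigger parts to stay under 10,000 parts.
    let huge = 100u64 * 1024 * 1024 * 1024; // 100GiB
    assert!(choose_part_size(huge).unwrap() > MIN_PART_SIZE);
    println!("ok");
}
```

Because the file size is known up front, the part size (and therefore the part count) can be fixed before any bytes are read, which is what makes bounded concurrent part uploads straightforward.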
Note: I won't blame you if you can't find this PR! 🤣 (Look at the PR number)
These tests are in a separate PR from #398 because pulling in
`s3s` as a dev dependency required a newer `chrono`, which required a newer `duckdb`, which required a newer `arrow`. Isolating this in its own PR felt like a good idea in case of side effects (and there was one observed side effect that had to be fixed). We need to get over this hurdle so we can have unit tests for the S3 implementation.