Conversation

@cosmicexplorer (Contributor) commented Aug 17, 2024

Recreation of #208 to work around GitHub issues.

Problem

ZipArchive::extract() corresponds to the way most zip implementations perform the task, but it's single-threaded. This is appropriate under the assumptions imposed by Rust's Read and Seek traits, where mutable access is necessary and only one reader can extract file contents at a time. However, most Unix-like operating systems offer a pread() operation which avoids mutating OS state like the file offset, so multiple threads can read from a file handle at once. The Go programming language offers io.ReaderAt in the stdlib to codify this ability.
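
For illustration, here is a minimal sketch of positioned reads in Rust (not code from this PR): std's FileExt::read_at() wraps pread() on Unix, so one shared handle can serve many threads. The file name is a placeholder.

```rust
use std::fs::File;
use std::io;
use std::os::unix::fs::FileExt; // read_at() wraps pread(2)
use std::sync::Arc;
use std::thread;

fn main() -> io::Result<()> {
    // One shared handle, no mutable state: read_at() takes &self and an
    // explicit offset, so threads never race on a shared file cursor.
    let file = Arc::new(File::open("archive.zip")?); // placeholder path
    let threads: Vec<_> = (0..4u64)
        .map(|i| {
            let file = Arc::clone(&file);
            thread::spawn(move || -> io::Result<usize> {
                let mut buf = vec![0u8; 4096];
                // Each thread reads its own chunk concurrently.
                file.read_at(&mut buf, i * 4096)
            })
        })
        .collect();
    for t in threads {
        t.join().unwrap()?;
    }
    Ok(())
}
```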

Solution

This is a rework of #72 which avoids introducing unnecessary thread pools and creates all output file handles and containing directories up front. For large zips, we want to:

  • create output handles and containing directories up front,
  • split the input file handle into chunks to process the constituent file entries in parallel,
  • for large compressed entries, pipe their content into a dedicated stream, to avoid intermixing I/O and decompression and to keep them from blocking quick small entries later in the file.

src/read/split.rs was created to cover pread() and other operations, while src/read/pipelining.rs was created to perform the high-level logic to split up entries and perform pipelined extraction.

Result

  • The parallelism feature was added to the crate to gate the newly added code + API.
  • A dependency on the libc crate was added for #[cfg(all(unix, feature = "parallelism"))] in order to make use of OS-specific functionality.
  • zip::read::split_extract() was added as a new external API to extract &ZipArchive<fs::File> when #[cfg(all(unix, feature = "parallelism"))].

Note that this does not handle symlinks yet, which I plan to add in a follow-up PR.
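
A rough usage sketch of the new API (split_extract() and the decompression_threads field appear in this PR; the module path for ExtractionParameters, the output-directory argument, and the Default impl are assumptions for illustration and may change):

```rust
#[cfg(all(unix, feature = "parallelism"))]
fn extract_parallel(archive: &zip::ZipArchive<std::fs::File>) -> zip::result::ZipResult<()> {
    use zip::read::{split_extract, ExtractionParameters};

    // `decompression_threads` is taken from the PR diff; the other
    // fields and their defaults are hypothetical here.
    let params = ExtractionParameters {
        decompression_threads: 12,
        ..Default::default()
    };
    split_extract(archive, std::path::Path::new("out/"), params)
}
```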

CURRENT BENCHMARK STATUS

On a linux host (with splice() and optionally copy_file_range()), we get about a 6.5x speedup with 12 decompression threads:

```
> cargo bench --features parallelism -- extract
running 2 tests
test extract_basic           ... bench: 104,389,978 ns/iter (+/- 5,715,453) = 85 MB/s
test extract_split           ... bench:  16,274,974 ns/iter (+/- 1,530,257) = 546 MB/s
```

The performance should keep increasing with thread count, up to the number of available CPU cores (this run used a parallelism of 12 on my 16-core laptop). This also works on macOS and BSDs, and other #[cfg(unix)] platforms.

@cosmicexplorer force-pushed the pipelined-extract-v2 branch 4 times, most recently from 7a45b32 to 5cec332 on August 21, 2024 04:21
@cosmicexplorer (Contributor Author) commented:

Going to try to get this one in before figuring out the cli PR.

@cosmicexplorer force-pushed the pipelined-extract-v2 branch 5 times, most recently from fa18aa3 to 7332eb6 on January 16, 2025 21:24
@cosmicexplorer marked this pull request as ready for review January 17, 2025 00:48
@Pr0methean (Member) left a comment:

Here's a review of what I've read so far. Still needs a fair bit of work, but I'm happy with the overall concept.

```rust
    pub file_range_copy_buffer_length: usize,
    /// Size of buffer used to splice contents from a pipe into an output file handle.
    ///
    /// Used on non-Linux platforms without [`splice()`](https://en.wikipedia.org/wiki/Splice_(system_call)).
```
@Pr0methean (Member):

This buffer isn't necessary on any Unix; see https://stackoverflow.com/a/10330172.

@cosmicexplorer (Contributor Author):

I'm not sure I understand your meaning here. That answer seems to say that on non-Linux platforms, read()/write() with an explicit buffer (as we do here) is the way to go. Our PipeReadBufferSplicer struct performs read() then pwrite_all() with an explicit buffer, because we can't use splice().

Do I misunderstand you here?
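
(For reference, the read()-then-pwrite() pattern under discussion looks roughly like the following sketch, using std's FileExt::write_at(); this is an illustration, not the PR's actual PipeReadBufferSplicer:)

```rust
use std::fs::File;
use std::io::{self, Read};
use std::os::unix::fs::FileExt; // write_at() wraps pwrite(2)

/// Drain `pipe` into `out` starting at `offset`, through an explicit buffer.
fn splice_with_buffer(
    pipe: &mut impl Read,
    out: &File,
    mut offset: u64,
    buf: &mut [u8],
) -> io::Result<()> {
    loop {
        let n = pipe.read(buf)?;
        if n == 0 {
            return Ok(()); // EOF on the pipe
        }
        let mut written = 0;
        while written < n {
            // Positioned write: no shared cursor to contend on.
            let w = out.write_at(&buf[written..n], offset)?;
            written += w;
            offset += w as u64;
        }
    }
}
```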

@Pr0methean (Member) commented Mar 17, 2025:

What it's saying is that on Unix, when you don't have splice() you should memmap() the file directly and pass the mapped region to write(). The memmap2 crate will provide the wrapper we need.
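
A minimal sketch of that suggestion, assuming the memmap2 crate (the function name and range handling here are illustrative):

```rust
use std::fs::File;
use std::io::{self, Write};

use memmap2::Mmap;

/// Write a byte range of `input` to `output` by mapping the file instead of
/// read()ing through an intermediate user-space buffer.
fn copy_range_mmap(input: &File, output: &mut File, start: usize, end: usize) -> io::Result<()> {
    // Safety: the mapping is only read, and we assume no other process
    // truncates the file while it is mapped (the usual mmap caveat).
    let mapped = unsafe { Mmap::map(input)? };
    output.write_all(&mapped[start..end])
}
```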

@cosmicexplorer (Contributor Author) commented Feb 5, 2025:

thank you so much for these wonderful comments!!

@cosmicexplorer (Contributor Author) commented:

Was able to remove the non_local_definitions lint ignore after updating displaydoc to 0.2.5!

@cosmicexplorer (Contributor Author) commented:

Hey @Pr0methean -- think I got to all of your comments! I proposed a couple compromises to do in follow-up PRs (supporting absolute extraction paths and symlinks)--let me know if you agree! I am hoping to spend more time on this in the next few weeks to get it in and then do the follow-ups. No rush as usual, and I really appreciate your comments.


```diff
 let params = ExtractionParameters {
-    decompression_threads: DECOMPRESSION_THREADS,
+    decompression_threads: num_cpus::get() / 3,
```
@Pr0methean (Member):

What will the other 2/3 of the CPUs be doing? Also, does this need to be clamped to at least 1?
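
(On the clamping point, a one-line fix along these lines would guarantee at least one thread:)

```rust
let decompression_threads = (num_cpus::get() / 3).max(1);
```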

@Pr0methean (Member) left a comment:

Looks good; just 2 minor comments.

```rust
    let block = Self::from_le(block);
    /// Convert endianness and check the magic value.
    #[allow(clippy::wrong_self_convention)]
    fn validate(self) -> ZipResult<Self> {
```
@Pr0methean (Member):

Call this function from_le_validated to make its combined functionality more clear, or separate out the from_le call and call it with_checked_magic.
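
A sketch of the suggested rename against a dummy block type (the dummy type is for illustration; the real trait lives in the PR):

```rust
#[derive(Clone, Copy)]
struct Block {
    magic: u32,
}

impl Block {
    const MAGIC: u32 = 0x0403_4b50; // e.g. the local file header signature

    fn from_le(mut self) -> Self {
        self.magic = u32::from_le(self.magic);
        self
    }

    /// Suggested name: makes clear this both converts endianness and
    /// checks the magic value.
    fn from_le_validated(self) -> Result<Self, &'static str> {
        let block = self.from_le();
        if block.magic == Self::MAGIC {
            Ok(block)
        } else {
            Err("invalid magic value")
        }
    }
}
```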

@cosmicexplorer (Contributor Author) commented:

Hey, I'm going to work on this again. A friend just suggested making temporary files for outputs instead of creating the entire output directory hierarchy upfront, and that solved the issue that was blocking me. I'm also going to close the massive zip-cli PR and make a much smaller one according to the original idea. Thanks so much for your patience.

@cosmicexplorer mentioned this pull request Sep 29, 2025
@cosmicexplorer (Contributor Author) commented:

There was also an additional, very tricky concern: preserving the permissions of the output directory (a concern shared with the default extract method). I don't think that's immediately solved, but performing the extraction in a temp directory should at least move the perms calculation off the critical path.

In fact, it might be appropriate to only expose an API that extracts into a temp dir, and then separately sync that temp dir with the real dir. This would likely be nicer for invocations from an async context, minimizing the amount of blocking work done at a time.

It would be really nice if coroutines were stable.

- initial sketch of lexicographic trie for pipelining
- move path splitting into a submodule
- lex trie can now propagate entry data
- outline handle allocation
- mostly handle files
- mostly handle dirs
- clarify symlink FIXMEs
- do symlink validation
- extract writable dir setting to helper method
- modify args to handle allocation method
- handle allocation test passes
- simplify perms a lot
- outline evaluation
- handle symlinks
- BIGGER CHANGE! add EntryReader/etc
- make initial pipelined extract work
- fix file perms by writing them after finishing the file write
- support directory entries by unix mode as well
- impl split extraction
- remove dependency on reader refactoring
- add dead_code to methods we don't use yet
- bzip2 support needed for benchmark test
- correctly handle backslashes in entry names (i.e. don't)
- make PathSplitError avoid consing a String until necessary
- add repro_old423 test for pipelining
- silence dead code warnings for windows
- avoid erroring for top-level directory entries
- use num_cpus by default for parallelism
- we spawn three threads per chunk
- add dynamically-generated test archive
- initialize the test archives exactly once in statics
- add benchmarks for dynamic and static test data
- use lazy_static
- add FIXME for follow-up work to support absolute paths
- impl From<DirEntry<...>> for FSEntry
- move handle_creation module to a separate file
- downgrade HandleCreationError to io::Error
- use ByAddress over ZipDataHandle
- replace unsafe transmutes with Pod methods
- add note about shared future dependency task DAG

- box each level of the b-tree together with its values

this may technically reduce heap fragmentation, but since this data structure only exists
temporarily, that's probably not too important. instead, this change just reduces the amount of
coercion and unboxing we need to do
@cosmicexplorer (Contributor Author) commented:

It should be up to date now. Thanks again for all your efforts in the meantime to keep this alive.

I am pretty sure the temp files and temp dir technique should overcome the blockers and reduce a lot of the complexity in the pipelining setup (not to mention improve perf, etc). The biggest difficulty I'd had was around symlinks, and around output files whose paths may go through symlinks.

Semantic Differences with Serial Path Construction

There is an (easy) semantic decision to make here. Traditional serial extraction would error if an early entry's name went through a symlink that was only defined by a later entry. I don't think it's likely anyone is relying upon that failure mode for correctness; rather, I suspect people are getting surprised by it in the serial case. So I think it's safe to apply the symlinks all at once, before assigning output file handles to paths in the output temp dir, and alleviate that failure mode once and for all.

A reasonable question to ask is whether we'd want to do the same for our simple serial extraction. I would say not in this PR, and possibly not at all (there is a benefit to having an extremely simple and easy-to-audit serial extraction loop). For example, in zip-clite from #235 we would avoid having parallel extraction at all to produce a maximally auditable small binary, while zip-cli could perform the more complex (and still experimental) parallel extraction.

Applications of this Work and Follow-Up

On the subject of "experimental": I have at least mentioned this parallel zip mechanism in the context of Python packaging standards (https://discuss.python.org/t/pep-777-how-to-re-invent-the-wheel/67484/243), and I will be using this technique in a prototype Python package indexer/fetcher tool I'm working on (which will be in Rust and depend upon the zip crate).

Eventually: Breaking Out POSIX API wrappers

Python's stdlib happens to provide a lot more of this than Rust's (particularly pipe(), splice(), pread(), and even copy_file_range()), and I'm trying to nudge the Rust stdlib into covering more OS APIs (see e.g. rust-lang/libc#4522), so I may produce a version of this approach in Python as well, using subprocesses.

I know one (linux-specific) OS call we couldn't use in this PR was vmsplice(), because it produces undefined behavior (garbage data) when used from multiple threads iirc (it's intended for multiprocess parallelism). That is also the one call not exposed to Python. So as a follow-up change (not this PR), I'm thinking that comparing against a Python implementation, as well as a forking implementation with vmsplice(), could be good benchmarks to add. I think vmsplice() could be useful since it would operate upon decompressed data, and therefore saving a copy could be significant.

Either before or after said follow-up PR, I think breaking out a separate crate for various POSIX fs and I/O wrappers might be useful, especially given discussions like https://internals.rust-lang.org/t/why-no-fork-in-std-process/13770 flatly rejecting fork() in the stdlib. I've already produced a wrapper library for filesystem operations like readlink() (https://codeberg.org/cosmicexplorer/deep-link/src/commit/1ea3eba5d599d8c48ea56816b7103b12ee49d505/d-major/readdir-sys/src/wrappers.rs#L397), including a wrapper for getdents() (which is actually now POSIX as of 2024: https://pubs.opengroup.org/onlinepubs/9799919799/functions/posix_getdents.html), which drastically reduces the number of syscalls required compared to readdir().

None of this should be relevant to this PR (which I'll get back to momentarily), but I'll probably end up producing a POSIX-specific std::{fs,path} crate soon (independent of the zip crate), and that will add a dependency to the tree but enable some more optimization paths. As with other optional dependencies, this would not be added to the zip-clite dependency graph.

Windows / in-memory ring buffer / pipelined streaming extraction

I don't have a Windows machine to test on, but I think this approach should be extensible to Windows by using an in-process blocking ring buffer instead of an OS pipe. When I looked for such a data structure on crates.io, I couldn't actually find it, and I avoided implementing it here because the code was already getting unwieldy. But such an in-memory approach would be applicable to e.g. ZipArchive<io::Cursor<Vec<u8>>> (not just ZipArchive<fs::File>), and could potentially be applied to pipeline the extraction of zip files downloaded from a network request as well.

I think this in-memory ring buffer could just use a mutex + condvar, since blocking is exactly what we want it to do when starved. That's kind of interesting, especially since avoiding OS pipes would mean we could pipe the decompressed output (or even compressed) into an arbitrary io::Write.

It's beginning to seem like using the linux-specific syscalls might have obscured this simpler approach the whole time. Sure, splice() avoids a memory copy, but Rust can do that too.
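
A minimal sketch of that blocking ring buffer, assuming the mutex + condvar design described above (a real version also needs a close/EOF flag so a blocked reader can terminate):

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

/// Fixed-capacity byte queue: writers block when full, readers when empty.
pub struct BlockingRingBuffer {
    buf: Mutex<VecDeque<u8>>,
    capacity: usize,
    not_empty: Condvar,
    not_full: Condvar,
}

impl BlockingRingBuffer {
    pub fn new(capacity: usize) -> Self {
        Self {
            buf: Mutex::new(VecDeque::with_capacity(capacity)),
            capacity,
            not_empty: Condvar::new(),
            not_full: Condvar::new(),
        }
    }

    /// Block until there is room, then enqueue as much of `data` as fits.
    /// Returns the number of bytes accepted.
    pub fn write(&self, data: &[u8]) -> usize {
        let mut buf = self.buf.lock().unwrap();
        while buf.len() == self.capacity {
            buf = self.not_full.wait(buf).unwrap();
        }
        let n = data.len().min(self.capacity - buf.len());
        buf.extend(&data[..n]);
        self.not_empty.notify_one();
        n
    }

    /// Block until at least one byte is available, then dequeue into `out`.
    /// Returns the number of bytes read.
    pub fn read(&self, out: &mut [u8]) -> usize {
        let mut buf = self.buf.lock().unwrap();
        while buf.is_empty() {
            buf = self.not_empty.wait(buf).unwrap();
        }
        let n = out.len().min(buf.len());
        for b in out.iter_mut().take(n) {
            *b = buf.pop_front().unwrap();
        }
        self.not_full.notify_one();
        n
    }
}
```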

@cosmicexplorer (Contributor Author) commented:

Very lengthy and meandering comment above, but with a significant realization at the end: an in-memory (fixed-size, maybe growable) ring buffer that blocks when empty would likely have been easy enough to slap together without any low-level atomic operations (just mutex + condvar), would work on all platforms and I/O sources, and in general seems like a much cleaner architecture. I think the splice()/pipe() approach should actually be relegated to experimental status now, since it introduces a lot more unsafe code and generally interleaves OS IPC mechanisms with multithreaded logic in a way that is hard to generalize.

Especially since much of the scheduling work is likely obsolete now too (after realizing that we should use temp output files and a temp output dir), I'm thinking of letting this PR sit for a bit (in case I've made a mistake in the above reasoning) and switching over to #235 instead.

@cosmicexplorer (Contributor Author) commented:

Getting more excited about this. I don't know if I'll be in steady communication the whole time, but I think this would be highly auditable and highly performant, and I live for that sort of thing. Going to spend some time this weekend on it.

@cosmicexplorer (Contributor Author) commented:

Would like to note that the Python tool uv recently experienced a vulnerability relating to the parsing of zip files: https://astral.sh/blog/uv-security-advisory-cve-2025-54368. They have wildly and misleadingly misstated the performance issues as well as the security issue, because they used their own crate, which isn't as hardened as this one, so they can boast about using async, which I know from my experimentation on this crate is uniformly slower. They also mention downloading metadata lazily as if it were their idea, although that was my idea, which I proposed and helped to implement in pip many years ago. I mention it because it should be useful to you to know that your approach with this crate of focusing on security has been shown to be correct.

This incident from uv also motivates looking to achieve pipelining in a way that applies to network downloads (or streaming from stdin) as well as to a local file with pread() (which is some unsafe code we'll still have to keep). This leads to two separate usage scenarios:

  • local file on disk (enabling pread() on POSIX platforms), extracting most or all of the contents.
  • streaming network request or stdin, extracting some subset of the contents.

Given that we have also found above that temporary files can be used to avoid allocating output handles upfront, I think these can both be covered by a similar abstraction, which performs the following:

  1. receives input request for a given entry, with name and compressed entry contents.
  2. allocates a temporary output file handle, and begins to read from the compressed input stream into one of its decompression threads, which then pipes the output into one of the file-writing threads (the goal here remains to avoid interleaving I/O and decompression on the same thread).
  3. upon completion of the input stream, the file handle is closed, and the file can be moved into the destination directory.

For network or stdin streams, this will unblock later entries from an especially-large and highly-compressed intermediate entry (which was the reason we needed pread() for local files). However, if we rely upon the in-memory pipe described above alone, an entry which is gigabytes in size will still fill up the decompression pipeline, blocking our ability to perform network I/O. So we'll need to consider allocating a temporary file to store compressed entries above a certain size. This will essentially parallelize the serial input contents, and is therefore strictly more general than the pread() approach.
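
A sketch of steps 2 and 3 above for a single entry, assuming the tempfile crate (the PR doesn't commit to that dependency; names here are illustrative):

```rust
use std::io::{self, Read};
use std::path::Path;

use tempfile::NamedTempFile;

/// Decompressed entry contents arrive on `decompressed` (the read side of
/// the decompression pipeline); write them to a temp file in the destination
/// directory, then move the finished file into place.
fn finish_entry(mut decompressed: impl Read, dest_dir: &Path, name: &str) -> io::Result<()> {
    // Created in dest_dir so persist() below is a same-filesystem rename.
    let mut tmp = NamedTempFile::new_in(dest_dir)?;
    io::copy(&mut decompressed, tmp.as_file_mut())?;
    tmp.persist(dest_dir.join(name)).map_err(|e| e.error)?;
    Ok(())
}
```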

It remains to be seen whether we'll need to incorporate pread() at all. I think it should be possible to incorporate it as another implementation of this second proposed abstraction, which otherwise allocates a temporary file to store compressed entries. Currently, users have no way to refer to an entry in a ZipArchive except directly via ZipFile or indirectly via .by_index(). Our streaming interface also uses ZipFile, which requires exclusive access to the io::Read stream.

I'm thinking the way to get this into a minimal PR that can be incorporated into the rest of the crate would be to start off with the most general approach, and avoid pread() for now. This means the two abstractions described:

  • decompression pipelining thread pool (with temporary output handle, moved upon completion),
  • preallocated intermediate compressed entry contents temp file.

My hope is actually that these can be expressed in terms of the ZipArchive and ZipFile we already have, unlike the current setup in this PR at cosmicexplorer@f3fee69 which makes it a separate API. This would mean users could filter which files they want to extract, which was one of the reasons that #235 got to be so big. This approach would hopefully also then be applicable to ZipStreamReader::extract(), achieving the parallel extraction capability for streaming zips.

Will be spending more time today on this approach.

@Pr0methean (Member) left a comment:

Wow, this is coming along really well!

```rust
 * - 200K random data, stored uncompressed (CompressionMethod::Stored)
 * - 246K text data (the project gutenberg html version of king lear)
 *   (CompressionMethod::Bzip2, compression level 1) (project gutenberg ebooks are public domain)
 *
```
@Pr0methean (Member):

Can we include some compressed files that contain both text and random bytes, to reflect the fact that real files tend to have sections with different entropy rates (e.g. image content vs metadata)?

```rust
 * The full archive file is 5.3MB.
 */
fn static_test_archive() -> ZipResult<ZipArchive<fs::File>> {
    assert!(
```
@Pr0methean (Member):

Use a #[cfg] attribute instead so that this check can happen at build time.

```rust
 * dependencies and schedule the symlink dereference (reading the target value from the zip)
 * before we create any directories or allocate any output file handles that dereference that
 * symlink. This is less of a problem with the synchronous in-order extraction because it
 * creates any symlinks immediately (it imposes a total ordering dependency over all entries).
```
@Pr0methean (Member):

I don't think (2) is a problem, because symlinks to nonexistent targets can be part of a valid archive.

```rust
 * complex platform-specific programming. However, the result would likely decrease the
 * number of syscalls, which may also improve performance. It may also be slightly easier to
 * follow the logic if we can refer to directory inodes instead of constructing path strings
 * as a proxy. This should be considered if requested by users. */
```
@Pr0methean (Member):

This seems out of scope - if users need that functionality, then they probably also need it for purposes that don't involve zip files, so either the create_dir_all implementation should be changed or a file-management crate's replacement for create_dir_all should be used.

```rust
        perms_todo.push((path.clone(), fs::Permissions::from_mode(mode)));
    }

    let handle = fs::OpenOptions::new()
```
@Pr0methean (Member):

Pull the OpenOptions out into a variable and reuse it.
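
(Sketched with hypothetical surrounding context; OpenOptions::open() takes &self, so one value can be reused for every output handle:)

```rust
use std::fs;
use std::io;
use std::path::Path;

// Build the options once, then open each output path with the same value.
fn open_outputs(paths: &[&Path]) -> io::Result<Vec<fs::File>> {
    let mut opts = fs::OpenOptions::new();
    opts.write(true).create(true).truncate(true);
    paths.iter().map(|p| opts.open(p)).collect()
}
```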

```rust
}

pub(crate) trait DirByMode {
    fn is_dir_by_mode(&self) -> bool;
```
@Pr0methean (Member):

Make this part of impl ZipFileData instead, since no other implementation of DirByMode exists outside of tests.

```rust
}

#[derive(PartialEq, Eq, Debug, Clone)]
pub(crate) enum FSEntry<'a, Data> {
```
@Pr0methean (Member):

Looks like this could really use methods #[cfg(test)] add_file(&mut self, name: &str) and #[cfg(test)] add_subdirectory(&mut self, name: &str)

```rust
    ($op:expr) => {
        match $op {
            Ok(n) => n,
            Err(e) if e.kind() == ::std::io::ErrorKind::Interrupted => continue,
```
@Pr0methean (Member):

This seems like the wrong behavior; shouldn't we generally break out of the loop and return an error when interrupted?
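
(For context, the quoted macro implements the standard EINTR-retry idiom; std::io's own copy/read loops likewise retry on ErrorKind::Interrupted rather than surfacing the error, since the interrupted call made no progress. As a standalone function:)

```rust
use std::io;

/// Retry an operation that may fail with EINTR when a signal arrives
/// mid-syscall; no bytes were transferred, so retrying is safe.
fn retry_on_eintr<T>(mut op: impl FnMut() -> io::Result<T>) -> io::Result<T> {
    loop {
        match op() {
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            other => return other,
        }
    }
}
```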

```rust
use std::mem::MaybeUninit;
use std::ops;

pub trait FixedFile {
```
@Pr0methean (Member):

Ambiguous name. "Fixed" in the sense of "immutable" or "corrected"? Could something like KnownSizeFile be used instead?

```rust
    let block = Self::from_le(block);
    /// Convert endianness and check the magic value.
    #[allow(clippy::wrong_self_convention)]
    fn validate(self) -> ZipResult<Self> {
```
@Pr0methean (Member):

A name like try_from_le or valid_from_le would make it clearer that this includes the endian conversion.
