Description
I have the following code for listing the contents of an archive given its URL:
import * as zip from "@zip.js/zip.js";
const zipReader = new zip.ZipReader(new zip.HttpReader(fileUrl));
const entries = await zipReader.getEntries();
console.log(entries);
await zipReader.close();
This works well for small and medium-sized archives, but getEntries() fails with a very ambiguous "Failed to fetch" error for large ones.
The threshold seems to be around 2 GiB, which appears to match the maximum blob size in Google Chrome.
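If that is the cause, I would expect switching HttpReader to HTTP Range requests to avoid buffering the whole response in memory. Here is a minimal sketch of what I mean (useRangeHeader is a documented HttpReader option; whether it fully sidesteps the blob limit is my assumption):

// Fetch the central directory and entry data via HTTP Range requests
// instead of downloading the entire archive up front.
const rangeReader = new zip.HttpReader(fileUrl, { useRangeHeader: true });
const rangeZipReader = new zip.ZipReader(rangeReader);
const rangeEntries = await rangeZipReader.getEntries();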
Could this be the reason for the failure, and is my approach even the right one for reading large archives on the client side?
I have tried a few of the HttpOptions, but enabling chunking had the side effect of issuing a ridiculous number of requests of about 500 KB each, which is way too small and carries a severe performance penalty when actually extracting the files (the intended next step).
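The per-request size looks suspiciously like zip.js's default chunkSize, so I am wondering whether raising it via zip.configure() would batch the range reads into larger requests. A sketch of that idea (the 8 MiB value is an arbitrary guess, and I am assuming the request size actually tracks chunkSize):

// Enlarge the internal read chunk so each Range request covers more data;
// 8 MiB is an arbitrary value for illustration, not a recommendation.
zip.configure({ chunkSize: 8 * 1024 * 1024 });
const chunkedZipReader = new zip.ZipReader(
  new zip.HttpReader(fileUrl, { useRangeHeader: true })
);
const chunkedEntries = await chunkedZipReader.getEntries();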
Sample link for testing:
https://f003.backblazeb2.com/file/ambientCG-Web/download/Ground054_5jRbQ1OF/Ground054_12K-PNG.zip