Poor performance uploading to S3 bucket with many thousands of subfolders #17237
acutchin-bitpusher asked this question in Q&A (Unanswered) · 0 replies
I am attempting to upload a folder containing a small number of small files (<10 files, <10MB) to an S3 bucket prefix/directory that contains over 30K subdirectories/sub-prefixes. Uploading the folder sometimes fails, and sometimes succeeds after about 20 minutes. I do not need to download anything from S3, and I do not need to view modification times for any S3 objects or prefixes.
Is there any way to improve the performance of this process?
Is this related to Cyberduck's handling of S3 modification timestamps? (https://docs.cyberduck.io/protocols/s3/#modification-date)
I have reviewed this discussion forum and the online Cyberduck documentation and have not found (or recognized) a workaround for this issue.
I am using the default "S3 (HTTPS).cyberduckprofile" connection profile that is packaged with the Cyberduck for macOS installer.
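For context on why operations against a prefix with tens of thousands of sub-prefixes can be slow: S3's ListObjectsV2 API returns at most 1,000 keys/common prefixes per page, and the pages must be fetched sequentially because each request requires the previous page's continuation token. The sketch below (the helper name `min_list_requests` is mine, not a Cyberduck or AWS API) just estimates that lower bound; any per-entry metadata lookups, such as reading modification timestamps, multiply the request count from there.

```python
# Hypothetical sketch (not Cyberduck's actual code): lower bound on the
# number of S3 ListObjectsV2 round trips needed to enumerate one prefix.
# ListObjectsV2 caps each page at 1,000 entries, and pages are fetched
# sequentially via continuation tokens.
import math

def min_list_requests(n_entries: int, page_size: int = 1000) -> int:
    """Minimum sequential ListObjectsV2 calls to enumerate n_entries."""
    return max(1, math.ceil(n_entries / page_size))

# A prefix with 30,000 sub-prefixes needs at least 30 sequential requests
# before a single file byte is uploaded.
print(min_list_requests(30_000))  # → 30
```

If the client also issues a separate request per sub-prefix to resolve timestamps, the total grows from ~30 requests to tens of thousands, which would be consistent with a 20-minute upload.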