Super Slurper allows you to quickly and easily copy objects from other cloud providers to an R2 bucket of your choice.

Migration jobs:
- Preserve custom object metadata from the source bucket by copying it onto the migrated objects in R2 (see the sketch after this list).
- Do not delete any objects from the source bucket.
- Use TLS encryption over HTTPS connections for safe and private object transfers.
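One quick way to confirm that custom metadata survived a migration is to read it back from the copied object over R2's S3-compatible API. The following is a minimal sketch, assuming the `@aws-sdk/client-s3` package and an R2 API token; the account ID, bucket, and key are placeholders.

```ts
// Minimal sketch: read back a migrated object's custom metadata from R2
// over its S3-compatible API. Account ID, bucket, and key are placeholders.
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

const head = await r2.send(
  new HeadObjectCommand({ Bucket: "my-migrated-bucket", Key: "example/object.txt" }),
);

// Custom (x-amz-meta-*) metadata copied over from the source object, if any.
console.log(head.Metadata);
```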
## When to use Super Slurper
Cloudflare currently supports copying data from the following cloud object storage providers to R2:
- Amazon S3
- Cloudflare R2
- Google Cloud Storage (GCS)
- All S3-compatible storage providers
### Tested S3-compatible storage providers
The following S3-compatible storage providers have been tested and verified to work with Super Slurper:

- Backblaze B2
- DigitalOcean Spaces
- Scaleway Object Storage
- Wasabi Cloud Object Storage

Super Slurper should support transfers from all S3-compatible storage providers, but the ones listed have been explicitly tested.

:::note
Have you tested and verified another S3-compatible provider? [Open a pull request](https://github.com/cloudflare/cloudflare-docs/edit/production/src/content/docs/r2/data-migration/super-slurper.mdx) or [create a GitHub issue](https://github.com/cloudflare/cloudflare-docs/issues/new).
:::
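Because Super Slurper only needs an S3-compatible endpoint plus an access key pair for the source bucket, you can sanity-check a provider before starting a migration by listing a few objects with any S3 client. Below is a minimal sketch using `@aws-sdk/client-s3`; the endpoint, region, bucket, and credentials are placeholders, so use the values your provider documents.

```ts
// Minimal sketch: confirm an S3-compatible source bucket is reachable with the
// credentials you plan to give Super Slurper. Endpoint, region, bucket, and
// credentials below are placeholders.
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const source = new S3Client({
  region: "us-east-1",
  endpoint: "https://s3.example-provider.com", // your provider's S3-compatible endpoint
  credentials: {
    accessKeyId: process.env.SOURCE_ACCESS_KEY_ID!,
    secretAccessKey: process.env.SOURCE_SECRET_ACCESS_KEY!,
  },
});

const page = await source.send(
  new ListObjectsV2Command({ Bucket: "source-bucket", MaxKeys: 5 }),
);

console.log(`Listed ${page.KeyCount ?? 0} object(s):`, page.Contents?.map((o) => o.Key));
```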
## Create credentials for storage providers
Objects stored using AWS S3 [archival storage classes](https://aws.amazon.com/s3/storage-classes/#Archive) will be skipped and need to be copied separately. Specifically:
- Files stored using S3 Glacier tiers (not including Glacier Instant Retrieval) will be skipped and logged in the migration log.
- Files stored using S3 Intelligent-Tiering and placed in the Deep Archive tier will be skipped and logged in the migration log (see the sketch after this list for one way to find affected objects).
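To estimate what a migration will skip, you can list the source bucket and inspect each object's storage class. The following is a minimal sketch with `@aws-sdk/client-s3`; the bucket name is a placeholder. Note that objects in S3 Intelligent-Tiering always report `INTELLIGENT_TIERING` here, so checking whether one has moved into the Deep Archive access tier requires a per-object `HeadObject` call and its `ArchiveStatus` field.

```ts
// Minimal sketch: flag objects whose storage class Super Slurper will skip
// (Glacier Flexible Retrieval / legacy Glacier and Glacier Deep Archive).
// Bucket name and credentials are placeholders.
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const skippedClasses = new Set(["GLACIER", "DEEP_ARCHIVE"]); // GLACIER_IR (Instant Retrieval) is not skipped

let token: string | undefined;
do {
  const page = await s3.send(
    new ListObjectsV2Command({ Bucket: "my-source-bucket", ContinuationToken: token }),
  );
  for (const obj of page.Contents ?? []) {
    if (obj.StorageClass && skippedClasses.has(obj.StorageClass)) {
      console.log(`${obj.Key} (${obj.StorageClass}) would be skipped`);
    }
  }
  token = page.NextContinuationToken;
} while (token);
```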