Commit 0064178

Added tested s3-compatible providers for Super Slurper (#20644)
1 parent a360e2b commit 0064178

File tree

1 file changed: +35 −25 lines


src/content/docs/r2/data-migration/super-slurper.mdx

Lines changed: 35 additions & 25 deletions
@@ -6,18 +6,17 @@ learning_center:
   link: https://www.cloudflare.com/learning/cloud/what-is-data-migration/
 sidebar:
   order: 1
-
 ---

-import { InlineBadge, Render } from "~/components"
+import { InlineBadge, Render } from "~/components";

 Super Slurper allows you to quickly and easily copy objects from other cloud providers to an R2 bucket of your choice.

 Migration jobs:

-* Preserve custom object metadata from source bucket by copying them on the migrated objects on R2.
-* Do not delete any objects from source bucket.
-* Use TLS encryption over HTTPS connections for safe and private object transfers.
+- Preserve custom object metadata from source bucket by copying them on the migrated objects on R2.
+- Do not delete any objects from source bucket.
+- Use TLS encryption over HTTPS connections for safe and private object transfers.

 ## When to use Super Slurper

@@ -52,10 +51,27 @@ This setting determines what happens when an object being copied from the source
 Cloudflare currently supports copying data from the following cloud object storage providers to R2:

-* Amazon S3
-* Cloudflare R2
-* Google Cloud Storage (GCS)
-* All S3-compatible storage providers
+- Amazon S3
+- Cloudflare R2
+- Google Cloud Storage (GCS)
+- All S3-compatible storage providers
+
+### Tested S3-compatible storage providers
+
+The following S3-compatible storage providers have been tested and verified to work with Super Slurper:
+
+- Backblaze B2
+- DigitalOcean Spaces
+- Scaleway Object Storage
+- Wasabi Cloud Object Storage
+
+Super Slurper should support transfers from all S3-compatible storage providers, but the ones listed have been explicitly tested.
+
+:::note
+
+Have you tested and verified another S3-compatible provider? [Open a pull request](https://github.com/cloudflare/cloudflare-docs/edit/production/src/content/docs/r2/data-migration/super-slurper.mdx) or [create a GitHub issue](https://github.com/cloudflare/cloudflare-docs/issues/new).
+
+:::

 ## Create credentials for storage providers
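An S3-compatible source is reached through the provider's own S3 endpoint URL rather than an AWS one. The sketch below is illustrative only: the URL templates follow each tested provider's published region-based hostname scheme, but the exact endpoint depends on your bucket's region, so confirm it in the provider's documentation before configuring a migration.

```python
# Illustrative endpoint templates for the tested S3-compatible providers.
# The provider keys and region values are examples, not Super Slurper inputs;
# verify the exact hostname for your bucket's region in the provider's docs.
ENDPOINT_TEMPLATES = {
    "backblaze-b2": "https://s3.{region}.backblazeb2.com",
    "digitalocean-spaces": "https://{region}.digitaloceanspaces.com",
    "scaleway": "https://s3.{region}.scw.cloud",
    "wasabi": "https://s3.{region}.wasabisys.com",
}

def s3_endpoint(provider: str, region: str) -> str:
    """Return the S3 API endpoint URL for a provider/region pair."""
    return ENDPOINT_TEMPLATES[provider].format(region=region)

print(s3_endpoint("backblaze-b2", "us-west-002"))
# https://s3.us-west-002.backblazeb2.com
```

This endpoint URL, together with the bucket name and credentials, is typically what an S3-compatible migration source requires.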

@@ -70,20 +86,14 @@ To create credentials with the correct permissions:

 ```json
 {
-  "Version": "2012-10-17",
-  "Statement": [
-    {
-      "Effect": "Allow",
-      "Action": [
-        "s3:Get*",
-        "s3:List*"
-      ],
-      "Resource": [
-        "arn:aws:s3:::<BUCKET_NAME>",
-        "arn:aws:s3:::<BUCKET_NAME>/*"
-      ]
-    }
-  ]
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": ["s3:Get*", "s3:List*"],
+      "Resource": ["arn:aws:s3:::<BUCKET_NAME>", "arn:aws:s3:::<BUCKET_NAME>/*"]
+    }
+  ]
 }
 ```
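The removed and added JSON above are the same read-only policy, reflowed. As a hypothetical sanity check (not Cloudflare or AWS tooling), you can verify programmatically that a policy grants only `s3:Get*`/`s3:List*` read actions before handing its credentials to a migration:

```python
import json

# The read-only bucket policy from the diff above; <BUCKET_NAME> is a placeholder.
POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": ["arn:aws:s3:::<BUCKET_NAME>", "arn:aws:s3:::<BUCKET_NAME>/*"]
    }
  ]
}
""")

def is_read_only(policy: dict) -> bool:
    """Return True if every allowed action is an S3 read (Get*/List*) action."""
    for stmt in policy["Statement"]:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt["Action"]
        if isinstance(actions, str):  # IAM allows a bare string or a list
            actions = [actions]
        for action in actions:
            if not (action.startswith("s3:Get") or action.startswith("s3:List")):
                return False
    return True

print(is_read_only(POLICY))
# True
```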

@@ -124,5 +134,5 @@ You can now use this JSON key file when enabling Super Slurper.

 Objects stored using AWS S3 [archival storage classes](https://aws.amazon.com/s3/storage-classes/#Archive) will be skipped and need to be copied separately. Specifically:

-* Files stored using S3 Glacier tiers (not including Glacier Instant Retrieval) will be skipped and logged in the migration log.
-* Files stored using S3 Intelligent Tiering and placed in Deep Archive tier will be skipped and logged in the migration log.
+- Files stored using S3 Glacier tiers (not including Glacier Instant Retrieval) will be skipped and logged in the migration log.
+- Files stored using S3 Intelligent Tiering and placed in Deep Archive tier will be skipped and logged in the migration log.
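The two bullets describe the same skip rule. A small illustrative sketch (the helper and sample keys are hypothetical; the storage-class names are standard S3 values) shows how you might scan a bucket listing to predict which objects a migration would skip:

```python
# Storage classes whose objects are skipped: the Glacier tiers other than
# Glacier Instant Retrieval (GLACIER_IR). Caveat: objects that S3
# Intelligent-Tiering has moved into its Deep Archive tier still report
# INTELLIGENT_TIERING, so a listing alone cannot flag those.
SKIPPED_CLASSES = {"GLACIER", "DEEP_ARCHIVE"}

def would_be_skipped(obj: dict) -> bool:
    """Predict whether a migration would skip this object record."""
    return obj.get("StorageClass", "STANDARD") in SKIPPED_CLASSES

# Hypothetical object records shaped like an S3 ListObjectsV2 response.
objects = [
    {"Key": "a.txt", "StorageClass": "STANDARD"},
    {"Key": "b.txt", "StorageClass": "GLACIER"},
    {"Key": "c.txt", "StorageClass": "GLACIER_IR"},
    {"Key": "d.txt", "StorageClass": "DEEP_ARCHIVE"},
]
print([o["Key"] for o in objects if would_be_skipped(o)])
# ['b.txt', 'd.txt']
```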

Comments (0)