diff --git a/public/__redirects b/public/__redirects index b2654a029e939e..247283da8790d3 100644 --- a/public/__redirects +++ b/public/__redirects @@ -1187,7 +1187,8 @@ # r2 /r2/platform/s3-compatibility/api/ /r2/api/s3/api/ 301 -/r2/platform/s3-compatibility/tokens/ /r2/api/s3/tokens/ 301 +/r2/platform/s3-compatibility/tokens/ /r2/api/tokens/ 301 +/r2/api/s3/tokens/ /r2/api/tokens/ 301 /r2/runtime-apis/ /r2/api/workers/workers-api-reference/ 301 /r2/data-access/ /r2/api/ 301 /r2/data-access/public-buckets/ /r2/buckets/public-buckets/ 301 diff --git a/src/content/docs/data-localization/how-to/r2.mdx b/src/content/docs/data-localization/how-to/r2.mdx index a0857b2e2c2978..2eb543f1f94bb3 100644 --- a/src/content/docs/data-localization/how-to/r2.mdx +++ b/src/content/docs/data-localization/how-to/r2.mdx @@ -3,10 +3,9 @@ title: R2 Object Storage pcx_content_type: how-to sidebar: order: 6 - --- -import { Details } from "~/components" +import { Details } from "~/components"; In the following sections, we will give you some details about how to configure R2 with Regional Services and Customer Metadata Boundary. @@ -32,7 +31,7 @@ The following instructions will show you how to set up a Logpush job using an S3
-Go to the R2 section of your Cloudflare dashboard and select **Manage R2 API Tokens** to generate a token directly tied to your specific bucket. You can follow the instructions in the [Authentication](/r2/api/s3/tokens/) section. +Go to the R2 section of your Cloudflare dashboard and select **Manage R2 API Tokens** to generate a token directly tied to your specific bucket. You can follow the instructions in the [Authentication](/r2/api/tokens/) section.
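A newly created token can be sanity-checked against Cloudflare's token verification endpoint before wiring it into a Logpush job. A minimal sketch of the request shape (the token value is a placeholder, not a real credential):

```ts
// Sketch: verify an API token via Cloudflare's token verify endpoint.
// The token value below is a placeholder.
const token = "<API_TOKEN>";

const verifyRequest = {
	method: "GET",
	url: "https://api.cloudflare.com/client/v4/user/tokens/verify",
	headers: { Authorization: `Bearer ${token}` },
};

console.log(`${verifyRequest.method} ${verifyRequest.url}`);
```

A valid, active token returns a JSON body with `"status": "active"`, matching the result shown in this section.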
@@ -54,19 +53,19 @@ The result: ```json { - "result": { - "id": "325xxxxcd", - "status": "active" - }, - "success": true, - "errors": [], - "messages": [ - { - "code": 10000, - "message": "This API Token is valid and active", - "type": null - } - ] + "result": { + "id": "325xxxxcd", + "status": "active" + }, + "success": true, + "errors": [], + "messages": [ + { + "code": 10000, + "message": "This API Token is valid and active", + "type": null + } + ] } ``` @@ -97,10 +96,8 @@ With Customer Metadata Boundary set to `EU`, **R2** > **Bucket** > [**Metrics**] :::note - Additionally, customers can create R2 buckets with [jurisdictional restrictions set to EU](/r2/reference/data-location/#jurisdictional-restrictions). In this case, we recommend [using jurisdictions with the S3 API](/r2/reference/data-location/#using-jurisdictions-with-the-s3-api). - ::: Refer to the [R2 documentation](/r2/) for more information. diff --git a/src/content/docs/logs/R2-log-retrieval.mdx b/src/content/docs/logs/R2-log-retrieval.mdx index 3e2406916fa655..9172d6af4c3c86 100644 --- a/src/content/docs/logs/R2-log-retrieval.mdx +++ b/src/content/docs/logs/R2-log-retrieval.mdx @@ -3,10 +3,9 @@ pcx_content_type: how-to title: Logs Engine sidebar: order: 117 - --- -import { Details } from "~/components" +import { Details } from "~/components"; Logs Engine gives you the ability to store your logs in R2 and query them directly. @@ -17,12 +16,12 @@ Logs Engine is going to be replaced by Log Explorer. For further details, consul ## Store logs in R2 -* Set up a [Logpush to R2](/logs/get-started/enable-destinations/r2/) job. -* Create an [R2 access key](/r2/api/s3/tokens/) with at least R2 read permissions. -* Ensure that you have Logshare read permissions. -* Alternatively, create a Cloudflare API token with the following permissions: - * Account scope - * Logs read permissions +- Set up a [Logpush to R2](/logs/get-started/enable-destinations/r2/) job. 
+- Create an [R2 access key](/r2/api/tokens/) with at least R2 read permissions. +- Ensure that you have Logshare read permissions. +- Alternatively, create a Cloudflare API token with the following permissions: + - Account scope + - Logs read permissions ## Query logs @@ -32,19 +31,19 @@ You can use the API to query and download your logs by time range or RayID. The following headers are required for all API calls: -* `X-Auth-Email` - the Cloudflare account email address associated with the domain -* `X-Auth-Key` - the Cloudflare API key +- `X-Auth-Email` - the Cloudflare account email address associated with the domain +- `X-Auth-Key` - the Cloudflare API key Alternatively, API tokens with Logs edit permissions can also be used for authentication: -* `Authorization: Bearer ` +- `Authorization: Bearer ` ### Required headers In addition to the required authentication headers mentioned, the following headers are required for the API to access logs stored in your R2 bucket. -`R2-access-key-id` (required) - [R2 Access Key Id](/r2/api/s3/tokens/) -`R2-secret-access-key` (required) - [R2 Secret Access Key](/r2/api/s3/tokens/) +- `R2-access-key-id` (required) - [R2 Access Key Id](/r2/api/tokens/) +- `R2-secret-access-key` (required) - [R2 Secret Access Key](/r2/api/tokens/) ## List files @@ -52,15 +51,15 @@ List relevant R2 objects containing logs matching the provided query parameters, ### Query parameters -* `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z`. +- `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z`. -* `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T16:00:00Z`. +- `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T16:00:00Z`. -* `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs`. 
+- `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs`. -* `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}`. +- `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}`. -* `limit` number (Limit) - Maximum number of results to return, for example `limit=100`. +- `limit` number (Limit) - Maximum number of results to return, for example `limit=100`. ## Retrieve logs by time range @@ -68,13 +67,13 @@ Stream logs stored in R2 that match the provided query parameters, using the end ### Query parameters -* `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z` +- `start` (required) string (TimestampRFC3339) - Start time in RFC 3339 format, for example `start=2022-06-06T16:00:00Z` -* `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T16:00:00Z` +- `end` (required) string (TimestampRFC3339) - End time in RFC 3339 format, for example `end=2022-06-06T16:00:00Z` -* `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs` +- `bucket` (required) string (Bucket) - R2 bucket name, for example `bucket=cloudflare-logs` -* `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}` +- `prefix` string (Prefix) - R2 bucket prefix logs are stored under, for example `prefix=http_requests/example.com/{DATE}` ### Example API request @@ -133,52 +132,42 @@ curl --globoff "https://api.cloudflare.com/client/v4/accounts/{account_id}/logs/ ## Troubleshooting -
-* **Error**: Time range returned too many results. Try reducing the time range and try again. +- **Error**: Time range returned too many results. Try reducing the time range and try again. HTTP status code `422` will be returned if the time range between the start and end parameters is too wide. Try querying a shorter time range if you are running into this limit. -* **Error**: Provided token does not have the required features enabled. +- **Error**: Provided token does not have the required features enabled. Contact your account representative to have the beta Logpull RayID Lookup subscription added to your account. -* **Error**: Time range returned too many results. Try reducing the time range and try again. +- **Error**: Time range returned too many results. Try reducing the time range and try again. High volume zones can produce many log files in R2. Try reducing your start and end time range until you find a duration that works best for your log volume. -
-
Currently, there is no process to index logs as they arrive. If you have the RayID and know the time the request was made, try querying the next 5-10 minutes of logs after the request was completed.
-
Logpush delivers logs in batches as soon as possible, generally in less than one minute. After this, logs can be accessed using Logs Engine. -
-
R2 does not currently have retention controls in place. You can query back as far as when you created the Logpush job. -
-
The retrieval API is compatible with all the datasets we support. The full list is available on the [Log fields](/logs/reference/log-fields/) section. -
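Putting the authentication headers and query parameters described above together, a retrieval request can be sketched as follows. The account ID, keys, and token are placeholders, and the exact `/logs/retrieve` path is an assumption based on the truncated example request above:

```ts
// Sketch: build a "retrieve logs by time range" request for Logs Engine.
// All credential values are placeholders; the endpoint path is assumed.
const accountId = "0123456789abcdef"; // placeholder account ID

const url = new URL(
	`https://api.cloudflare.com/client/v4/accounts/${accountId}/logs/retrieve`,
);
url.searchParams.set("start", "2022-06-06T16:00:00Z");
url.searchParams.set("end", "2022-06-06T17:00:00Z");
url.searchParams.set("bucket", "cloudflare-logs");
url.searchParams.set("prefix", "http_requests/example.com/{DATE}");

// Required headers: API authentication plus the R2 access key pair
// so the API can read objects out of your bucket.
const headers: Record<string, string> = {
	Authorization: "Bearer <API_TOKEN>", // or X-Auth-Email / X-Auth-Key
	"R2-access-key-id": "<R2_ACCESS_KEY_ID>",
	"R2-secret-access-key": "<R2_SECRET_ACCESS_KEY>",
};

console.log(url.toString());
```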
diff --git a/src/content/docs/r2/api/s3/presigned-urls.mdx b/src/content/docs/r2/api/s3/presigned-urls.mdx index 1d9a4ffe1205e9..caa680db54e3c6 100644 --- a/src/content/docs/r2/api/s3/presigned-urls.mdx +++ b/src/content/docs/r2/api/s3/presigned-urls.mdx @@ -1,7 +1,6 @@ --- title: Presigned URLs pcx_content_type: concept - --- Presigned URLs are an [S3 concept](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html) for sharing direct access to your bucket without revealing your token secret. A presigned URL authorizes anyone with the URL to perform an action to the S3 compatibility endpoint for an R2 bucket. By default, the S3 endpoint requires an `AUTHORIZATION` header signed by your token. Every presigned URL has S3 parameters and search parameters containing the signature information that would be present in an `AUTHORIZATION` header. The performable action is restricted to a specific resource, an [operation](/r2/api/s3/api/), and has an associated timeout. @@ -24,7 +23,7 @@ Presigned URLs are generated with no communication with R2 and must be generated There are three ways to grant an application access to R2: -1. The application has its own copy of an [R2 API token](/r2/api/s3/tokens/). +1. The application has its own copy of an [R2 API token](/r2/api/tokens/). 2. The application requests a copy of an R2 API token from a vault application and promises to not permanently store that token locally. 3. The application requests a central application to give it a presigned URL it can use to perform an action. @@ -44,10 +43,10 @@ Another potential use case for presigned URLs is debugging. 
For example, if you R2 currently supports the following methods when generating a presigned URL: -* `GET`: allows a user to fetch an object from a bucket -* `HEAD`: allows a user to fetch an object's metadata from a bucket -* `PUT`: allows a user to upload an object to a bucket -* `DELETE`: allows a user to delete an object from a bucket +- `GET`: allows a user to fetch an object from a bucket +- `HEAD`: allows a user to fetch an object's metadata from a bucket +- `PUT`: allows a user to upload an object to a bucket +- `DELETE`: allows a user to delete an object from a bucket `POST`, which performs uploads via native HTML forms, is not currently supported. @@ -55,11 +54,11 @@ R2 currently supports the following methods when generating a presigned URL: Generate a presigned URL by referring to the following examples: -* [AWS SDK for Go](/r2/examples/aws/aws-sdk-go/#generate-presigned-urls) -* [AWS SDK for JS v3](/r2/examples/aws/aws-sdk-js-v3/#generate-presigned-urls) -* [AWS SDK for JS](/r2/examples/aws/aws-sdk-js/#generate-presigned-urls) -* [AWS SDK for PHP](/r2/examples/aws/aws-sdk-php/#generate-presigned-urls) -* [AWS CLI](/r2/examples/aws/aws-cli/#generate-presigned-urls) +- [AWS SDK for Go](/r2/examples/aws/aws-sdk-go/#generate-presigned-urls) +- [AWS SDK for JS v3](/r2/examples/aws/aws-sdk-js-v3/#generate-presigned-urls) +- [AWS SDK for JS](/r2/examples/aws/aws-sdk-js/#generate-presigned-urls) +- [AWS SDK for PHP](/r2/examples/aws/aws-sdk-php/#generate-presigned-urls) +- [AWS CLI](/r2/examples/aws/aws-cli/#generate-presigned-urls) ## Presigned URL alternative with Workers @@ -67,12 +66,10 @@ A valid alternative design to presigned URLs is to use a Worker with a [binding] :::note[Bindings] - A binding is how your Worker interacts with external resources such as [KV Namespaces](/kv/concepts/kv-namespaces/), [Durable Objects](/durable-objects/), or [R2 Buckets](/r2/buckets/). A binding is a runtime variable that the Workers runtime provides to your code. 
You can declare a variable name in your Wrangler file that will be bound to these resources at runtime, and interact with them through this variable. Every binding's variable name and behavior is determined by you when deploying the Worker. Refer to [Environment Variables](/workers/configuration/environment-variables/) for more information. A binding is defined in the Wrangler file of your Worker project's directory. - ::: A possible use case may be restricting an application to only be able to upload to a specific URL. With presigned URLs, your central signing application might look like the following JavaScript code running on Cloudflare Workers, workerd, or another platform. @@ -83,51 +80,51 @@ If the Worker received a request for `https://example.com/uploads/dog.png`, it w import { AwsClient } from "aws4fetch"; const r2 = new AwsClient({ - accessKeyId: "<R2_ACCESS_KEY_ID>", - secretAccessKey: "<R2_SECRET_ACCESS_KEY>", + accessKeyId: "<R2_ACCESS_KEY_ID>", + secretAccessKey: "<R2_SECRET_ACCESS_KEY>", }); export default { - async fetch(req): Promise<Response> { - // This is just an example to demonstrating using aws4fetch to generate a presigned URL. - // This Worker should not be used as-is as it does not authenticate the request, meaning - // that anyone can upload to your bucket. - // - // Consider implementing authorization, such as a preshared secret in a request header.
- const requestPath = new URL(req.url).pathname; - - // Cannot upload to the root of a bucket - if (requestPath === "/") { - return new Response("Missing a filepath", { status: 400 }); - } - - const bucketName = "<BUCKET_NAME>"; - const accountId = "<ACCOUNT_ID>"; - - const url = new URL( - `https://${bucketName}.${accountId}.r2.cloudflarestorage.com` - ); - - // preserve the original path - url.pathname = requestPath; - - // Specify a custom expiry for the presigned URL, in seconds - url.searchParams.set("X-Amz-Expires", "3600"); - - const signed = await r2.sign( - new Request(url, { - method: "PUT", - }), - { - aws: { signQuery: true }, - } - ); - - // Caller can now use this URL to upload to that object. - return new Response(signed.url, { status: 200 }); - }, - - // ... handle other kinds of requests + async fetch(req): Promise<Response> { + // This is just an example demonstrating how to use aws4fetch to generate a presigned URL. + // This Worker should not be used as-is as it does not authenticate the request, meaning + // that anyone can upload to your bucket. + // + // Consider implementing authorization, such as a preshared secret in a request header. + const requestPath = new URL(req.url).pathname; + + // Cannot upload to the root of a bucket + if (requestPath === "/") { + return new Response("Missing a filepath", { status: 400 }); + } + + const bucketName = "<BUCKET_NAME>"; + const accountId = "<ACCOUNT_ID>"; + + const url = new URL( + `https://${bucketName}.${accountId}.r2.cloudflarestorage.com`, + ); + + // preserve the original path + url.pathname = requestPath; + + // Specify a custom expiry for the presigned URL, in seconds + url.searchParams.set("X-Amz-Expires", "3600"); + + const signed = await r2.sign( + new Request(url, { + method: "PUT", + }), + { + aws: { signQuery: true }, + }, + ); + + // Caller can now use this URL to upload to that object. + return new Response(signed.url, { status: 200 }); + }, + + // ... 
handle other kinds of requests } satisfies ExportedHandler; ``` @@ -137,15 +134,15 @@ In some cases, Workers lets you implement certain functionality more easily. For ```ts const signed = await r2.sign( - new Request(url, { - method: "PUT", - }), - { - aws: { signQuery: true }, - headers: { - "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT", - }, - } + new Request(url, { + method: "PUT", + }), + { + aws: { signQuery: true }, + headers: { + "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT", + }, + }, ); ``` @@ -163,18 +160,18 @@ const existingObject = await env.DROP_BOX_BUCKET.put( // is the initial R2 announcement. uploadedBefore: new Date(1632844800000), }, - } + }, ); -if (existingObject?.etag !== request.headers.get('etag')) { - return new Response('attempt to overwrite object', { status: 400 }); +if (existingObject?.etag !== request.headers.get("etag")) { + return new Response("attempt to overwrite object", { status: 400 }); } ``` Cloudflare Workers currently have some limitations that you may need to consider: -* You cannot upload more than 100 MiB (200 MiB for Business customers) to a Worker. -* Enterprise customers can upload 500 MiB by default and can ask their account team to raise this limit. -* Detecting [precondition failures](/r2/api/s3/extensions/#conditional-operations-in-putobject) is currently easier with presigned URLs as compared with R2 bindings. +- You cannot upload more than 100 MiB (200 MiB for Business customers) to a Worker. +- Enterprise customers can upload 500 MiB by default and can ask their account team to raise this limit. +- Detecting [precondition failures](/r2/api/s3/extensions/#conditional-operations-in-putobject) is currently easier with presigned URLs as compared with R2 bindings. Note that these limitations depend on R2's extension for conditional uploads. Amazon's S3 service does not offer such functionality at this time. 
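Because a presigned URL carries its signature information in query parameters, a consumer can inspect properties such as the expiry without any R2 API call. A minimal sketch, using a fabricated URL rather than a real signature:

```ts
// Sketch: read the expiry from a presigned URL's query string.
// The URL below is fabricated for illustration; the signature is not real.
const presigned = new URL(
	"https://my-bucket.0123456789abcdef.r2.cloudflarestorage.com/uploads/dog.png" +
		"?X-Amz-Expires=3600&X-Amz-Date=20210928T160000Z&X-Amz-Signature=abc123",
);

// X-Amz-Expires is the validity window in seconds, as set by the signer.
const expiresSeconds = Number(presigned.searchParams.get("X-Amz-Expires"));
console.log(expiresSeconds); // 3600
```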
@@ -194,5 +191,5 @@ Presigned URLs can only be used with the `.r2.cloudflarestorage.com` ## Related resources -* [Create a public bucket](/r2/buckets/public-buckets/) -* [Storing user generated content](/reference-architecture/diagrams/storage/storing-user-generated-content/) +- [Create a public bucket](/r2/buckets/public-buckets/) +- [Storing user generated content](/reference-architecture/diagrams/storage/storing-user-generated-content/) diff --git a/src/content/docs/r2/api/s3/tokens.mdx b/src/content/docs/r2/api/tokens.mdx similarity index 64% rename from src/content/docs/r2/api/s3/tokens.mdx rename to src/content/docs/r2/api/tokens.mdx index 69eb9826bb2faf..b870198dd24793 100644 --- a/src/content/docs/r2/api/s3/tokens.mdx +++ b/src/content/docs/r2/api/tokens.mdx @@ -3,7 +3,6 @@ title: Authentication pcx_content_type: how-to sidebar: order: 2 - --- You can generate an API token to serve as the Access Key for usage with existing S3-compatible SDKs or XML APIs. @@ -13,38 +12,35 @@ You must purchase R2 before you can generate an API token. To create an API token: 1. In **Account Home**, select **R2**. -2. Under **Account details**, select **Manage R2 API tokens**. -3. Select [**Create API token**](https://dash.cloudflare.com/?to=/:account/r2/api-tokens). -4. Select the **R2 Token** text to edit your API token name. -5. Under **Permissions**, choose a permission types for your token. Refer to [Permissions](#permissions) for information about each option. -6. (Optional) If you select the **Object Read and Write** or **Object Read** permissions, you can scope your token to a set of buckets. -7. Select **Create API Token**. +2. Under the **API** dropdown, select [**Manage API tokens**](https://dash.cloudflare.com/?to=/:account/r2/api-tokens). +3. Choose to create either: + - **Create Account API token** - These tokens are tied to the Cloudflare account itself and can be used by any authorized system or user. 
Only users with the Super Administrator role can view or create them. These tokens remain valid until manually revoked. + - **Create User API token** - These tokens are tied to your individual Cloudflare user. They inherit your personal permissions and become inactive if your user is removed from the account. +4. Under **Permissions**, choose a permission type for your token. Refer to [Permissions](#permissions) for information about each option. +5. (Optional) If you select the **Object Read and Write** or **Object Read** permissions, you can scope your token to a set of buckets. +6. Select **Create Account API token** or **Create User API token**. After your token has been successfully created, review your **Secret Access Key** and **Access Key ID** values. These may often be referred to as Client Secret and Client ID, respectively. :::caution - You will not be able to access your **Secret Access Key** again after this step. Copy and record both values to avoid losing them. - ::: You will also need to configure the `endpoint` in your S3 client to `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`. Find your [account ID in the Cloudflare dashboard](/fundamentals/setup/find-account-and-zone-ids/). -Buckets created with jurisdictions must be accessed via jurisdiction-specific `endpoint`s: +Buckets created with jurisdictions must be accessed via jurisdiction-specific endpoints: -* European Union (EU): `https://<ACCOUNT_ID>.eu.r2.cloudflarestorage.com` -* FedRAMP: `https://<ACCOUNT_ID>.fedramp.r2.cloudflarestorage.com` +- European Union (EU): `https://<ACCOUNT_ID>.eu.r2.cloudflarestorage.com` +- FedRAMP: `https://<ACCOUNT_ID>.fedramp.r2.cloudflarestorage.com` :::caution - Jurisdictional buckets can only be accessed via the corresponding jurisdictional endpoint. Most S3 clients will not let you configure multiple `endpoints`, so you'll generally have to initialize one client per jurisdiction. 
- ::: ## Permissions @@ -78,9 +74,9 @@ A specific bucket is represented as: "com.cloudflare.edge.r2.bucket.<ACCOUNT_ID>_<JURISDICTION>_<BUCKET_NAME>": "*" ``` -* `ACCOUNT_ID`: Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/#find-account-id-workers-and-pages). -* `JURISDICTION`: The [jurisdiction](/r2/reference/data-location/#available-jurisdictions) where the R2 bucket lives. For buckets not created in a specific jurisdiction this value will be `default`. -* `BUCKET_NAME`: The name of the bucket your Access Policy applies to. +- `ACCOUNT_ID`: Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/#find-account-id-workers-and-pages). +- `JURISDICTION`: The [jurisdiction](/r2/reference/data-location/#available-jurisdictions) where the R2 bucket lives. For buckets not created in a specific jurisdiction this value will be `default`. +- `BUCKET_NAME`: The name of the bucket your Access Policy applies to. All buckets in an account are represented as: ``` { "com.cloudflare.edge.r2.bucket.<ACCOUNT_ID>_*": "*" } ``` -* `ACCOUNT_ID`: Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/#find-account-id-workers-and-pages). +- `ACCOUNT_ID`: Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/#find-account-id-workers-and-pages). #### Permission groups Determine what [permission groups](/fundamentals/api/how-to/create-via-api/#permission-groups) should be applied. There are four relevant permission groups for R2. 
- Permission group - - Resource - - Permission -
- Workers R2 Storage Write - - Account - - Admin Read & Write -
- Workers R2 Storage Read - - Account - - Admin Read only -
- Workers R2 Storage Bucket Item Write - - Bucket - - Object Read & Write -
- Workers R2 Storage Bucket Item Read - - Bucket - - Object Read only -
+ Permission group + + Resource + + Permission +
+ Workers R2 Storage Write + + Account + + Admin Read & Write +
+ Workers R2 Storage Read + + Account + + Admin Read only +
+ Workers R2 Storage Bucket Item Write + + Bucket + + Object Read & Write +
+ Workers R2 Storage Bucket Item Read + + Bucket + + Object Read only +
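The per-bucket resource scope described in this section follows the `com.cloudflare.edge.r2.bucket.<ACCOUNT_ID>_<JURISDICTION>_<BUCKET_NAME>` pattern, so it can be assembled programmatically when generating Access Policies. A small sketch (the account ID below is the one from the example policy):

```ts
// Build an R2 Access Policy resource key from its three components,
// per the pattern documented in this section.
function bucketResource(
	accountId: string,
	jurisdiction: string,
	bucketName: string,
): string {
	return `com.cloudflare.edge.r2.bucket.${accountId}_${jurisdiction}_${bucketName}`;
}

// A bucket created without a jurisdiction uses "default":
console.log(bucketResource("4793d734c0b8e484dfc37ec392b5fa8a", "default", "my-bucket"));
// → com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_default_my-bucket
```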
#### Example Access Policy ```json [ - { - "id": "f267e341f3dd4697bd3b9f71dd96247f", - "effect": "allow", - "resources": { - "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_default_my-bucket": "*", - "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_eu_my-eu-bucket": "*" - }, - "permission_groups": [ - { - "id": "6a018a9f2fc74eb6b293b0c548f38b39", - "name": "Workers R2 Storage Bucket Item Read" - } - ] - } + { + "id": "f267e341f3dd4697bd3b9f71dd96247f", + "effect": "allow", + "resources": { + "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_default_my-bucket": "*", + "com.cloudflare.edge.r2.bucket.4793d734c0b8e484dfc37ec392b5fa8a_eu_my-eu-bucket": "*" + }, + "permission_groups": [ + { + "id": "6a018a9f2fc74eb6b293b0c548f38b39", + "name": "Workers R2 Storage Bucket Item Read" + } + ] + } ] ``` @@ -179,8 +175,8 @@ Determine what [permission groups](/fundamentals/api/how-to/create-via-api/#perm You can get the Access Key ID and Secret Access Key values from the response of the [Create Token](/api/resources/user/subresources/tokens/methods/create/) API: -* Access Key ID: The `id` of the API token. -* Secret Access Key: The SHA-256 hash of the API token `value`. +- Access Key ID: The `id` of the API token. +- Secret Access Key: The SHA-256 hash of the API token `value`. Refer to [Authenticate against R2 API using auth tokens](/r2/examples/authenticate-r2-auth-tokens/) for a tutorial with JavaScript, Python, and Go examples. @@ -196,8 +192,6 @@ AWS_SESSION_TOKEN = :::note - The temporary access key cannot have a permission that is higher than the parent access key. e.g. if the parent key is set to `Object Read Write`, the temporary access key could only have `Object Read Write` or `Object Read Only` permissions. 
- ::: diff --git a/src/content/docs/r2/buckets/bucket-locks.mdx b/src/content/docs/r2/buckets/bucket-locks.mdx index d4ca07e172a7c8..dceb987d8f9fbd 100644 --- a/src/content/docs/r2/buckets/bucket-locks.mdx +++ b/src/content/docs/r2/buckets/bucket-locks.mdx @@ -10,7 +10,7 @@ Bucket locks prevent the deletion and overwriting of objects in an R2 bucket for Before getting started, you will need: - An existing R2 bucket. If you do not already have an existing R2 bucket, refer to [Create buckets](/r2/buckets/create-buckets/). -- (API only) An API token with [permissions](/r2/api/s3/tokens/#permissions) to edit R2 bucket configuration. +- (API only) An API token with [permissions](/r2/api/tokens/#permissions) to edit R2 bucket configuration. ### Enable bucket lock via dashboard diff --git a/src/content/docs/r2/buckets/object-lifecycles.mdx b/src/content/docs/r2/buckets/object-lifecycles.mdx index 6f0975bae79fcc..eee9955b9c64b2 100644 --- a/src/content/docs/r2/buckets/object-lifecycles.mdx +++ b/src/content/docs/r2/buckets/object-lifecycles.mdx @@ -23,7 +23,7 @@ For example, you can create an object lifecycle rule to delete objects after 90 When you create an object lifecycle rule, you can specify which prefix you would like it to apply to. - Note that object lifecycles currently has a 1000 rule maximum. -- Managing object lifecycles is a bucket-level action, and requires an API token with the [`Workers R2 Storage Write`](/r2/api/s3/tokens/#permission-groups) permission group. +- Managing object lifecycles is a bucket-level action, and requires an API token with the [`Workers R2 Storage Write`](/r2/api/tokens/#permission-groups) permission group. 
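Returning to the Authentication section above: since the Secret Access Key is defined as the SHA-256 hash of the API token value, it can be derived locally once you have the token. A minimal sketch (the token value is a fabricated placeholder):

```ts
import { createHash } from "node:crypto";

// Placeholder token value for illustration only — use your real token's
// `value` from the Create Token API response.
const apiTokenValue = "example-api-token-value";

// Access Key ID is the token's `id`; Secret Access Key is the SHA-256
// hex digest of the token's `value`, per the Authentication section.
const secretAccessKey = createHash("sha256").update(apiTokenValue).digest("hex");

console.log(secretAccessKey); // 64-character lowercase hex string
```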
### Dashboard diff --git a/src/content/docs/r2/data-migration/sippy.mdx b/src/content/docs/r2/data-migration/sippy.mdx index fe1d44406893e2..00f58d233026bc 100644 --- a/src/content/docs/r2/data-migration/sippy.mdx +++ b/src/content/docs/r2/data-migration/sippy.mdx @@ -38,7 +38,7 @@ Before getting started, you will need: - An existing R2 bucket. If you don't already have one, refer to [Create buckets](/r2/buckets/create-buckets/). - [API credentials](/r2/data-migration/sippy/#create-credentials-for-storage-providers) for your source object storage bucket. -- (Wrangler only) Cloudflare R2 Access Key ID and Secret Access Key with read and write permissions. For more information, refer to [Authentication](/r2/api/s3/tokens/). +- (Wrangler only) Cloudflare R2 Access Key ID and Secret Access Key with read and write permissions. For more information, refer to [Authentication](/r2/api/tokens/). ### Enable Sippy via the Dashboard @@ -180,10 +180,13 @@ When Sippy is enabled, it changes the behavior of certain actions on your R2 buc Remaining metadata will be omitted.
  • - For larger objects (greater than 199 MiB), multiple GET requests may be required to fully copy the object to R2. + For larger objects (greater than 199 MiB), multiple GET requests may + be required to fully copy the object to R2.
  • - If there are multiple simultaneous GET requests for an object which has not yet been fully copied to R2, Sippy may fetch the object from the source storage bucket multiple times to serve those requests. + If there are multiple simultaneous GET requests for an object which + has not yet been fully copied to R2, Sippy may fetch the object from + the source storage bucket multiple times to serve those requests.
  • diff --git a/src/content/docs/r2/examples/rclone.mdx b/src/content/docs/r2/examples/rclone.mdx index a5e8cb6445df94..f71e9f32467620 100644 --- a/src/content/docs/r2/examples/rclone.mdx +++ b/src/content/docs/r2/examples/rclone.mdx @@ -44,7 +44,7 @@ acl = private :::note -If you are using a token with [Object-level permissions](/r2/api/s3/tokens/#permissions), you will need to add `no_check_bucket = true` to the configuration to avoid errors. +If you are using a token with [Object-level permissions](/r2/api/tokens/#permissions), you will need to add `no_check_bucket = true` to the configuration to avoid errors. ::: You may then use the new `rclone` provider for any of your normal workflows. diff --git a/src/content/docs/r2/examples/terraform-aws.mdx b/src/content/docs/r2/examples/terraform-aws.mdx index 1d6be7b07b313d..b05df5de5b053e 100644 --- a/src/content/docs/r2/examples/terraform-aws.mdx +++ b/src/content/docs/r2/examples/terraform-aws.mdx @@ -1,28 +1,26 @@ --- title: Terraform (AWS) pcx_content_type: example - --- -import { Render } from "~/components" +import { Render } from "~/components"; -
    + +
    This example shows how to configure R2 with Terraform using the [AWS provider](https://github.com/hashicorp/terraform-provider-aws). :::note[Note for using AWS provider] - For using only the Cloudflare provider, see [Terraform](/r2/examples/terraform/). - ::: With [`terraform`](https://developer.hashicorp.com/terraform/downloads) installed: 1. Create `main.tf` file, or edit your existing Terraform configuration 2. Populate the endpoint URL at `endpoints.s3` with your [Cloudflare account ID](/fundamentals/setup/find-account-and-zone-ids/) -3. Populate `access_key` and `secret_key` with the corresponding [R2 API credentials](/r2/api/s3/tokens/). +3. Populate `access_key` and `secret_key` with the corresponding [R2 API credentials](/r2/api/tokens/). 4. Ensure that `skip_region_validation = true`, `skip_requesting_account_id = true`, and `skip_credentials_validation = true` are set in the provider configuration. ```hcl diff --git a/src/content/docs/r2/index.mdx b/src/content/docs/r2/index.mdx index 2fd1bd005e4283..f3d1fef3bec523 100644 --- a/src/content/docs/r2/index.mdx +++ b/src/content/docs/r2/index.mdx @@ -10,30 +10,40 @@ head: content: Overview --- -import { CardGrid, Description, Feature, LinkButton, LinkTitleCard, Plan, RelatedProduct } from "~/components" +import { + CardGrid, + Description, + Feature, + LinkButton, + LinkTitleCard, + Plan, + RelatedProduct, +} from "~/components"; - Object storage for all your data. - Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. 
You can use R2 for multiple scenarios, including but not limited to: -* Storage for cloud-native applications -* Cloud storage for web content -* Storage for podcast episodes -* Data lakes (analytics and big data) -* Cloud storage output for large batch processes, such as machine learning model artifacts or datasets +- Storage for cloud-native applications +- Cloud storage for web content +- Storage for podcast episodes +- Data lakes (analytics and big data) +- Cloud storage output for large batch processes, such as machine learning model artifacts or datasets - Get started Browse the examples + + Get started + + + Browse the examples + - -*** +--- ## Features @@ -41,65 +51,70 @@ You can use R2 for multiple scenarios, including but not limited to: Location Hints are optional parameters you can provide during bucket creation to indicate the primary geographical location you expect data will be accessed from. - Configure CORS to interact with objects in your bucket and configure policies on your bucket. - Public buckets expose the contents of your R2 bucket directly to the Internet. - - + Create bucket scoped tokens for granular control over who can access your data. - -*** +--- ## Related products A [serverless](https://www.cloudflare.com/learning/serverless/what-is-serverless/) execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure. + Upload, store, encode, and deliver live and on-demand video with one API, without configuring or maintaining infrastructure. + A suite of products tailored to your image-processing needs. + -*** +--- ## More resources - Understand pricing for free and paid tier rates. + Understand pricing for free and paid tier rates. - - Ask questions, show off what you are building, and discuss the platform with other developers. + + Ask questions, show off what you are building, and discuss the platform + with other developers. 
- Learn about product announcements, new tutorials, and what is new in Cloudflare Workers. + Learn about product announcements, new tutorials, and what is new in + Cloudflare Workers. diff --git a/src/content/docs/r2/tutorials/cloudflare-access.mdx b/src/content/docs/r2/tutorials/cloudflare-access.mdx index 0277baf0a21636..828720cb852e92 100644 --- a/src/content/docs/r2/tutorials/cloudflare-access.mdx +++ b/src/content/docs/r2/tutorials/cloudflare-access.mdx @@ -2,10 +2,9 @@ title: Protect an R2 Bucket with Cloudflare Access pcx_content_type: tutorial updated: 2024-04-16 - --- -import { Render } from "~/components" +import { Render } from "~/components"; You can secure access to R2 buckets using [Cloudflare Access](/cloudflare-one/applications/configure-apps/). @@ -13,17 +12,15 @@ Access allows you to only allow specific users, groups or applications within yo :::note - For providing secure access to bucket objects for anonymous users, we recommend using [pre-signed URLs](/r2/api/s3/presigned-urls/) instead. Pre-signed URLs do not require users to be a member of your organization and enable programmatic application directly. - ::: ## 1. Create a bucket -*If you have an existing R2 bucket, you can skip this step.* +_If you have an existing R2 bucket, you can skip this step._ You will need to create an R2 bucket. Follow the [R2 get started guide](/r2/get-started/) to create a bucket before returning to this guide. @@ -33,7 +30,7 @@ Within the **Zero Trust** section of the Cloudflare Dashboard, you will need to If you have not configured Cloudflare Access before, we recommend: -* Configuring an [identity provider](/cloudflare-one/identity/) first to enable Access to use your organization's single-sign on (SSO) provider as an authentication method. +- Configuring an [identity provider](/cloudflare-one/identity/) first to enable Access to use your organization's single-sign on (SSO) provider as an authentication method. 
To create an Access application for your R2 bucket: @@ -43,9 +40,9 @@ To create an Access application for your R2 bucket: 4. Select **Add a public hostname** and enter the application domain. The **Domain** must be a domain hosted on Cloudflare, and the **Subdomain** part of the custom domain you will connect to your R2 bucket. For example, if you want to serve files from `behind-access.example.com` and `example.com` is a domain within your Cloudflare account, then enter `behind-access` in the subdomain field and select `example.com` from the **Domain** list. 5. Add [Access policies](/cloudflare-one/policies/access/) to control who can connect to your application. This should be an **Allow** policy so that users can access objects within the bucket behind this Access application. - :::note - Ensure that your policies only allow the users within your organization that need access to this R2 bucket. - ::: + :::note + Ensure that your policies only allow the users within your organization that need access to this R2 bucket. + ::: 6. Follow the remaining [self-hosted application creation steps](/cloudflare-one/applications/configure-apps/self-hosted-public-app/) to publish the application. @@ -53,10 +50,8 @@ To create an Access application for your R2 bucket: :::caution - You should create an Access application before connecting a custom domain to your bucket, as connecting a custom domain will otherwise make your bucket public by default. - ::: You will need to [connect a custom domain](/r2/buckets/public-buckets/#connect-a-bucket-to-a-custom-domain) to your bucket in order to configure it as an Access application. Make sure the custom domain **is the same domain** you entered when configuring your Access policy. @@ -73,6 +68,6 @@ If you cannot authenticate or receive a block page after authenticating, check t ## Next steps -* Learn more about [Access applications](/cloudflare-one/applications/configure-apps/) and how to configure them. 
-* Understand how to use [pre-signed URLs](/r2/api/s3/presigned-urls/) to issue time-limited and prefix-restricted access to objects for users not within your organization. -* Review the [documentation on using API tokens to authenticate](/r2/api/s3/tokens/) against R2 buckets. +- Learn more about [Access applications](/cloudflare-one/applications/configure-apps/) and how to configure them. +- Understand how to use [pre-signed URLs](/r2/api/s3/presigned-urls/) to issue time-limited and prefix-restricted access to objects for users not within your organization. +- Review the [documentation on using API tokens to authenticate](/r2/api/tokens/) against R2 buckets. diff --git a/src/content/docs/terraform/advanced-topics/remote-backend.mdx b/src/content/docs/terraform/advanced-topics/remote-backend.mdx index 3c8192b247c972..a8153bbedcc17c 100644 --- a/src/content/docs/terraform/advanced-topics/remote-backend.mdx +++ b/src/content/docs/terraform/advanced-topics/remote-backend.mdx @@ -5,7 +5,7 @@ title: Remote R2 backend import { Render } from "~/components"; -[Cloudflare R2](/r2/) and [Terraform remote backends](https://developer.hashicorp.com/terraform/language/settings/backends/remote) can interact with each other to provide a seamless experience for Terraform state management. +[Cloudflare R2](/r2/) and [Terraform remote backends](https://developer.hashicorp.com/terraform/language/settings/backends/remote) can interact with each other to provide a seamless experience for Terraform state management. Cloudflare R2 is an object storage service that provides a highly available, scalable, and secure way to store and serve static assets, such as images, videos, and static websites. R2 has [S3 API compatibility](/r2/api/s3/api/) making it easy to integrate with existing cloud infrastructure and applications. 
@@ -18,12 +18,12 @@ Using [Wrangler](/workers/wrangler/install-and-update/), [API](/api/resources/r2 :::note -Bucket names can only contain lowercase letters (`a-z`), numbers (`0-9`), and hyphens (`-`). +Bucket names can only contain lowercase letters (`a-z`), numbers (`0-9`), and hyphens (`-`). ::: ### Create scoped bucket API keys -Next you will need to create a [bucket scoped R2 API token](/r2/api/s3/tokens/) with `Object Read & Write` permissions. To create an API token, do the following: +Next you will need to create a [bucket scoped R2 API token](/r2/api/tokens/) with `Object Read & Write` permissions. To create an API token, do the following: 1. In **Account Home**, select **R2**. 2. Under **Account details**, select **Manage R2 API tokens**. @@ -71,4 +71,4 @@ variable "account_id" { default = "" } ## Migrate state file to R2 backend -After updating your `cloudflare.tf` file you can issue the `terraform init -reconfigure` command to migrate from a local state to [remote state](https://developer.hashicorp.com/terraform/language/state/remote). \ No newline at end of file +After updating your `cloudflare.tf` file you can issue the `terraform init -reconfigure` command to migrate from a local state to [remote state](https://developer.hashicorp.com/terraform/language/state/remote). diff --git a/src/content/partials/r2/keys.mdx b/src/content/partials/r2/keys.mdx index cae552571e1843..2cd748e12138df 100644 --- a/src/content/partials/r2/keys.mdx +++ b/src/content/partials/r2/keys.mdx @@ -1,6 +1,5 @@ --- {} - --- -You must [generate an Access Key](/r2/api/s3/tokens/) before getting started. All examples will utilize `access_key_id` and `access_key_secret` variables which represent the **Access Key ID** and **Secret Access Key** values you generated. +You must [generate an Access Key](/r2/api/tokens/) before getting started. 
All examples use `access_key_id` and `access_key_secret` variables, which represent the **Access Key ID** and **Secret Access Key** values you generated.
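The token pages this diff re-links all describe the same workflow: generate an Access Key ID / Secret Access Key pair, then use it with R2's S3-compatible endpoint, including for the pre-signed URLs mentioned in the Access tutorial. As a sanity check of that flow, here is a minimal, standard-library-only sketch of building a SigV4 query-presigned GET URL for an R2 object. It is illustrative, not the documented method — in practice you would use an S3 SDK — and the account ID, bucket, object key, and credentials below are placeholders. It assumes R2's documented endpoint shape (`<account_id>.r2.cloudflarestorage.com`) and its `auto` region:

```python
import datetime
import hashlib
import hmac
import urllib.parse


def presign_get(account_id, bucket, key, access_key_id, secret_access_key,
                expires=3600, now=None):
    """Sketch: build a SigV4 query-presigned GET URL for an R2 object."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{account_id}.r2.cloudflarestorage.com"
    scope = f"{datestamp}/auto/s3/aws4_request"  # R2 uses the literal region "auto"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key_id}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: keys sorted, values percent-encoded.
    qs = urllib.parse.urlencode(sorted(params.items()))
    canonical_request = "\n".join([
        "GET",
        f"/{bucket}/{key}",
        qs,
        f"host:{host}\n",    # canonical headers block, followed by a blank line
        "host",              # signed headers
        "UNSIGNED-PAYLOAD",  # presigned URLs do not hash the request body
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def hmac_sha256(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key: secret -> date -> region -> service -> "aws4_request".
    signing_key = hmac_sha256(
        hmac_sha256(
            hmac_sha256(
                hmac_sha256(b"AWS4" + secret_access_key.encode(), datestamp),
                "auto"),
            "s3"),
        "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{bucket}/{key}?{qs}&X-Amz-Signature={signature}"


# Placeholder values -- substitute your own account ID and generated key pair.
url = presign_get("0123456789abcdef", "my-bucket", "report.csv",
                  "AKIDEXAMPLE", "wJalrXUtnFEMI")
```

Anyone holding the resulting URL can fetch the object until it expires, without being a member of your organization — which is why the Access tutorial above recommends pre-signed URLs for anonymous access and why the bucket-scoped token that signs them should carry only the permissions it needs.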