diff --git a/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx b/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx index 7c36054b22..5d9fefba94 100644 --- a/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx +++ b/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx @@ -1,5 +1,5 @@ --- -title: Migration to the new S3 backend (HIVE) for all regions +title: Migration to the new Object Storage backend (HIVE) for all regions status: changed author: fullname: 'Join the #container-registry channel on Slack.' @@ -9,4 +9,4 @@ category: containers product: container-registry --- -All regions were migrated to the new S3 backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved. \ No newline at end of file +All regions were migrated to the new Object Storage backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved. \ No newline at end of file diff --git a/components/docs-editor.mdx b/components/docs-editor.mdx index 26da9e23e1..626c7df263 100644 --- a/components/docs-editor.mdx +++ b/components/docs-editor.mdx @@ -259,7 +259,7 @@ At top of `.mdx` file, you MUST add data in frontmatter: ``` --- -title: Migration to the new S3 backend (HIVE) for all regions +title: Migration to the new Object Storage backend (HIVE) for all regions status: changed author: fullname: 'Join the #container-registry channel on Slack.' diff --git a/compute/instances/api-cli/snapshot-import-export-feature.mdx b/compute/instances/api-cli/snapshot-import-export-feature.mdx index 9dd5a91819..c7d0305e8c 100644 --- a/compute/instances/api-cli/snapshot-import-export-feature.mdx +++ b/compute/instances/api-cli/snapshot-import-export-feature.mdx @@ -35,7 +35,7 @@ More information on the QCOW2 file format, and how to use it can be found in the 1. Create a Scaleway Object Storage bucket. - You need an S3 bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console. + You need an Object Storage bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console. 2. Create a snapshot from a volume. To use this functionality, you must [create a snapshot](/compute/instances/how-to/create-a-snapshot/#how-to-create-a-snapshot) from the volume you want to export. 
@@ -53,7 +53,7 @@ More information on the QCOW2 file format, and how to use it can be found in the - The secret key of your API key pair (``) - The snapshot ID (``) - The name of the Object Storage bucket to store the snapshot (which has to exist in the same Scaleway region as the snapshot) - - A key (can be any acceptable key/object name for Scaleway S3 (suffixing qcow2 images with `.qcow2`)) + - A key (can be any acceptable key/object name for Scaleway Object Storage (suffixing qcow2 images with `.qcow2`)) The API returns an output as in the following example: ```json diff --git a/compute/instances/troubleshooting/bootscript-eol.mdx b/compute/instances/troubleshooting/bootscript-eol.mdx index 8bb1b5c1dc..2fb72ee52a 100644 --- a/compute/instances/troubleshooting/bootscript-eol.mdx +++ b/compute/instances/troubleshooting/bootscript-eol.mdx @@ -90,10 +90,10 @@ If your Instance is using the bootscript option to boot in normal mode you are i - #### Create a snapshot of the volume(s) and export it to S3 to retrieve the data + #### Create a snapshot of the volume(s) and export it to Object Storage to retrieve the data 1. [Create a snapshot](/compute/instances/how-to/create-a-snapshot/) of the volume using the **l_ssd** type of snapshot. - 2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an S3 bucket in the same region as the Instance. + 2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an Object Storage bucket in the same region as the Instance. 3. Retrieve your data from the Object Storage bucket and reuse it at your convenience. 4. Delete the old Instance that was using a bootscript once you have recovered your data. diff --git a/containers/kubernetes/how-to/edit-kosmos-cluster.mdx b/containers/kubernetes/how-to/edit-kosmos-cluster.mdx index 1128a4d8a9..80d3553236 100644 --- a/containers/kubernetes/how-to/edit-kosmos-cluster.mdx +++ b/containers/kubernetes/how-to/edit-kosmos-cluster.mdx @@ -111,7 +111,7 @@ In order to add external nodes to your multi-cloud cluster, you must first [crea The Kubernetes version of the existing nodes in your multi-cloud pool can be upgraded in place. Your workload will theoretically keep running during the upgrade, but it is best to drain the node before the upgrade. 1. In the Pools section of your Kosmos cluster, click **Upgrade** next to the node pool. This will not cause any of your existing nodes to upgrade, but will instead ensure that any new nodes added to the pool will start up with the newer version. -2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from S3 bucket. +2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from the Object Storage bucket. 3. Now the node will register itself with the Apiserver. Once it is ready, you will see the same node with two kubelet versions. The older node should end up `NotReady` after 5m, you can safely delete it with `kubectl`. 4. Detach the older node in Scaleway API. diff --git a/faq/objectstorage.mdx b/faq/objectstorage.mdx index 24ac111baa..cefbf7572a 100644 --- a/faq/objectstorage.mdx +++ b/faq/objectstorage.mdx @@ -1,7 +1,7 @@ --- meta: title: Object Storage FAQ - description: Discover S3 Object Storage. + description: Discover Scaleway Object Storage. 
content: h1: Object Storage hero: assets/objectstorage.webp @@ -13,14 +13,14 @@ category: storage ## What is Object Storage? -Object Storage is a service based on the S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.). +Object Storage is a service based on the Amazon S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.). Scaleway provides an integrated UI in the [console](https://console.scaleway.com) for convenience. As browsing infinite storage through the web requires some technical trade-offs, some actions are limited in the console for Object Storage: - batch deletion is limited to 1000 objects. - empty files are not reported as empty folders. -We provide an S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets. +We provide an Amazon S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets. ## How am I billed for Object Storage? @@ -283,4 +283,4 @@ Large objects can be uploaded using [multipart uploads](/storage/object/api-cli/ Yes, a best practice is to create a [lifecycle rule](/storage/object/how-to/manage-lifecycle-rules/) targeting all objects in the bucket, or using a filter with an empty prefix. In this case, all files contained within the selected bucket will have their storage class altered automatically on the dates stipulated by you. -However, due to S3 Protocol restrictions, a lifecycle rule cannot be created to modify the storage class from Glacier to Standard. \ No newline at end of file +However, due to Amazon S3 protocol restrictions, a lifecycle rule cannot be created to modify the storage class from Glacier to Standard. \ No newline at end of file diff --git a/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx index 755832cf23..4fb19e53ca 100644 --- a/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx +++ b/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx @@ -5,7 +5,7 @@ meta: content: h1: Using IAM API keys with Object Storage paragraph: This page explains how to use IAM API keys with Object Storage -tags: API key Projects IAM API-key Preferred-project Object-Storage S3 +tags: API key Projects IAM API-key Preferred-project Object-Storage Amazon-S3 dates: validation: 2024-05-27 posted: 2022-11-02 @@ -15,7 +15,7 @@ categories: You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com/), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). -While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**.
This API key will always use this Project when carrying out Object Storage actions via any API/CLI. +While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. In this document, we explain the concept of preferred Projects for Object Storage, explain how to configure your IAM API key for this, and give some code examples for overriding the preferred Project when making an API call. @@ -35,13 +35,13 @@ When you generate an API key with IAM, the key is associated with a specific [IA ## The impact of preferred Projects -When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date). +When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date). Setting the preferred Project does not automatically give the API key bearer permissions for Object Storage in this Project. Ensure that the user/application is either the Owner of the Organization, or has a [policy](/identity-and-access-management/iam/concepts/#policy) giving them appropriate permissions for Object Storage in this Project. Note that the application of Object Storage permissions can take up to 5 minutes after creating a new rule or policy. 
-When using the S3 CLI: +When using the AWS S3 CLI: - An action of listing the buckets (`aws s3 ls`) will list the buckets of the preferred Project - An action of creating a bucket (`aws s3 mb`) will create a new bucket inside the preferred Project - An action of moving an object from a bucket to another (`aws s3 mv source destination`) will only work if the source bucket and the destination buckets are in the preferred Project for an API key diff --git a/identity-and-access-management/iam/concepts.mdx b/identity-and-access-management/iam/concepts.mdx index 617e1e057b..54c9b71573 100644 --- a/identity-and-access-management/iam/concepts.mdx +++ b/identity-and-access-management/iam/concepts.mdx @@ -95,7 +95,7 @@ For each policy rule, you specify one or more permission sets (e.g. "list all In ## Preferred Project -You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information. +You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information. 
## Principal diff --git a/managed-services/iot-hub/api-cli/iot-hub-routes.mdx index e195ce3c37..5e1fd555b8 100644 --- a/managed-services/iot-hub/api-cli/iot-hub-routes.mdx +++ b/managed-services/iot-hub/api-cli/iot-hub-routes.mdx @@ -9,7 +9,7 @@ categories: - managed-services dates: validation: 2024-04-22 -tags: iot iot-hub mqtt cli s3cmd s3 +tags: iot iot-hub mqtt cli s3cmd amazon-s3 --- Routes are integrations with the Scaleway ecosystem: they can forward MQTT messages to Scaleway services. @@ -26,9 +26,9 @@ Routes are integrations with the Scaleway ecosystem: they can forward MQTT messa - Installed the [Scaleway CLI](https://github.com/scaleway/scaleway-cli#scaleway-cli-v2) and [read the accompanying IoT document](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/) - Installed and configured [`s3cmd`](/tutorials/s3cmd/) for Scaleway -## S3 Routes +## Object Storage Routes -The S3 route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage. +The Object Storage route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage. This section is a continuation of the [Iot Hub CLI quickstart](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/). Make sure to follow the quickstart before beginning. @@ -41,9 +41,9 @@ The S3 route allows you to put the payload of MQTT messages directly into Scalew PREFIX="iot/messages" # Create the bucket s3cmd mb --region "$REGION" "s3://$BUCKET" - # Grant write access to IoT Hub S3 Route Service to your bucket + # Grant write access to IoT Hub Object Storage Route Service to your bucket s3cmd setacl --region "$REGION" "s3://$BUCKET" --acl-grant=write:555c69c3-87d0-4bf8-80f1-99a2f757d031:555c69c3-87d0-4bf8-80f1-99a2f757d031 - # Create the IoT Hub S3 Route + # Create the IoT Hub Object Storage Route scw iot route create \ hub-id=$(jq -r '.id' hub.json) \ name=route-s3-cli topic="hello/world" \ diff --git a/managed-services/iot-hub/concepts.mdx index ff81fc3302..f18f18d565 100644 --- a/managed-services/iot-hub/concepts.mdx +++ b/managed-services/iot-hub/concepts.mdx @@ -96,7 +96,7 @@ Increasing the QoS level decreases message throughput because of the additional ## Routes -IoT Routes forward messages to non publish/subscribe destinations such as databases, REST APIs, Serverless functions and S3 buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information. +IoT Routes forward messages to non publish/subscribe destinations such as databases, REST APIs, Serverless functions and Object Storage buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information. ## TLS diff --git a/managed-services/iot-hub/how-to/understand-event-messages.mdx index 46baac3f28..8aee54ccf3 100644 --- a/managed-services/iot-hub/how-to/understand-event-messages.mdx +++ b/managed-services/iot-hub/how-to/understand-event-messages.mdx @@ -60,9 +60,9 @@ This section shows you the types of message that can be received in IoT Hub Even ## Route messages -### S3 route errors +### Object Storage route errors - `"'BUCKET_NAME' s3 bucket write failed. Error HTTP_STATUS_CODE: ERROR_CODE (request-id: REQUEST_ID)"`: - The route failed to write to the specified s3 bucket. + The route failed to write to the specified Object Storage bucket.
`BUCKET_NAME` is the name of the bucket route attempt to write to, `HTTP_STATUS_CODE` and `ERROR_CODE` are standard [S3 error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList) ## Database errors diff --git a/managed-services/iot-hub/reference-content/routes.mdx b/managed-services/iot-hub/reference-content/routes.mdx index f454f94c51..d45c10046c 100644 --- a/managed-services/iot-hub/reference-content/routes.mdx +++ b/managed-services/iot-hub/reference-content/routes.mdx @@ -8,7 +8,7 @@ content: excerpt: | This page provides detailed information about Scaleway IoT Hub Routes. totalTime: PT5M -tags: iot iot-hub route s3 database postgres postgresql mysql rest api inference +tags: iot iot-hub route amazon-s3 database postgres postgresql mysql rest api inference dates: validation: 2024-05-06 posted: 2021-08-31 diff --git a/menu/navigation.json b/menu/navigation.json index 9dd909be3d..02e0b8785c 100644 --- a/menu/navigation.json +++ b/menu/navigation.json @@ -4397,7 +4397,7 @@ "slug": "optimize-object-storage-performance" }, { - "label": "Equivalence between S3 actions and IAM permissions", + "label": "Equivalence between Object Storage actions and IAM permissions", "slug": "s3-iam-permissions-equivalence" } ], diff --git a/network/load-balancer/concepts.mdx b/network/load-balancer/concepts.mdx index 13fd072fa4..fa28cab6d8 100644 --- a/network/load-balancer/concepts.mdx +++ b/network/load-balancer/concepts.mdx @@ -159,7 +159,7 @@ See [balancing-methods](#balancing-methods). Routes allow you to specify, for a given frontend, which of its backends it should direct traffic to. For [HTTP](#protocol) frontends/backends, routes are based on HTTP Host headers. For [TCP](#protocol) frontends/backends, they are based on **S**erver **N**ame **I**dentification (SNI). You can configure multiple routes on a single Load Balancer. -## S3 failover +## Object Storage failover See [customized error page](#customized-error-page) diff --git a/network/load-balancer/how-to/set-up-s3-failover.mdx b/network/load-balancer/how-to/set-up-s3-failover.mdx index 9480263afd..7f1cdb3fcb 100644 --- a/network/load-balancer/how-to/set-up-s3-failover.mdx +++ b/network/load-balancer/how-to/set-up-s3-failover.mdx @@ -5,7 +5,7 @@ meta: content: h1: How to configure a customized error page paragraph: This page explains how to configure a customized error page for your Load Balancer, using the Scaleway Object Storage Bucket Website feature -tags: s3-failover s3 failover load-balancer object-storage bucket +tags: s3-failover amazon-s3 failover load-balancer object-storage bucket dates: validation: 2024-05-26 posted: 2022-02-21 diff --git a/network/load-balancer/reference-content/configuring-backends.mdx b/network/load-balancer/reference-content/configuring-backends.mdx index ea5bfd099a..39a86d75a1 100644 --- a/network/load-balancer/reference-content/configuring-backends.mdx +++ b/network/load-balancer/reference-content/configuring-backends.mdx @@ -159,7 +159,7 @@ Benefits of this feature include: - Providing information on service status or maintenance - Redirecting to a mirrored site or skeleton site -Note that when entering the S3 link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g.`https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help. 
+Note that when entering the Object Storage link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g.`https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help. ## Health checks diff --git a/serverless/functions/index.mdx b/serverless/functions/index.mdx index 31272f77d7..4a5baab212 100644 --- a/serverless/functions/index.mdx +++ b/serverless/functions/index.mdx @@ -64,7 +64,7 @@ meta: label="Read more" /> diff --git a/storage/object/api-cli/bucket-operations.mdx b/storage/object/api-cli/bucket-operations.mdx index 1c8aeacfa2..29e67fb235 100644 --- a/storage/object/api-cli/bucket-operations.mdx +++ b/storage/object/api-cli/bucket-operations.mdx @@ -668,7 +668,7 @@ aws s3api put-bucket-versioning --bucket BucketName ## PutBucketPolicy -This operation applies an S3 bucket policy to an S3 bucket. +This operation applies an Object Storage bucket policy to an Object Storage bucket. If the operation is successful, no output will be returned. diff --git a/storage/object/api-cli/bucket-policy.mdx b/storage/object/api-cli/bucket-policy.mdx index 8a29db9252..cfbb255055 100644 --- a/storage/object/api-cli/bucket-policy.mdx +++ b/storage/object/api-cli/bucket-policy.mdx @@ -362,7 +362,7 @@ Bucket policies use a JSON-based access policy language and are composed of stri ### Action **Description** -: Consists of an S3 namespace, a colon, and the name of an action. Action names can include wildcards represented by `*`. +: Consists of an Amazon S3 namespace, a colon, and the name of an action. Action names can include wildcards represented by `*`. **Required** : Yes @@ -451,7 +451,7 @@ Bucket policies use a JSON-based access policy language and are composed of stri ### Resource **Description** -: Consists in the S3 resource path. +: Consists in the Amazon S3 resource path. **Required** : Yes diff --git a/storage/object/api-cli/bucket-website-api.mdx b/storage/object/api-cli/bucket-website-api.mdx index 537142da84..51852b3e07 100644 --- a/storage/object/api-cli/bucket-website-api.mdx +++ b/storage/object/api-cli/bucket-website-api.mdx @@ -179,7 +179,7 @@ If you want your website to be accessible, you need to set up a bucket policy. ### Configuring your URL -You can access your website using the website endpoint of your bucket, generated by s3 under the default format: +You can access your website using the website endpoint of your bucket, generated by Amazon S3 under the default format: `https://.s3-website..scw.cloud` diff --git a/storage/object/api-cli/combining-iam-and-object-storage.mdx b/storage/object/api-cli/combining-iam-and-object-storage.mdx index 72d993fa08..d01825e3c9 100644 --- a/storage/object/api-cli/combining-iam-and-object-storage.mdx +++ b/storage/object/api-cli/combining-iam-and-object-storage.mdx @@ -5,7 +5,7 @@ meta: content: h1: Combining IAM and bucket policies to set up granular access to Object Storage paragraph: Integrate IAM with Scaleway Object Storage for enhanced access control. 
-tags: object storage command bucket s3 iam permissions acl policy +tags: object storage command bucket amazon-s3 iam permissions acl policy dates: validation: 2024-05-14 posted: 2023-01-17 diff --git a/storage/object/api-cli/installing-minio-client.mdx b/storage/object/api-cli/installing-minio-client.mdx index eb2d6991cb..832e0c645d 100644 --- a/storage/object/api-cli/installing-minio-client.mdx +++ b/storage/object/api-cli/installing-minio-client.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) (`mc`) is a command-line tool that allows you to manage your s3 projects, providing a modern alternative to UNIX commands. +The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) (`mc`) is a command-line tool that allows you to manage your Object Storage projects, providing a modern alternative to UNIX commands. diff --git a/storage/object/api-cli/installing-rclone.mdx b/storage/object/api-cli/installing-rclone.mdx index 754693cc3e..3e85740fc3 100644 --- a/storage/object/api-cli/installing-rclone.mdx +++ b/storage/object/api-cli/installing-rclone.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -[Rclone](https://rclone.org) is a command-line tool that can be used to manage your cloud storage. It communicates with any S3-compatible cloud storage provider as well as other storage platforms. +[Rclone](https://rclone.org) is a command-line tool that can be used to manage your cloud storage. It communicates with any Amazon S3-compatible cloud storage provider as well as other storage platforms. Follow the instructions given in the [official Rclone documentation here](https://rclone.org/install/) to install Rclone. @@ -79,7 +79,7 @@ For example, on Linux: ``` 3. Type `s3` and hit enter to confirm this storage type. The following output displays: ``` - Choose your S3 provider. + Choose your Amazon S3 provider. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3 @@ -106,10 +106,10 @@ For example, on Linux: \ "TencentCOS" 12 / Wasabi Object Storage \ "Wasabi" - 13 / Any other S3 compatible provider + 13 / Any other Amazon S3 compatible provider \ "Other" ``` -4. Type `Scaleway` and hit enter to confirm this S3 provider. The following output displays: +4. Type `Scaleway` and hit enter to confirm this Amazon S3 provider. The following output displays: ``` Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. @@ -252,6 +252,6 @@ For example, on Linux: If you want to be able to transfer data to or from a bucket in a different region to the one you just set up, repeat steps 1-14 again to set up a new remote in the required region. Simply enter the required region at steps 7 and 8. Similarly, you may wish to set up a new remote for a different Object Storage provider. -For further information, refer to the official [RClone S3 Object Storage Documentation](https://rclone.org/s3/). Official documentation also exists for [other storage backends](https://rclone.org/docs/). +For further information, refer to the official [RClone Object Storage Documentation](https://rclone.org/s3/). Official documentation also exists for [other storage backends](https://rclone.org/docs/). 
diff --git a/storage/object/api-cli/lifecycle-rules-api.mdx b/storage/object/api-cli/lifecycle-rules-api.mdx index 969274fa17..b841ab1112 100644 --- a/storage/object/api-cli/lifecycle-rules-api.mdx +++ b/storage/object/api-cli/lifecycle-rules-api.mdx @@ -37,7 +37,7 @@ Currently, the **expiration**, **transition**, and **incomplete multipart upload There might, for example, be a need to store log files for a week or a month, after which they become obsolete. It is possible to set a lifecycle rule to delete them automatically when they become obsolete. If you consider that a 3-month-old object is rarely used but still has a value, you might want to configure a rule to send it automatically to [Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/), for example. -Lifecycle management on Object Storage is available on every AWS S3 compliant tool (sdk, aws-cli, boto, etc), as well as from the Scaleway [console](https://console.scaleway.com/organization). +Lifecycle management on Object Storage is available on every Amazon S3 compliant tool (sdk, aws-cli, boto, etc), as well as from the Scaleway [console](https://console.scaleway.com/organization). ## Lifecycle specification diff --git a/storage/object/api-cli/manage-bucket-permissions-ip.mdx b/storage/object/api-cli/manage-bucket-permissions-ip.mdx index 2cb9c1de20..a42c76fa32 100644 --- a/storage/object/api-cli/manage-bucket-permissions-ip.mdx +++ b/storage/object/api-cli/manage-bucket-permissions-ip.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -You can stipulate which IP addresses or IP ranges have access or permission to perform S3 operations on your buckets by creating a [bucket policy](/storage/object/api-cli/bucket-policy/) with the `IpAddress` or `NotIpAddress` conditions. +You can stipulate which IP addresses or IP ranges have access or permission to perform operations on your buckets by creating a [bucket policy](/storage/object/api-cli/bucket-policy/) with the `IpAddress` or `NotIpAddress` conditions. It is possible to `Allow` actions for a specific IP address or range of IPs, using the `IpAddress` condition and the `aws:SourceIp` condition key. diff --git a/storage/object/api-cli/managing-lifecycle-cliv2.mdx b/storage/object/api-cli/managing-lifecycle-cliv2.mdx index 043c2a1b5f..df8185b9ff 100644 --- a/storage/object/api-cli/managing-lifecycle-cliv2.mdx +++ b/storage/object/api-cli/managing-lifecycle-cliv2.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a service based on the S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can create and manage your Object Storage resources from the [console](https://console.scaleway.com/login), or via the [Scaleway Command Line Interface](/developer-tools/scaleway-cli/quickstart/) that uses external tools such as `rclone`, `s3cmd` and `mc`. +[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a service based on the Amazon S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can create and manage your Object Storage resources from the [console](https://console.scaleway.com/login), or via the [Scaleway Command Line Interface](/developer-tools/scaleway-cli/quickstart/) that uses external tools such as `rclone`, `s3cmd` and `mc`. 
## Scaleway Command Line Interface Overview @@ -27,7 +27,7 @@ categories: - A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/) - An [Object Storage bucket](/storage/object/how-to/create-a-bucket/) - Installed and initialized the [Scaleway CLI](/developer-tools/scaleway-cli/quickstart/) -- Downloaded [S3cmd](https://github.com/s3tools/s3cmd), [rclone](https://rclone.org/downloads/) and [mc](https://github.com/minio/mc) s3 tools +- Downloaded [S3cmd](https://github.com/s3tools/s3cmd), [rclone](https://rclone.org/downloads/) and [mc](https://github.com/minio/mc) ## Creating a configuration file for the Scaleway CLI @@ -98,7 +98,7 @@ categories: ``` -## Installing a configuration file for S3 tools (s3cmd, rclone, and mc) +## Installing a configuration file for Amazon S3-compatible tools (s3cmd, rclone, and mc) 1. Run the following command in a terminal to install a configuration file for `s3cmd`: ``` @@ -210,7 +210,7 @@ Run the following command in a terminal to remove an object from your bucket: ``` - For more information about the s3 tools used in this documentation, refer to the official [rclone](https://rclone.org/docs/), [s3cmd](https://s3tools.org/s3cmd-howto), and [mc](https://github.com/minio/mc) documentation. + For more information about the Amazon S3-compatible tools used in this documentation, refer to the official [rclone](https://rclone.org/docs/), [s3cmd](https://s3tools.org/s3cmd-howto), and [mc](https://github.com/minio/mc) documentation. diff --git a/storage/object/api-cli/migrating-buckets.mdx b/storage/object/api-cli/migrating-buckets.mdx index de9e5eda2a..601aaa1d57 100644 --- a/storage/object/api-cli/migrating-buckets.mdx +++ b/storage/object/api-cli/migrating-buckets.mdx @@ -25,7 +25,7 @@ categories: ``` aws s3api create-bucket --bucket BUCKET-TARGET ``` -2. Copy the objects between the S3 buckets. +2. Copy the objects between the Object Storage buckets. If you have objects in the Scaleway `Glacier` storage class you must [restore](/storage/object/how-to/restore-an-object-from-glacier/) them before continuing. diff --git a/storage/object/api-cli/object-operations.mdx b/storage/object/api-cli/object-operations.mdx index 3014f58f22..3301b14371 100644 --- a/storage/object/api-cli/object-operations.mdx +++ b/storage/object/api-cli/object-operations.mdx @@ -388,7 +388,7 @@ aws s3api put-object --bucket BucketName --key dir-1/ObjectName --body ObjectNam ``` - To define the [storage class](/storage/object/concepts/#storage-class) of the object directly upon creation, use the `--storage-class ` option with `awscli` or add the `x-amz-storage-class: ` header when using the S3 API. You can specify one of the following classes: `STANDARD`, `ONEZONE_IA`, `GLACIER`. Example: `x-amz-storage-class: ONEZONE_IA`. +To define the [storage class](/storage/object/concepts/#storage-class) of the object directly upon creation, use the `--storage-class ` option with `awscli` or add the `x-amz-storage-class: ` header when using the Amazon S3 API. You can specify one of the following classes: `STANDARD`, `ONEZONE_IA`, `GLACIER`. Example: `x-amz-storage-class: ONEZONE_IA`. If no class is specified, the object is created as STANDARD by default. 
diff --git a/storage/object/api-cli/object-storage-aws-cli.mdx b/storage/object/api-cli/object-storage-aws-cli.mdx index d47b5abfb0..6131cfa15c 100644 --- a/storage/object/api-cli/object-storage-aws-cli.mdx +++ b/storage/object/api-cli/object-storage-aws-cli.mdx @@ -39,7 +39,7 @@ The AWS-CLI is an open-source tool built on top of the [AWS SDK for Python (Boto 3. When prompted, enter the following elements: - your API access key - your API secret key - - your preferred default S3 region (`fr-par`, `nl-ams`, or `pl-waw`) + - your preferred default Object Storage region (`fr-par`, `nl-ams`, or `pl-waw`) - `json` as the default output format 4. Open the `~/.aws/config` file in a code editor and edit it as follows: diff --git a/storage/object/api-cli/post-object.mdx b/storage/object/api-cli/post-object.mdx index 3c5bec78e3..cd7f883be8 100644 --- a/storage/object/api-cli/post-object.mdx +++ b/storage/object/api-cli/post-object.mdx @@ -87,7 +87,7 @@ import hashlib ACCESS_KEY_ID = "SCWXXXXXXXXXXXXXXXXX" SECRET_ACCESS_KEY = "110e8400-e29b-11d4-a716-446655440000" -# S3 Region +# Object Storage Region REGION = "fr-par" # Example for the demo @@ -213,7 +213,7 @@ import requests from botocore.exceptions import ClientError -# Generate a presigned URL for the S3 object +# Generate a presigned URL for the object session = boto3.session.Session() s3_client = session.client( diff --git a/storage/object/api-cli/setting-cors-rules.mdx b/storage/object/api-cli/setting-cors-rules.mdx index ffa736bc65..09ec7fb09c 100644 --- a/storage/object/api-cli/setting-cors-rules.mdx +++ b/storage/object/api-cli/setting-cors-rules.mdx @@ -5,7 +5,7 @@ meta: content: h1: Setting CORS rules on Object Storage buckets paragraph: Set CORS rules to manage cross-origin requests in Scaleway Object Storage. -tags: object storage object-storage s3 bucket cors cors-rule +tags: object storage object-storage aws-s3 bucket cors cors-rule dates: validation: 2024-06-17 posted: 2021-05-19 diff --git a/storage/object/api-cli/using-api-call-list.mdx b/storage/object/api-cli/using-api-call-list.mdx index 668dae9fd7..ab1f818db4 100644 --- a/storage/object/api-cli/using-api-call-list.mdx +++ b/storage/object/api-cli/using-api-call-list.mdx @@ -1,9 +1,9 @@ --- meta: - title: Scaleway Object Storage supported S3 API calls + title: Supported Object Storage API calls description: Learn how to use the API call list effectively with Scaleway Object Storage. content: - h1: Object Storage API + h1: Supported Object Storage API calls paragraph: Learn how to use the API call list effectively with Scaleway Object Storage. tags: object storage object-storage api bucket dates: @@ -67,7 +67,7 @@ Status: | PutBucketLifecycle | Creates a new lifecycle configuration or replaces an existing bucket lifecycle configuration | ❗ | | PutBucketLifecycleConfiguration| Creates a new lifecycle configuration or replaces an existing bucket lifecycle configuration | ✅ | | PutBucketNotification | Enables notifications of specified events for a bucket | ⌛ | -| [PutBucketPolicy](/storage/object/api-cli/bucket-operations/#putbucketpolicy) | Applies an S3 bucket policy to an S3 bucket. 
The key elements of bucket policy are [Version](/storage/object/api-cli/bucket-policy/#version), [ID](/storage/object/api-cli/bucket-policy/#id), [Statement](/storage/object/api-cli/bucket-policy/#statement), [Sid](/storage/object/api-cli/bucket-policy/#sid), [Principal](/storage/object/api-cli/bucket-policy/#principal), [Action](/storage/object/api-cli/bucket-policy/#action), [Effect](/storage/object/api-cli/bucket-policy/#effect), [Resource](/storage/object/api-cli/bucket-policy/#resource) and [Condition](/storage/object/api-cli/bucket-policy/#condition). You can find out more about each element by clicking the links, or consulting the full documentation | ✅ | +| [PutBucketPolicy](/storage/object/api-cli/bucket-operations/#putbucketpolicy) | Applies an Object Storage bucket policy to an Object Storage bucket. The key elements of bucket policy are [Version](/storage/object/api-cli/bucket-policy/#version), [ID](/storage/object/api-cli/bucket-policy/#id), [Statement](/storage/object/api-cli/bucket-policy/#statement), [Sid](/storage/object/api-cli/bucket-policy/#sid), [Principal](/storage/object/api-cli/bucket-policy/#principal), [Action](/storage/object/api-cli/bucket-policy/#action), [Effect](/storage/object/api-cli/bucket-policy/#effect), [Resource](/storage/object/api-cli/bucket-policy/#resource) and [Condition](/storage/object/api-cli/bucket-policy/#condition). You can find out more about each element by clicking the links, or consulting the full documentation | ✅ | | [PutBucketTagging](/storage/object/api-cli/bucket-operations/#putbuckettagging) | Sets the tag(s) of a bucket | ✅ | | [PutBucketVersioning](/storage/object/api-cli/bucket-operations/#putbucketversioning) | Sets the versioning state of an existing bucket | ✅ | | [PutBucketWebsite](/storage/object/api-cli/bucket-operations/#putbucketwebsite) | Enables bucket website and sets the basic configuration for the website | ✅ | diff --git a/storage/object/concepts.mdx index ccc398fa4f..afc5365c21 100644 --- a/storage/object/concepts.mdx +++ b/storage/object/concepts.mdx @@ -5,7 +5,7 @@ meta: content: h1: Object Storage - Concepts paragraph: Understand key concepts and features of Scaleway Object Storage. -tags: retention endpoint object-storage storage bucket acl multipart object s3 retention signature versioning archived +tags: retention endpoint object-storage storage bucket acl multipart object amazon-s3 retention signature versioning archived dates: validation: 2024-05-06 categories: @@ -15,7 +15,7 @@ categories: ## Access control list (ACL) -Access control lists (ACL) are subresources attached to buckets and objects. They define which Scaleway users have access to the attached object/bucket, and the type of access they have. Whenever a user makes a request against a resource, s3 checks its ACL and verifies that they have permission to carry out the request. +Access control lists (ACL) are subresources attached to buckets and objects. They define which Scaleway users have access to the attached object/bucket, and the type of access they have.
Whenever a user makes a request against a resource, Object Storage checks its ACL and verifies that they have permission to carry out the request. ## Bucket @@ -86,13 +86,13 @@ An object is a file and the metadata that describes it. Each object has a **key ## Object lock -An S3 API feature that allows users to lock objects to prevent them from being deleted or overwritten. Objects can be put on lock for a specific amount of time or indefinitely. The lock period is defined by the user. +An Amazon S3 API feature that allows users to lock objects to prevent them from being deleted or overwritten. Objects can be put on lock for a specific amount of time or indefinitely. The lock period is defined by the user. The feature uses a write-once-read-many (WORM) data protection model. This model is generally used in cases where data cannot be altered once it has been written. It provides regulatory compliance and protection against ransomware and malicious or accidental deletion of objects. ## Object Storage -A storage service based on the S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can upload, download, and visualize stored objects. +A storage service based on the Amazon S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can upload, download, and visualize stored objects. Contrary to other storage types such as block devices or file systems, Object Storage bundles the data itself along with metadata [tags](#tags) and a [prefix](#prefix), rather than a file name and a file path. @@ -141,13 +141,13 @@ Object Lock provides two modes to manage object retention, **Compliance** and ** A retention period specifies a fixed period for which an object remains locked. During this period, your object is WORM-protected and cannot be overwritten or deleted. -## S3 -S3 is the de facto Object Storage protocol. Scaleway Object Storage officially supports a subset of S3. The list of supported features is described in the [Object Storage API documentation](/storage/object/api-cli/using-api-call-list/). +## Amazon S3 +Amazon S3 is the de facto Object Storage protocol. Scaleway Object Storage officially supports a subset of Amazon S3. The list of supported features is described in the [Object Storage API documentation](/storage/object/api-cli/using-api-call-list/). ## Signature V2, Signature V4 -When you send HTTP requests to Object Storage, you sign the requests so that we can identify who sent them. You sign requests with your Scaleway access key, which consists of an access key and a secret key. The two main s3 protocols for authentication are Signature v2 and Signature v4. Signature v4 is more recent and it is the recommended version. +When you send HTTP requests to Object Storage, you sign the requests so that we can identify who sent them. You sign requests with your Scaleway access key, which consists of an access key and a secret key. The two main Amazon S3 protocols for authentication are Signature v2 and Signature v4. Signature v4 is more recent and it is the recommended version.
## Storage class diff --git a/storage/object/how-to/create-bucket-policy.mdx b/storage/object/how-to/create-bucket-policy.mdx index 1f7a5ee149..99658d2e37 100644 --- a/storage/object/how-to/create-bucket-policy.mdx +++ b/storage/object/how-to/create-bucket-policy.mdx @@ -5,7 +5,7 @@ meta: content: h1: How to create and manage bucket policies using the console paragraph: Create and apply bucket policies for Object Storage. -tags: bucket policy bucket console object storage s3 access +tags: bucket policy bucket console object storage aws-s3 access dates: validation: 2024-05-30 posted: 2024-05-30 diff --git a/storage/object/how-to/restore-an-object-from-glacier.mdx b/storage/object/how-to/restore-an-object-from-glacier.mdx index 2deb91fb5c..0a2931dfdc 100644 --- a/storage/object/how-to/restore-an-object-from-glacier.mdx +++ b/storage/object/how-to/restore-an-object-from-glacier.mdx @@ -41,7 +41,7 @@ categories: 4. Enter the number of days after which the object will be transferred back to `Glacier`, or click the toggle to permanently restore the object. -5. Click **Restore object from S3 Glacier**. +5. Click **Restore object from Glacier**. Your object remains available in `Standard` class for the duration you specified. It will be transferred automatically back to `Glacier` once the configured period is over. diff --git a/storage/object/how-to/upload-files-into-a-bucket.mdx b/storage/object/how-to/upload-files-into-a-bucket.mdx index 4e67015b71..7605aa6c75 100644 --- a/storage/object/how-to/upload-files-into-a-bucket.mdx +++ b/storage/object/how-to/upload-files-into-a-bucket.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -This page explains how to upload files into an Object Storage bucket using the [Scaleway console](https://consol.scaleway.com). To upload an object using the S3 API, refer to the [dedicated documentation](/storage/object/api-cli/object-operations/#putobject). +This page explains how to upload files into an Object Storage bucket using the [Scaleway console](https://consol.scaleway.com). To upload an object using the Amazon S3 API, refer to the [dedicated documentation](/storage/object/api-cli/object-operations/#putobject). diff --git a/storage/object/index.mdx b/storage/object/index.mdx index f85a357ae2..bc14b1306a 100644 --- a/storage/object/index.mdx +++ b/storage/object/index.mdx @@ -7,7 +7,7 @@ meta: @@ -65,7 +65,7 @@ meta: label="Read more" /> @@ -75,7 +75,7 @@ meta: diff --git a/storage/object/quickstart.mdx b/storage/object/quickstart.mdx index b96d0ece6a..37bbd9f037 100644 --- a/storage/object/quickstart.mdx +++ b/storage/object/quickstart.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -[Scaleway Object Storage](/storage/object/concepts/#object-storage) is an Object Storage service based on the S3 protocol. It allows you to store any objects (documents, images, videos, etc.) and access them anytime from anywhere in the world. You can manage your storage directly from the Scaleway console. On the control panel, you can easily upload, download, and visualize the objects in your buckets. In addition, you can integrate many existing libraries or CLI clients into your application or scripts. +[Scaleway Object Storage](/storage/object/concepts/#object-storage) is an Object Storage service based on the Amazon S3 protocol. It allows you to store any objects (documents, images, videos, etc.) and access them anytime from anywhere in the world. You can manage your storage directly from the Scaleway console. 
On the control panel, you can easily upload, download, and visualize the objects in your buckets. In addition, you can integrate many existing libraries or CLI clients into your application or scripts. diff --git a/storage/object/reference-content/optimize-object-storage-performance.mdx b/storage/object/reference-content/optimize-object-storage-performance.mdx index 844cc43d67..9be71f2878 100644 --- a/storage/object/reference-content/optimize-object-storage-performance.mdx +++ b/storage/object/reference-content/optimize-object-storage-performance.mdx @@ -14,7 +14,7 @@ categories: - object-storage --- -[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a highly resilient and versatile service that guarantees the reliability and accessibility of your data, while being fully [S3-compatible](/storage/object/concepts/#s3) and user-friendly. +[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a highly resilient and versatile service that guarantees the reliability and accessibility of your data, while being fully [Amazon S3-compatible](/storage/object/concepts/#s3) and user-friendly. Even though it is designed to provide best-in-class latency and throughput, user infrastructure plays a predominant role in achieving optimum efficiency, as many different factors can have an impact on performance, such as your hardware, your software stack, or the way you manage your objects. @@ -50,7 +50,7 @@ For example, if the most CPU-intensive operation uses 20% of your CPU, you can e ### Geographic location -The physical distance to the hardware hosting your Object Storage can also have an impact on performance, especially on latency. Make sure to benchmark the different [regions](/storage/object/concepts/##region-and-availability-zone) where Object Storage is available to compare latency on your mission-critical S3 operations. +The physical distance to the hardware hosting your Object Storage can also have an impact on performance, especially on latency. Make sure to benchmark the different [regions](/storage/object/concepts/##region-and-availability-zone) where Object Storage is available to compare latency on your mission-critical operations. For instance, media and content distribution are often heavily affected by the physical distance between the host and the client, as objects are usually large in this scenario. diff --git a/storage/object/reference-content/s3-iam-permissions-equivalence.mdx b/storage/object/reference-content/s3-iam-permissions-equivalence.mdx index 6a6fc1c990..896b1cb3ac 100644 --- a/storage/object/reference-content/s3-iam-permissions-equivalence.mdx +++ b/storage/object/reference-content/s3-iam-permissions-equivalence.mdx @@ -1,11 +1,11 @@ --- meta: - title: S3 and IAM permissions equivalence - description: Understand how IAM permissions in S3 relate to Scaleway Object Storage. + title: Amazon S3 and IAM permissions equivalence + description: Understand how IAM permissions in Amazon S3 relate to Scaleway Object Storage. content: - h1: S3 and IAM permissions equivalence - paragraph: Understand how IAM permissions in S3 relate to Scaleway Object Storage. -tags: object-storage s3 aws action equivalent iam permission set + h1: Amazon S3 and IAM permissions equivalence + paragraph: Understand how IAM permissions in Amazon S3 relate to Scaleway Object Storage. 
+tags: object-storage amazon-s3 aws action equivalent iam permission set categories: - storage - object @@ -13,7 +13,7 @@ categories: ## ObjectStorageFullAccess -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------| ------------ |------------|------------| | DeleteBucketPolicy | Policy | Write | ✅ | | GetBucketPolicy | Policy | Read | ✅ | @@ -72,7 +72,7 @@ categories: ## ObjectStorageReadOnly -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | | ------------------------------- | ------------ | ---------- | -----------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | | @@ -131,7 +131,7 @@ categories: ## ObjectStorageBucketsRead -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | | @@ -190,7 +190,7 @@ categories: ## ObjectStorageBucketsWrite -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | | @@ -249,7 +249,7 @@ categories: ## ObjectStorageBucketsDelete -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | | @@ -308,7 +308,7 @@ categories: ## ObjectStorageObjectsRead -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | | @@ -367,7 +367,7 @@ categories: ## ObjectStorageObjectsWrite -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | | | CompleteMultipartUpload | Object | Create | ✅ | @@ -426,7 +426,7 @@ categories: ## ObjectStorageObjectsDelete -| S3 Action | IAM Resource | IAM Action | Authorized | +| Amazon S3 Action | IAM Resource | IAM Action | Authorized | |---------------------------------|--------------|------------|------------| | AbortMultipartUpload | Object | Delete | ✅ | | CompleteMultipartUpload | Object | Create | | diff --git a/storage/object/troubleshooting/api-key-does-not-work.mdx b/storage/object/troubleshooting/api-key-does-not-work.mdx index 2f6847ad2a..38bd43388d 100644 --- a/storage/object/troubleshooting/api-key-does-not-work.mdx +++ b/storage/object/troubleshooting/api-key-does-not-work.mdx @@ -28,9 +28,9 @@ When using third-party API or CLI tools, such as the [AWS CLI](/storage/object/a ## Cause -The API key you used to configure the S3 third-party tool has a [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) assigned. 
+The API key you used to configure the Amazon S3-compatible third-party tool has a [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) assigned. -If you try to perform S3 operations in a Project that is **NOT** the [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) using a third-party tool, you will not be able to access your resources, resulting in an error message or an empty response. +If you try to perform Object Storage operations in a Project that is **NOT** the [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) using a third-party tool, you will not be able to access your resources, resulting in an error message or an empty response. ## Solution @@ -39,14 +39,14 @@ You can change the preferred project of your API key: - by editing it from the [Scaleway console](/identity-and-access-management/iam/how-to/manage-api-keys/#how-to-edit-an-api-key) - by [overriding it while making an API call](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/#overriding-the-preferred-project-when-making-a-call) -You should now be able to list your buckets using a supported S3-compatible third-party tool. +You should now be able to list your buckets using a supported Amazon S3-compatible third-party tool. ## Going further - Refer to the documentation on [using IAM API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information. - If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.) diff --git a/storage/object/troubleshooting/cannot-access-data.mdx index d3e3b36adc..11d994b942 100644 --- a/storage/object/troubleshooting/cannot-access-data.mdx +++ b/storage/object/troubleshooting/cannot-access-data.mdx @@ -26,7 +26,7 @@ I am experiencing issues while trying to access my buckets and objects stored on - Go to the [Status page](https://status.scaleway.com/) to see if there is an ongoing incident on the Scaleway infrastructure. -- Retrieve the logs of your buckets using any S3-compatible tool to identify the cause of the problem: +- Retrieve the logs of your buckets using any Amazon S3-compatible tool to identify the cause of the problem: - [Rclone](https://rclone.org/docs/#logging) - [S3cmd](https://s3tools.org/usage) - [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-logs.html#mc-admin-logs) @@ -39,7 +39,7 @@ I am experiencing issues while trying to access my buckets and objects stored on ## Going further If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/cannot-delete-bucket.mdx b/storage/object/troubleshooting/cannot-delete-bucket.mdx index eb638de802..b45e84944c 100644 --- a/storage/object/troubleshooting/cannot-delete-bucket.mdx +++ b/storage/object/troubleshooting/cannot-delete-bucket.mdx @@ -40,7 +40,7 @@ I cannot delete my Scaleway Object Storage bucket. - Refer to the documentation on [how to delete a bucket](/storage/object/how-to/delete-a-bucket/) for more information. - If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.) diff --git a/storage/object/troubleshooting/cannot-restore-glacier.mdx b/storage/object/troubleshooting/cannot-restore-glacier.mdx index f651beb259..c8cd4dd1eb 100644 --- a/storage/object/troubleshooting/cannot-restore-glacier.mdx +++ b/storage/object/troubleshooting/cannot-restore-glacier.mdx @@ -56,7 +56,7 @@ The `"Restore": "ongoing-request=\"true\"",` line indicates that the restore ope - Refer to the documentation on [how to restore objects from Glacier](/storage/object/how-to/restore-an-object-from-glacier/) for more information. - If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.) diff --git a/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx b/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx index 6e449d0ebb..742c97fdf6 100644 --- a/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx +++ b/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx @@ -73,7 +73,7 @@ If you have the permission to apply a bucket policy, you can also delete it. To - Refer to the [bucket policies overview](/storage/object/api-cli/bucket-policy/) for more information on the different elements of a bucket policy. - If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.) diff --git a/storage/object/troubleshooting/low-performance.mdx b/storage/object/troubleshooting/low-performance.mdx index ed54488bb7..f5e554baa7 100644 --- a/storage/object/troubleshooting/low-performance.mdx +++ b/storage/object/troubleshooting/low-performance.mdx @@ -26,7 +26,7 @@ I am noticing decreased throughputs, timeouts, high latency, and overall instabi - Go to the [Status page](https://status.scaleway.com/) to see if there is an ongoing incident on the Scaleway infrastructure. 
-- Retrieve the logs of your buckets using any S3-compatible tool to identify the cause of the problem: +- Retrieve the logs of your buckets using any Amazon S3-compatible tool to identify the cause of the problem: - [Rclone](https://rclone.org/docs/#logging) - [S3cmd](https://s3tools.org/usage) - [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-logs.html#mc-admin-logs) @@ -37,7 +37,7 @@ I am noticing decreased throughputs, timeouts, high latency, and overall instabi - Refer to the documentation on [how to optimize your Object Storage performance](/storage/object/reference-content/optimize-object-storage-performance/) for more information. - If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: - - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) + - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.) diff --git a/styles/scw_styles/HeadingSentenceCase.yml b/styles/scw_styles/HeadingSentenceCase.yml index bc3fe0cc45..31cb9d6797 100644 --- a/styles/scw_styles/HeadingSentenceCase.yml +++ b/styles/scw_styles/HeadingSentenceCase.yml @@ -46,7 +46,7 @@ exceptions: - Object Storage - Glacier - Standard - - S3 + - Amazon S3 - Block Storage - Managed Database - Managed Databases diff --git a/tutorials/abort-multipart-upload-minio/index.mdx b/tutorials/abort-multipart-upload-minio/index.mdx index 0520f03bbf..b0b871f235 100644 --- a/tutorials/abort-multipart-upload-minio/index.mdx +++ b/tutorials/abort-multipart-upload-minio/index.mdx @@ -1,10 +1,10 @@ --- meta: - title: Aborting Incomplete S3 Multipart Uploads with MinIO Client - description: This page explains how to abort an incomplete S3 multipart upload with the MinIO client. + title: Aborting Incomplete Multipart Uploads with MinIO Client + description: This page explains how to abort an incomplete multipart upload with the MinIO client. content: - h1: Aborting Incomplete S3 Multipart Uploads with MinIO Client - paragraph: This page explains how to abort an incomplete S3 multipart upload with the MinIO client. + h1: Aborting Incomplete Multipart Uploads with MinIO Client + paragraph: This page explains how to abort an incomplete multipart upload with the MinIO client. tags: minio multipart-uploads categories: - object-storage @@ -13,13 +13,13 @@ dates: hero: assets/scaleway_minio.webp --- -## S3 Object Storage - Multipart Upload Overview +## Object Storage - Multipart Upload Overview [Multipart Uploads](/storage/object/api-cli/multipart-uploads/) allows you to upload large files (up to 5 TB) to the Object Storage platform in multiple parts. This allows faster, more flexible uploads. If you do not complete a multipart upload, all the uploaded parts will still be stored and counted as part of your storage usage. Multipart uploads can be aborted manually [via the API and CLI](/storage/object/api-cli/multipart-uploads/#aborting-a-multipart-upload) or automatically using a [Lifecycle rule](/storage/object/api-cli/lifecycle-rules-api/#setting-rules-for-incomplete-multipart-uploads). -If you use the API or the AWS CLI, you will have to abort each incomplete multipart upload independently. However, there is an easier and faster way to abort multipart uploads, using the open-source S3-compatible client [mc](https://github.com/minio/mc), from MinIO. 
In this tutorial, we show you how to use mc to abort and clean up all your incomplete multipart uploads at once. +If you use the API or the AWS CLI, you will have to abort each incomplete multipart upload independently. However, there is an easier and faster way to abort multipart uploads, using the open-source Amazon S3-compatible client [mc](https://github.com/minio/mc), from MinIO. In this tutorial, we show you how to use mc to abort and clean up all your incomplete multipart uploads at once. diff --git a/tutorials/ceph-cluster/index.mdx b/tutorials/ceph-cluster/index.mdx index 56244c9992..930fa7074d 100644 --- a/tutorials/ceph-cluster/index.mdx +++ b/tutorials/ceph-cluster/index.mdx @@ -193,7 +193,7 @@ Deploy the Ceph cluster on your machines by following these steps: ### Deploying a Ceph Object Gateway (RGW) -Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients: +Deploy the Ceph Object Gateway (RGW) to access files using Amazon S3-compatible clients: 1. Run the following command on the admin machine: @@ -225,7 +225,7 @@ Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients 3. Verify the installation by accessing `http://ceph-node-a:7480` in a web browser. -## Creating S3 credentials +## Creating Object Storage credentials On the gateway instance (`ceph-node-a`), run the following command to create a new user: @@ -233,7 +233,7 @@ On the gateway instance (`ceph-node-a`), run the following command to create a n sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com ``` -- Note the `access_key` and `user_key`. Proceed to configure your S3 client, e.g., [aws-cli](/storage/object/api-cli/object-storage-aws-cli/). +- Note the `access_key` and `user_key`. Proceed to configure your Object Storage client, e.g., [aws-cli](/storage/object/api-cli/object-storage-aws-cli/). ## Configuring AWS-CLI @@ -286,4 +286,4 @@ Use AWS-CLI to manage objects in your Ceph storage cluster: ## Conclusion -You have successfully configured an S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/). \ No newline at end of file +You have successfully configured an Amazon S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any Amazon S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/). \ No newline at end of file diff --git a/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx b/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx index ce462dabf6..3346b81f9a 100644 --- a/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx +++ b/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx @@ -95,7 +95,7 @@ provisioner: executor: # defaults to 'shell' maxNumberOfBuilds: # defaults to '1' downloadLatest: # defaults to 'true' - downloadURL: # defaults to GitLab official S3 bucket + downloadURL: # defaults to GitLab official Object Storage bucket configToml: > # Advanced config as custom config.toml file to be appended to the basic config and copied to the runner. 
``` diff --git a/tutorials/configure-dvc-with-object-storage/index.mdx b/tutorials/configure-dvc-with-object-storage/index.mdx index 6bdcbf9901..2d928cac83 100644 --- a/tutorials/configure-dvc-with-object-storage/index.mdx +++ b/tutorials/configure-dvc-with-object-storage/index.mdx @@ -5,7 +5,7 @@ meta: content: h1: Configuring DVC with Object Storage paragraph: This page provides information on how to configure DVC with Scaleway Object Storage. -tags: s3 dvc machine-learning data-science +tags: amazon-s3 dvc machine-learning data-science categories: - object-storage dates: @@ -17,7 +17,7 @@ Git is unarguably the most popular and powerful version control system to store However, when it comes to large datasets, you might need to turn to third-party version control tools that are specifically designed to handle them. -Data Version Control (DVC) was specifically designed with this use case in mind. It works alongside Git and allows you to store your data in the remote storage of your choice (such as a Scaleway S3-enabled bucket) while storing only the metadata in a Git repository. +Data Version Control (DVC) was specifically designed with this use case in mind. It works alongside Git and allows you to store your data in the remote storage of your choice (such as a Scaleway Object Storage bucket) while storing only the metadata in a Git repository. In this tutorial, you learn how to use [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/) as a remote storage for DVC. @@ -39,7 +39,7 @@ In this tutorial, you learn how to use [Scaleway Object Storage](https://www.sca pip3 install dvc ``` -2. Run the following command to install the S3 dependencies: +2. Run the following command to install the Amazon S3 dependencies: ```bash pip3 install "dvc[s3]" ``` @@ -93,7 +93,7 @@ In this tutorial, you learn how to use [Scaleway Object Storage](https://www.sca dvc remote add -d myremote s3://my-bucket/path ``` -2. Run the following command to set the S3 endpoint of your remote storage: +2. Run the following command to set the Object Storage endpoint of your remote storage: ```bash dvc remote modify myremote \ endpointurl https://s3.fr-par.scw.cloud diff --git a/tutorials/configure-plex-s3/index.mdx b/tutorials/configure-plex-s3/index.mdx index 1c51d5ef80..ab3409a18f 100644 --- a/tutorials/configure-plex-s3/index.mdx +++ b/tutorials/configure-plex-s3/index.mdx @@ -1,7 +1,7 @@ --- meta: title: Configuring Plex Media Server with Object Storage - description: This page shows how to set up an s3 media server with Plex and Object Storage + description: This page shows how to set up a media server with Plex and Object Storage content: h1: Configuring Plex Media Server with Object Storage paragraph: This page shows how to configure Plex media server with Object Storage @@ -167,7 +167,7 @@ Plex is a client/server media player system comprising two main components: - You can upload additional content to your server with any S3-compatible tool, like [Cyberduck](/tutorials/store-s3-cyberduck/). + You can upload additional content to your server with any Amazon S3-compatible tool, like [Cyberduck](/tutorials/store-s3-cyberduck/). 9. Click **Next** and then **Finish** to conclude the set-up. 10. Add media to your bucket and trigger a scan of your media folder in the Plex interface. Your media should display. If so, it is all set up. For more information about Plex, refer to their [official documentation](https://support.plex.tv/articles/). 
\ No newline at end of file diff --git a/tutorials/create-openwrt-image-for-scaleway/index.mdx b/tutorials/create-openwrt-image-for-scaleway/index.mdx index ba1c864690..7dfaf1b604 100644 --- a/tutorials/create-openwrt-image-for-scaleway/index.mdx +++ b/tutorials/create-openwrt-image-for-scaleway/index.mdx @@ -292,7 +292,7 @@ In this tutorial, we do not set up cloud-init, but use the same magic IP mechani ## Import the image -You can use the Scaleway console or your favorite S3 CLI to upload objects into a bucket. +You can use the Scaleway console or your favorite Amazon S3-compatible CLI tool to upload objects into a bucket. In this example, we use the [AWS CLI](/storage/object/api-cli/object-storage-aws-cli/). diff --git a/tutorials/create-serverless-scraping/index.mdx b/tutorials/create-serverless-scraping/index.mdx index 30fb27d7b6..de4ea3bf09 100644 --- a/tutorials/create-serverless-scraping/index.mdx +++ b/tutorials/create-serverless-scraping/index.mdx @@ -407,5 +407,5 @@ While the volume of data processed in this example is quite small, thanks to the Here are some possible extensions to this basic example: - Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g: copilot, serverless, microservice) appear in Hacker News articles? - Define multiple cron triggers for different websites and pass the website as an argument to the function. Or, create multiple functions that feed the same queue. - - Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage](/storage/object/quickstart/) S3 bucket. + - Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage bucket](/storage/object/quickstart/). - Replace the Managed Database for PostgreSQL with a [Scaleway Serverless Database](/serverless/sql-databases/quickstart/), so that all the infrastructure lives in the serverless ecosystem! *Note that at the moment there is no Terraform support for Serverless Database, hence the choice here to use Managed Database for PostgreSQL*. \ No newline at end of file diff --git a/tutorials/deploy-nextcloud-s3/index.mdx b/tutorials/deploy-nextcloud-s3/index.mdx index 83c1be5d02..ed32715dc8 100644 --- a/tutorials/deploy-nextcloud-s3/index.mdx +++ b/tutorials/deploy-nextcloud-s3/index.mdx @@ -143,7 +143,7 @@ NextCloud can use Object Storage as primary storage. This gives you the possibil ``` nano /var/www/nextcloud/config/config.php ``` -3. Add a configuration block for S3-compatible storage, as follows: +3. Add a configuration block for Amazon S3-compatible storage, as follows: ``` 'objectstore' => array( 'class' => '\\OC\\Files\\ObjectStore\\S3', diff --git a/tutorials/deploy-saas-application/index.mdx b/tutorials/deploy-saas-application/index.mdx index a7e1cf1ae8..bdce57cccf 100644 --- a/tutorials/deploy-saas-application/index.mdx +++ b/tutorials/deploy-saas-application/index.mdx @@ -41,7 +41,7 @@ You will learn how to store environment variables with Kubernetes secrets and us In all applications, you have to define settings, usually based on environment variables, so that you can adapt the behavior of your application depending on their values. 
Having used Django to create your SaaS application, the settings you need can be found in a file called `settings.py`. In the following steps, we will modify `settings.py` to connect our private Object Storage bucket to our application. As noted in the requirements for this tutorial, you should have already [created a private Object Storage bucket](/storage/object/how-to/create-a-bucket/) before continuing. -1. Take a look at your Django application's `settings.py` file. Natively, Django does not manage the S3 protocol for storing static files, and it will provide you with a basic configuration at the end of this file: +1. Take a look at your Django application's `settings.py` file. Natively, Django does not manage the Amazon S3 protocol for storing static files, and it will provide you with a basic configuration at the end of this file: ``` STATIC_URL = '/static/' STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') @@ -91,13 +91,13 @@ In all applications, you have to define settings, usually based on environment v - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the [access key and secret key for your Scaleway account](/identity-and-access-management/iam/how-to/create-api-keys/) - `AWS_STORAGE_BUCKET_NAME` is the name you gave your [Object Storage bucket](/storage/object/how-to/create-a-bucket/), e.g. `my_awesome_bucket` - `AWS_S3_REGION_NAME` is the region/zone of your Object Storage Bucket - - `AWS_S3_HOST` and `AWS_S3_ENDPOINT_URL` are the URLs needed to access your S3 bucket. They are composed of the previously defined variables. - - `AWS_LOCATION` is the folder that will be created in our S3 bucket for our static files + - `AWS_S3_HOST` and `AWS_S3_ENDPOINT_URL` are the URLs needed to access your Object Storage bucket. They are composed of the previously defined variables. - - `AWS_LOCATION` is the folder that will be created in our Object Storage bucket for our static files - `STATIC_URL` has changed - - `STATICFILES_STORAGE` defines the new storage class that we want to use, here standard S3 protocol storage. We now need to give values to our environment values, so that they can be correctly found by `settings.py` via `os.getenv('MY_VAR_NAME')`. + - `STATICFILES_STORAGE` defines the new storage class that we want to use, here standard Amazon S3 protocol storage. We now need to give values to our environment variables, so that they can be correctly found by `settings.py` via `os.getenv('MY_VAR_NAME')`. - Remember that S3 is a standard protocol. Even though the `boto3` library asks us to prefix variables with `AWS`, it nonetheless works perfectly with Scaleway Object Storage. + Remember that Amazon S3 is a standard protocol. Even though the `boto3` library asks us to prefix variables with `AWS`, it nonetheless works perfectly with Scaleway Object Storage. Even though we added a lot of lines to `settings.py`, only four environment variables are ultimately needed to use our Object Storage bucket: `ACCESS_KEY_ID`, `SECRET_ACCESS_KEY`, `AWS_S3_REGION_NAME` (eg `nl-ams`) and `AWS_STORAGE_BUCKET_NAME`. These variables are called using `os.getenv('MY_VAR_NAME')` so we now need to set these values.
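To make the variables listed above easier to picture, here is a condensed, illustrative excerpt of the corresponding `settings.py` block, assuming `django-storages` and `boto3` are installed; the `static` folder name and the exact `STATIC_URL` composition are placeholders, not the tutorial's verbatim configuration.

```python
# settings.py (illustrative excerpt): assumes django-storages and boto3 are installed.
import os

AWS_ACCESS_KEY_ID = os.getenv("ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = os.getenv("AWS_STORAGE_BUCKET_NAME")      # e.g. my_awesome_bucket
AWS_S3_REGION_NAME = os.getenv("AWS_S3_REGION_NAME")                # e.g. nl-ams
AWS_S3_ENDPOINT_URL = f"https://s3.{AWS_S3_REGION_NAME}.scw.cloud"  # Scaleway Object Storage endpoint
AWS_LOCATION = "static"                                             # folder created in the bucket

# Store and serve static files through the Amazon S3 protocol (Scaleway Object Storage).
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
STATIC_URL = f"{AWS_S3_ENDPOINT_URL}/{AWS_LOCATION}/"
```

Only the four environment variables named at the end of the step above need to be set for this block to resolve at runtime.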
diff --git a/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx b/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx index 264a034efc..3adcec84f0 100644 --- a/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx +++ b/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx @@ -107,7 +107,7 @@ Docusaurus is available for most operating systems. In this tutorial, we describ 9. Click **Skip this and set up a workflow yourself**. 10. Copy the following code in the text editor, keep the default file name `main.yml` and click **Start commit**: ``` - name: Deploy Docusaurus to S3 + name: Deploy Docusaurus to Object Storage on: push: branches: diff --git a/tutorials/encode-videos-using-serverless-jobs/index.mdx b/tutorials/encode-videos-using-serverless-jobs/index.mdx index 3f68a4d1b4..1ee62f6e53 100644 --- a/tutorials/encode-videos-using-serverless-jobs/index.mdx +++ b/tutorials/encode-videos-using-serverless-jobs/index.mdx @@ -15,7 +15,7 @@ dates: posted: 2024-05-15 --- -This tutorial demonstrates the process of encoding videos retrieved from Object Storage using Serverless Jobs: media encoding is a resource-intensive task over prolonged durations, making it suitable for Serverless Jobs. The job takes a video file as its input, encodes it using a Docker image based on [FFMPEG](https://ffmpeg.org/), then uploads the encoded video back to the S3 bucket. +This tutorial demonstrates the process of encoding videos retrieved from Object Storage using Serverless Jobs: media encoding is a resource-intensive task over prolonged durations, making it suitable for Serverless Jobs. The job takes a video file as its input, encodes it using a Docker image based on [FFMPEG](https://ffmpeg.org/), then uploads the encoded video back to the Object Storage bucket. @@ -28,14 +28,14 @@ This tutorial demonstrates the process of encoding videos retrieved from Object ## Creating the job image -The initial step involves defining a Docker image for interacting with the S3 Object Storage using [MinIO](https://min.io/) and performing a video encoding task using [FFMPEG](https://ffmpeg.org/). +The initial step involves defining a Docker image for interacting with the Object Storage using [MinIO](https://min.io/) and performing a video encoding task using [FFMPEG](https://ffmpeg.org/). 1. Create a bash script `encode.sh` with the following content: ```bash #!/bin/sh set -e - echo "Configuring S3 access for MinIO" + echo "Configuring Object Storage access for MinIO" mc config host add scw "https://$JOB_S3_ENDPOINT/" "$JOB_S3_ACCESS_KEY" "$JOB_S3_SECRET_KEY" echo "Downloading the file from S3" @@ -48,7 +48,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob mc cp "/tmp/$JOB_OUTPUT_FILENAME" "scw/$JOB_OUTPUT_PATH/$JOB_OUTPUT_FILENAME" ``` - That bash script downloads a video from an S3 bucket, encodes that video using FFMPEG, and then uploads the encoded video into the bucket, by leveraging a couple of environment variables which will be detailed in the following sections. + That bash script downloads a video from an Object Storage bucket, encodes that video using FFMPEG, and then uploads the encoded video into the bucket, by leveraging a couple of environment variables which will be detailed in the following sections. For illustration purposes, this script encodes a video using the x264 video codec and the AAC audio codec. 
Encoding settings can be modified using command-line parameters to FFMPEG. @@ -58,7 +58,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob ```dockerfile FROM linuxserver/ffmpeg:amd64-latest - # Install the MinIO S3 client + # Install the MinIO client RUN curl https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/local/bin/mc RUN chmod +x /usr/local/bin/mc @@ -69,7 +69,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob ENTRYPOINT /encode.sh ``` - This Dockerfile uses `linuxserver/ffmpeg` as a base image bundled with FFMPEG along with a variety of encoding codecs and installs [MinIO](https://min.io/) as a command-line S3 client to copy files over Object Storage. + This Dockerfile uses `linuxserver/ffmpeg` as a base image bundled with FFMPEG along with a variety of encoding codecs and installs [MinIO](https://min.io/) as a command-line client to copy files over Object Storage. 3. Build and [push the image](/containers/container-registry/how-to/push-images/) to your Container Registry: ``` @@ -94,7 +94,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob 4. Toggle the **Advanced options** section and add 3 environment variables: - - `JOB_S3_ENDPOINT` is your S3 endpoint (e.g. `s3.nl-ams.scw.cloud`). + - `JOB_S3_ENDPOINT` is your Object Storage endpoint (e.g. `s3.nl-ams.scw.cloud`). - `JOB_S3_ACCESS_KEY` is your API access key. - `JOB_S3_SECRET_KEY` is your API secret key. @@ -104,14 +104,14 @@ The initial step involves defining a Docker image for interacting with the S3 Ob ## Triggering the serverless job -Ensure that your S3 bucket contains at least one video that can be encoded. +Ensure that your Object Storage bucket contains at least one video that can be encoded. 1. In the Scaleway Console, go to **Serverless Jobs** and click on the name of your job. The job **Overview** tab displays. 2. Click the **Actions** button, then click **Run job with options** in the drop-down menu. 3. Add 4 environment variables: - - `JOB_INPUT_PATH` is the folder containing the video to encode, including your S3 bucket name. + - `JOB_INPUT_PATH` is the folder containing the video to encode, including your Object Storage bucket name. - `JOB_INPUT_FILENAME` is the file name of the video to encode, including the file extension. - - `JOB_OUTPUT_PATH` is the folder containing the encoded video that will be uploaded, including your S3 bucket name. + - `JOB_OUTPUT_PATH` is the folder containing the encoded video that will be uploaded, including your Object Storage bucket name. - `JOB_OUTPUT_FILENAME` is the file name of the encoded video that will be uploaded. @@ -120,7 +120,7 @@ Ensure that your S3 bucket contains at least one video that can be encoded. The progress and details for your Job run can be viewed in the **Job runs** section of the job **Overview** tab in the [Scaleway console](https://console.scaleway.com). You can also access the detailed logs of your job in [Cockpit](/observability/cockpit/quickstart/). -Once the run status is **Succeeded**, the encoded video can be found in your S3 bucket under the folder and file name specified above in the environment variables. +Once the run status is **Succeeded**, the encoded video can be found in your Object Storage bucket under the folder and file name specified above in the environment variables. 
Your job can also be triggered through the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-jobs/#path-job-definitions-run-an-existing-job-definition-by-its-unique-identifier-this-will-create-a-new-job-run) using the same environment variables: diff --git a/tutorials/encrypt-s3-data-rclone/index.mdx b/tutorials/encrypt-s3-data-rclone/index.mdx index 21cca69e15..5b79243e56 100644 --- a/tutorials/encrypt-s3-data-rclone/index.mdx +++ b/tutorials/encrypt-s3-data-rclone/index.mdx @@ -7,7 +7,7 @@ content: paragraph: In this tutorial, you will learn how to encrypt your data using Rclone before uploading it to Scaleway Object Storage. categories: - object-storage -tags: encryption s3 rclone +tags: encryption amazon-s3 rclone dates: validation: 2024-09-16 posted: 2020-06-10 @@ -19,7 +19,7 @@ Offering virtual backends, Rclone facilitates encryption, caching, chunking, and Compatible with Windows, macOS X, and various Linux distributions, Rclone addresses a wide user base seeking efficient file management solutions. -In this tutorial, we will explore the capabilities of the **Rclone crypt** module, which empowers users to encrypt their data seamlessly before transmitting it to Scaleway Object Storage via the S3 protocol. +In this tutorial, we will explore the capabilities of the **Rclone crypt** module, which empowers users to encrypt their data seamlessly before transmitting it to Scaleway Object Storage via the Amazon S3 protocol. @@ -65,13 +65,13 @@ brew install rclone sudo mandb ``` -## Configuring an S3 remote endpoint +## Configuring an Object Storage remote endpoint You need to have your [API key](/identity-and-access-management/iam/how-to/create-api-keys/) ready for the `rclone` configuration. -Before encrypting your data, create a new remote S3 endpoint in Rclone using the `rclone config` command: +Before encrypting your data, create a new remote Object Storage endpoint in Rclone using the `rclone config` command: ``` No remotes found - make a new one @@ -187,7 +187,7 @@ e/n/d/r/c/s/q> q `rclone crypt` will use the previously configured endpoint to store the encrypted files. Configure it by running `rclone config` again. -In the config below we define the Object Storage bucket at the `remote` prompt. In our example, we use our S3 endpoint `scaleway` with the bucket `myobjectstoragebucket`. +In the config below we define the Object Storage bucket at the `remote` prompt. In our example, we use our Object Storage endpoint `scaleway` with the bucket `myobjectstoragebucket`. Edit these values towards your configuration. A long passphrase is recommended for security reasons, or you can use a random one. 
diff --git a/tutorials/getting-started-with-kops-on-scaleway/index.mdx b/tutorials/getting-started-with-kops-on-scaleway/index.mdx index 3554068e73..640c656eeb 100644 --- a/tutorials/getting-started-with-kops-on-scaleway/index.mdx +++ b/tutorials/getting-started-with-kops-on-scaleway/index.mdx @@ -41,11 +41,11 @@ export SCW_SECRET_KEY="my-secret-key" export SCW_DEFAULT_PROJECT_ID="my-project-id" # Configure the bucket name to store kops state export KOPS_STATE_STORE=scw:// # where is the name of the bucket you set earlier -# Scaleway Object Storage is S3 compatible so we just override some S3 configurations to talk to our bucket +# Scaleway Object Storage is Amazon S3-compatible so we just override some configurations to talk to our bucket export S3_REGION=fr-par # or another scaleway region providing Object Storage export S3_ENDPOINT=s3.$S3_REGION.scw.cloud # define provider endpoint -export S3_ACCESS_KEY_ID="my-access-key" # where is the S3 API access key for your bucket -export S3_SECRET_ACCESS_KEY="my-secret-key" # where is the S3 API secret key for your bucket +export S3_ACCESS_KEY_ID="my-access-key" # where is the API access key for your bucket +export S3_SECRET_ACCESS_KEY="my-secret-key" # where is the API secret key for your bucket # this is required since Scaleway support is currently in alpha so it is feature gated export KOPS_FEATURE_FLAGS="Scaleway" ``` diff --git a/tutorials/how-to-implement-rag-generativeapis/index.mdx b/tutorials/how-to-implement-rag-generativeapis/index.mdx index 570689a1fc..053bee87d2 100644 --- a/tutorials/how-to-implement-rag-generativeapis/index.mdx +++ b/tutorials/how-to-implement-rag-generativeapis/index.mdx @@ -59,11 +59,11 @@ Create a .env file and add the following variables. These will store your API ke SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance - # Scaleway S3 bucket configuration + # Scaleway Object Storage bucket configuration ## Will be used to store your proprietary data (PDF, CSV etc) SCW_BUCKET_NAME=your_scaleway_bucket_name SCW_REGION=fr-par - SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # S3 main endpoint, e.g., https://s3.fr-par.scw.cloud + SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # Object Storage main endpoint, e.g., https://s3.fr-par.scw.cloud # Scaleway Generative APIs endpoint ## LLM and Embedding model are served through this base URL @@ -196,7 +196,7 @@ page_iterator = paginator.paginate(Bucket=os.getenv("SCW_BUCKET_NAME", "")) In this code sample, we: - Set up a Boto3 session: we initialize a Boto3 session, which is the AWS SDK for Python, fully compatible with Scaleway Object Storage. This session manages configuration, including credentials and settings, that Boto3 uses for API requests. -- Create an S3 client: we establish an S3 client to interact with the Scaleway Object Storage service. +- Create an Amazon S3 client: we establish an Amazon S3 client to interact with the Scaleway Object Storage service. - Set up pagination for listing objects: we prepare pagination to handle potentially large lists of objects efficiently. - Iterate through the bucket: this initiates the pagination process, allowing us to list all objects within the specified Scaleway Object bucket seamlessly.
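For reference, a self-contained sketch of the session, client, and paginator flow summarized in the bullets above; the `SCW_BUCKET_*` names mirror the `.env` keys shown earlier, while `SCW_ACCESS_KEY` and `SCW_SECRET_KEY` are assumed placeholders for the API key pair.

```python
# Illustrative sketch of the Boto3 session -> client -> paginator flow described above.
import os

import boto3

session = boto3.session.Session()  # holds the credentials and settings used for API requests

s3_client = session.client(
    service_name="s3",
    endpoint_url=os.getenv("SCW_BUCKET_ENDPOINT", "https://s3.fr-par.scw.cloud"),
    region_name=os.getenv("SCW_REGION", "fr-par"),
    aws_access_key_id=os.environ["SCW_ACCESS_KEY"],      # assumed variable name
    aws_secret_access_key=os.environ["SCW_SECRET_KEY"],  # assumed variable name
)

# Paginate so that buckets containing many objects are listed page by page.
paginator = s3_client.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=os.getenv("SCW_BUCKET_NAME", "")):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```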
diff --git a/tutorials/how-to-implement-rag/index.mdx b/tutorials/how-to-implement-rag/index.mdx index d6197c4d74..2512c9b477 100644 --- a/tutorials/how-to-implement-rag/index.mdx +++ b/tutorials/how-to-implement-rag/index.mdx @@ -59,9 +59,9 @@ Create a .env file and add the following variables. These will store your API ke SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance - # Scaleway S3 bucket configuration + # Scaleway Object Storage bucket configuration SCW_BUCKET_NAME=your_scaleway_bucket_name - SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # S3 endpoint, e.g., https://s3.fr-par.scw.cloud + SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # Object Storage endpoint, e.g., https://s3.fr-par.scw.cloud # Scaleway Inference API configuration (Embeddings) SCW_INFERENCE_EMBEDDINGS_ENDPOINT="https://{{SCW_INFERENCE_EMBEDDINGS_DEPLOYMENT_ID}}.ifr.fr-par.scaleway.com/v1" # Endpoint for sentence-transformers/sentence-t5-xxl deployment @@ -207,7 +207,7 @@ page_iterator = paginator.paginate(Bucket=BUCKET_NAME) In this code sample we: - Set up a Boto3 session: We initialize a Boto3 session, which is the AWS SDK for Python, fully compatible with Scaleway Object Storage. This session manages configuration, including credentials and settings, that Boto3 uses for API requests. -- Create an S3 client: We establish an S3 client to interact with the Scaleway Object Storage service. +- Create an Amazon S3 client: We establish an Amazon S3 client to interact with the Scaleway Object Storage service. - Set up pagination for listing objects: We prepare pagination to handle potentially large lists of objects efficiently. - Iterate through the bucket: This initiates the pagination process, allowing us to list all objects within the specified Scaleway Object bucket seamlessly. diff --git a/tutorials/k8s-velero-backup/index.mdx b/tutorials/k8s-velero-backup/index.mdx index f88c4673ea..846850de0a 100644 --- a/tutorials/k8s-velero-backup/index.mdx +++ b/tutorials/k8s-velero-backup/index.mdx @@ -14,7 +14,7 @@ dates: posted: 2023-06-02 --- -Velero is an open-source utility designed to facilitate the backup, restoration, and migration of Kubernetes cluster resources and persistent volumes on S3-compatible Object Storage. Originally developed by Heptio, it became part of VMware following an acquisition. Velero offers a straightforward and effective approach to protecting your Kubernetes applications and data through regular backups and supporting disaster recovery measures. +Velero is an open-source utility designed to facilitate the backup, restoration, and migration of Kubernetes cluster resources and persistent volumes on Amazon S3-compatible Object Storage. Originally developed by Heptio, it became part of VMware following an acquisition. Velero offers a straightforward and effective approach to protecting your Kubernetes applications and data through regular backups and supporting disaster recovery measures. With Velero, users can generate either scheduled or on-demand backups encompassing the entire cluster or specific namespaces. These backups comprehensively capture the state of all resources within the cluster, including deployments, services, config maps, secrets, and persistent volumes. Velero ensures the preservation of associated metadata and labels, guaranteeing the completeness and accuracy of the backups for potential restoration. 
diff --git a/tutorials/large-messages/index.mdx b/tutorials/large-messages/index.mdx index 4f851c6a4f..f559e1a46d 100644 --- a/tutorials/large-messages/index.mdx +++ b/tutorials/large-messages/index.mdx @@ -296,7 +296,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl secret_access_key = os.getenv("SECRET_ACCESS_KEY") ``` -10. Get the input file name from the body, define the PDF file name from this, and set up the s3 client to upload the file with Scaleway credentials. +10. Get the input file name from the body, define the PDF file name from this, and set up the Amazon S3 client to upload the file with Scaleway credentials. ```python input_file = event['body'] output_file = os.path.splitext(input_file)[0] + ".pdf" @@ -318,7 +318,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl print("Successfully made pdf file") ``` -12. Download the image from the bucket using the s3 client. +12. Download the image from the bucket using the Amazon S3 client. ```python s3.download_file(bucket_name, input_file, input_file) print("Object " + input_file + " downloaded") @@ -331,7 +331,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl print("Object " + input_file + " uploaded") ``` -14. Put a `try/except` around the code to gracefully handle any errors coming from the S3 client. +14. Put a `try/except` around the code to gracefully handle any errors coming from the Object Storage client. ```python try: s3.download_file(bucket_name, input_file, input_file) diff --git a/tutorials/mastodon-community/index.mdx b/tutorials/mastodon-community/index.mdx index 5fdbaa3d8d..cc50fd453b 100644 --- a/tutorials/mastodon-community/index.mdx +++ b/tutorials/mastodon-community/index.mdx @@ -18,7 +18,7 @@ Mastodon is an open-source, self-hosted, social media and social networking serv As there is no central server, you can choose whether to join or leave an instance according to its policy without actually leaving Mastodon Social Network. Mastodon is a part of [Fediverse](https://fediverse.party/), allowing users to interact with users on other platforms that support the same protocol for example: [PeerTube](https://joinpeertube.org/en/), [Friendica](https://friendi.ca/) and [GNU Social](https://gnu.io/social/). -Mastodon provides the possibility of using [S3 compatible Object Storage](/storage/object/how-to/create-a-bucket/) to store media content uploaded to Instances, making it flexible and scalable. +Mastodon provides the possibility of using [Amazon S3-compatible Object Storage](/storage/object/how-to/create-a-bucket/) to store media content uploaded to Instances, making it flexible and scalable. @@ -338,7 +338,7 @@ Mastodon requires access to a PostgreSQL database to store its configuration and ``` Provider Amazon S3 - S3 bucket name: [scaleway_bucket_name] + Object Storage bucket name: [scaleway_bucket_name] S3 region: fr-par S3 hostname: s3.fr-par.scw.cloud S3 access key: [scaleway_access_key] diff --git a/tutorials/migrate-data-minio-client/index.mdx b/tutorials/migrate-data-minio-client/index.mdx index fa37399a0e..273400d861 100644 --- a/tutorials/migrate-data-minio-client/index.mdx +++ b/tutorials/migrate-data-minio-client/index.mdx @@ -14,7 +14,7 @@ dates: posted: 2019-03-20 --- -The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. 
It can communicate with any S3-compatible cloud storage provider and can be used to migrate data from one region to another. +The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. It can communicate with any Amazon S3-compatible cloud storage provider and can be used to migrate data from one region to another. @@ -53,7 +53,7 @@ The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) prov 2. Optionally, add other providers: - For S3-compatible storage: + For Amazon S3-compatible storage: ``` mc config host add s3 --api S3v4 ``` @@ -74,7 +74,7 @@ The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) prov ``` The commands above: - 1. Migrates data from a S3 compatible Object Storage to Scaleway's **fr-par** Object Storage + 1. Migrates data from an Amazon S3-compatible Object Storage to Scaleway's **fr-par** Object Storage 2. Migrates data from GCS Object Storage to Scaleway's **nl-ams** Object Storage diff --git a/tutorials/migrate-data-rclone/index.mdx b/tutorials/migrate-data-rclone/index.mdx index 7d105cfba9..fd14d7138f 100644 --- a/tutorials/migrate-data-rclone/index.mdx +++ b/tutorials/migrate-data-rclone/index.mdx @@ -14,7 +14,7 @@ dates: posted: 2019-03-20 --- -Rclone provides a modern alternative to `rsync`. The tool communicates with any S3-compatible cloud storage provider as well as other storage platforms and can be used to migrate data from one bucket to another, even if those buckets are in different regions. +Rclone provides a modern alternative to `rsync`. The tool communicates with any Amazon S3-compatible cloud storage provider as well as other storage platforms and can be used to migrate data from one bucket to another, even if those buckets are in different regions. diff --git a/tutorials/nvidia-triton/index.mdx b/tutorials/nvidia-triton/index.mdx index c5a956604e..75967228b7 100644 --- a/tutorials/nvidia-triton/index.mdx +++ b/tutorials/nvidia-triton/index.mdx @@ -46,9 +46,9 @@ For this tutorial, we will use a pre-trained model available in the Triton Infer ./fetch_models.sh ``` 5. Navigate to the `server/docs/examples/model_repository` directory within the cloned repository. -6. Upload the example model folder to your bucket in Scaleway Object Storage. You can use the [Scaleway Object Storage API](/storage/object/api-cli/using-api-call-list/), any S3 compatible tool, or web interface to upload the model folder. +6. Upload the example model folder to your bucket in Scaleway Object Storage. You can use the [Scaleway Object Storage API](/storage/object/api-cli/using-api-call-list/), any Amazon S3-compatible tool, or web interface to upload the model folder. - You can use the `s3cmd` [command-line tool](/tutorials/s3cmd/) or any other S3-compatible tool to upload your data. + You can use the `s3cmd` [command-line tool](/tutorials/s3cmd/) or any other Amazon S3-compatible tool to upload your data. ## Configuring Triton Inference Server diff --git a/tutorials/object-storage-s3fs/index.mdx b/tutorials/object-storage-s3fs/index.mdx index f105cc58bd..9074b3f0e6 100644 --- a/tutorials/object-storage-s3fs/index.mdx +++ b/tutorials/object-storage-s3fs/index.mdx @@ -13,7 +13,7 @@ dates: posted: 2018-07-16 --- -In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). 
`s3fs` is a FUSE-backed file interface for S3, allowing you to mount your S3 buckets on your local Linux or macOS operating system. `s3fs` preserves the native object format for files, so they can be used with other tools including AWS CLI. +In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). `s3fs` is a FUSE-backed file interface for S3, allowing you to mount your Object Storage buckets on your local Linux or macOS operating system. `s3fs` preserves the native object format for files, so they can be used with other tools including AWS CLI. The version of `s3fs` available for installation using the systems package manager does not support files larger than 10 GB. It is therefore recommended to compile a version, including the required corrections, from the s3fs source code repository. This tutorial will guide you through that process. Note that even with the source code compiled version of s3fs, there is a [maximum file size of 128 GB](#configuring-s3fs) when using s3fs with Scaleway Object Storage. @@ -92,7 +92,7 @@ Next, download and install `s3fs-fuse` itself: ## Configuring s3fs -1. Execute the following commands to enter your S3 credentials (separated by a `:`) in a file `$HOME/.passwd-s3fs` and set owner-only permissions. This presumes that you have set your [API credentials](/identity-and-access-management/iam/how-to/create-api-keys/) as environment variables named `ACCESS_KEY` and `SECRET_KEY`: +1. Execute the following commands to enter your credentials (separated by a `:`) in a file `$HOME/.passwd-s3fs` and set owner-only permissions. This presumes that you have set your [API credentials](/identity-and-access-management/iam/how-to/create-api-keys/) as environment variables named `ACCESS_KEY` and `SECRET_KEY`: ``` echo $ACCESS_KEY:$SECRET_KEY > $HOME/.passwd-s3fs chmod 600 $HOME/.passwd-s3fs @@ -123,7 +123,7 @@ Next, download and install `s3fs-fuse` itself: The file system of the mounted bucket will appear in your OS like a local file system. This means you can access the files as if they were on your hard drive. -Note that there are some limitations when using S3 as a file system: +Note that there are some limitations when using Object Storage as a file system: - Random writes or appends to files require rewriting the entire file - Metadata operations such as listing directories have poor performance due to network latency diff --git a/tutorials/restic-s3-backup/index.mdx b/tutorials/restic-s3-backup/index.mdx index db93f9a8d0..9b6ae1227b 100644 --- a/tutorials/restic-s3-backup/index.mdx +++ b/tutorials/restic-s3-backup/index.mdx @@ -15,7 +15,7 @@ dates: posted: 2022-04-04 --- -Restic is a backup tool that allows you to back up your Linux, Windows, Mac, or BSD machines and send your backups to repositories via [different storage protocols](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html), including S3 (Object Storage). +Restic is a backup tool that allows you to back up your Linux, Windows, Mac, or BSD machines and send your backups to repositories via [different storage protocols](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html), including Object Storage. In this tutorial, you learn how to backup a Scaleway Instance running on Ubuntu 20.04 using Restic and Object Storage. 
@@ -48,7 +48,7 @@ In this tutorial, you learn how to backup a Scaleway Instance running on Ubuntu restic version ``` -## Setting up the S3 repository +## Setting up the Object Storage repository A repository is the storage space where your backups will be hosted. In this tutorial, we will use Scaleway Object Storage buckets to host our backups. diff --git a/tutorials/s3-customize-url-cname/index.mdx b/tutorials/s3-customize-url-cname/index.mdx index 8ed7e55216..3f22787a1c 100644 --- a/tutorials/s3-customize-url-cname/index.mdx +++ b/tutorials/s3-customize-url-cname/index.mdx @@ -1,15 +1,15 @@ --- meta: - title: S3 Object Storage - Customizing URLs with CNAME + title: Object Storage - Customizing URLs with CNAME description: This page shows how to use a customized domain name with Object Storage buckets content: - h1: S3 Object Storage - Customizing URLs with CNAME + h1: Object Storage - Customizing URLs with CNAME paragraph: This page shows how to use a customized domain name with Object Storage buckets categories: - storage - object-storage - domains-and-dns -tags: Object-Storage CNAME domain S3 +tags: Object-Storage CNAME domain amazon-S3 dates: validation: 2024-07-16 posted: 2019-05-21 diff --git a/tutorials/setup-nginx-reverse-proxy-s3/index.mdx b/tutorials/setup-nginx-reverse-proxy-s3/index.mdx index fa38b87eeb..091f7480fc 100644 --- a/tutorials/setup-nginx-reverse-proxy-s3/index.mdx +++ b/tutorials/setup-nginx-reverse-proxy-s3/index.mdx @@ -1,11 +1,11 @@ --- meta: - title: Setting up Nginx as a reverse proxy with S3 Object Storage - description: Learn how to configure an Nginx reverse proxy with Scaleway Object Storage (S3) for optimized access and caching. + title: Setting up Nginx as a reverse proxy with Object Storage + description: Learn how to configure an Nginx reverse proxy with Scaleway Object Storage for optimized access and caching. content: - h1: Setting up Nginx as a reverse proxy with S3 Object Storage - paragraph: This guide shows you how to configure an Nginx reverse proxy with Scaleway S3 Object Storage for optimized access and caching. -tags: Object-Storage, S3, reverse-proxy, nginx + h1: Setting up Nginx as a reverse proxy with Object Storage + paragraph: This guide shows you how to configure an Nginx reverse proxy with Scaleway Object Storage for optimized access and caching. +tags: Object-Storage amazon-S3 reverse-proxy nginx categories: - instances - object-storage @@ -156,7 +156,7 @@ You can now access the files of your bucket by going directly to `http://s3proxy ## Configuring Nginx as a reverse proxy for HTTPS -Connections to your S3 proxy are currently available in plain, unencrypted HTTP only. It is possible to encrypt the connection between the client and the Nginx proxy by configuring HTTPS. To do so, we will obtain a free SSL certificate issued by [Let's Encrypt](https://letsencrypt.org/) using [Certbot](https://certbot.eff.org/), a tool to obtain, manage and renew Let's Encrypt certificates automatically. +Connections to your proxy are currently available in plain, unencrypted HTTP only. It is possible to encrypt the connection between the client and the Nginx proxy by configuring HTTPS. To do so, we will obtain a free SSL certificate issued by [Let's Encrypt](https://letsencrypt.org/) using [Certbot](https://certbot.eff.org/), a tool to obtain, manage and renew Let's Encrypt certificates automatically. 1. Add the Certbot repository to apt to download the latest release of the software. 
Certbot is in active development and the packages included in Ubuntu may be already outdated. ``` diff --git a/tutorials/terraform-quickstart/index.mdx b/tutorials/terraform-quickstart/index.mdx index 004d9066df..8b5b215f3e 100644 --- a/tutorials/terraform-quickstart/index.mdx +++ b/tutorials/terraform-quickstart/index.mdx @@ -590,7 +590,7 @@ Apply the new configuration using `terraform apply`. Terraform will add an Elast ## Storing the Terraform state in the cloud -Optionally, you can use the S3 Backend to store your Terraform state in a [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/). Configure your backend as follows: +Optionally, you can store your Terraform state with [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/). Configure your backend as follows: ```json terraform { diff --git a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx index 68b922460d..6fa010894c 100644 --- a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx +++ b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx @@ -1,10 +1,10 @@ --- meta: - title: Transforming images in an S3 bucket using Serverless Functions and Triggers - Deployment - description: This page shows you how to create and deploy functions to transform images in an S3 bucket using Serverless Functions and Triggers + title: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Deployment + description: This page shows you how to create and deploy functions to transform images in an Object Storage bucket using Serverless Functions and Triggers content: - h1: Transforming images in an S3 bucket using Serverless Functions and Triggers - Deployment - paragraph: This page shows you how to create and deploy functions to transform images in an S3 bucket using Serverless Functions and Triggers + h1: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Deployment + paragraph: This page shows you how to create and deploy functions to transform images in an Object Storage bucket using Serverless Functions and Triggers categories: - functions - messaging @@ -52,7 +52,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri const SQS_ENDPOINT = process.env.SQS_ENDPOINT; const S3_ENDPOINT = `https://s3.${S3_REGION}.scw.cloud`; - // Create S3 service object + // Create Object Storage service object const s3Client = new S3Client({ credentials: { accessKeyId: S3_ACCESS_KEY_ID, @@ -174,7 +174,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri width = 200; } - // Create S3 service object + // Create Object Storage service object const s3Client = new S3Client({ credentials: { accessKeyId: S3_ACCESS_KEY_ID, @@ -222,7 +222,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri }; }; - // Download the image from the S3 source bucket. + // Download the image from the Object Storage source bucket. 
try { const input = { Bucket: SOURCE_BUCKET, diff --git a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx index 0314dc3f5c..1960b96af4 100644 --- a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx +++ b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx @@ -1,10 +1,10 @@ --- meta: - title: Transforming images in an S3 bucket using Serverless Functions and Triggers - Set up - description: This page shows you how to set up your environment to transform images in an S3 bucket using Serverless Functions and Triggers + title: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Set up + description: This page shows you how to set up your environment to transform images in an Object Storage bucket using Serverless Functions and Triggers content: - h1: Transforming images in an S3 bucket using Serverless Functions and Triggers - Set up - paragraph: This page shows you how to set up your environment to transform images in an S3 bucket using Serverless Functions and Triggers + h1: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Set up + paragraph: This page shows you how to set up your environment to transform images in an Object Storage bucket using Serverless Functions and Triggers categories: - messaging - functions diff --git a/tutorials/veeam-backup-replication-s3/index.mdx b/tutorials/veeam-backup-replication-s3/index.mdx index e160419678..31ac4ddfea 100644 --- a/tutorials/veeam-backup-replication-s3/index.mdx +++ b/tutorials/veeam-backup-replication-s3/index.mdx @@ -18,7 +18,7 @@ dates: The solution provides backup, restore, and replication functionality for virtual machines, physical servers, and workstations as well as cloud-based workloads. -A native S3 interface for Veeam Backup & Replication is part of the Release 9.5 update 4, available in General Availability since January 22nd, 2019. It allows to push backups to an S3-compatible service to maximize backup capacity. +A native Object Storage interface for Veeam Backup & Replication is part of the Release 9.5 update 4, available in General Availability since January 22nd, 2019. It allows you to push backups to an Amazon S3-compatible service to maximize backup capacity. The following schema represents the functionality of Veeam Backup and Restore which acts as an intermediate agent to manage primary data storage and secondary and archival storage: @@ -78,7 +78,7 @@ The following schema represents the functionality of Veeam Backup and Restore wh For a bucket located in the Amsterdam region, the service point is `s3.nl-ams.scw.cloud` and the region is `nl-ams`. -11. Veeam will connect to the S3 infrastructure and download the list of Object Storage Buckets. Choose the bucket to be used with Veeam from the drop-down list, click **Browse**, and create and select the folder for storing backups. Then click **Next**: +11. Veeam will connect to the Object Storage infrastructure and download the list of buckets. Choose the bucket to be used with Veeam from the drop-down list, click **Browse**, and create and select the folder for storing backups. Then click **Next**: @@ -87,7 +87,7 @@ The following schema represents the functionality of Veeam Backup and Restore wh ### Configuring a local backup repository -1.
As Veeam cannot currently push backups directly to S3, a local backup repository is required which will be configured as **Storage Tier** with Object Storage in a later step. Click **Add Repository**: +1. As Veeam cannot currently push backups directly to an Amazon S3-compatible system, a local backup repository is required which will be configured as **Storage Tier** with Object Storage in a later step. Click **Add Repository**: 2. Choose **Direct Attached Storage** from the provided options: @@ -175,7 +175,7 @@ This section is designed to help you solve common issues encountered while perfo #### Cause -The application cannot access the S3 resource. +The application cannot access the Object Storage resource. #### Solution @@ -200,7 +200,7 @@ Scaleway Object Storage applies a rate limit on PUT operations for safety reason #### Solution -You can limit the number of concurrent tasks and update the timeout duration of S3 requests on the Veeam Backup & Replication server managing the backup copy operation by adding the elements below: +You can limit the number of concurrent tasks and update the timeout duration of Object Storage requests on the Veeam Backup & Replication server managing the backup copy operation by adding the elements below: ``` HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication @@ -228,7 +228,7 @@ You may experience reduced throughput due to the limitation. If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below: -- S3 Endpoint (e.g. `s3.fr-par.scw.cloud`) +- Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`) - Bucket name - Object name (if the request concerns an object) - Request type (PUT, GET, etc.)