From 52630787243eba0eca753c41737a5795c6bd7064 Mon Sep 17 00:00:00 2001 From: jcirino Date: Thu, 24 Apr 2025 18:30:21 +0200 Subject: [PATCH 1/2] docs(gen): fix typos and harmonize content --- .../account/how-to/open-a-support-ticket.mdx | 5 +-- pages/audit-trail/quickstart.mdx | 5 +-- .../configure-alerts-for-scw-resources.mdx | 2 +- pages/cockpit/index.mdx | 2 +- .../how-to/track-monthly-footprint.mdx | 2 +- .../troubleshooting/fixing-common-issues.mdx | 34 +++++++------- .../gpu/how-to/use-nvidia-mig-technology.mdx | 6 +-- .../gpu-instances-bandwidth-overview.mdx | 3 +- pages/instances/api-cli/using-cloud-init.mdx | 6 +-- pages/instances/api-cli/using-routed-ips.mdx | 6 +-- pages/instances/faq.mdx | 6 +-- .../choosing-shared-vs-dedicated-cpus.mdx | 4 +- ...-lost-ip-connectivity-on-debian-buster.mdx | 2 +- pages/key-manager/concepts.mdx | 8 ++-- pages/kubernetes/concepts.mdx | 10 ++--- .../kubernetes/how-to/edit-kosmos-cluster.mdx | 4 +- .../reference-content/multi-az-clusters.mdx | 10 ++--- ...derstanding-differences-kapsule-kosmos.mdx | 12 ++--- pages/load-balancer/index.mdx | 4 +- pages/managed-inference/faq.mdx | 10 ++--- pages/messaging/faq.mdx | 6 +-- .../object-storage/how-to/create-a-bucket.mdx | 2 +- .../use-obj-stor-with-private-networks.mdx | 4 +- pages/object-storage/quickstart.mdx | 4 +- .../secret-manager/how-to/create-version.mdx | 5 +-- .../secret-manager/how-to/delete-version.mdx | 8 ++-- pages/troubleshooting/index.mdx | 2 +- tutorials/configure-graphite/index.mdx | 6 +-- tutorials/configure-slack-alerting/index.mdx | 11 +++-- .../index.mdx | 44 +++++++++---------- tutorials/deploy-nextcloud-s3/index.mdx | 10 ++--- tutorials/discourse-forum/index.mdx | 6 +-- tutorials/erpnext-13/index.mdx | 4 +- tutorials/foreman-puppet/index.mdx | 4 +- tutorials/install-cassandra/index.mdx | 2 +- tutorials/install-parse-server/index.mdx | 4 +- tutorials/k8s-gitlab/index.mdx | 2 +- .../index.mdx | 4 +- tutorials/mastodon-community/index.mdx | 4 +- 
tutorials/matomo-analytics/index.mdx | 2 +- .../index.mdx | 8 ++-- .../index.mdx | 6 +-- tutorials/postman-api/index.mdx | 4 +- .../index.mdx | 4 +- tutorials/remote-desktop-with-xrdp/index.mdx | 6 +-- tutorials/setup-k8s-cluster-rancher/index.mdx | 16 +++---- tutorials/setup-nomad-cluster/index.mdx | 2 +- tutorials/snapshot-instances-jobs/index.mdx | 10 ++--- .../index.mdx | 6 +-- .../index.mdx | 2 +- .../index.mdx | 8 ++-- tutorials/wordpress-lemp-stack/index.mdx | 2 +- tutorials/zammad-ticketing/index.mdx | 10 ++--- tutorials/zulip/index.mdx | 4 +- 54 files changed, 174 insertions(+), 189 deletions(-) diff --git a/pages/account/how-to/open-a-support-ticket.mdx b/pages/account/how-to/open-a-support-ticket.mdx index 7ced6dd8a1..f8862ba3a6 100644 --- a/pages/account/how-to/open-a-support-ticket.mdx +++ b/pages/account/how-to/open-a-support-ticket.mdx @@ -52,7 +52,7 @@ Providing a clear subject and description will help us resolve your issue faster Example: “The issue occurs when attempting to start an Instance after applying a configuration update in the Scaleway console.” - **Expected behavior:** explain what you expected to happen. -Example: “The instance should start within 2 minutes without errors.” +Example: “The Instance should start within 2 minutes without errors.” - **Actual behavior:** describe what is happening instead. Example: “The Instance remains in "Starting" status for over 10 minutes and then switches to "Error". @@ -71,7 +71,6 @@ Examples: - Screenshot of the network tab of your browser’s Developer Tools (right-click anywhere on the page and select **Inspect**. Go to the **Network tab** in the Developer Tools panel.) - Logs - If you have lost access to the Scaleway console and want to create a ticket, you must first [follow this procedure](/account/how-to/use-2fa/#how-to-regain-access-to-your-account) to regain access to your account. 
- + \ No newline at end of file diff --git a/pages/audit-trail/quickstart.mdx b/pages/audit-trail/quickstart.mdx index d296ff5b33..d15184fb59 100644 --- a/pages/audit-trail/quickstart.mdx +++ b/pages/audit-trail/quickstart.mdx @@ -39,7 +39,4 @@ Refer to the [dedicated documentation page](/audit-trail/how-to/configure-audit- If no events display after you use the filter, try switching the region from the **Region** drop-down, or adjusting your search. Find out how to troubleshoot event issues in our [dedicated documentation](/audit-trail/troubleshooting/cannot-see-events/). - - - - + \ No newline at end of file diff --git a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx index dc209314b9..53b877ee9a 100644 --- a/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx +++ b/pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx @@ -52,5 +52,5 @@ This page shows you how to configure alerts for Scaleway resources in Grafana us - Find out how to send Cockpit's alert notifications to Slack using a webkook URL in our [dedicated documentation](/tutorials/configure-slack-alerting/). + Find out how to send Cockpit's alert notifications to Slack using a webhook URL in our [dedicated documentation](/tutorials/configure-slack-alerting/). 
\ No newline at end of file diff --git a/pages/cockpit/index.mdx b/pages/cockpit/index.mdx index 6f8cfff7a2..3e0ea38d27 100644 --- a/pages/cockpit/index.mdx +++ b/pages/cockpit/index.mdx @@ -52,7 +52,7 @@ meta: diff --git a/pages/environmental-footprint/how-to/track-monthly-footprint.mdx b/pages/environmental-footprint/how-to/track-monthly-footprint.mdx index f480d54a2b..aee8501162 100644 --- a/pages/environmental-footprint/how-to/track-monthly-footprint.mdx +++ b/pages/environmental-footprint/how-to/track-monthly-footprint.mdx @@ -42,7 +42,7 @@ categories: For a detailed description of how the water consumption is calculated, refer to the [Water Consumption section](/environmental-footprint/additional-content/environmental-footprint-calculator/#water-consumption) of the Environmental Footprint calculation breakdown documentation page. - **5.** The total water consumption and carbon footprint of each of your Projects. - - **6.** The total water consumption and carbon footprint per geographical location (Region and Availability Zone) + - **6.** The total water consumption and carbon footprint per geographical location (region and Availability Zone) - **7.** The total water consumption and carbon footprint of each of your products. For both the carbon emissions, and the water consumption, the power consumption of your active resources is used in the calculation. The way you use your resources has a direct impact on power consumption. Therefore, results may vary greatly from one month to another. 
diff --git a/pages/generative-apis/troubleshooting/fixing-common-issues.mdx b/pages/generative-apis/troubleshooting/fixing-common-issues.mdx index f24c25791d..1b7fbd2484 100644 --- a/pages/generative-apis/troubleshooting/fixing-common-issues.mdx +++ b/pages/generative-apis/troubleshooting/fixing-common-issues.mdx @@ -16,15 +16,15 @@ Below are common issues that you may encounter when using Generative APIs, their ## 400: Bad Request - You exceeded maximum context window for this model ### Cause -- You provided an input exceeding the maximum context window (also known as context length) for the model you are using. -- You provided a long input and requested a long input (in `max_completion_tokens` field), which added together, exceed the maximum context window of the model you are using. +- You provided an input exceeding the maximum context window (also known as context length) for the model you are using. +- You provided a long input and requested a long output (in the `max_completion_tokens` field), which, added together, exceed the maximum context window of the model you are using. ### Solution -- Reduce your input size below what is [supported by the model](/generative-apis/reference-content/supported-models/). +- Reduce your input size below what is [supported by the model](/generative-apis/reference-content/supported-models/). - Use a model supporting longer context window values. - Use [Managed Inference](/managed-inference/), where the context window can be increased for [several configurations with additional GPU vRAM](/managed-inference/reference-content/supported-models/). For instance, `llama-3.3-70b-instruct` model in `fp8` quantization can be served with: - - `15k` tokens context window on `H100` instances - - `128k` tokens context window on `H100-2` instances. 
+ - `15k` tokens context window on `H100` Instances + - `128k` tokens context window on `H100-2` Instances ## 403: Forbidden - Insufficient permissions to access the resource @@ -46,7 +46,7 @@ Below are common issues that you may encounter when using Generative APIs, their - You provided a value for `max_completion_tokens` that is too high and not supported by the model you are using. ### Solution -- Remove `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/). +- Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/). - As an example, when using the [init_chat_model from Langchain](https://python.langchain.com/api_reference/_modules/langchain/chat_models/base.html#init_chat_model), you should edit the `max_tokens` value in the following configuration: ```python llm = init_chat_model("llama-3.3-70b-instruct", max_tokens="8000", model_provider="openai", base_url="https://api.scaleway.ai/v1", temperature=0.7) ``` @@ -57,16 +57,16 @@ Below are common issues that you may encounter when using Generative APIs, their ## 416: Range Not Satisfiable - max_completion_tokens is limited for this model ### Cause -- You provided `max_completion_tokens` value too high, that is not supported by the model you are using. +- You provided a `max_completion_tokens` value that is too high, which is not supported by the model you are using. ### Solution -- Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/). 
+- Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/). - As an example, when using the [init_chat_model from Langchain](https://python.langchain.com/api_reference/_modules/langchain/chat_models/base.html#init_chat_model), you should edit the `max_tokens` value in the following configuration: ```python llm = init_chat_model("llama-3.3-70b-instruct", max_tokens="8000", model_provider="openai", base_url="https://api.scaleway.ai/v1", temperature=0.7) ``` - Use a model supporting a higher `max_completion_tokens` value. -- Use [Managed Inference](/managed-inference/), where these limits on completion tokens do not apply (your completion tokens amount will still be limited by the maximum context window supported by the model). +- Use [Managed Inference](/managed-inference/), where these limits on completion tokens do not apply (the number of completion tokens will still be limited by the maximum context window supported by the model). ## 429: Too Many Requests - You exceeded your current quota of requests/tokens per minute @@ -79,7 +79,7 @@ Below are common issues that you may encounter when using Generative APIs, their - [Add a payment method](/billing/how-to/add-payment-method/#how-to-add-a-credit-card) and [validate your identity](/account/how-to/verify-identity/) to increase automatically your quotas [based on standard limits](/organizations-and-projects/additional-content/organization-quotas/#generative-apis). - [Ask our support](https://console.scaleway.com/support/tickets/create) to raise your quota. - Reduce the size of the input or output tokens processed by your API requests. 
-- Use [Managed Inference](/managed-inference/), where these quota do not apply (your throughput will be only limited by the amount of Inference Deployment your provision) +- Use [Managed Inference](/managed-inference/), where these quotas do not apply (your throughput will only be limited by the number of Inference Deployments you provision) ## 429: Too Many Requests - You exceeded your current threshold of concurrent requests ### Cause - You kept too many API requests opened at the same time (number of HTTP sessions opened in parallel) ### Solution -- Smooth out your API requests rate by limiting the number of API requests you perform at the same time (eg. requests which did not receive a complete response and are still opened) so that you remain below your [organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis). +- Smooth out your API request rate by limiting the number of API requests you perform at the same time (e.g. requests which have not yet received a complete response and are still open) so that you remain below your [Organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis). - Use [Managed Inference](/managed-inference/), where concurrent request limit do not apply. Note that exceeding the number of concurrent requests your Inference Deployment can handle may impact performance metrics. 
@@ -162,15 +162,15 @@ Below are common issues that you may encounter when using Generative APIs, their - Counter for **Tokens Processed** or **API Requests** should display a correct value (different from 0) - Graph across time should be empty -## Embeddings vectors cannot be stored in database or used with a third-party library +## Embeddings vectors cannot be stored in a database or used with a third-party library ### Cause The embedding model you are using generates vector representations with a fixed dimension number, which is too high for your database or third-party library. - For example, the embedding model `bge-multilingual-gemma2` generates vector representations with `3584` dimensions. However, when storing vectors using PostgreSQL `pgvector` extensions, indexes (in `hnsw` or `ivvflat` formats) only support up to `2000` dimensions. ### Solution -- Use a vector store supporting higher dimensions number, such as [Qdrant](https://www.scaleway.com/en/docs/tutorials/deploying-qdrant-vectordb-kubernetes/). -- Do not use indexes for vectors or disable them from your third-party library. This may limit performance in vector similarity search for significant volumes. +- Use a vector store supporting a higher number of dimensions, such as [Qdrant](https://www.scaleway.com/en/docs/tutorials/deploying-qdrant-vectordb-kubernetes/). +- Do not use indexes for vectors or disable them from your third-party library. This may limit performance in vector similarity search for significant volumes. - When using [Langchain PGVector method](https://python.langchain.com/docs/integrations/vectorstores/pgvector/), this method does not create an index by default and should not raise errors. - When using the [Mastra](https://mastra.ai/) library with `vectorStoreName: "pgvector"`, specify indexConfig type as `flat` to avoid creating any index on vector dimensions. 
```typescript @@ -180,7 +180,7 @@ indexConfig: {"type":"flat"}, }); ``` -- Use a model with a lower number of dimensions. Using [Managed Inference](https://console.scaleway.com/inference/deployments), you can deploy for instance the`sentence-t5-xxl` model, which represents vectors with `768` dimensions. +- Use a model with a lower number of dimensions. Using [Managed Inference](https://console.scaleway.com/inference/deployments), you can deploy, for instance, the `sentence-t5-xxl` model, which represents vectors with `768` dimensions. ## Previous messages are not taken into account by the model @@ -219,7 +219,7 @@ response = client.chat.completions.create( print(response.choices[0].message.content) ``` This snippet will output the model response, which is `4`. -When exceeding maximum context window, you should receive a `400 - BadRequestError` detailing context length value you exceeded. In this case, you should reduce the size of the content you send to the API. +When exceeding the maximum context window, you should receive a `400 - BadRequestError` detailing the context length value you exceeded. In this case, you should reduce the size of the content you send to the API. ## Best practices for optimizing model performance @@ -234,4 +234,4 @@ This snippet will output the model response, which is `4`. ### Debugging silent errors - For cases where no explicit error is returned: - Verify all fields in the API request are correctly named and formatted. - - Test the request with smaller and simpler inputs to isolate potential issues. + - Test the request with smaller and simpler inputs to isolate potential issues. 
\ No newline at end of file diff --git a/pages/gpu/how-to/use-nvidia-mig-technology.mdx b/pages/gpu/how-to/use-nvidia-mig-technology.mdx index 0e942d64d8..4b694fd1e3 100644 --- a/pages/gpu/how-to/use-nvidia-mig-technology.mdx +++ b/pages/gpu/how-to/use-nvidia-mig-technology.mdx @@ -15,7 +15,7 @@ categories: * Scaleway offers MIG-compatible GPU Instances such as H100 PCIe GPU Instances - * NVIDIA uses the term *GPU instance* to designate a MIG partition of a GPU (MIG= Multi-Instance GPU) + * NVIDIA uses the term *GPU instance* to designate a MIG partition of a GPU (MIG = Multi-Instance GPU) * To avoid confusion, we will use the term GPU Instance in this document to designate the Scaleway GPU Instance, and *MIG partition* in the context of the MIG feature. @@ -151,10 +151,10 @@ Refer to the official documentation for more information about the supported [MI * `-cgi 9,19,19,19`: this flag specifies the MIG partition configuration. The numbers following the flag represent the MIG partitions for each of the four MIG device slices. In this case, there are four slices with configurations 9, 19, 19, and 19 compute instances each. These numbers correspond to the profile IDs retrieved previously. Note that you can use either of the following: * Profile ID (e.g. 9, 14, 5) * Short name of the profile (e.g. `3g.40gb`) - * Full profile name of the instance (e.g. `MIG 3g.40gb`) + * Full profile name of the MIG partition (e.g. `MIG 3g.40gb`) * `-C`: this flag automatically creates the corresponding compute instances for the MIG partitions. - The command instructs the `nvidia-smi` tool to set up a MIG configuration where the GPU is divided into four slices, each containing different numbers of MIG partition configurations as specified: an MIG 3g.40gb (Profile ID 9) for the first slice, and an MIG 1g.10gb (Profile ID 19) for each of the remaining three slices. 
+ The command instructs the `nvidia-smi` tool to set up a MIG configuration where the GPU is divided into four slices, each containing different numbers of MIG partition configurations as specified: a MIG 3g.40gb (Profile ID 9) for the first slice, and a MIG 1g.10gb (Profile ID 19) for each of the remaining three slices. - Running CUDA workloads on the GPU requires the creation of MIG partitions along with their corresponding compute instances. Just enabling MIG mode on the GPU is not enough to achieve this. diff --git a/pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx b/pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx index f1ca1d8402..e975735113 100644 --- a/pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx +++ b/pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx @@ -13,9 +13,8 @@ categories: - compute --- - Scaleway GPU Instances are designed to deliver **high-performance computing** for AI/ML workloads, rendering, scientific simulations, and visualization tasks. -This guide provides a detailed overview of their **internet and Block Storage bandwidth capabilities** to help you choose the right instance for your GPU-powered workloads. +This guide provides a detailed overview of their **internet and Block Storage bandwidth capabilities** to help you choose the right Instance for your GPU-powered workloads. ### Why bandwidth matters for GPU Instances diff --git a/pages/instances/api-cli/using-cloud-init.mdx b/pages/instances/api-cli/using-cloud-init.mdx index ce7a736947..f1f6ed3c14 100644 --- a/pages/instances/api-cli/using-cloud-init.mdx +++ b/pages/instances/api-cli/using-cloud-init.mdx @@ -30,7 +30,7 @@ Cloud-config files are special scripts designed to be run by the cloud-init proc You can give provisioning instructions to cloud-init using the `cloud-init` key of the `user_data` facility. 
-For `user_data` to be effective, it has to be added prior to the creation of the instance since `cloud-init` gets activated early in the first phases of the boot process. +For `user_data` to be effective, it has to be added prior to the creation of the Instance since `cloud-init` gets activated early in the first phases of the boot process. * **Server ID** refers to the unique identification string of your server. It will be displayed when you create your server. You can also recover it from the list of your servers, by typing `scw instance server list`. @@ -88,6 +88,4 @@ Subcommands: ```` -For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html). - - +For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html). \ No newline at end of file diff --git a/pages/instances/api-cli/using-routed-ips.mdx b/pages/instances/api-cli/using-routed-ips.mdx index d8f61c7b83..2c80ddffb3 100644 --- a/pages/instances/api-cli/using-routed-ips.mdx +++ b/pages/instances/api-cli/using-routed-ips.mdx @@ -491,7 +491,7 @@ Then you can create a new Instance using those IPs through the `public_ips` fiel ❯ http post $API_URL/servers $HEADERS - In order to create Instance you have to add `"routed_ip_enabled": true` to your payload. + To create an Instance, you must add `"routed_ip_enabled": true` to your payload. @@ -648,7 +648,7 @@ You can use a specific server action to move an existing (legacy network) Instan ❯ http post $API_URL/servers/$SERVER_ID/action $HEADERS action=enable_routed_ip ``` - Your instance *will* reboot during this action. + Your Instance *will* reboot during this action. 
@@ -1002,5 +1002,3 @@ You can verify if your Instance is enabled for routed IPs through the `/servers` ``` - - diff --git a/pages/instances/faq.mdx b/pages/instances/faq.mdx index 98b358b05f..2ba8dedfa9 100644 --- a/pages/instances/faq.mdx +++ b/pages/instances/faq.mdx @@ -282,11 +282,11 @@ Scaleway offers different Instance ranges in all regions: Paris (France), Amster Check the [Instances availability guide](/account/reference-content/products-availability/) to discover where each Instance type is available. -### What makes FR-PAR-2 a sustainable Region? +### What makes FR-PAR-2 a sustainable region? `FR-PAR-2` is our sustainable and environmentally efficient Availability Zone (AZ) in Paris. -This Region is entirely powered by renewable (hydraulic) energy. It also has an energetic footprint 30-40% lower than traditional data centers, thanks to the fact that it does not require air conditioning. Learn more about [our environmental commitment](https://www.scaleway.com/en/environmental-leadership/). +This region is entirely powered by renewable (hydraulic) energy. It also has an energy footprint 30-40% lower than traditional data centers, thanks to the fact that it does not require air conditioning. Learn more about [our environmental commitment](https://www.scaleway.com/en/environmental-leadership/). ## Network @@ -326,4 +326,4 @@ Yes, they can communicate with each other using their public IPs. Currently, additional routed IPv6 addresses do not autoconfigure on CentOS 7, 8, 9, Alma 8, 9, Rocky 8, 9 after migration. Additional routed IPv4 and IPv6 addresses are not autoconfigured post-migration on Ubuntu 20.04 Focal. However, the primary IPv6 continues to be configured via SLAAC. These limitations are currently being addressed. -For detailed migration steps, refer to our [migration guide](/instances/how-to/migrate-routed-ips/). 
If you encounter connectivity issues with Ubuntu Focal Instances having multiple public IPs, consult our [troubleshooting guide](/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips/). +For detailed migration steps, refer to our [migration guide](/instances/how-to/migrate-routed-ips/). If you encounter connectivity issues with Ubuntu Focal Instances having multiple public IPs, consult our [troubleshooting guide](/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips/). \ No newline at end of file diff --git a/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx b/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx index 12d55ce15c..9ee3061c48 100644 --- a/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx +++ b/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx @@ -40,7 +40,7 @@ While physical CPU threads are shared between Instances, vCPUs are dedicated to ### Typical use cases - Development and staging environments -- Small and non critical production environments +- Small and non-critical production environments - Low to medium-traffic websites - Personal blogs and forums - Applications tolerant to occasional performance variability @@ -86,4 +86,4 @@ Choose **dedicated vCPU** Instances if: Consider your needs and workload requirements to choose the best vCPU provisioning option for your Scaleway Instance. -For more details about available instance types, refer to [Choosing the best Scaleway Instance type for your workload](/instances/reference-content/choosing-instance-type/). +For more details about available Instance types, refer to [Choosing the best Scaleway Instance type for your workload](/instances/reference-content/choosing-instance-type/). 
\ No newline at end of file diff --git a/pages/instances/troubleshooting/fix-lost-ip-connectivity-on-debian-buster.mdx b/pages/instances/troubleshooting/fix-lost-ip-connectivity-on-debian-buster.mdx index 2ba1bb94a5..35f6de2a70 100644 --- a/pages/instances/troubleshooting/fix-lost-ip-connectivity-on-debian-buster.mdx +++ b/pages/instances/troubleshooting/fix-lost-ip-connectivity-on-debian-buster.mdx @@ -13,7 +13,7 @@ categories: - compute --- -On older Debian Buster images, the installed custom version of `cloud-init` may interfere with IPv6 connectivity when the instance transitions to using routed IP. To avoid this, you should install a newer version of `cloud-init` before the migration to routed IP. This procedure also recovers connectivity for an instance already using routed IP. +On older Debian Buster images, the installed custom version of `cloud-init` may interfere with IPv6 connectivity when the Instance transitions to using routed IP. To avoid this, you should install a newer version of `cloud-init` before the migration to routed IP. This procedure also recovers connectivity for an Instance already using routed IP. This guide addresses specific issues related to older Debian Buster Instances transitioned to routed IP. For general information on routed IPs and migration procedures, refer to our [main migration guide](/instances/how-to/migrate-routed-ips/) and the [related FAQ](/instances/faq/#are-there-any-limitations-on-ip-autoconfiguration-with-the-routed-ip-feature). diff --git a/pages/key-manager/concepts.mdx b/pages/key-manager/concepts.mdx index 2c8758df45..850bc57d4e 100644 --- a/pages/key-manager/concepts.mdx +++ b/pages/key-manager/concepts.mdx @@ -97,12 +97,12 @@ For example, in the `AES-256-GCM` encryption scheme: A key encryption key (KEK) is a type of key that has a single purpose: encrypting and decrypting [data encryption keys](#data-encryption-key-dek). -The KEK is permanently stored in Scaleway's Key Manager and never leaves it. 
It cannot be accessed by anyone, and should be [rotated](/key-manager/api-cli/rotate-keys-api-cli/) regularly. +The KEK is permanently stored in Scaleway's Key Manager and never leaves it. It cannot be accessed by anyone and should be [rotated](/key-manager/api-cli/rotate-keys-api-cli/) regularly. ## Key management Key management is the process of handling keys used in cryptographic systems to ensure the security and integrity of your cryptographic operations. This includes the generation, exchange, storage, usage, and disposal of these keys. -Although strong cipher algorithms allow you to protect your information with secret keys, your data is only protected as long as your encryption keys are kept secret from non-authorized individuals. +Although strong cipher algorithms allow you to protect your information with secret keys, your data is only protected as long as your encryption keys are kept secret from unauthorized individuals. ## Key protection @@ -139,7 +139,7 @@ Plaintext refers to unencrypted, readable data. In the context of key management ## Region -A Region refers to the **geographical location** in which your key will be created. **Each region contains multiple Availability Zones**. Your key will be duplicated on **all Availability Zones** of the selected region. Scaleway is available in the Paris, Amsterdam, and Warsaw regions. +A region refers to the **geographical location** in which your key will be created. **Each region contains multiple Availability Zones**. Your key will be duplicated on **all Availability Zones** of the selected region. Scaleway is available in the Paris, Amsterdam, and Warsaw regions. ## Root encryption key (REK) @@ -151,4 +151,4 @@ Symmetric encryption is a fundamental type of cryptographic method where the sam Because symmetric encryption relies on a single key, it is generally fast and ideal for encrypting large volumes of data. However, its security depends entirely on keeping the key confidential. 
-Symmetric encryption algorithms like AES are widely used in scenarios where speed and efficiency are critical. As of now, Key Manager only supports the `AES_256_GCM` symmetric encryption algorithm. +Symmetric encryption algorithms like AES are widely used in scenarios where speed and efficiency are critical. As of now, Key Manager only supports the `AES_256_GCM` symmetric encryption algorithm. \ No newline at end of file diff --git a/pages/kubernetes/concepts.mdx b/pages/kubernetes/concepts.mdx index 202b14c2e3..fbc01b938f 100644 --- a/pages/kubernetes/concepts.mdx +++ b/pages/kubernetes/concepts.mdx @@ -26,7 +26,7 @@ Auto-upgrade allows you to schedule a maintenance window for your cluster to be ## Cluster -A cluster is a set of machines, called nodes, running containerized applications managed by [Kubernetes](#kubernetes). A cluster has several worker nodes and at least one control plane. Refer to the [control plane](#control-plane) concept for more information regarding the cluster's limitations. Scaleway allows you to create two configurations of clusters: [Kapsule](#kubernetes-kapsule), for clusters comprising Scaleway Instances, and [Kosmos](#kubernetes-kosmos), for multi-cloud clusters. Clusters can be tailored to availability requirements depending on the cluster availability types: zonal (single-zone or multi-AZ) and regional. +A cluster is a set of machines, called nodes, running containerized applications managed by [Kubernetes](#kubernetes). A cluster has several worker nodes and at least one control plane. Refer to the [control plane](#control-plane) concept for more information regarding the cluster's limitations. Scaleway allows you to create two configurations of clusters: [Kapsule](#kubernetes-kapsule), for clusters comprising Scaleway Instances, and [Kosmos](#kubernetes-kosmos), for multi-cloud clusters. Clusters can be tailored to availability requirements depending on the cluster availability types: zonal (single-zone or multi-AZ) and regional. 
## Cluster types @@ -36,11 +36,11 @@ A single-zone cluster has its control plane operating in one zone, managing work ### Multi-AZ clusters -A Multi-AZ cluster features a single control plane in one zone but has nodes running across multiple zones. In case of a control plane outage or during cluster upgrades, workloads continue to run. However, the cluster and its workloads cannot be configured until the control plane is restored. Multi-zonal clusters offer a balance between availability and cost for stable workloads. During a zonal outage, workloads in that zone are disrupted, but they remain available in other zones. Multi AZ clusters have [technical limitations](/kubernetes/reference-content/multi-az-clusters/#limitations). For maintaining high availability, consider using a regional cluster. +A Multi-AZ cluster features a single control plane in one zone but has nodes running across multiple zones. In case of a control plane outage or during cluster upgrades, workloads continue to run. However, the cluster and its workloads cannot be configured until the control plane is restored. Multi-zonal clusters offer a balance between availability and cost for stable workloads. During a zonal outage, workloads in that zone are disrupted, but they remain available in other zones. Multi-AZ clusters have [technical limitations](/kubernetes/reference-content/multi-az-clusters/#limitations). For maintaining high availability, consider using a regional cluster. ### Regional clusters -A regional cluster has multiple replicas of the control plane distributed across multiple zones within a single region. Such cluster is only available with HA Dedicated Control Planes. Nodes can also be spread across multiple zones or restricted to a single zone, based on configuration. By default, Scaleway does not replicate each node pool across all zones of the control plane's region. You can customize this by specifying the zones for the cluster's nodes. 
Regional clusters are ideal for running production workloads due to their higher availability compared to zonal clusters. Regional clusters still have [technical limitations](/kubernetes/reference-content/multi-az-clusters/#limitations).
+A regional cluster has multiple replicas of the control plane distributed across multiple zones within a single region. Such a cluster is only available with HA Dedicated control planes. Nodes can also be spread across multiple zones or restricted to a single zone, based on configuration. By default, Scaleway does not replicate each node pool across all zones of the control plane's region. You can customize this by specifying the zones for the cluster's nodes. Regional clusters are ideal for running production workloads due to their higher availability compared to zonal clusters. Regional clusters still have [technical limitations](/kubernetes/reference-content/multi-az-clusters/#limitations).
 
 ## Container Network Interface (CNI)
 
@@ -121,9 +121,9 @@ A pod is the smallest and simplest unit in the Kubernetes object model. Containe
 
 ## Pool
 
-The Pool resource is a group of Scaleway Instances, organized by type (e.g., GP1-S, GP1-M). It represents the computing power of the cluster and contains the Kubernetes nodes, on which the containers run. Consider the following when creating a Pool:
+The pool resource is a group of Scaleway Instances, organized by type (e.g., GP1-S, GP1-M). It represents the computing power of the cluster and contains the Kubernetes nodes, on which the containers run. Consider the following when creating a pool:
 
-- Containers require a minimum of one Instance in the Pool.
+- Containers require a minimum of one Instance in the pool.
 - A pool belongs to only one cluster, in the same region.
## ReplicaSet diff --git a/pages/kubernetes/how-to/edit-kosmos-cluster.mdx b/pages/kubernetes/how-to/edit-kosmos-cluster.mdx index 49e76a167e..f4473512b4 100644 --- a/pages/kubernetes/how-to/edit-kosmos-cluster.mdx +++ b/pages/kubernetes/how-to/edit-kosmos-cluster.mdx @@ -40,7 +40,7 @@ A multi-cloud pool allows you to attach external Instances and servers to your c 4. Click the **Pools** tab. 5. Click the **+ Add pool** button. The pool creation wizard displays. 6. Complete the following steps of the wizard: - * Choose a **pool type**. This can be a Scaleway Kubernetes Kapsule Pool or a Kubernetes multi-cloud Pool. This document concerns the addition of a multi-cloud pool. + * Choose a **pool type**. This can be a Scaleway Kubernetes Kapsule pool or a Kubernetes multi-cloud pool. This document concerns the addition of a multi-cloud pool. * A **name** for the pool and, optionally, a description and tags. 7. Click **Add pool** to finish. @@ -83,7 +83,7 @@ In order to add external nodes to your multi-cloud cluster, you must first [crea export POOL_ID= POOL_REGION= SCW_SECRET_KEY= ``` -4. Execute the program to attach the node to the multi cloud pool: +4. Execute the program to attach the node to the multi-cloud pool: ```bash sudo -E ./node-agent_linux_amd64 -loglevel 0 -no-controller diff --git a/pages/kubernetes/reference-content/multi-az-clusters.mdx b/pages/kubernetes/reference-content/multi-az-clusters.mdx index 90cbcb4cb1..f71d9021f4 100644 --- a/pages/kubernetes/reference-content/multi-az-clusters.mdx +++ b/pages/kubernetes/reference-content/multi-az-clusters.mdx @@ -40,13 +40,13 @@ For more information, refer to the [official Kubernetes best practices for runni ## Limitations -- Kapsule's Control Plane network access is managed by a Load Balancer in the primary zone of each region. If this zone fails globally, the Control Plane will be unreachable, even if the cluster spans multiple zones. This limitation also applies to HA Dedicated Control Planes. 
+- Kapsule's control plane network access is managed by a Load Balancer in the primary zone of each region. If this zone fails globally, the control plane will be unreachable, even if the cluster spans multiple zones. This limitation also applies to HA Dedicated control planes.
 - Persistent volumes are limited to their Availability Zone (AZ). Applications must replicate data across persistent volumes in different AZs to maintain high availability in case of zone failures.
-- In "controlled isolation" mode, nodes access the Control Plane via their public IPs. If two AZs can not communicate (split-brain scenario), nodes will not appear unhealthy from Kubernetes' perspective, but communication between nodes in different AZs will be disrupted. Applications must handle this scenario if they use components across multiple AZs.
-- In "full isolation" mode, nodes also use the Public Gateway to access the Control Plane. If nodes cannot reach the Public Gateway (e.g. because of Private Network failure in an AZ), they will become unhealthy. As there is only one Public Gateway per Private Network, losing the AZ with the Public Gateway results in the loss of all nodes in all private pools across all AZs.
+- In "controlled isolation" mode, nodes access the control plane via their public IPs. If two AZs cannot communicate (split-brain scenario), nodes will not appear unhealthy from Kubernetes' perspective, but communication between nodes in different AZs will be disrupted. Applications must handle this scenario if they use components across multiple AZs.
+- In "full isolation" mode, nodes also use the Public Gateway to access the control plane. If nodes cannot reach the Public Gateway (e.g. because of Private Network failure in an AZ), they will become unhealthy. As there is only one Public Gateway per Private Network, losing the AZ with the Public Gateway results in the loss of all nodes in all private pools across all AZs.
-It is important to note that the scalability and reliability of Kubernetes does not automatically ensure the scalability and reliability of an application hosted on it. While Kubernetes is a robust and scalable platform, each application must independently implement measures to achieve scalability and reliability, ensuring it avoids bottlenecks and single points of failure. Therefore, although Kubernetes itself remains responsive, the responsiveness of your application relies on your design and deployment choices. +It is important to note that the scalability and reliability of Kubernetes do not automatically ensure the scalability and reliability of an application hosted on it. While Kubernetes is a robust and scalable platform, each application must independently implement measures to achieve scalability and reliability, ensuring it avoids bottlenecks and single points of failure. Therefore, although Kubernetes itself remains responsive, the responsiveness of your application relies on your design and deployment choices. 
## Kubernetes Kapsule infrastructure setup @@ -270,4 +270,4 @@ This method is an important point to maintain system resilience and operational * Tutorial [Deploying a multi-AZ Kubernetes cluster with Terraform/OpenTofu and Kapsule](/tutorials/k8s-kapsule-multi-az/) * Complete [Terraform/OpenTofu configuration files to deploy a multi-AZ cluster](https://github.com/scaleway/kapsule-terraform-multi-az-tutorial/) -* [Official Kubernetes best practices for running clusters in multiple zones](https://kubernetes.io/docs/setup/best-practices/multiple-zones/) +* [Official Kubernetes best practices for running clusters in multiple zones](https://kubernetes.io/docs/setup/best-practices/multiple-zones/) \ No newline at end of file diff --git a/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx b/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx index ee24d5558e..50d98596d8 100644 --- a/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx +++ b/pages/kubernetes/reference-content/understanding-differences-kapsule-kosmos.mdx @@ -23,7 +23,7 @@ Kapsule is Scaleway's **fully managed Kubernetes service**, enabling users to de - Available in multiple Scaleway regions (PAR, AMS, WAW), allowing users to deploy applications closer to their target audience for enhanced performance. - Allows users to deploy, manage, and scale containerized applications using Kubernetes without having to manage the underlying infrastructure. - Automatic scaling, rolling updates, and seamless integration with other Scaleway services like Load Balancers and Object Storage. - - Users can manage their Kubernetes clusters through the Kubernetes API, the intuitive Scaleway console or the Scaleway developer tools (namely [Scaleway CLI](/scaleway-cli/quickstart/) or the [Terraform/OpenTofu provider](/terraform/quickstart/). 
+ - Users can manage their Kubernetes clusters through the Kubernetes API, the intuitive Scaleway console or the Scaleway developer tools (namely [Scaleway CLI](/scaleway-cli/quickstart/) or the [Terraform/OpenTofu provider](/terraform/quickstart/)). **Kapsule is ideal for:** Developers and organizations seeking to deploy containerized applications with Kubernetes without the operational overhead of managing Kubernetes infrastructure. @@ -42,15 +42,15 @@ Kosmos is Scaleway's **multi-cloud Kubernetes solution**, designed to operate ac | | Kapsule | Kosmos | |:-----------------------------------:|:----------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:| -| Cloud | Scaleway only | Multi-Cloud | +| Cloud | Scaleway only | Multi-cloud | | Regions | PAR, AMS, WAW | PAR, AMS, WAW | -| Node pools | Scaleway Instances only, x86 or ARM | *Internal pools*: Scaleway Instances, x86 or ARM
*External pools*: Scaleway Elastic Metal, Scaleway RiscV, External providers.. | -| Node pools locations | Mono-AZ or Multi-AZ (same region) | Mono-AZ, Multi-AZ, Multi-Region, Multi-Cloud | +| Node pools | Scaleway Instances only, x86 or ARM | *Internal pools*: Scaleway Instances, x86 or ARM
*External pools*: Scaleway Elastic Metal, Scaleway RiscV, External providers... | +| Node pools locations | Mono-AZ or Multi-AZ (same region) | Mono-AZ, Multi-AZ, Multi-region, Multi-cloud | | Container Network Interface plugins | Cilium or Calico | Kilo | | Auto healing | ✔️ | Scaleway Instances only | | Auto scaling | ✔️ | Scaleway Instances only | | Container Storage Interface | ✔️ Persistent volumes (Block Storage) on Scaleway Instances | Scaleway Instances only | -| Dedicated Control Plane options | ✔️ | ✔️ | +| Dedicated control plane options | ✔️ | ✔️ | | Scaleway VPC | ✔️ Controlled isolation or full isolation | ✘ No integration | | Scaleway Cockpit | ✔️ | ✔️ | -| Node pools upgrades | Handled by Kapsule | *Internal pools*: Handled by Kapsule
*External pools*: Must be carried out manually per node | +| Node pools upgrades | Handled by Kapsule | *Internal pools*: Handled by Kapsule
*External pools*: Must be carried out manually per node | \ No newline at end of file diff --git a/pages/load-balancer/index.mdx b/pages/load-balancer/index.mdx index d0aeeac17e..69418dd837 100644 --- a/pages/load-balancer/index.mdx +++ b/pages/load-balancer/index.mdx @@ -1,7 +1,7 @@ --- meta: title: Load Balancer Documentation - description: Dive into Scaleway Load Balancers with our quickstart guides, how-tos, tutorials and more. + description: Dive into Scaleway Load Balancers with our quickstart guides, how-tos, tutorials, and more. --- diff --git a/pages/managed-inference/faq.mdx b/pages/managed-inference/faq.mdx index 38995b0110..69286971d7 100644 --- a/pages/managed-inference/faq.mdx +++ b/pages/managed-inference/faq.mdx @@ -32,11 +32,11 @@ Managed Inference aims to achieve seamless compatibility with OpenAI APIs. Find We are currently working on defining our SLAs for Managed Inference. We will provide more information on this topic soon. ## What are the performance guarantees (vs. Generative APIs)? -Managed Inference provides dedicated resources, ensuring predictable performance and lower latency compared to Generative APIs, which are a shared, serverless offering optimized for infrequent traffic with moderate peak loads. Managed Inference is ideal for workloads that require consistent response times, high availability, custom hardware configurations or generate extreme peak loads during a narrow period of time. -Compared to Generative APIs, no usage quota is applied to the number of tokens per second generated, since the output is limited by the GPU Instances size and number of your Managed Inference Deployment. +Managed Inference provides dedicated resources, ensuring predictable performance and lower latency compared to Generative APIs, which are a shared, serverless offering optimized for infrequent traffic with moderate peak loads. 
Managed Inference is ideal for workloads that require consistent response times, high availability, or custom hardware configurations, or that generate extreme peak loads during a narrow period.
+Compared to Generative APIs, no usage quota is applied to the number of tokens per second generated, since the output is limited by the size and number of GPU Instances in your Managed Inference deployment.
 
 ## How can I monitor performance?
-Managed Inference metrics and logs are available in [Scaleway Cockpit](https://console.scaleway.com/cockpit/overview). You can follow your deployment metrics in realtime, such as tokens throughput, requests latency, GPU power usage and GPU VRAM usage.
+Managed Inference metrics and logs are available in [Scaleway Cockpit](https://console.scaleway.com/cockpit/overview). You can follow your deployment metrics in real time, such as token throughput, request latency, GPU power usage, and GPU VRAM usage.
 
 ## What types of models can I deploy with Managed Inference?
 You can deploy a variety of models, including:
@@ -60,7 +60,7 @@ You can select the Instance type based on your model’s computational needs and
 Billing is based on the Instance type and usage duration. Unlike [Generative APIs](/generative-apis/quickstart/), which are billed per token, Managed Inference provides predictable costs based on the allocated infrastructure. Pricing details can be found on the [Scaleway pricing page](https://www.scaleway.com/en/pricing/model-as-a-service/#managed-inference).
 
-## Can I pause Managed Inference billing when the instance is not in use ?
+## Can I pause Managed Inference billing when the Instance is not in use?
 
 When a Managed Inference deployment is running, corresponding resources are provisioned and thus billed. Resources can therefore not be paused. However, you can still optimize your Managed Inference deployment to fit within specific time ranges (such as during working hours).
To do so, you can automate deployment creation and deletion using the [Managed Inference API](https://www.scaleway.com/en/developers/api/inference/), [Terraform](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/inference_deployment) or [Scaleway SDKs](https://www.scaleway.com/en/docs/scaleway-sdk/). These actions can be programmed using [Serverless Jobs](/serverless-jobs/) to be automatically carried out periodically. @@ -79,4 +79,4 @@ Absolutely. Managed Inference integrates seamlessly with other Scaleway services ## Do model licenses apply when using Managed Inference? Yes, model licenses need to be complied with when using Managed Inference. Applicable licenses are available for [each model in our documentation](/managed-inference/reference-content/). - For models provided in the Scaleway catalog, you need to accept licenses (including potential EULA) before creating any Managed Inference deployment. -- For custom models you choose to import on Scaleway, you are responsible for complying with model licenses (as with any software you choose to install on a GPU Instance for example). +- For custom models you choose to import on Scaleway, you are responsible for complying with model licenses (as with any software you choose to install on a GPU Instance for example). \ No newline at end of file diff --git a/pages/messaging/faq.mdx b/pages/messaging/faq.mdx index 29fdf41f40..9d0fe5fad9 100644 --- a/pages/messaging/faq.mdx +++ b/pages/messaging/faq.mdx @@ -14,13 +14,13 @@ productIcon: NatsProductIcon These are three distinct managed message broker tools offered by Scaleway, based on the NATS, SQS and SNS protocols respectively. Previously, these products were grouped together as 'Messaging and Queuing', but have now become three separate products in their own right. -## What are NATS, SNS and SQS? +## What are NATS, SNS, and SQS? 
-NATS, SNS and SQS are all messaging protocols used by the Scaleway NATS, Queues, and Topics and Events products. You can find out more about these protocols, and other essential concepts, on our dedicated [concepts page](/messaging/concepts/).
+NATS, SNS, and SQS are all messaging protocols used by the Scaleway NATS, Queues, and Topics and Events products. You can find out more about these protocols, and other essential concepts, on our dedicated [concepts page](/messaging/concepts/).
 
 ## Is the Scaleway Queues gateway compatible with my application, framework or tool?
 
-We currently implement the API endpoints listed [here](/messaging/reference-content/sqs-support/), which makes Scaleway Queues compatible with the AWS SDK as well as many other tools and frameworks including KEDA and Symfony. Note that you need to specify both Regions and URL to ensure compatibility.
+We currently implement the API endpoints listed [here](/messaging/reference-content/sqs-support/), which makes Scaleway Queues compatible with the AWS SDK as well as many other tools and frameworks, including KEDA and Symfony. Note that you need to specify both the region and the URL to ensure compatibility.
 
 ## Does Scaleway Topics and Events support all SNS features?
 
diff --git a/pages/object-storage/how-to/create-a-bucket.mdx b/pages/object-storage/how-to/create-a-bucket.mdx
index f0ea5ca14d..2745f5a311 100644
--- a/pages/object-storage/how-to/create-a-bucket.mdx
+++ b/pages/object-storage/how-to/create-a-bucket.mdx
@@ -45,4 +45,4 @@ To get started with Object Storage, you must first create a bucket. Objects are
 9. Optionally, you can use the cost estimator to simulate your Object Storage costs.
 10. Click **Create bucket** to confirm. A list of your buckets displays, showing the newly created bucket.
 
-You can find more information about your bucket by clicking on its name in the **Buckets** list, and then on the **Bucket settings** tab.
+You can find more information about your bucket by clicking on its name in the **Buckets** list, and then on the **Bucket settings** tab. \ No newline at end of file diff --git a/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx b/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx index f0861791c3..f83df42ced 100644 --- a/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx +++ b/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx @@ -36,7 +36,7 @@ You must create an Instance without a flexible IP using the following specificat ## How to create a Private Network and attach the Instance -1. Follow the instructions for [creating a Private Network](/vpc/how-to/create-private-network/). Make sure you create it in the Region that encompasses the Availability Zone of the Instance you previously created. +1. Follow the instructions for [creating a Private Network](/vpc/how-to/create-private-network/). Make sure you create it in the region that encompasses the Availability Zone of the Instance you previously created. 2. Follow the instructions to [attach your Instance to the Private Network](/vpc/how-to/attach-resources-to-pn/). ## How to create a Public Gateway and attach the Private Network @@ -78,4 +78,4 @@ You must create an Instance without a flexible IP using the following specificat ## Conclusion -You have now configured an Instance with a Private Network to communicate with Scaleway's Object Storage platform using a Public Gateways. The gateway ensures the exchange of data between your Private Network and the public Internet. +You have now configured an Instance with a Private Network to communicate with Scaleway's Object Storage platform using a Public Gateway. The gateway ensures the exchange of data between your Private Network and the public Internet. 
\ No newline at end of file diff --git a/pages/object-storage/quickstart.mdx b/pages/object-storage/quickstart.mdx index 3a51bdd227..77b72c1f72 100644 --- a/pages/object-storage/quickstart.mdx +++ b/pages/object-storage/quickstart.mdx @@ -27,7 +27,7 @@ To get started with Object Storage, you must first create a bucket. Objects are 1. Click **Object Storage** on the left side menu of the console. The Object Storage dashboard displays. 2. Click **+ Create bucket**. The bucket creation page displays. -3. Select the geographical location in which to create your bucket. Scaleway Object Storage is currently available in three Regions. +3. Select the geographical location in which to create your bucket. Scaleway Object Storage is currently available in three regions. - Amsterdam, The Netherlands: - Region: `nl-ams` - Paris, France: @@ -88,4 +88,4 @@ Once the bucket is deleted, it disappears from your bucket list. For operational reasons, you have to wait 24 hours before creating a bucket with the same name as the one you have just deleted. - +
\ No newline at end of file diff --git a/pages/secret-manager/how-to/create-version.mdx b/pages/secret-manager/how-to/create-version.mdx index e8ddb3f0dc..2a80d4f38e 100644 --- a/pages/secret-manager/how-to/create-version.mdx +++ b/pages/secret-manager/how-to/create-version.mdx @@ -42,7 +42,4 @@ This page explains how to add more [versions](/secret-manager/concepts/#version) 8. Click the icon if you want to [enable](/secret-manager/concepts/#enabling-a-version) the version. -9. Click **Create version**. - - - +9. Click **Create version**. \ No newline at end of file diff --git a/pages/secret-manager/how-to/delete-version.mdx b/pages/secret-manager/how-to/delete-version.mdx index ff807279eb..a4de62e080 100644 --- a/pages/secret-manager/how-to/delete-version.mdx +++ b/pages/secret-manager/how-to/delete-version.mdx @@ -29,11 +29,9 @@ Once you schedule a version for deletion, it enters a 7-day pending deletion per 3. Access the secret for which you want to delete the version. Your secret's **Overview** tab displays. 4. Click the **Versions** tab. 5. Click next to the version you want to delete. -6. Click **Delete**. A pop-up displays informating you that the action schedules the deletion of your version. -7. Type **DELETE** and click **Delete version**. Your version enters the **Scheduled for deletion** status for a period of 7 days before being permanently deleted. +6. Click **Delete**. A pop-up displays informing you that the action schedules the deletion of your version. +7. Type **DELETE** and click **Delete version**. Your version enters the **Scheduled for deletion** status for 7 days before being permanently deleted. Deleting a version is permanent. You will not be able to use the version again if you do not [recover it](/secret-manager/how-to/recover-version/) before the end of the retention period. 
- - - + \ No newline at end of file diff --git a/pages/troubleshooting/index.mdx b/pages/troubleshooting/index.mdx index a0198a6577..fa71d40307 100644 --- a/pages/troubleshooting/index.mdx +++ b/pages/troubleshooting/index.mdx @@ -59,7 +59,7 @@ content: - [My account has been locked](/account/faq/#my-account-is-locked-what-do-i-do) - - [I can't connect to my instance using SSH](/instances/troubleshooting/cant-connect-ssh/#warning-remote-host-identification-has-changed) + - [I can't connect to my Instance using SSH](/instances/troubleshooting/cant-connect-ssh/#warning-remote-host-identification-has-changed) - [I can't connect to my Mac Mini using VNC](/apple-silicon/troubleshooting/cant-connect-using-vnc/) - [How to add a payment method](/billing/how-to/add-payment-method/) - [How to rename an Organization](/account/faq/#can-i-change-the-name-of-my-organization) diff --git a/tutorials/configure-graphite/index.mdx b/tutorials/configure-graphite/index.mdx index fab0e6dc03..7b1a30de4a 100644 --- a/tutorials/configure-graphite/index.mdx +++ b/tutorials/configure-graphite/index.mdx @@ -27,7 +27,7 @@ This tutorial provides the steps needed to install and configure Graphite on **U - A **Scaleway account** logged into the [console](https://console.scaleway.com) - **Owner** status or **IAM permissions** that allow performing actions in the intended Organization - An **SSH key** for server access -- An **Ubuntu 22.04 LTS** instance up and running +- An **Ubuntu 22.04 LTS** Instance up and running - **API key** for interacting with Scaleway’s services - **`sudo` privileges** or root user access to the system @@ -159,7 +159,7 @@ To access the Graphite web interface, you need a web server. Here, we'll use **A sudo cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available/ ``` -4.Enable the Graphite site: +4. 
Enable the Graphite site: ```bash sudo a2ensite apache2-graphite ``` @@ -207,4 +207,4 @@ You have now successfully installed and configured Graphite on **Ubuntu 22.04**. For production environments, consider using tools to automate data collection, as sending metrics via the terminal is not recommended for long-term use. -For more details, refer to the [official Graphite documentation](https://graphite.readthedocs.io/en/latest/). +For more details, refer to the [official Graphite documentation](https://graphite.readthedocs.io/en/latest/). \ No newline at end of file diff --git a/tutorials/configure-slack-alerting/index.mdx b/tutorials/configure-slack-alerting/index.mdx index 29eb8d9b0e..a99985eee3 100644 --- a/tutorials/configure-slack-alerting/index.mdx +++ b/tutorials/configure-slack-alerting/index.mdx @@ -1,9 +1,9 @@ --- meta: - title: Sending Cockpit's alert notifications to Slack using a webkook URL + title: Sending Cockpit's alert notifications to Slack using a webhook URL description: Learn how to send your Cockpit alert notifications to your Slack channels for more efficient monitoring. content: - h1: Sending Cockpit's alert notifications to Slack using a webkook URL + h1: Sending Cockpit's alert notifications to Slack using a webhook URL paragraph: Learn how to send your Cockpit alert notifications to your Slack channels for more efficient monitoring. 
categories: - cockpit @@ -25,7 +25,7 @@ As **we do not support Grafana managed alerts**, this documentation only shows y - [Enabled](/cockpit/how-to/enable-alert-manager/) the alert manager - [Retrieved](/cockpit/how-to/retrieve-grafana-credentials/) your Grafana credentials - [Configured](/cockpit/how-to/configure-alerts-for-scw-resources/) alerts for your resources (preconfigured or custom) - - [Created](https://slack.com/help/articles/206845317-Create-a-Slack-workspace) a Slack workspace in which you want to receive the alert notifications + - [Created](https://slack.com/help/articles/206845317-Create-a-Slack-workspace) a Slack workspace in which you want to receive alert notifications ## Creating a Slack app @@ -33,7 +33,7 @@ As **we do not support Grafana managed alerts**, this documentation only shows y 2. Click **From scratch**. 3. Enter a name for your app. For the purpose of this documentation, we are naming the app `Scaleway alerts`. 4. Pick the workspace you want to receive alerts in from the drop-down. -5. Click **Create App** to confirm. You app's **Basic information** page displays. +5. Click **Create App** to confirm. Your app's **Basic information** page displays. 6. Optionally, scroll down to **Display information** to customize the way your app will display in Slack. For more information, refer to the [App Detail Guidelines](https://api.slack.com/slack-marketplace/guidelines). For example, you can: - Add a short description in the **Short description** field - Add an icon @@ -82,5 +82,4 @@ If you have created multiple contact points in Grafana, the default contact poin 9. In the **Contact point** field, select the contact point you have configured for Slack. 10. Click **Save policy**. Your nested policy displays. You should now get notified on Slack. 
- 
- 
+ 
\ No newline at end of file
diff --git a/tutorials/configuring-loadbalancer-wordpress/index.mdx b/tutorials/configuring-loadbalancer-wordpress/index.mdx
index 4810e8884c..a54b482727 100644
--- a/tutorials/configuring-loadbalancer-wordpress/index.mdx
+++ b/tutorials/configuring-loadbalancer-wordpress/index.mdx
@@ -1,10 +1,10 @@
 ---
 meta:
-  title: Setting up a load balanced WordPress
-  description: This page shows you how to set up a load balanced WordPress for increased availability
+  title: Setting up a load-balanced WordPress
+  description: This page shows you how to set up a load-balanced WordPress for increased availability
 content:
-  h1: Setting up a load balanced WordPress
-  paragraph: This page shows you how to set up a load balanced WordPress for increased availability
+  h1: Setting up a load-balanced WordPress
+  paragraph: This page shows you how to set up a load-balanced WordPress for increased availability
 categories:
   - load-balancer
   - instances
@@ -14,7 +14,7 @@ dates:
   posted: 2019-04-08
 ---
 
-The capacity of a single server is limited. Once a website gains more and more attraction the instance serving the site comes to a point where it can not handle any more users. The website starts to slow down or even become unavailable as the server goes down from the traffic.
+The capacity of a single server is limited. Once a website gains more and more traction, the Instance serving the site reaches a point where it cannot handle any more users. The website starts to slow down or even become unavailable as the server goes down from the traffic.
 
 This is the point where a Load Balancer enters the game. It allows spreading the "load" that all those visitors and their requests create to be "balanced" over a series of different Instances.
@@ -40,8 +40,8 @@ In this tutorial, you learn how to set up a Scaleway-managed Load Balancer with
 
 - `51.51.51.51` for the Load Balancer front-end IP
 
 Load Balancer supports private IPs of Scaleway Instances for backend servers, allowing you to deploy Instances without public IPv4.
-1. Follow [this tutorial](/tutorials/wordpress-lemp-stack/) to start an Ubuntu Instances and to install WordPress with LEMP on both of them.
-2. Set up a third instance with a MariaDB database as explained in [this tutorial](/tutorials/mariadb-ubuntu-bionic/).
+1. Follow [this tutorial](/tutorials/wordpress-lemp-stack/) to start two Ubuntu Instances and install WordPress with LEMP on both of them.
+2. Set up a third Instance with a MariaDB database as explained in [this tutorial](/tutorials/mariadb-ubuntu-bionic/).
 
 ## Configuring a Load Balancer
 
@@ -49,8 +49,8 @@ Load Balancer supports private IPs of Scaleway Instances for backend servers, al
 
 2. Enter the **Name** of the Load Balancer, optionally you can enter a description and tags to simplify the management of them. Choose the **Region** for the Load Balancer (it should be the same region as the geographical region of your Instances), and a new IP address is allocated automatically.1. Click **Load Balancer** in the menu on the left, to enter the Load Balancer section, then click **+ Create a Load Balancer**:
-2. Enter the **Name** of the Load Balancer, optionally you can enter a description and tags to simplify the management of them. Choose the **Region** for the Load Balancer (it should be the same region as the geographical region of your Instances), and a new IP address is allocated automatically.
-4. Configure a backend rule, this rule defines the backend infrastructure that will be load balanced.
+3. Enter the **Name** of the Load Balancer. Optionally, you can enter a description and tags to simplify their management.
Choose the **region** for the Load Balancer (it should be the same as the region where your Instances are located), and a new IP address is allocated automatically.
+4. Configure a backend rule. This rule defines the backend infrastructure that will be load-balanced.
 
    The following parameters should be configured in the backend rule:
 
@@ -58,7 +58,7 @@ Load Balancer supports private IPs of Scaleway Instances for backend servers, al
    |----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
    |**Backend Name**|A name for the backend rule (e.g. `wordpress-backend-rule`)|
    |**Protocol**|The protocol to use. Set this value to `HTTP`, to have access to HTTP-specific features of the Load Balancer|
-   |**Port**|The port on which the backend application listens, with a standard configuration is **port 80** for a web application. It is also possible to use [SSL to encrypt backend connections](/tutorials/nginx-reverse-proxy/), in this case set the port to 443.|
+   |**Port**|The port on which the backend application listens; with a standard configuration, this is **port 80** for a web application. It is also possible to use [SSL to encrypt backend connections](/tutorials/nginx-reverse-proxy/); in this case, set the port to 443.|
    |**Proxy**|This enables or disables PROXY protocol version 2 (must be supported by backend servers). It is not required for this tutorial; keep it off.|
    |**TLS encryption**|Enable Transport Layer Security (TLS) to encrypt connections between the Load Balancer and the backend server(s).|
    |**Health Check Type**|The health check type to use. To check the health of a web application, set this to `HTTP`.|
@@ -75,7 +75,7 @@ It is possible to check the status of the Load Balancer with an API call.
 It will provide you with information about the status of the Load Balancer and whether the health check was successful.
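To sketch how such a status check could be consumed programmatically, the snippet below parses a response shaped like the JSON excerpt shown in this section and flags backends whose `last_health_check_status` is not `passed`. Only that field name comes from the tutorial; the surrounding structure and key names are assumptions for illustration:

```python
import json

# Simplified sample modeled on the JSON excerpt in this section; the
# "backend_servers" key and overall shape are assumptions, only the
# last_health_check_status field name is taken from the tutorial.
sample = json.dumps({
    "backend_servers": [
        {"ip": "10.45.2.3", "last_health_check_status": "passed"},
        {"ip": "10.45.2.4", "last_health_check_status": "failed"},
    ]
})

def unhealthy(raw):
    # Return the IPs of backends whose last health check did not pass.
    data = json.loads(raw)
    return [b["ip"] for b in data["backend_servers"]
            if b["last_health_check_status"] != "passed"]

print(unhealthy(sample))  # -> ['10.45.2.4']
```

A script like this could run on a schedule and alert you when a backend stops passing its health checks, instead of inspecting the raw JSON by hand.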
-  Retrieve your organization ID and generate [API key](/iam/how-to/create-api-keys/) from your [management console](https://console.scaleway.com/project/credentials) before you continue.
+  Retrieve your Organization ID and generate an [API key](/iam/how-to/create-api-keys/) from your [management console](https://console.scaleway.com/project/credentials) before you continue.
 
 1. Set the required variables to make the API call easier:
@@ -128,10 +128,10 @@ It will provide you with information about the status of the Load Balancer and i
    }
    ```
 
-   As two Instances are configured in the backend, the JSON list contains four entries. This is due to the high availability feature of the Load Balancer. Should the master instance experience a failure, it switches automatically to the backup one.
+   As two Instances are configured in the backend, the JSON list contains four entries. This is due to the high availability feature of the Load Balancer. Should the master Instance experience a failure, it switches automatically to the backup one.
 
-   In the status of the `running` instance, the health check status (`last_health_check_status`) has `passed`. This means that the backend instance replied well to the request sent to it in the health check. Requests to WordPress are load balanced between the two Instances.
-4. Connect to the first WordPress instance (`10.45.2.3`) and stop the web server application running on it:
+   In the status of the `running` Instance, the health check status (`last_health_check_status`) has `passed`. This means that the backend Instance replied correctly to the health check request. Requests to WordPress are load-balanced between the two Instances.
+4.
Connect to the first WordPress Instance (`10.45.2.3`) and stop the web server application running on it:
    ```bash
    systemctl stop nginx.service
    ```
@@ -148,14 +148,14 @@ It will provide you with information about the status of the Load Balancer and i
    }
    ```
 
-   When you navigate to the load balanced IP (`http://51.51.51.51`) in your browser, your WordPress displays. The Load Balancer has automatically detected that the Nginx server on the first instance (`10.45.2.3`) is not running anymore and redirects all traffic to the second instance (`10.45.2.3`).
+   When you navigate to the load-balanced IP (`http://51.51.51.51`) in your browser, your WordPress displays. The Load Balancer has automatically detected that the Nginx server on the first Instance (`10.45.2.3`) is not running anymore and redirects all traffic to the second Instance.
 7. Restart the web server application and re-run the command mentioned in step 3. The `last_health_check_status` will change back to `passed`, and requests are load-balanced again between the two Instances.
 
 ## Configuring WordPress
 
-Update the configuration of each instance now, so they will use the database on the dedicated MariaDB instance instead of using a local database.
+Update the configuration of each Instance now, so that they use the database on the dedicated MariaDB Instance instead of a local database.
 
-1. Connect to the instance via SSH as `root`
+1. Connect to the Instance via SSH as `root`.
 2. Open the file `/var/www/wp-config.php` in a text editor and edit the database section as follows:
    ```bash
    // ** MySQL settings - You can get this info from your web host ** //
@@ -180,14 +180,14 @@ Update the configuration of each instance now, so they will use the database on
 
    Once edited, save the file and exit the text editor.
 
-   When connecting to the instance from a web browser, the content is taken from the database on the MariaDB server.
-4.
Log into the WordPress Admin interface and click **Settings** to configure WordPress.
    - Enter the IP of one of your WordPress Instances (`10.45.2.3`) in the field **WordPress Address (URL)**
    - Enter the **Load Balanced IP** (`51.51.51.51`) or your domain name in the field **Site Address (URL)**
 
    Save the form.
-5. Redo this step on the second instance.
-6. Type the **Load Balanced IP** or your domain name in your browser, and WordPress will appear on the load balanced IP:
+4. Redo this step on the second Instance.
+5. Type the **Load Balanced IP** or your domain name in your browser, and WordPress will appear on the load-balanced IP:
 
-The Load Balancer is now automatically distributing the load between your Instances. To increase the computing power of the Load Balancer, simply snapshot an instance and spin up a new one.
\ No newline at end of file
+The Load Balancer is now automatically distributing the load between your Instances. To increase the computing power behind the Load Balancer, simply snapshot an Instance and spin up a new one.
\ No newline at end of file
diff --git a/tutorials/deploy-nextcloud-s3/index.mdx b/tutorials/deploy-nextcloud-s3/index.mdx
index 3d7a1ed1f2..5ce5548485 100644
--- a/tutorials/deploy-nextcloud-s3/index.mdx
+++ b/tutorials/deploy-nextcloud-s3/index.mdx
@@ -86,12 +86,12 @@ Combining NextCloud with Scaleway Object Storage gives you infinite storage spac
    ```
 
-8. Enable the configuration and reload Apache to activate the site:
+3. Enable the configuration and reload Apache to activate the site:
    ```
    a2ensite nextcloud.conf
    systemctl reload apache2.service
    ```
-9. Enable SSL. Apache provides a self-signed certificate to encrypt the connection to your server. You can activate it with the following commands:
+4. Enable SSL.
Apache provides a self-signed certificate to encrypt the connection to your server. You can activate it with the following commands:
    ```
    a2enmod ssl
    a2ensite default-ssl
@@ -101,7 +101,7 @@ Combining NextCloud with Scaleway Object Storage gives you infinite storage spac
 
    A self-signed certificate may have some drawbacks if you want to make your NextCloud installation publicly available: a warning may appear in the browser. If required, you can request a free signed certificate from [Let's Encrypt](https://letsencrypt.org).
 
-10. Set the file permissions to the Apache user:
+5. Set the file permissions to the Apache user:
    ```
    chown -R www-data:www-data /var/www/nextcloud/
    ```
@@ -135,7 +135,7 @@ Combining NextCloud with Scaleway Object Storage gives you infinite storage spac
 NextCloud can use Object Storage as primary storage. This gives you the possibility to store infinite data in your personal cloud.
 
-  Configuring Object Storage as primary storage on an existing NextCloud instance will make all existing files on the instance inaccessible. It is therefore recommended to configure Object Storage on a fresh installation.
+  Configuring Object Storage as primary storage on an existing NextCloud Instance will make all existing files on the Instance inaccessible. It is therefore recommended to configure Object Storage on a fresh installation.
 
 1. Retrieve your `ACCESS-KEY` and `SECRET-KEY` from the [Scaleway console](https://console.scaleway.com/project/credentials/).
@@ -207,7 +207,7 @@ NextCloud can use Object Storage as primary storage. This gives you the possibil
 ### Configuring Object Storage as external storage in NextCloud
 
-You can use NextCloud as a client for Object Storage while using the Local Storage as primary storage. This can be useful for the migration of an existing NextCloud instance to Object Storage.
+You can use NextCloud as a client for Object Storage while using Local Storage as primary storage.
This can be useful for the migration of an existing NextCloud Instance to Object Storage.
 
 1. Log into your NextCloud to configure the Object Storage bucket.
 2. From the NextCloud interface, click **Apps** in the drop-down menu to access the list of available apps:
diff --git a/tutorials/discourse-forum/index.mdx b/tutorials/discourse-forum/index.mdx
index 68be18caca..ca75247a18 100644
--- a/tutorials/discourse-forum/index.mdx
+++ b/tutorials/discourse-forum/index.mdx
@@ -34,8 +34,8 @@ For those looking to set up Discourse, using the official [Docker image](https:/
 ## Installing Discourse
 
-1. Log into your instance using [SSH](/instances/how-to/connect-to-instance/).
-2. Update the `apt` package cache and upgrade the software already installed on the instance to the latest version available in Ubuntu's Repositories:
+1. Log into your Instance using [SSH](/instances/how-to/connect-to-instance/).
+2. Update the `apt` package cache and upgrade the software already installed on the Instance to the latest version available in Ubuntu's repositories:
    ```
    apt update && apt upgrade -y
    ```
@@ -74,7 +74,7 @@ For those looking to set up Discourse, using the official [Docker image](https:/
 8. Once all details are entered, an `app.yml` configuration file is generated on your behalf before bootstrapping the installation.
 
-   The installation of Discourse may take up to 10 minutes depending on your instance type.
+   The installation of Discourse may take up to 10 minutes depending on your Instance type.
 
 ## Configuring the admin user
diff --git a/tutorials/erpnext-13/index.mdx b/tutorials/erpnext-13/index.mdx
index 4dec7d747a..07d9f5471b 100644
--- a/tutorials/erpnext-13/index.mdx
+++ b/tutorials/erpnext-13/index.mdx
@@ -72,8 +72,8 @@ Start by configuring the system's keyboard mapping for the console as well as th
    ```
 
    Save the file and exit the text editor.
-7. Reboot your instance using the `reboot` command.
-8.
Wait a minute for the reboot to finish, then SSH back into your instance with the following command:
+7. Reboot your Instance using the `reboot` command.
+8. Wait a minute for the reboot to finish, then SSH back into your Instance with the following command:
    ```
    ssh root@
    ```
diff --git a/tutorials/foreman-puppet/index.mdx b/tutorials/foreman-puppet/index.mdx
index 45b6b8f2aa..e511ccc105 100644
--- a/tutorials/foreman-puppet/index.mdx
+++ b/tutorials/foreman-puppet/index.mdx
@@ -15,7 +15,7 @@ dates:
 
 Foreman is a tool that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. In short, it is a complete lifecycle management tool for physical and virtual servers. Foreman, available as open source software, becomes even more powerful when integrated with other open source projects such as [Puppet](https://puppet.com/ecosystem/devx/), [Chef](/tutorials/configure-chef-ubuntu-xenial/), [Salt](https://docs.saltstack.com/en/latest/), and [Ansible](/tutorials/ansible-bionic-beaver/).
 
-Foreman helps to automatize the OS installation. After that, through its integration with Puppet, the new system will be configured to the desired state. Finally, Puppet will send facts about the system to Foreman which helps to monitor the whole system over its complete lifecycle. With a discovery plugin, Foreman can also discover new machines in the network based on their mac address.
+Foreman helps to automate the OS installation. After that, through its integration with Puppet, the new system will be configured to the desired state. Finally, Puppet will send facts about the system to Foreman, which helps to monitor the whole system over its complete lifecycle. With a discovery plugin, Foreman can also discover new machines in the network based on their MAC address.
 
 This tutorial assumes that Foreman is being installed on a fresh Instance, which will also act as the Puppet primary server.
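As a toy illustration of the MAC-based discovery idea mentioned above (not Foreman's actual implementation; the inventory and MAC addresses below are invented):

```python
def normalize(mac):
    # Accept "AA-BB-..." or "aa:bb:..." notation and normalize to lowercase colons.
    return mac.replace("-", ":").strip().lower()

# Hypothetical inventory of machines already managed.
known = {normalize(m) for m in ["52:54:00:12:34:56", "52:54:00:ab:cd:ef"]}

def new_machines(seen):
    # A discovery pass reports the MAC addresses seen on the network;
    # anything not in the inventory is a candidate for provisioning.
    return sorted(normalize(m) for m in seen if normalize(m) not in known)

print(new_machines(["52-54-00-12-34-56", "02:00:00:11:22:33"]))  # -> ['02:00:00:11:22:33']
```

The point is simply that a MAC address uniquely identifies a network interface, so comparing discovered MACs against an inventory is enough to spot unprovisioned machines.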
@@ -223,7 +223,7 @@ Click the **YAML** button, to see the information provided to Puppet when an age ## Running the Puppet agent -Run the Puppet agent on the Foreman instance to apply all the changes that were made above. +Run the Puppet agent on the Foreman Instance to apply all the changes that were made above. ``` sudo puppet agent --test diff --git a/tutorials/install-cassandra/index.mdx b/tutorials/install-cassandra/index.mdx index 44888d879a..34d2f24f95 100644 --- a/tutorials/install-cassandra/index.mdx +++ b/tutorials/install-cassandra/index.mdx @@ -26,7 +26,7 @@ dates: ## Installing Cassandra -1. Connect to your instance via SSH or by using PuTTY. +1. Connect to your Instance via SSH or by using PuTTY. 2. Add the Apache Cassandra repository: ``` echo "deb http://www.apache.org/dist/cassandra/debian 50x main" | tee /etc/apt/sources.list.d/cassandra.list diff --git a/tutorials/install-parse-server/index.mdx b/tutorials/install-parse-server/index.mdx index 868cb0702c..48b1490e13 100644 --- a/tutorials/install-parse-server/index.mdx +++ b/tutorials/install-parse-server/index.mdx @@ -147,7 +147,7 @@ These steps should install Node.js efficiently on your system. Additionally, you 4. Save and close the file. 5. Start the Parse server. - The `nohup` command allows you to manually start the Parse server. However, the downside of this procedure is that if the instance on which the Parse server is installed fails, the Parse server will not restart automatically. + The `nohup` command allows you to manually start the Parse server. However, the downside of this procedure is that if the Instance on which the Parse server is installed fails, the Parse server will not restart automatically. ``` @@ -267,7 +267,7 @@ Parse server comes with a Dashboard for managing your Parse server applications. 4. Save and close the file. 5. Start the Parse server Dashboard. - The `nohup` command allows you to manually start the Parse server. 
However, the downside of this procedure is that if the instance on which the Parse server is installed fails, the Parse server will not restart automatically.
+   The `nohup` command allows you to manually start the Parse server. However, the downside of this procedure is that if the Instance on which the Parse server is installed fails, the Parse server will not restart automatically.
 
    ```
diff --git a/tutorials/k8s-gitlab/index.mdx b/tutorials/k8s-gitlab/index.mdx
index dae26f36ab..970baab8cc 100644
--- a/tutorials/k8s-gitlab/index.mdx
+++ b/tutorials/k8s-gitlab/index.mdx
@@ -88,7 +88,7 @@ In this part of the tutorial we customize the `value.yaml` to fit our needs and
       create: true
     serviceAccountName: default
   ```
-  Ensure you replace `` and `` with your actual GitLab instance URL and registration token.
+  Ensure you replace `` and `` with your actual GitLab Instance URL and registration token.
 
   By default, the gitlabUrl and the registration token lines are written as a comment in the `values.yaml` file. Make sure you have deleted the `#` before saving.
diff --git a/tutorials/manage-container-registry-images/index.mdx b/tutorials/manage-container-registry-images/index.mdx
index cdb08d4357..a8bf4713f8 100644
--- a/tutorials/manage-container-registry-images/index.mdx
+++ b/tutorials/manage-container-registry-images/index.mdx
@@ -108,7 +108,7 @@ This tutorial will show you how to periodically remove older images with a speci
 12. Define the following **Environment Variables** for your Function:
     | Key | Value |
     |--------------|------------------------------------------------|
-    | REGION | The Region in which your Registry is located. (eg: `fr-par`) |
+    | REGION | The region in which your Registry is located. (e.g. `fr-par`) |
     | TAGS-TO-KEEP | The number of Image tags you want to keep |
 13.
Define the following **Secrets** as secret environment variables for your Function:
    | Key | Value |
@@ -129,7 +129,7 @@ A trigger allows you to invoke functions by handling events coming from other so
    * Select the **CRON** type.
    * Enter the UNIX schedule for the CRON.
-     For example 0 2 * * * will make the function run every day at 2 am.
+     For example, `0 2 * * *` will make the function run every day at 2 a.m.
    * Paste your JSON arguments.
 5. Click **Create new trigger** to create it. The newly created trigger displays in the list of your triggers.
diff --git a/tutorials/mastodon-community/index.mdx b/tutorials/mastodon-community/index.mdx
index 55e6be8548..33d125d7ac 100644
--- a/tutorials/mastodon-community/index.mdx
+++ b/tutorials/mastodon-community/index.mdx
@@ -16,7 +16,7 @@ dates:
 
 Mastodon is an open-source, self-hosted, social media and social networking service. It allows you to host your Instances which may have their own code of conduct, terms of service, and moderation policies. There is no central server and Mastodon Instances are connected as a federated social network, allowing users from different Instances to interact with each other. The platform provides privacy features allowing users to adjust the privacy settings of each of their posts.
 
-As there is no central server, you can choose whether to join or leave an instance according to its policy without actually leaving Mastodon Social Network. Mastodon is a part of [Fediverse](https://fediverse.party/), allowing users to interact with users on other platforms that support the same protocol for example: [PeerTube](https://joinpeertube.org/en/), [Friendica](https://friendi.ca/) and [GNU Social](https://gnu.io/social/).
+As there is no central server, you can choose whether to join or leave an Instance according to its policy without actually leaving the Mastodon social network.
Mastodon is a part of the [Fediverse](https://fediverse.party/), allowing users to interact with users on other platforms that support the same protocol, for example [PeerTube](https://joinpeertube.org/en/), [Friendica](https://friendi.ca/), and [GNU Social](https://gnu.io/social/).
 
 Mastodon provides the possibility of using [Amazon S3-compatible Object Storage](/object-storage/how-to/create-a-bucket/) to store media content uploaded to Instances, making it flexible and scalable.
@@ -119,7 +119,7 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
 Mastodon requires access to a PostgreSQL database to store its configuration and user data.
 
-1. Change into the `postgres` user account, run `psql` and create a database:
+1. Change into the `postgres` user account, run `psql`, and create a database:
    ```
    sudo -u postgres psql
    ```
diff --git a/tutorials/matomo-analytics/index.mdx b/tutorials/matomo-analytics/index.mdx
index 0fd83010c3..fb2fad1874 100644
--- a/tutorials/matomo-analytics/index.mdx
+++ b/tutorials/matomo-analytics/index.mdx
@@ -29,7 +29,7 @@ The tool is written in PHP and stores its data in a MySQL/MariaDB database.
 ## Installing the LEMP-stack
 
-Matomo requires a web server, such as [Nginx](http://nginx.org/), to operate. Ensure that a functional [LEMP stack](/tutorials/installation-lemp-ubuntu-bionic/) is installed on your instance before running Matomo.
+Matomo requires a web server, such as [Nginx](http://nginx.org/), to operate. Ensure that a functional [LEMP stack](/tutorials/installation-lemp-ubuntu-bionic/) is installed on your Instance before running Matomo.
 
 1. Update the APT package cache and upgrade the packages already installed on the Instance.
``` diff --git a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx index e1251d987f..b84d3f1a29 100644 --- a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx +++ b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx @@ -111,12 +111,12 @@ Your new cluster will need access to your container images. 5. Click **Create a Namespace**. - Refer to the dedicated documentation [How to create a namespace](/container-registry/how-to/create-namespace/) for detailed information how to create a Scaleway Container Registry namespace. + Refer to the dedicated documentation [How to create a namespace](/container-registry/how-to/create-namespace/) for detailed information on how to create a Scaleway Container Registry namespace. ### 3.2 Authenticate Docker with Scaleway Registry -Use the following command to login to your Scaleway Registry using Docker: +Use the following command to log in to your Scaleway Registry using Docker: ```sh docker login rg..scw.cloud @@ -126,7 +126,7 @@ docker login rg..scw.cloud Use your **Scaleway credentials** or generate a dedicated token. -### 3.3 Pull images from existing registry and push to Scaleway +### 3.3 Pull images from an existing registry and push them to Scaleway For each image, you need to migrate: @@ -307,7 +307,7 @@ Your existing manifests may contain cloud-provider-specific settings that need a ## Step 7: Migrate persistent Data and storage -### 7.1 Backup data from existing cluster +### 7.1 Backup data from an existing cluster - Use appropriate tools to back up data from Persistent Volumes. 
- Methods include: diff --git a/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx b/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx index f915d2b3dd..38191b23c9 100644 --- a/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx +++ b/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx @@ -61,7 +61,7 @@ To reach the goal of this tutorial, you will use four [Production-Optimized Inst ``` - In the example above, the Private Network interface is named `ens4`. This name may vary depending on your instance type and operating system. The private interface can be identified by its MAC address, which always begins with `02:00:00:xx:yy:zz`. + In the example above, the Private Network interface is named `ens4`. This name may vary depending on your Instance type and operating system. The private interface can be identified by its MAC address, which always begins with `02:00:00:xx:yy:zz`. 7. To facilitate the configuration, give a more convenient name (e.g. `priv0`) to the Private Network interface. Configure the new interface name as follows: ``` @@ -101,7 +101,7 @@ To reach the goal of this tutorial, you will use four [Production-Optimized Inst ## Installing MongoDB -1. Log into your MongoDB instance using SSH. +1. Log into your MongoDB Instance using SSH. ``` ssh root@ ``` @@ -262,7 +262,7 @@ To reach the goal of this tutorial, you will use four [Production-Optimized Inst ## Configuring the NGINX reverse proxy -1. Log into your NGINX reverse proxy instance using SSH. +1. Log into your NGINX reverse proxy Instance using SSH. ``` ssh root@ ``` diff --git a/tutorials/postman-api/index.mdx b/tutorials/postman-api/index.mdx index ed83c621bb..ba7a61d038 100644 --- a/tutorials/postman-api/index.mdx +++ b/tutorials/postman-api/index.mdx @@ -73,7 +73,7 @@ You can set up authorization at the collection, request category, and/or request 1. Go to the **Collections** tab. 2. Click on the Instance API collection. 
The **Authorization** tab displays by default. - + 3. Select your environment in the top right menu. 4. Select **API key** as the authorization type. @@ -99,7 +99,7 @@ In this tutorial, we will create an Instance through an API request operated wit If you always create resources in the same region, you can [set up a variable](/tutorials/postman-api/#setting-up-an-environment) for the zone in your environment. -5. Edit the parameters in the request with your preferences following the example below. In this tutorial, we will create a GP1-M instance with the default local volume. +5. Edit the parameters in the request with your preferences following the example below. In this tutorial, we will create a GP1-M Instance with the default local volume. - For more information on how to fill out the parameters, refer to the [Scaleway Developers website](https://www.scaleway.com/en/developers/api/instance/#path-instances-create-an-instance). - To find the image UUID, [follow this procedure](/instances/faq/#what-is-a-marketplace-image). diff --git a/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx b/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx index 3c2d2297ac..47a9a9fe52 100644 --- a/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx +++ b/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx @@ -1,10 +1,10 @@ --- meta: title: Configuring a Prometheus monitoring Instance with a Grafana dashboard - description: Learn to configure a Prometheus monitoring instance and set up a Grafana dashboard. Follow this step-by-step guide to connect Prometheus to Grafana for efficient monitoring. + description: Learn to configure a Prometheus monitoring Instance and set up a Grafana dashboard. Follow this step-by-step guide to connect Prometheus to Grafana for efficient monitoring. 
content:
   h1: Configuring a Prometheus monitoring Instance with a Grafana dashboard
-  paragraph: Learn to configure a Prometheus monitoring instance and set up a Grafana dashboard. Follow this step-by-step guide to connect Prometheus to Grafana for efficient monitoring.
+  paragraph: Learn to configure a Prometheus monitoring Instance and set up a Grafana dashboard. Follow this step-by-step guide to connect Prometheus to Grafana for efficient monitoring.
 tags: monitoring Grafana Prometheus
 hero: assets/scaleway-grafana-prometheus.webp
 categories:
diff --git a/tutorials/remote-desktop-with-xrdp/index.mdx b/tutorials/remote-desktop-with-xrdp/index.mdx
index d91eeb6ae8..7b6762e1c5 100644
--- a/tutorials/remote-desktop-with-xrdp/index.mdx
+++ b/tutorials/remote-desktop-with-xrdp/index.mdx
@@ -48,7 +48,7 @@ In this tutorial, you will learn how to install the [xRDP](http://xrdp.org/) ser
 ## Installing xRDP
 
-1. The xRDP server is available in the default Ubuntu repositories, and it can be installed easily using `apt`. The following command installs the packages `xrdp` and `ufw` a firewall to protect your instance from unauthorized access:
+1. The xRDP server is available in the default Ubuntu repositories, and it can be installed easily using `apt`. The following command installs the packages `xrdp` and `ufw`, a firewall, to protect your Instance from unauthorized access:
    ```
    apt install xrdp ufw
    ```
@@ -121,9 +121,9 @@ For security reasons, it is recommended to create a regular user to connect to t
 1. Download and install an RDP client of your choice. We will use the [Microsoft Remote Desktop Client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) in this tutorial.
 2. Start the RDP client and click **Add Desktop** to add a new connection:
-3. Enter the connection details. You need to enter at least your Instance's IP in the **PC name**.
Optionally you can add a **Friendly Name** to facilitate the identification of your Instance in case you configure several RDP connections.
+3. Enter the connection details. You need to enter at least your Instance's IP address in the **PC name** field. Optionally, you can add a **Friendly Name** to facilitate the identification of your Instance in case you configure several RDP connections.
 4. Double-click the connection icon. During the first connection, you may be asked to validate the fingerprint of the Instance. Click **OK** to confirm the connection.
 5. Enter the identifier and password of your regular user when prompted.
-6. You are now connected and the remote desktop displays. You can launch applications on the remote machine, as you would do locally.
+6. You are now connected, and the remote desktop displays. You can launch applications on the remote machine, as you would do locally.
\ No newline at end of file
diff --git a/tutorials/setup-k8s-cluster-rancher/index.mdx b/tutorials/setup-k8s-cluster-rancher/index.mdx
index 6fa8364a4b..bb29d77bf6 100644
--- a/tutorials/setup-k8s-cluster-rancher/index.mdx
+++ b/tutorials/setup-k8s-cluster-rancher/index.mdx
@@ -40,7 +40,7 @@ The Rancher UI makes it easy to manage secrets, roles, and permissions. It allow
 ## Installing Rancher
 
 1. Log into the first Instance (`rancher1`) via [SSH](/instances/how-to/connect-to-instance/).
-2. Run the following command to fetch the Docker image `rancher/rancher` and run it in a container. This setup ensures that the Rancher container will restart automatically in case of failure. Make sure to replace `rancher.example.com` with your actual domain name pointing to the first instance to enable automatic Let's Encrypt SSL certificate generation:
+2. Run the following command to fetch the Docker image `rancher/rancher` and run it in a container. This setup ensures that the Rancher container will restart automatically in case of failure. Make sure to replace `rancher.example.com` with your actual domain name pointing to the first Instance to enable automatic Let's Encrypt SSL certificate generation:
    ```bash
    docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain rancher.example.com
    ```
@@ -61,13 +61,13 @@ The Rancher UI makes it easy to manage secrets, roles, and permissions. It allow
 3. Name the cluster, choose the desired Kubernetes version, and select **None** for the cloud provider (since this is a custom setup).
 
-4. Assign roles for each instance in the cluster:
-   - **Control Plane**: Manages the state and configuration of the cluster.
-   - **etcd**: Stores the state of the entire cluster (recommended to run 3 instances for redundancy).
+4. Assign roles for each Instance in the cluster:
+   - **Control plane**: Manages the state and configuration of the cluster.
+   - **etcd**: Stores the state of the entire cluster (recommended to run 3 Instances for redundancy).
    - **Worker**: Runs your containers/pods and handles the workload.
 
-   Once the roles are assigned, run the command shown on the page to install the necessary software on each instance.
+   Once the roles are assigned, run the command shown on the page to install the necessary software on each Instance.
-5. Once all instances are ready, click **Done** to initialize the cluster.
+5. Once all Instances are ready, click **Done** to initialize the cluster.
 
 6. When the cluster is initialized, the dashboard will display:
@@ -102,7 +102,7 @@ Currently, the Nginx demo app is running on a single pod. Let’s scale it to mu
 3. Click **Save**. Rancher will update the Kubernetes deployment to create 3 replicas of the pod.
 
-4. To access the application running on the second instance, visit `http://:30000/` in your browser. The Nginx demo application should display.
+4. To access the application running on the second Instance, visit `http://:30000/` in your browser. The Nginx demo application should display.
 
 ## Security considerations and best practices
 
@@ -110,7 +110,7 @@ Currently, the Nginx demo app is running on a single pod. Let’s scale it to mu
 - Cluster security: It is a good practice to follow Kubernetes security guidelines for RBAC (Role-Based Access Control) and network policies when deploying to a production environment. For example, configure namespaces, enforce least-privilege access, and use network policies to control traffic between pods.
 - Backup & recovery: Regularly backup your Rancher configurations and Kubernetes data (e.g., etcd) to ensure you can restore your cluster in case of failure.
 
-### Further reading
+### Going further
 
 For more detailed documentation on Rancher and Kubernetes, check out the official docs:
 - [Rancher Documentation](https://ranchermanager.docs.rancher.com/)
diff --git a/tutorials/setup-nomad-cluster/index.mdx b/tutorials/setup-nomad-cluster/index.mdx
index a1ad4da6a4..1102cc0abf 100644
--- a/tutorials/setup-nomad-cluster/index.mdx
+++ b/tutorials/setup-nomad-cluster/index.mdx
@@ -44,7 +44,7 @@ cd learn-nomad-cluster-setup/scaleway
 
 In the `scaleway` directory, you will find the following files:
 
-- `image.pkr.hcl` - The Packer configuration file used to build the Nomad instance image.
+- `image.pkr.hcl` - The Packer configuration file used to build the Nomad Instance image.
 - `main.tf` - The Terraform/OpenTofu configuration file used to deploy the Nomad cluster.
 - `variables.tf` - The Terraform/OpenTofu variables file used to configure the Nomad cluster.
 - `post-setup.sh` - The script used to bootstrap the Nomad cluster.
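As an aside on the network-policy recommendation in the Rancher security section above: a minimal default-deny ingress policy could look like the following sketch. The namespace name `demo` is an assumption for illustration, not something defined in the tutorial.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo            # assumed namespace; replace with your own
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress                # listing Ingress with no rules denies all inbound traffic
```

Pods in the namespace then only accept traffic explicitly allowed by additional, more permissive NetworkPolicies.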
diff --git a/tutorials/snapshot-instances-jobs/index.mdx b/tutorials/snapshot-instances-jobs/index.mdx
index 5ba45ef1f7..6bdf58052b 100644
--- a/tutorials/snapshot-instances-jobs/index.mdx
+++ b/tutorials/snapshot-instances-jobs/index.mdx
@@ -138,7 +138,7 @@ You can also download the work files by cloning our [Scaleway Serverless example
    require gopkg.in/yaml.v2 v2.4.0 // indirect
    ```
-3. Run the follwing command to download the required dependencies:
+3. Run the following command to download the required dependencies:
    ```go
    go get
    ```
@@ -146,7 +146,7 @@
 ## Building and pushing the image to Container Registry
 
-Serverless Jobs rely on containers to run in the cloud, and therefore require a [container image](/serverless-jobs/concepts/#container-image) hosted in the cloud using [Scaleway Container Registry](/container-registry/).
+Serverless Jobs rely on containers to run in the cloud and therefore require a [container image](/serverless-jobs/concepts/#container-image) hosted in the cloud using [Scaleway Container Registry](/container-registry/).
 
 1. Create a `Dockerfile`, and add the following code to it:
@@ -209,11 +209,11 @@ Your image and its tag now appear in the [Container Registry in the Scaleway con
 7. Set the **cron schedule** to `0 2 * * *` and select the relevant time zone to run the job every day at 2:00 a.m. Refer to the [cron schedules documentation](/serverless-jobs/reference-content/cron-schedules/) for more information.
 8. Define the following environment variables:
-    - `INSTANCE_ID`: the ID of the instance you want to snapshot
+    - `INSTANCE_ID`: the ID of the Instance you want to snapshot
-    - `INSTANCE_ZONE`: the [Availabilitiy Zone](/instances/concepts/#availability-zone) of your Instance (e.g. `fr-par-2`)
+    - `INSTANCE_ZONE`: the [Availability Zone](/instances/concepts/#availability-zone) of your Instance (e.g. `fr-par-2`)
     - `SCW_ACCESS_KEY`: your API access key
     - `SCW_SECRET_KEY`: your API secret key
-    - `SCW_DEFAULT_ORGANIZATION_ID`: your Oganization ID
+    - `SCW_DEFAULT_ORGANIZATION_ID`: your Organization ID
 
 9. Click **Create job**.
 
@@ -221,7 +221,7 @@ Your image and its tag now appear in the [Container Registry in the Scaleway con
 From the **Overview** tab of the Serverless job you just created, click, **Actions**, then select **Run job** from the contextual menu.
 
-The execution appears in the **Job runs** section. You can access the logs of your job by clicking next to the job run ID, and selecting **See on cockpit**.
+The execution appears in the **Job runs** section. You can access the logs of your job by clicking next to the job run ID, and selecting **See on Cockpit**.
 
 ## Possible improvements
diff --git a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
index 2b34e17d34..6281754b19 100644
--- a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
+++ b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
@@ -16,7 +16,7 @@ dates:
 validation_frequency: 24
 ---
 
-Serverless Functions are an asynchronous microservices architecture where event sources are decorrelated from event consumers.
+Serverless Functions are an asynchronous microservices architecture where event sources are separated from event consumers.
 
 They work best when they are triggered by specific events, such as cron schedules, which means they can be edited without having to modify every microservice in the process.
 
@@ -46,7 +46,7 @@ In this tutorial, you will create the following resources to use your functions:
 1. Click **Object Storage** under **Storage** on the left side menu of the console. The Object Storage dashboard displays.
 2. Click **+ Create a bucket**.
 3. Name it `source-images-`.
-4. Choose the **AMSTERDAM** Region.
+4. Choose the **Amsterdam** region.
 5. Set the bucket visibility to **Private**.
 6. Click **Create bucket**. The bucket's **Files** tab displays.
 7. Upload your images to the bucket and select the **Standard** storage class.
@@ -56,7 +56,7 @@ In this tutorial, you will create the following resources to use your functions:
 1. Click **Storage**, then **Object Storage** on the left side menu of the console. The Object Storage dashboard displays.
 2. Click **+ Create a bucket**.
 3. Name it `dest-images-`.
-4. Choose the **AMSTERDAM** Region.
+4. Choose the **Amsterdam** region.
 5. Set the bucket visibility to **Private**.
 6. Click **Create Bucket**.
diff --git a/tutorials/using-secret-manager-with-github-action/index.mdx b/tutorials/using-secret-manager-with-github-action/index.mdx
index 3a8d66f3af..0824888af9 100644
--- a/tutorials/using-secret-manager-with-github-action/index.mdx
+++ b/tutorials/using-secret-manager-with-github-action/index.mdx
@@ -32,7 +32,7 @@ You need to create the following secrets in your GitHub repository:
 
 - `SCW_ACCESS_KEY`: your API access key
 - `SCW_SECRET_KEY`: your API secret key
-- `SCW_DEFAULT_ORGANIZATION_ID`: your organization ID
+- `SCW_DEFAULT_ORGANIZATION_ID`: your Organization ID
 - `SCW_DEFAULT_PROJECT_ID`: the project ID where you have your secrets
 
 1. Navigate to **Settings** > **Secrets and Variables** > **Actions** of your GitHub repository.
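As a quick illustration of the `0 2 * * *` cron schedule used for the snapshot job above, the five fields can be decoded with a small shell sketch (assuming standard five-field cron syntax):

```shell
# Decode the 5-field cron expression used for the snapshot job.
# Assumed field order (standard cron): minute hour day-of-month month day-of-week.
set -f                 # disable globbing so the "*" fields stay literal
cron="0 2 * * *"
set -- $cron           # split the expression into positional parameters
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# → minute=0 hour=2 day-of-month=* month=* day-of-week=*
```

In other words: minute 0 of hour 2, every day of every month, which is 2:00 a.m. daily in the selected time zone.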
diff --git a/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx b/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
index e5dbc29b3e..0c89f5f2c9 100644
--- a/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
+++ b/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
@@ -1,10 +1,10 @@
 ---
 meta:
   title: Installing WordPress on a Scaleway Instance with Ubuntu 22.04 LTS (Jammy Jellyfish) and LEMP
-  description: This step-by-step guide walks you through the process of setting up WordPress on a Scaleway instance running Ubuntu 22.04 LTS (Jammy Jellyfish) using the LEMP stack.
+  description: This step-by-step guide walks you through the process of setting up WordPress on a Scaleway Instance running Ubuntu 22.04 LTS (Jammy Jellyfish) using the LEMP stack.
 content:
   h1: Installing WordPress on a Scaleway Instance with Ubuntu 22.04 LTS (Jammy Jellyfish) and LEMP
-  paragraph: This step-by-step guide walks you through the process of setting up WordPress on a Scaleway instance running Ubuntu 22.04 LTS (Jammy Jellyfish) using the LEMP stack.
+  paragraph: This step-by-step guide walks you through the process of setting up WordPress on a Scaleway Instance running Ubuntu 22.04 LTS (Jammy Jellyfish) using the LEMP stack.
 tags: WordPress cms php LEMP nginx mysql mariadb
 categories:
   - instances
@@ -292,7 +292,7 @@ In this section, we will create the database user and tables.
 
 With the server configuration completed, you can now proceed to complete the installation via the web interface.
 
-Open your preferred web browser and enter your Instances's domain name or public IP address in the address bar:
+Open your preferred web browser and enter your Instance's domain name or public IP address in the address bar:
 
 `http://instance_domain_or_IP/wordpress/`
 
@@ -330,4 +330,4 @@ Open your preferred web browser and enter your Instances's domain name or public
 6. Complete the five-minute WordPress installation process.
 7. Log in. Your dashboard displays.
 
-
+
\ No newline at end of file
diff --git a/tutorials/wordpress-lemp-stack/index.mdx b/tutorials/wordpress-lemp-stack/index.mdx
index ea61cabc2b..4748c35807 100644
--- a/tutorials/wordpress-lemp-stack/index.mdx
+++ b/tutorials/wordpress-lemp-stack/index.mdx
@@ -16,7 +16,7 @@ dates:
 
 WordPress is a popular, free open-source blogging tool and content management system (CMS) based on PHP and MySQL. WordPress has seen incredible adoption rates and is a great choice for getting a website up and running quickly. After setup, almost all the administration can be done through the web frontend.
 
-In this tutorial, you will learn how to install WordPress on a freshly created Ubuntu Bionic Beaver instance with LEMP (Linux + Nginx - pronounced "engine x" + MySQL + PHP). Nginx is an HTTP server that, compared to Apache, uses fewer resources and delivers pages much faster, especially static files.
+In this tutorial, you will learn how to install WordPress on a freshly created Ubuntu Bionic Beaver Instance with LEMP (Linux + Nginx - pronounced "engine x" + MySQL + PHP). Nginx is an HTTP server that, compared to Apache, uses fewer resources and delivers pages much faster, especially static files.
 
diff --git a/tutorials/zammad-ticketing/index.mdx b/tutorials/zammad-ticketing/index.mdx
index 206281252c..5d62d897da 100644
--- a/tutorials/zammad-ticketing/index.mdx
+++ b/tutorials/zammad-ticketing/index.mdx
@@ -162,7 +162,7 @@ Zammad uses the [Nginx](http://nginx.org/) web server to serve the application.
 
 4. Enter the required information about your organization. You can upload your company logo to customize the installation. Make sure the **System URL** corresponds with your domain name (`zammad.example.com` in our example):
 
-5. Choose how you want to deliver your emails. You can either use a local mail server, installed on your instance or use an external mailing service by using SMTP:
+5. Choose how you want to deliver your emails. You can either use a local mail server installed on your Instance, or an external mailing service via SMTP:
 
    Enter the SMTP details of either the local MTA or the information you received from your messaging service.
 
@@ -181,7 +181,7 @@ Zammad proposes three default roles for user accounts:
 
 - **Customer:** Users who create tickets and ask for help.
 - **Agent:** Your agents who deal with the requests made by your customers.
-- **Admin:** Admin accounts have full control over the Zammad instance and can manage the system.
+- **Admin:** Admin accounts have full control over the Zammad Instance and can manage the system.
 
 If required, you can create more detailed roles by adding more and allowing or disallowing features on a more finely granulated level.
 
@@ -190,17 +190,17 @@ If required, you can create more detailed roles by adding more and allowing or d
 3. Enter the account information of the new user and tick the checkbox corresponding to its role on the application. Once all information is provided, click **Submit** to create the new user account.
 
-   To create a large number of user accounts, you can also use the bulk upload feature to upload a file in the comma separated values (CSV) format. It must be saved as UTF-8.
+   To create a large number of user accounts, you can also use the bulk upload feature to upload a file in the comma-separated values (CSV) format. It must be saved as UTF-8.
 
 ## Creating additional channels
 
-For now, your Zammad instance allows you to communicate with your users by e-mail, but the application provides a wide range of connectors to interact with your customers. You can configure modules for:
+For now, your Zammad Instance allows you to communicate with your users by e-mail, but the application provides a wide range of connectors to interact with your customers. You can configure modules for:
 
 - Twitter
 - Facebook
 - Telegram
 - Chat
-- SMS (via twilio)
+- SMS (via Twilio)
 
 And much more.
 
 1. Click the cogwheel icon (1) to enter the management section of Zammad.
diff --git a/tutorials/zulip/index.mdx b/tutorials/zulip/index.mdx
index 8d3ff23b24..c16fbac9ec 100644
--- a/tutorials/zulip/index.mdx
+++ b/tutorials/zulip/index.mdx
@@ -133,7 +133,7 @@ Your Zulip is now running, however, it cannot send any email notifications in it
 
 ## Configuring Zulip
 
-Your Zulip instance is now ready for basic use. However, there are many additional features you can configure for your organization. To do so, click the menu button and then on **Manage Organization**:
+Your Zulip Instance is now ready for basic use. However, there are many additional features you can configure for your organization. To do so, click the menu button and then on **Manage Organization**:
 
@@ -141,4 +141,4 @@ Your Zulip instance is now addition
 ## Conclusion
 
-[Zulip](https://zulip.com/) provides a self-hosted and open-source alternative to commercial solutions like Slack. You have set up your instance with the Zulip application, configured a transactional email provider to send outgoing emails, and you can now invite your friends and colleagues to your new communications platform. For more information about Zulip, refer to the [official documentation](https://zulip.com/help/).
\ No newline at end of file
+[Zulip](https://zulip.com/) provides a self-hosted and open-source alternative to commercial solutions like Slack. You have set up your Instance with the Zulip application, configured a transactional email provider to send outgoing emails, and you can now invite your friends and colleagues to your new communications platform. For more information about Zulip, refer to the [official documentation](https://zulip.com/help/).
\ No newline at end of file

From 62a3d3b2f2a9d67fceafa7e160ad97daf43e0dc5 Mon Sep 17 00:00:00 2001
From: Jessica <113192637+jcirinosclwy@users.noreply.github.com>
Date: Fri, 25 Apr 2025 12:45:25 +0200
Subject: [PATCH 2/2] Update pages/managed-inference/faq.mdx

---
 pages/managed-inference/faq.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/managed-inference/faq.mdx b/pages/managed-inference/faq.mdx
index 69286971d7..9455f6aa90 100644
--- a/pages/managed-inference/faq.mdx
+++ b/pages/managed-inference/faq.mdx
@@ -60,7 +60,7 @@ You can select the Instance type based on your model’s computational needs and
 
 Billing is based on the Instance type and usage duration. Unlike [Generative APIs](/generative-apis/quickstart/), which are billed per token, Managed Inference provides predictable costs based on the allocated infrastructure. Pricing details can be found on the [Scaleway pricing page](https://www.scaleway.com/en/pricing/model-as-a-service/#managed-inference).
 
-## Can I pause Managed Inference billing when the Instance is not in use?
+## Can I pause Managed Inference billing when the instance is not in use?
 When a Managed Inference deployment is running, corresponding resources are provisioned and thus billed. Resources can therefore not be paused. However, you can still optimize your Managed Inference deployment to fit within specific time ranges (such as during working hours). To do so, you can automate deployment creation and deletion using the [Managed Inference API](https://www.scaleway.com/en/developers/api/inference/), [Terraform](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/inference_deployment) or [Scaleway SDKs](https://www.scaleway.com/en/docs/scaleway-sdk/). These actions can be programmed using [Serverless Jobs](/serverless-jobs/) to be automatically carried out periodically.
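To make the automation suggestion in that FAQ answer concrete: a hedged Terraform sketch of such a deployment resource could look like the following. The resource type matches the registry link in the text, but every attribute name and value below is an assumption; verify them against the provider documentation before use.

```hcl
# Illustrative sketch only: create/destroy this resource on a schedule
# (e.g. via Serverless Jobs running "terraform apply" / "terraform destroy")
# to keep the deployment within working hours.
resource "scaleway_inference_deployment" "office_hours" {
  name       = "office-hours-llm"                  # assumed name
  node_type  = "L4"                                # assumed GPU node type
  model_name = "meta/llama-3.1-8b-instruct:bf16"   # assumed model identifier
  region     = "fr-par"
}
```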