diff --git a/tutorials/collecting-visualizing-logs-elastic-stack/index.mdx b/tutorials/collecting-visualizing-logs-elastic-stack/index.mdx
index da0e4c3d0b..c75dda7506 100644
--- a/tutorials/collecting-visualizing-logs-elastic-stack/index.mdx
+++ b/tutorials/collecting-visualizing-logs-elastic-stack/index.mdx
@@ -11,17 +11,16 @@ categories:
- instances
- elastic-metal
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2015-06-10
---
-The Elastic Stack, formerly known as the ELK Stack, is a powerful suite of open-source tools designed for real-time data search, analysis, and visualization. It offers comprehensive capabilities for collecting, processing, and visualizing large volumes of data.
-Its components are:
+The Elastic Stack, formerly known as the ELK Stack, is a powerful suite of open-source tools designed for real-time data search, analysis, and visualization. It offers comprehensive capabilities for collecting, processing, and visualizing large volumes of data. Its components are:
-- **[Elasticsearch](https://www.elastic.co/elasticsearch)** A distributed, RESTful search and analytics engine based on the Lucene library.
-- **[Logstash](https://www.elastic.co/logstash)** A flexible data collection, processing, and enrichment pipeline.
-- **[Kibana](https://www.elastic.co/kibana)** A visualization and exploration tool for analyzing and visualizing data stored in Elasticsearch.
-- **[Beats](https://www.elastic.co/beats/)** Lightweight data shippers for ingesting data into Elasticsearch or Logstash.
+- **[Elasticsearch](https://www.elastic.co/elasticsearch)**: A distributed, RESTful search and analytics engine based on the Lucene library.
+- **[Logstash](https://www.elastic.co/logstash)**: A flexible data collection, processing, and enrichment pipeline.
+- **[Kibana](https://www.elastic.co/kibana)**: A visualization and exploration tool for analyzing and visualizing data stored in Elasticsearch.
+- **[Beats](https://www.elastic.co/beats/)**: Lightweight data shippers for ingesting data into Elasticsearch or Logstash.
@@ -30,38 +29,66 @@ Its components are:
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
- An [Instance](/instances/how-to/create-an-instance/) or an [Elastic Metal server](/elastic-metal/how-to/create-server/) with at least 4 GB of RAM
-### Install Elasticsearch
+## Install Elasticsearch
1. Download and install the Elasticsearch signing key:
```bash
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-archive-keyring.gpg
```
-2. Add the Elasticsearch repository.
+
+2. Add the Elasticsearch repository:
```bash
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-archive-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-8.x.list
```
-3. Update the `apt` package repositories.
+
+3. Update the `apt` package repositories:
```bash
apt update
```
-4. Install Elasticsearch using `apt`.
+
+4. Install Elasticsearch:
```bash
apt install elasticsearch
```
-5. Start and enable the Elasticsearch service.
+
+5. Start and enable the Elasticsearch service:
```bash
systemctl start elasticsearch
systemctl enable elasticsearch
```
+6. Configure Elasticsearch for production:
+ Modify the `elasticsearch.yml` file to optimize Elasticsearch for production use:
+ ```bash
+ nano /etc/elasticsearch/elasticsearch.yml
+ ```
+
+ Add the following:
+ ```yaml
+ cluster.name: "my-cluster"
+ node.name: "node-1"
+ network.host: 0.0.0.0
+ xpack.security.enabled: true
+ xpack.security.transport.ssl.enabled: true
+ xpack.security.http.ssl.enabled: true
+ xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/keystore.p12
+ xpack.security.http.ssl.truststore.path: /etc/elasticsearch/certs/truststore.p12
+ ```
+
+
+ Make sure you have SSL certificates set up for secure communication.
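+
+   If your installation did not generate certificates automatically (recent 8.x packages usually do), you can create self-signed ones with the bundled `elasticsearch-certutil` tool. This is only a sketch: the file names and empty passwords below match the example configuration above and are assumptions to adapt to your environment:
+   ```bash
+   mkdir -p /etc/elasticsearch/certs
+   # Create a CA and a node certificate with Elasticsearch's bundled tool
+   /usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/certs/truststore.p12 --pass ""
+   /usr/share/elasticsearch/bin/elasticsearch-certutil cert \
+     --ca /etc/elasticsearch/certs/truststore.p12 --ca-pass "" \
+     --out /etc/elasticsearch/certs/keystore.p12 --pass ""
+   # Let the elasticsearch service read the generated files
+   chown -R root:elasticsearch /etc/elasticsearch/certs
+   chmod 660 /etc/elasticsearch/certs/*.p12
+   ```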
+
+
+
## Install and configure Logstash
-1. Using the same repository added for Elasticsearch, you can simply install Logstash:
+1. Install Logstash using the same repository added for Elasticsearch:
```bash
apt install logstash
```
-2. Once installed, you can create and modify configuration files for Logstash to set up your data pipelines. These are typically found in `/etc/logstash/conf.d/`.
+2. Create and modify configuration files for Logstash:
+ The configuration files for Logstash are typically located in `/etc/logstash/conf.d/`. You can create pipelines to manage your data processing.
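+
+   For example, a minimal pipeline that receives events from Beats on port 5044 and forwards them to Elasticsearch could look like the following sketch (the file name `beats-pipeline.conf` and the settings are illustrative; add credentials and TLS options if you enabled security):
+   ```
+   input {
+     beats {
+       port => 5044
+     }
+   }
+   output {
+     elasticsearch {
+       hosts => ["http://localhost:9200"]
+     }
+   }
+   ```
+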
3. Start and enable the Logstash service:
```bash
@@ -71,7 +98,7 @@ Its components are:
## Install and configure Kibana
-1. Install Kibana using the repository:
+1. Install Kibana:
```bash
apt install kibana
```
@@ -82,25 +109,96 @@ Its components are:
systemctl enable kibana
```
-3. By default, Kibana is accessible on `http://localhost:5601`. If you need to access it from a remote machine, edit the Kibana configuration file `/etc/kibana/kibana.yml` and set the server host:
+3. Configure Kibana for remote access:
+ By default, Kibana is accessible on `http://localhost:5601`. To make Kibana accessible remotely, edit the Kibana configuration file:
+ ```bash
+ nano /etc/kibana/kibana.yml
```
+
+ Change the server host to:
+ ```yaml
server.host: "0.0.0.0"
```
-## Secure the Elastic stack
+4. Secure Kibana:
+ Ensure Kibana uses SSL to encrypt communications by adding SSL certificates in the `kibana.yml` file:
+ ```yaml
+ server.ssl.enabled: true
+ server.ssl.certificate: /etc/kibana/certs/kibana.crt
+ server.ssl.key: /etc/kibana/certs/kibana.key
+ elasticsearch.ssl.certificate: /etc/kibana/certs/kibana.crt
+ elasticsearch.ssl.key: /etc/kibana/certs/kibana.key
+ ```
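+
+   Because Elasticsearch security was enabled earlier, Kibana also needs credentials to connect to the cluster. One way to set this up, assuming Elasticsearch 8.x and the built-in `kibana_system` user:
+   ```bash
+   # Generate a password for the built-in kibana_system user (Elasticsearch must be running)
+   /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
+   # Then add the credentials to /etc/kibana/kibana.yml:
+   #   elasticsearch.username: "kibana_system"
+   #   elasticsearch.password: "<the generated password>"
+   ```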
+
+## Install and configure Filebeat
+
+1. Install Filebeat:
+ ```bash
+ apt install filebeat
+ ```
-It is important to secure your ELK Stack, especially if it is exposed to the public internet. You can complete your setup using the following additional resources:
+2. Configure Filebeat to ship logs to Elasticsearch:
+ Edit the Filebeat configuration file to point to your Elasticsearch instance:
+ ```bash
+ nano /etc/filebeat/filebeat.yml
+ ```
+
+ Set the output to Elasticsearch:
+ ```yaml
+ output.elasticsearch:
+ hosts: ["http://localhost:9200"]
+ ```
+
+ Alternatively, configure Filebeat to send logs to Logstash:
+ ```yaml
+ output.logstash:
+ hosts: ["localhost:5044"]
+ ```
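+
+   To have something to ship, you can enable one of Filebeat's built-in modules, for example the `system` module for syslog and authentication logs, and load the default index template and dashboards (this assumes the Elasticsearch output above is active):
+   ```bash
+   filebeat modules enable system
+   filebeat setup -e
+   ```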
+
+3. Start and enable the Filebeat service:
+ ```bash
+ systemctl enable filebeat
+ systemctl start filebeat
+ ```
+
+## Secure the Elastic Stack
+
+Securing your Elastic Stack is essential, especially if it is exposed to the internet. Here are some recommendations:
+
+- Enable built-in security features (as shown above in Elasticsearch and Kibana setup).
+
+- Use a firewall:
+ You can use `ufw` or `iptables` to restrict access to only the necessary IPs:
+ ```bash
+   ufw allow from <trusted_ip_address> to any port 9200
+   ufw allow from <trusted_ip_address> to any port 5601
+ ```
+
+- Set up an HTTPS reverse proxy:
+  You can secure Kibana by [setting up an HTTPS reverse proxy with Nginx](/tutorials/nginx-reverse-proxy/).
+
+- Set up TLS/SSL for Elasticsearch and Kibana: Ensure communications are encrypted between components using SSL/TLS as shown above.
-- [Use a firewal](/tutorials/installation-uncomplicated-firewall/) like `ufw` or `iptables` to restrict access to your Instance.
-- [Secure Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-minimal-setup.html) using its built-in security features or with plugins.
-- Consider setting up an [HTTPS reverse proxy](/tutorials/nginx-reverse-proxy/) using a third-party web server like Nginx or Apache to access Kibana securely.
## Test the installation
-Make sure everything is working:
+After completing the setup, verify that everything is working:
+
+- Elasticsearch:
+ Run the following command to check Elasticsearch health:
+ ```bash
+ curl -X GET "localhost:9200/_cluster/health?pretty"
+ ```
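+
+  If you enabled TLS and authentication in the Elasticsearch configuration above, the plain HTTP request will be rejected. In that case, query over HTTPS and authenticate, for example with the built-in `elastic` user (`-k` skips verification of a self-signed certificate):
+  ```bash
+  curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"
+  ```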
+
+- Kibana:
+ Navigate to `http://your_server_ip:5601` in your web browser.
-- Elasticsearch Run the following command to test your Elasticsearch installation: `curl -X GET "localhost:9200/"`
-- Kibana: Navigate to `http://your_server_ip:5601` in your web browser.
+- Filebeat:
+  Ensure logs are being shipped by checking the service status and testing the output connection:
+  ```bash
+  systemctl status filebeat
+  filebeat test output
+ ```
Now, you should have a basic Elastic stack up and running! Adjust configurations as needed for your specific use case and further secure and optimize your setup for production use.
Refer to the [official Elastic documentation](https://www.elastic.co/guide/index.html) for the most accurate and up-to-date instructions and advanced configuration information.
\ No newline at end of file
diff --git a/tutorials/configure-graphite/index.mdx b/tutorials/configure-graphite/index.mdx
index 9ec0c12123..2d2ba1faa3 100644
--- a/tutorials/configure-graphite/index.mdx
+++ b/tutorials/configure-graphite/index.mdx
@@ -1,243 +1,209 @@
----
+---
meta:
- title: Installing and Configuring Graphite on Ubuntu 20.04
- description: This tutorial explains how to install and configure Graphite on Ubuntu 20.04
+ title: Installing and Configuring Graphite on Ubuntu 22.04
+ description: This tutorial explains how to install and configure Graphite on Ubuntu 22.04
content:
- h1: Installing and Configuring Graphite on Ubuntu 20.04
- paragraph: This tutorial explains how to install and configure Graphite on Ubuntu 20.04
+ h1: Installing and Configuring Graphite on Ubuntu 22.04
+ paragraph: This tutorial explains how to install and configure Graphite on Ubuntu 22.04
tags: Graphite Ubuntu
- kubernetes
- instances
- elastic-metal
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2018-08-06
----
+---
-In this tutorial, you will find a comprehensive guide to understanding and using Graphite, a widely-used monitoring tool. Graphite serves two primary functions:
+Graphite is a popular tool for monitoring and visualizing time-series data. It performs two main functions:
-1. **Efficient Time-Series Data Storage:** Graphite excels in storing time-related data efficiently, ensuring minimal resource consumption while retaining data integrity.
-2. **Flexible Visualization and Data Manipulation:** It provides a robust interface for visualizing stored data and offers the flexibility to perform mathematical operations such as summation, grouping, and scaling in real time.
+1. **Efficient Time-Series Data Storage**: Graphite stores time-related data with minimal resource usage and ensures data integrity.
+2. **Flexible Visualization and Data Manipulation**: It allows you to visualize the stored data and perform mathematical operations (like sum, scaling, or grouping) in real time.
-Throughout this tutorial, we will dive into the technical aspects of Graphite, allowing you to discover its capabilities effectively for your monitoring needs.
+This tutorial provides the steps needed to install and configure Graphite on **Ubuntu 22.04** and get started with monitoring and visualizing your metrics.
-- A Scaleway account logged into the [console](https://console.scaleway.com)
-- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
-- An [Instance](/instances/how-to/create-an-instance/) running Ubuntu 20.04 LTS
-- A valid [API key](/iam/how-to/create-api-keys/)
-- `sudo` privileges or access to the root user
+- A **Scaleway account** logged into the [console](https://console.scaleway.com)
+- **Owner** status or **IAM permissions** that allow performing actions in the intended Organization
+- An **SSH key** for server access
+- An **Ubuntu 22.04 LTS** instance up and running
+- **API key** for interacting with Scaleway’s services
+- **`sudo` privileges** or root user access to the system
## Installing Graphite
-1. Graphite and the required tools can be installed easily via apt. Update the apt cache and upgrade the packages already installed on the system:
+1. Update the system to make sure all packages are up to date:
+ ```bash
+ sudo apt update && sudo apt upgrade -y
```
- apt-get update && apt-get upgrade -y
- ```
-2. Install Graphite:
- ```
- apt-get install graphite-web graphite-carbon
+
+2. Install the required packages for Graphite:
+ ```bash
+ sudo apt install -y graphite-web graphite-carbon
```
## Configuring the Graphite web application
-1. Open the configuration file in which you have to make the modifications explained afterward:
- ```
- nano /etc/graphite/local_settings.py
+1. **Edit the configuration file** for the web interface:
+ ```bash
+ sudo nano /etc/graphite/local_settings.py
```
-2. Set a `SECRET_KEY` that will be used as a salt when creating hashes. Uncomment the line and set it to secure key:
- ```
- SECRET_KEY = 'a_salty_string'
- ```
-3. Uncomment the `TIME_ZONE` parameter and configure it to your local [time-zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones):
+
+2. Set a `SECRET_KEY` to secure Graphite (you can generate a secure key online or use `openssl`):
+ ```python
+ SECRET_KEY = 'your_generated_secure_key'
```
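+
+   For example, you could generate a random key with `openssl` and paste the output into the file:
+   ```bash
+   openssl rand -hex 32
+   ```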
+
+3. Set the correct `TIME_ZONE` (adjust to your local timezone, e.g. `Europe/Paris`):
+ ```python
TIME_ZONE = 'Europe/Paris'
```
+
4. Save the file and quit the editor.
-5. Synchronize the database to create the required database layout:
- ```
- graphite-manage migrate
- ```
-## Configuring Carbon
+5. Migrate the database to set up the initial database schema:
+ ```bash
+ sudo graphite-manage migrate
+ ```
-As the database is ready now, continue with the configuration of Carbon, the storage backend of Graphite.
+## Configuring Carbon (storage backend)
-1. Open the service configuration file with a text editor:
- ```
- nano /etc/default/graphite-carbon
- ```
-2. There is only one configuration parameter in the file, to make sure Carbon starts at boot, change the value to `true`.
+1. Edit the Carbon configuration to ensure it starts on boot:
+ ```bash
+ sudo nano /etc/default/graphite-carbon
```
+
+2. Change the `CARBON_CACHE_ENABLED` option to `true`:
+ ```bash
CARBON_CACHE_ENABLED=true
```
-3. Save the file and quit the Editor once you have changed the value.
-4. Open the Carbon configuration file:
- ```
- nano /etc/carbon/carbon.conf
- ```
-5. Enable log rotation by setting the following value to true:
- ```
- ENABLE_LOGROTATION = True
- ```
-6. Save the file and quit the Editor once you have changed the value.
-## Configuring Storage Schemas
+3. Save and quit the editor.
-1. Open the storage schemas file. It contains information about how long and how detailed values should be stored:
- ```
- nano /etc/carbon/storage-schemas.conf
- ```
-2. The content of the file will look like this example:
+4. Open the main Carbon configuration file:
+ ```bash
+ sudo nano /etc/carbon/carbon.conf
```
- [carbon]
- pattern = ^carbon\.
- retentions = 60:90d
- [default_1min_for_1day]
- pattern = .*
- retentions = 60s:1d
+5. Enable **log rotation** by setting the following value to `True`:
+ ```bash
+ ENABLE_LOGROTATION = True
```
- By default two sections are defined in the file:
-
- - The first one is to decide what to do with data coming from Carbon itself, as it is configured by default to store some of its own performance metrics.
- - The second one is a catch-all section that applies to any data that hasn't been matched by any other section. It has to remain always the last section of the file.
+6. Save and quit.
- Each section is defined by section headers, namely the words in the brackets.
- Below each section header, a pattern definition and retention policy are defined.
+## Configuring storage schemas
- The pattern definition is a regular expression, used to match any information sent to Carbon. All information sent to Carbon includes a metric name, which is checked by the pattern definition. In the first section, all metrics in question start with the string "carbon.".
-
- The retention policy is defined by sets of numbers, consisting of a metric interval (defining how often a metric is recorded), followed by a colon and the storage duration of these values. It is possible to define multiple sets of retention policies, separated by commas.
-3. Define a new retention policy that is triggered by a "test." pattern and will be used later:
+1. Open the storage schemas file to adjust retention policies:
+ ```bash
+ sudo nano /etc/carbon/storage-schemas.conf
```
+
+2. Update the file to include your custom retention settings. Here’s an example:
+ ```bash
[test]
pattern = ^test\.
retentions = 10s:10m,1m:1h,10m:1d
```
-
- Remember to place the policy **before** the catch-all block in the configuration file.
-
-
- This section will store the data it collects three times with different levels of detail. The first collection `10s:10m` will create a data point every ten seconds and data is stored for only 10 minutes.
-
- The second collection `1m:1h` will create a data point every minute by gathering all the data from the past minute from the first collection. The information in the data point is aggregated by averaging the points (six points, as the first collection creates a point every 10 seconds). Data will be stored in this collection within one hour.
-
- The last collection `10m:1d` will make a data point every 10 minutes by aggregating the information gathered from the second collection in the same way. Data will be stored in this collection for one day.
+   This configuration stores the data with varying levels of detail, from 10-second intervals to 10-minute intervals, depending on the data's age. Place custom sections like this one **before** the catch-all `[default_1min_for_1day]` section, which must remain the last section in the file.
- Graphite returns the data from the most detailed collection that measures the requested time frame when asking for information. This means:
- - If metrics for the past 5 minutes will be requested, Graphite will return information from the first collection.
- - If metrics for the past 50 minutes will be requested, Graphite will return information from the second collection.
-4. Save and close the file when you have finished editing it.
+3. Save and close the file.
## Defining storage aggregation methods
-To gather accurate metrics, it is essential to understand the way Carbon decides when it crunches detailed information into a generalized number.
-This happens whenever Graphite converts more detailed metrics into less detailed ones, like in the second and third collections in the test schema above.
+To control how Graphite condenses detailed data points into less detailed ones (as in the retention tiers above), you can adjust its aggregation settings:
-The default behavior is to get the average value when aggregating. This means unlike the most detailed information in the first collection, information is less accurate in the second and third collections.
-
-This may not always be useful though. For example, to count up the total number of times an event occurred over different periods of time it is not useful to average them but to count each event.
-
-It is possible to define the way Graphite aggregates metrics with a file called `storage-aggregation.conf`.
-
-1. Copy the file from the examples directory to the actual configuration directory:
- ```
- cp /usr/share/doc/graphite-carbon/examples/storage-aggregation.conf.example /etc/carbon/storage-aggregation.conf
+1. Copy the example configuration to the correct directory:
+ ```bash
+ sudo cp /usr/share/doc/graphite-carbon/examples/storage-aggregation.conf.example /etc/carbon/storage-aggregation.conf
```
-2. Open the file in a text editor:
- ```
- nano /etc/carbon/storage-aggregation.conf
- ```
-3. You will see a file similar to this content:
+
+2. Edit the aggregation file:
+ ```bash
+ sudo nano /etc/carbon/storage-aggregation.conf
```
+
+3. Adjust the `aggregationMethod` to suit your data collection needs (e.g., `sum` for event counts, `average` for metrics like temperature):
+ ```bash
[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min
```
- It looks similar to the previous file, and the section name and pattern are exactly the same as in the storage-schemas file.
-
- The `xFilesFactor` is what we will take a closer look at. It allows specifying the minimum percentage of values that Carbon must have to create an aggregated data-point. For example, if the value is set to 0.5, it requires that 50% of the more detailed data points be available to create an aggregated point.
+4. Save and close the file.
- This can be useful to avoid creating data-points misrepresenting the actual situation.
-
- `aggregationMethod` defines the way data is recorded. Possible values are `average`, `sum`, `last`, `max`, and `min`. It is important to choose the correct value to avoid that your data will be recorded inaccurately. The correct selection depends on the kind of metrics that you are tracking.
-4. Once you have edited the file to your needs, save and close the file.
-5. Start Carbon by typing:
- ```
- service carbon-cache start
+5. Start the Carbon service to begin storing and aggregating data:
+ ```bash
+ sudo service carbon-cache start
```
## Installing and configuring Apache
-To use the web interface of Graphite, a web server is required. The software comes with a pre-defined configuration for Apache, making configuration pretty easy.
+To access the Graphite web interface, you need a web server. Here, we'll use **Apache**.
-1. Install the Apache web server and the required module:
- ```
- apt-get install apache2 libapache2-mod-wsgi-py3
- ```
-2. Disable the default site of Apache, as it won't be needed:
- ```
- a2dissite 000-default
+1. Install Apache and the required modules for Python 3:
+ ```bash
+ sudo apt install -y apache2 libapache2-mod-wsgi-py3
```
-3. Copy the Apache configuration file of Graphite into the available sites directory of Apache:
- ```
- cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available
- ```
-4. Enable the Graphite site:
+
+2. Disable the default site to prevent conflicts:
+ ```bash
+ sudo a2dissite 000-default
```
- a2ensite apache2-graphite
+
+3. Copy the Graphite Apache configuration:
+ ```bash
+ sudo cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available/
```
-5. Reload the configuration of Apache:
+
+4. Enable the Graphite site:
+ ```bash
+ sudo a2ensite apache2-graphite
```
- service apache2 reload
+
+5. Reload Apache to apply changes:
+ ```bash
+ sudo service apache2 reload
```
-6. Check if the web interface is working by pointing your browser to `http://YOUR_SERVERS_IP`. The following interface will be visible:
-
+
+6. Verify the web interface by opening your browser and navigating to `http://YOUR_SERVER_IP`.
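+
+   You can also run a quick check from the server itself, assuming `curl` is installed; the command below should return an HTTP status line from Apache:
+   ```bash
+   curl -I http://localhost/
+   ```
+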
## Discovering the web interface
-1. Login by clicking on the **Login-Button** on the top of the page. Enter the username and password that you have set when you have synchronized the database. In case you do not remember these credentials or if you want to add another superuser, run the following command to create a new user:
+1. Log in to Graphite by clicking the **Login** button at the top of the page. If you do not have credentials yet, create a superuser:
+ ```bash
+ sudo graphite-manage createsuperuser
```
- graphite-manage createsuperuser
- ```
-2. You will notice a menu on the left of the Screen. Click **carbon** to see the metrics that are collected by Graphite. Currently, you will only see the data the application is gathering about its performance:
-
-3. It is also possible to create dashboards from the data Graphite collects to have an overview of the different metrics:
-
-## Feeding data to graphite
+2. Access your **carbon metrics** from the menu on the left to view data collected by Graphite.
-Graphite is very flexible when it comes to the origin of data. There are three main methods for sending data to Graphite: [Plaintext](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol), [Pickle](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-pickle-protocol), and [AMQP](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#using-amqp).
+3. Create **dashboards** to monitor various metrics and visualize your data.
-1. Previously a `test` block has been created in the storage-schemas file. It will be used now to send some data to Graphite from a terminal.
-2. Type the following command in a terminal. You can replace the value `42` with some different numbers to see what it does:
- ```
- echo "test.count 42 `date +%s`" | nc -q0 localhost 2003
+## Feeding data to Graphite
+
+Now that Graphite is set up, you can send data to it. For simplicity, use the **Plaintext protocol** to send data directly from the terminal:
+
+1. Send test data using the following command:
+ ```bash
+ echo "test.count 42 $(date +%s)" | nc -q0 localhost 2003
```
- Metric messages need to contain a metric name, a value, and a timestamp.
+   This sends a metric named `test.count` with the value `42` and the current Unix timestamp.
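+
+   To produce a more interesting graph, you could send one value every ten seconds for a couple of minutes, for example with a small loop like this:
+   ```bash
+   for i in $(seq 1 12); do
+     echo "test.count $((RANDOM % 100)) $(date +%s)" | nc -q0 localhost 2003
+     sleep 10
+   done
+   ```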
+
+2. Refresh the Graphite interface and see your new data displayed under the `test` schema.
+
+3. Adjust the time range in Graphite to see how the data appears over different periods.
+
+## Conclusion
- The example feeds the storage schema `test` with the metric `count` and the value `42`. To get the timestamp, the `date` command is used.
-3. Go back to the Graphite Web Interface and reload it. The new storage scheme **test** will appear in the menu.
-4. Set the time range to a few minutes, by clicking on the _clock-icon_:
-
-5. The values that you have sent will appear in the graph:
-
-6. Wait for 15 minutes and refresh the graph by setting the time frame to the past 15 minutes. The graph will look different:
-
+You have now successfully installed and configured Graphite on **Ubuntu 22.04**. You've learned how to:
-This is because our first collection does not store data for 15 minutes and Graphite will look into the second collection for rendering the graph.
+- Set up the Graphite web application and configure the backend with Carbon.
+- Set data retention policies and aggregation methods.
+- Feed test data into Graphite and visualize it through the web interface.
-As the data was sent with a _count_ metric, Graphite adds up the values in the larger intervals instead of averaging them. It is therefore essential to choose the right metric for each use case.
+For production environments, consider using tools to automate data collection, as sending metrics via the terminal is not recommended for long-term use.
-
- - Pushing Content from a terminal is not the usual way to send data to Graphite. Instead, you will use a tool to automatize the collection of data.
- - A complete list of tools that work with Graphite is available in the [official documentation](https://graphite.readthedocs.io/en/latest/tools.html).
-
\ No newline at end of file
+For more details, refer to the [official Graphite documentation](https://graphite.readthedocs.io/en/latest/).
\ No newline at end of file
diff --git a/tutorials/configure-smtp-relay-tem/index.mdx b/tutorials/configure-smtp-relay-tem/index.mdx
index 9aae07f40f..05599b2fe6 100644
--- a/tutorials/configure-smtp-relay-tem/index.mdx
+++ b/tutorials/configure-smtp-relay-tem/index.mdx
@@ -10,7 +10,7 @@ categories:
- transactional-email
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2023-08-08
---
diff --git a/tutorials/deploy-phpmyadmin-with-docker/index.mdx b/tutorials/deploy-phpmyadmin-with-docker/index.mdx
index c822952b7d..d2de012e74 100644
--- a/tutorials/deploy-phpmyadmin-with-docker/index.mdx
+++ b/tutorials/deploy-phpmyadmin-with-docker/index.mdx
@@ -10,7 +10,7 @@ categories:
- postgresql-and-mysql
tags: phpMyAdmin Docker InstantApp MySQL
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2018-06-04
---
diff --git a/tutorials/deploy-ubuntu-20-04-instance-scaleway-elements/index.mdx b/tutorials/deploy-ubuntu-20-04-instance-scaleway-elements/index.mdx
index 3bddb18e82..c1ed20bd88 100644
--- a/tutorials/deploy-ubuntu-20-04-instance-scaleway-elements/index.mdx
+++ b/tutorials/deploy-ubuntu-20-04-instance-scaleway-elements/index.mdx
@@ -1,6 +1,6 @@
---
meta:
- title: Deploying an Ubuntu 20.04 LTS (Focal Fossa) Instance on Scaleway
+ title: Deploying an Ubuntu 20.04 LTS (Focal Fossa) Instance on Scaleway (End of Standard Support in May 2025)
description: In this tutorial, you will learn how to deploy, update, and manage an Ubuntu 20.04 LTS (Focal Fossa) Instance on Scaleway.
content:
h1: Deploying an Ubuntu 20.04 LTS (Focal Fossa) Instance on Scaleway
@@ -9,10 +9,16 @@ tags: Ubuntu focal-fossa
categories:
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-05
posted: 2021-12-02
---
+
+  Ubuntu 20.04 LTS (Focal Fossa) will reach its **End of Standard Support in May 2025**. For security and maintenance reasons, we strongly recommend deploying a more recent release, such as Ubuntu 22.04 LTS (Jammy Jellyfish) or later, so that your Instance keeps receiving security updates and new features.
+
+
Ubuntu is one of the most popular Linux distributions. First released in 2004, Ubuntu quickly became the favorite Linux distribution for users around the world, mostly because it is easy to install and use.
Ubuntu is developed and maintained by the company [Canonical](https://canonical.com/) and a [large community](https://loco.ubuntu.com/). Their commercial and community teams release new versions of the distribution every six months and collaborate to produce a single, high-quality release with long-term support (LTS) every two years.
diff --git a/tutorials/deploy-ubuntu-22-04-instance/index.mdx b/tutorials/deploy-ubuntu-22-04-instance/index.mdx
index 863113eb34..bb4527d7a5 100644
--- a/tutorials/deploy-ubuntu-22-04-instance/index.mdx
+++ b/tutorials/deploy-ubuntu-22-04-instance/index.mdx
@@ -9,7 +9,7 @@ tags: Ubuntu jammy-jellyfish
categories:
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2022-08-11
---
diff --git a/tutorials/focalboard-project-management/index.mdx b/tutorials/focalboard-project-management/index.mdx
index 60e5f99616..8fe9f380cc 100644
--- a/tutorials/focalboard-project-management/index.mdx
+++ b/tutorials/focalboard-project-management/index.mdx
@@ -10,7 +10,7 @@ hero:
categories:
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2023-02-22
---
diff --git a/tutorials/mariadb-ubuntu-bionic/index.mdx b/tutorials/mariadb-ubuntu-bionic/index.mdx
index 03ceff6c01..b139e07086 100644
--- a/tutorials/mariadb-ubuntu-bionic/index.mdx
+++ b/tutorials/mariadb-ubuntu-bionic/index.mdx
@@ -10,7 +10,7 @@ categories:
- instances
- postgresql-and-mysql
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2018-06-20
---
diff --git a/tutorials/mastodon-community/index.mdx b/tutorials/mastodon-community/index.mdx
index b83fba389a..55e6be8548 100644
--- a/tutorials/mastodon-community/index.mdx
+++ b/tutorials/mastodon-community/index.mdx
@@ -10,7 +10,7 @@ categories:
tags: messaging social-network Prework Mastodon ubuntu
hero: assets/scaleway_mastodon.webp
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2019-03-05
---
@@ -29,7 +29,7 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
- A [domain or subdomain](/domains-and-dns/quickstart/) pointed to your Instance
- Enabled the SMTP ports to send out email notifications
-## Installing Prework
+## Installing prework
1. [Connect to your Instance](/instances/how-to/connect-to-instance/) via SSH.
2. Update the APT package cache and the software already installed on the Instance.
@@ -39,7 +39,7 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
3. Install `curl` on the system and add an external repository for the required version of Node.js. Install it by running the following commands:
```
apt install curl -y
- curl -sL https://deb.nodesource.com/setup_16.x | bash -
+ curl -sL https://deb.nodesource.com/setup_18.x | bash -
```
Mastodon uses the [Yarn](https://yarnpkg.com/en/) package manager.
4. Install the repository for the required version of Yarn by running the following commands:
@@ -52,12 +52,12 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
wget -O /usr/share/keyrings/postgresql.asc https://www.postgresql.org/media/keys/ACCC4CF8.asc
echo "deb [signed-by=/usr/share/keyrings/postgresql.asc] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/postgresql.list
```
-6. Update the system and install Yarn.
+6. Update the system and install `yarn`.
```
apt-get update && apt-get install -y yarn
```
7. Install the following packages, which Mastodon depends on:
- - [Imagemagick](https://www.imagemagick.org/) for image related operations
+ - [Imagemagick](https://www.imagemagick.org/) for image-related operations
- [FFmpeg](https://www.ffmpeg.org/) for conversion of GIFs to MP4s
- [Protobuf](https://github.com/protocolbuffers/protobuf) with `libprotobuf-dev` and `protobuf-compiler`, used for language detection
- [Nginx](https://nginx.org/) as frontend web server
@@ -65,7 +65,7 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
- [PostgreSQL](https://www.postgresql.org/) is used as an SQL database for Mastodon
- [Node.js](https://nodejs.org/en/) is used for Mastodon's streaming API
- [Yarn](https://yarnpkg.com/en/) is a Node.js package manager
- - [Certbot](https://certbot.eff.org/) is a tool to manage TLS certificated issued by the [Let's Encrypt](https://letsencrypt.org/) nonprofit "Certificate Authority" (CA)
+ - [Certbot](https://certbot.eff.org/) is a tool to manage TLS certificates issued by the [Let's Encrypt](https://letsencrypt.org/) nonprofit "Certificate Authority" (CA)
- other `-dev` packages and `g++`. These packages are required for the compilation of Ruby using ruby-build.
```
@@ -94,7 +94,7 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
cd ~/.rbenv && src/configure && make -C src
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
- # Restart the users shell
+ # Restart the user's shell
exec bash
# Check if rbenv is correctly installed
type rbenv
@@ -103,8 +103,8 @@ Mastodon provides the possibility of using [Amazon S3-compatible Object Storage]
```
11. Install and enable the version of [Ruby](https://www.ruby-lang.org/en/) that is used by Mastodon.
```
- RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 2.6.6
- rbenv global 2.6.6
+ RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.1.0
+ rbenv global 3.1.0
```
This step may take up to several minutes to complete.
@@ -137,7 +137,7 @@ Mastodon requires access to a PostgreSQL database to store its configuration and
```
2. Enter the user's home directory and clone the Mastodon Git repository into the `live` directory:
```
- git clone https://github.com/tootsuite/mastodon.git live && cd live
+ git clone https://github.com/mastodon/mastodon.git live && cd live
```
3. Check out to the latest stable branch.
```
@@ -155,35 +155,40 @@ Mastodon requires access to a PostgreSQL database to store its configuration and
```
6. Type and enter `exit` to return to the root account.
-## Requesting a Let's Encrypt Certificate
+## Requesting a Let's Encrypt certificate
-1. Stop Nginx before requesting the certificate:
+1. Install Certbot via Snap:
```
- systemctl stop nginx.service
+ snap install --classic certbot
```
-2. Use `certbot` to request a certificate with TLS SNI validation in standalone mode. Replace `example.com` with your domain name:
+2. Stop Nginx before requesting the certificate:
```
- certbot certonly --standalone -d example.com
+ systemctl stop nginx.service
```
- As Let's Encrypt certificates have a validity of 90 days, a cron-job can be used to renew them and restart Nginx automatically.
-3. Create a new file and open it in a text editor like nano:
- ```
- nano /etc/cron.daily/letsencrypt-renew
- ```
-4. Copy the following content into the file, save it, and exit nano:
+3. Use `certbot` to request a certificate in standalone mode. Replace `example.com` with your domain name:
```
- #!/usr/bin/env bash
- certbot renew
- systemctl reload nginx.service
- ```
-5. Allow execution of the script and restart the cron daemon. It will run the script daily:
- ```
- chmod +x /etc/cron.daily/letsencrypt-renew
- systemctl restart cron.service
+ certbot certonly --standalone -d example.com
```
+4. Set up automatic renewal using a cron job:
+ - Create a new cron job:
+ ```
+ nano /etc/cron.daily/letsencrypt-renew
+ ```
+ - Copy the following content into the file:
+ ```
+ #!/usr/bin/env bash
+ certbot renew
+ systemctl reload nginx.service
+ ```
+ - Save and exit, then make the script executable:
+ ```
+ chmod +x /etc/cron.daily/letsencrypt-renew
+ systemctl restart cron.service
+ ```
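+   - Optionally, run a dry run to confirm that renewal works without touching your live certificates:
+     ```
+     certbot renew --dry-run
+     ```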
+
## Configuring Nginx
1. Copy the example configuration file shipped with Mastodon in your Nginx `sites-available` directory and create a symlink to it in the `sites-enabled` directory:
@@ -195,202 +200,12 @@ Mastodon requires access to a PostgreSQL database to store its configuration and
```
nano /etc/nginx/sites-available/mastodon
```
-3. Replace `example.com` in the configuration file with your domain or subdomain. Replace with your domain in all following occurrences as well.
+3. Update the `server_name` directive in the configuration file to reflect your domain name:
```
- map $http_upgrade $connection_upgrade {
- default upgrade;
- '' close;
- }
-
- upstream backend {
- server 127.0.0.1:3000 fail_timeout=0;
- }
-
- upstream streaming {
- server 127.0.0.1:4000 fail_timeout=0;
- }
-
- proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g;
-
- server {
- listen 80;
- listen [::]:80;
- server_name example.com; <- /!\ Replace example.com with your domain name /!\
- root /home/mastodon/live/public;
- location /.well-known/acme-challenge/ { allow all; }
- location / { return 301 https://$host$request_uri; }
- }
-
- server {
- listen 443 ssl http2;
- listen [::]:443 ssl http2;
- server_name example.com;
-
- ssl_protocols TLSv1.2 TLSv1.3;
- ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
- ssl_prefer_server_ciphers on;
- ssl_session_cache shared:SSL:10m;
-
- # Uncomment these lines once you acquire a certificate:
- # ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; <- /!\ Replace example.com with your domain name and uncomment this line /!\
- # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; <- /!\ Replace example.com with your domain name and uncomment this line /!\
-
- keepalive_timeout 70;
- sendfile on;
- client_max_body_size 80m;
-
- root /home/mastodon/live/public;
-
- gzip on;
- gzip_disable "msie6";
- gzip_vary on;
- gzip_proxied any;
- gzip_comp_level 6;
- gzip_buffers 16 8k;
- gzip_http_version 1.1;
- gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
-
- add_header Strict-Transport-Security "max-age=31536000";
-
- location / {
- try_files $uri @proxy;
- }
-
- location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
- add_header Cache-Control "public, max-age=31536000, immutable";
- add_header Strict-Transport-Security "max-age=31536000";
- try_files $uri @proxy;
- }
-
- location /sw.js {
- add_header Cache-Control "public, max-age=0";
- add_header Strict-Transport-Security "max-age=31536000";
- try_files $uri @proxy;
- }
-
- location @proxy {
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto https;
- proxy_set_header Proxy "";
- proxy_pass_header Server;
-
- proxy_pass http://backend;
- proxy_buffering on;
- proxy_redirect off;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection $connection_upgrade;
-
- proxy_cache CACHE;
- proxy_cache_valid 200 7d;
- proxy_cache_valid 410 24h;
- proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
- add_header X-Cached $upstream_cache_status;
- add_header Strict-Transport-Security "max-age=31536000";
-
- tcp_nodelay on;
- }
-
- location /api/v1/streaming {
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto https;
- proxy_set_header Proxy "";
-
- proxy_pass http://streaming;
- proxy_buffering off;
- proxy_redirect off;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection $connection_upgrade;
-
- tcp_nodelay on;
- }
-
- error_page 500 501 502 503 504 /500.html;
- }
+ server_name example.com;
```
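+
+   Before restarting Nginx, you can check the configuration for syntax errors:
+   ```
+   nginx -t
+   ```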
-4. Save and exit.
-## Configuring the Mastodon application
-
-1. Enter the `mastodon` user account:
+4. Restart Nginx to apply the configuration changes:
```
- su - mastodon
- ```
-2. Change into the **/home/mastodon/live** directory and run the Mastodon installer:
- ```
- cd /home/mastodon/live
- RAILS_ENV=production bundle exec rake mastodon:setup
- ```
-
- The interactive installer guides you through the setup process.
-3. Enter the domain name or subdomain of the Instance.
-4. Select **No** when asked if you want to use Docker.
-
- Most of the other values are already pre-filled with the correct settings. Edit them if required for your setup.
-5. Select **Amazon S3** as a service provider to set up Mastodon with Object Storage. Valid [API keys](/iam/how-to/create-api-keys/) are required in this step.
-
- Enter the details as follows:
-
- ```
- Provider Amazon S3
- Object Storage bucket name: [scaleway_bucket_name]
- S3 region: fr-par
- S3 hostname: s3.fr-par.scw.cloud
- S3 access key: [scaleway_access_key]
- S3 secret key: [scaleway_secret_key]
- ```
-
-
- If your bucket is located in Amsterdam, use `nl-ams` as the region and `s3.nl-ams.scw.cloud` as the S3 hostname If it is located in Warsaw, use `pl-waw` and `s3.pl-waw.scw.cloud`.
-
-
- Once the configuration is complete, the installer will start to compile the application. This may take some time and consume a lot of RAM.
-
- Once the application is installed, you will be asked if you want to create an Administrator account for your Mastodon instance.
-3. Type `Y` to create the account. Enter the username for the admin user, followed by your email address. A random password will be generated. Take note of it, as you will need it to connect to your Instance.
- ```
- All done! You can now power on the Mastodon server 🐘
-
- Do you want to create an admin user straight away? Yes
- Username: admin <-- Enter the username for your Mastodon admin account
- E-mail: me@myemail.com <-- Enter your email address
- You can login with the password: 9dc4d92d93a26e9b6c021bb75b4a3ce2
- ```
-4. Type `exit` to switch back to the root account.
-
-## Setting-up systemd services
-
-Systemd scripts are used to manage services on Ubuntu systems. Three different scripts are required for Mastodon. These scripts come with the Mastodon package, you need to copy them to their final destination, and then activate the services.
-
-1. Copy the Mastodon `systemd` scripts into their final destination:
- ```
- cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/
- ```
-2. Reload the systemd daemon:
- ```
- systemctl daemon-reload
- ```
-3. Start the services and enable them, so they will start automatically upon the next system reboot:
- ```
- systemctl start mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service
- systemctl enable mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service
- ```
-4. Verify if all services are running:
- ```
- systemctl status mastodon-*.service
- ```
-
-If everything is running, open a web browser and go to your domain name. You will see the home page of your Mastodon instance:
-
-
-
-You can log in with the admin account created during the installation to configure additional parameters of your instance, link Instances to join a federated network, create another user account, and start sharing posts and photos on your timeline. If configured with Object Storage, all files uploaded to the instance are automatically stored in the Object Storage bucket and embedded in the users' timeline:
-
-
-
-For more information and advanced configuration of Mastodon, refer to the [official documentation](https://github.com/tootsuite/documentation#running-mastodon).
\ No newline at end of file
+ systemctl restart nginx
+ ```
\ No newline at end of file
diff --git a/tutorials/migrate-databases-instance/index.mdx b/tutorials/migrate-databases-instance/index.mdx
index dfd50432e7..62f50ec8cd 100644
--- a/tutorials/migrate-databases-instance/index.mdx
+++ b/tutorials/migrate-databases-instance/index.mdx
@@ -10,7 +10,7 @@ categories:
- postgresql-and-mysql
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2020-09-21
---
diff --git a/tutorials/migrate-dedibox-to-elastic-metal/index.mdx b/tutorials/migrate-dedibox-to-elastic-metal/index.mdx
index 22b2560dc6..459dea19ff 100644
--- a/tutorials/migrate-dedibox-to-elastic-metal/index.mdx
+++ b/tutorials/migrate-dedibox-to-elastic-metal/index.mdx
@@ -1,27 +1,28 @@
---
meta:
title: Transferring your data from Dedibox to Elastic Metal
- description: This tutorial provides information about how to migrate your existing data from a Scaleway Dedibox to an Elastic Metal server
+ description: This tutorial provides updated information on migrating your existing data from a Scaleway Dedibox to an Elastic Metal server.
content:
h1: Transferring your data from Dedibox to Elastic Metal
- paragraph: This tutorial provides information about how to migrate your existing data from a Scaleway Dedibox to an Elastic Metal server
+ paragraph: This tutorial provides updated information on migrating your existing data from a Scaleway Dedibox to an Elastic Metal server.
tags: dedibox elastic-metal migration
categories:
- dedibox
- elastic-metal
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2022-01-25
---
-This tutorial provides information about how to migrate your existing data from a Dedibox to an Elastic Metal server. Its purpose is to guide you in your migration to the resources that best fit your needs for improved stability, performance, and reliability.
+This tutorial provides step-by-step guidance for migrating your existing data from a Dedibox to an Elastic Metal server, ensuring improved stability, performance, and reliability.
-We use Duplicity to encrypt the backup and upload it to Object Storage. Then we download and decrypt the data on the Elastic Metal server.
+We use **Duplicity** to encrypt the backup and upload it to Object Storage. Then, we download and decrypt the data on the Elastic Metal server.
+### Prerequisites
- A Scaleway account logged into the [console](https://console.scaleway.com)
-- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Owner](/iam/concepts/#owner) status or appropriate [IAM permissions](/iam/concepts/#permission)
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
- A Dedibox server
- [Created](/elastic-metal/how-to/create-server/) and [installed](/elastic-metal/how-to/install-server/) an Elastic Metal server
@@ -29,205 +30,129 @@ We use Duplicity to encrypt the backup and upload it to Object Storage. Then we
## Creating an Object Storage bucket
1. Log in to the Scaleway console.
-2. Click **Storage** on the side menu. A list of your buckets appears. If you have not created a bucket yet, the list will be empty.
-3. Click **Create a Bucket**.
-4. Name your bucket and validate your bucket creation. The bucket name must be unique and contain only alphanumeric and lowercase characters.
+2. Click **Storage** on the side menu to view your buckets.
+3. Click **Create a Bucket**, enter a unique name (lowercase alphanumeric characters only), and validate.
+4. Ensure the correct **region** is selected (e.g., `fr-par` or `nl-ams`), as it determines the endpoint you will use later (`s3.fr-par.scw.cloud`, `s3.nl-ams.scw.cloud`).
-## Installing software requirements on the Dedibox server
+## Installing software requirements on the Dedibox server
- To make sure we can generate a GPG Key, we need to create some entropy. We suggest using [Haveged](https://github.com/jirka-h/haveged) constantly on your server to generate a small amount of entropy.
+ To ensure entropy for generating a GPG key, install and run [Haveged](https://github.com/jirka-h/haveged).
-Run the following command to update the APT package manager, upgrade the software already installed on the server, and download and install Duplicity:
+Run the following commands to update your system and install Duplicity:
```sh
-apt update && apt upgrade
-apt install -y python3-boto python3-pip haveged gettext librsync-dev
-wget https://gitlab.com/duplicity/duplicity/-/archive/rel.2.2.2/duplicity-rel.2.2.2.tar.gz
-tar xaf duplicity-2.2.*.tar.gz
-cd duplicity-2.2.*/
-python3 -m pip install -r requirements.txt
-python3 -m pip install
+apt update && apt upgrade -y
+apt install -y python3-boto3 python3-pip haveged gettext librsync-dev pipx
+python3 -m pip install --upgrade pip
```
+### Installing Duplicity
+
+Choose one of the following installation methods, depending on whether you want to install for all users or just the current user:
+
+#### Install for all users (recommended)
+```sh
+sudo pipx install --global duplicity
+```
+This will install Duplicity in `/usr/local/bin/duplicity` and its dependencies in `/opt/pipx/venvs/duplicity`.
+
+#### Install for current user only
+```sh
+pipx install duplicity
+```
+This will install Duplicity in `~/.local/bin/duplicity` and its dependencies in `~/.local/pipx/venvs/duplicity`.
+
+For more information, visit the [Duplicity GitLab page](https://gitlab.com/duplicity/duplicity).
+
- In the command above, we download Duplicity version `2.2.2`. Check the [Duplicity](https://duplicity.gitlab.io/) website for the latest version of the tool.
+ Always check the [Duplicity website](https://duplicity.gitlab.io/) for the latest version.
-### Creating a GPG key
+## Creating a GPG key
-1. To generate the GPG key, launch this command.
+1. Generate the GPG key:
```sh
gpg --full-generate-key
```
- Enter a passphrase. You will be asked to define the characteristics of your key. We will go with the default settings:
-
- * What kind of key you want: (1) RSA and RSA (default)
- * What keysize do you want: (3072)
- * How long the key should be valid: 0 = key does not expire
- * GPG will then ask how to call your key, an address, and a description.
-2. You need to use the GPG Key fingerprint, it could be an 8, 16, or 40 char long hash. You can also find the fingerprint of your key with the command:
- ```
+ Use default settings:
+ - Key type: **(1) RSA and RSA (default)**
+ - Key size: **3072**
+ - Expiration: **0 (never expires)**
+ - Assign a name, email, and comment.
+2. Retrieve the GPG Key fingerprint:
+ ```sh
gpg --list-keys
- gpg: checking the trustdb
- gpg: marginals needed: 3 completes needed: 1 trust model: pgp
- gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
- /home/me/.gnupg/pubring.kbx
- ------------------------------
-
- pub rsa3072 2022-01-25 [SC]
- XXXXXXXXXXXXX-FINGERPRINT-XXXXXXXXXXXXXX
- uid [ultimate] backups (Scaleway Object Storage backups)
- sub rsa3072 2022-01-25 [E]
```
-### Transferring the PGP key to the Elastic Metal server
+## Transferring the GPG key to the Elastic Metal server
-1. Export the keys so you can decrypt your files on the Elastic Metal server. Also having the GPG private and public keys stored somewhere else will come in handy in case you lose access to your machine. Export the GPG keys with:
- ```
- gpg --export-secret-key keyname > ~/my-key.asc
- ```
-2. Transfer the key to the Elastic Metal server using `rsync`.
+1. Export the GPG private key:
+ ```sh
+ gpg --export-secret-key --armor "your-key-id" > ~/my-key.asc
```
- scp /root/my-key.asc root@:/root/
+2. Securely transfer the key:
+ ```sh
+   scp ~/my-key.asc root@<elastic_metal_server_ip>:/root/
```
-### Backing up your Dedibox
+## Backing up your Dedibox
-1. Create the required folders and a configuration file for Duplicity:
- ```
+1. Create the necessary files and directories:
+ ```sh
touch scw-backup.sh .scw-configrc
chmod 700 scw-backup.sh
chmod 600 .scw-configrc
-
mkdir -p /var/log/duplicity
touch /var/log/duplicity/logfile{.log,-recent.log}
```
-2. Add the following lines to `.scw-configrc`:
+2. Add the following configurations to `.scw-configrc`:
```sh
- # Scaleway credentials keys
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
- export SCW_BUCKET="s3://s3.fr-par.scw.cloud/"
-
- # GPG Key information
+   export SCW_BUCKET="s3://s3.fr-par.scw.cloud/<bucket-name>"
export PASSPHRASE=""
export GPG_FINGERPRINT=""
-
- # Folder to back up
- export SOURCE=""
-
- # Log files
+ export SOURCE=""
export LOGFILE_RECENT="/var/log/duplicity/logfile-recent.log"
export LOGFILE="/var/log/duplicity/logfile.log"
-
- log () {
- date=`date +%Y-%m-%d`
- hour=`date +%H:%M:%S`
- echo "$date $hour $*" >> ${LOGFILE_RECENT}
- }
- export -f log
```
-3. Copy the following script to `scw-backup.sh`:
+3. Backup script (`scw-backup.sh`):
```sh
#!/bin/bash
- source /.scw-configrc
-
- currently_backuping=$(ps -ef | grep duplicity | grep python | wc -l)
-
- if [ $currently_backuping -eq 0 ]; then
- # Clear the recent log file
- cat /dev/null > ${LOGFILE_RECENT}
-
- log ">>> creating and uploading backup to Object Storage"
- duplicity \
- full \
- --asynchronous-upload \
- --encrypt-key=${GPG_FINGERPRINT} \
- --sign-key=${GPG_FINGERPRINT} \
- ${SOURCE} ${SCW_BUCKET} >> ${LOGFILE_RECENT} 2>&1
-
- cat ${LOGFILE_RECENT} >> ${LOGFILE}
- fi
+ source .scw-configrc
+ duplicity full --encrypt-key=${GPG_FINGERPRINT} ${SOURCE} ${SCW_BUCKET}
```
4. Run the backup:
- ```
+ ```sh
./scw-backup.sh
```
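+5. Optionally, verify that the backup reached your bucket by listing the backup chain with Duplicity:
+   ```sh
+   source .scw-configrc
+   duplicity collection-status ${SCW_BUCKET}
+   ```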
-5. Check if everything went well:
- ```
- cat /var/log/duplicity/logfile-recent.log
- ```
## Restoring data on your Elastic Metal server
-1. Install the required prerequisites and duplicity on the Elastic Metal server:
+1. Install required packages:
```sh
- apt update && apt upgrade
- apt install -y python3-boto python3-pip gettext librsync-dev
- wget https://launchpad.net/duplicity/0.8-series/0.8.21/+download/duplicity-0.8.21.tar.gz
- tar xaf duplicity-0.8.*.tar.gz
- cd duplicity-0.8.*/
- pip3 install -r requirements.txt
- python3 setup.py install
- ```
-2. Import the GPG key on the Elastic Metal server:
- ```
- gpg --import my-key.asc
- ```
-3. Create the required folders and a configuration file for Duplicity:
+ apt update && apt upgrade -y
+ apt install -y python3-boto3 python3-pip gettext librsync-dev pipx
+ python3 -m pip install --upgrade pip
+   sudo pipx install --global duplicity
```
- touch scw-restore.sh .scw-configrc
- chmod 700 scw-restore.sh
- chmod 600 .scw-configrc
-
- mkdir -p /var/log/duplicity
- touch /var/log/duplicity/logfile{.log,-recent.log}
- ```
-4. Add the following lines to `.scw-configrc`:
+2. Import the GPG key:
```sh
- # Scaleway credentials keys
- export AWS_ACCESS_KEY_ID=""
- export AWS_SECRET_ACCESS_KEY=""
- export SCW_BUCKET="s3://s3.fr-par.scw.cloud/"
-
- # GPG Key information
- export PASSPHRASE=""
- export GPG_FINGERPRINT=""
-
- # Folder to back up
- export SOURCE=""
-
- # Log files
- export LOGFILE_RECENT="/var/log/duplicity/logfile-recent.log"
- export LOGFILE="/var/log/duplicity/logfile.log"
-
- log () {
- date=`date +%Y-%m-%d`
- hour=`date +%H:%M:%S`
- echo "$date $hour $*" >> ${LOGFILE_RECENT}
- }
- export -f log
+ gpg --import ~/my-key.asc
```
-5. Edit the file `scw-restore.sh`, and add the following:
+3. Restore script (`scw-restore.sh`):
```sh
#!/bin/bash
source .scw-configrc
-
- echo -e "Downloading full backup to:" $1
-
- duplicity \
- --time 0D \
- ${SCW_BUCKET} $1
- ```
-6. Download the data from your bucket to the Elastic Metal server:
+ duplicity restore ${SCW_BUCKET} /destination/folder/
```
- ./scw-restore.sh /tmp/backup-recovery/
+4. Execute the restore script:
+ ```sh
+ ./scw-restore.sh
```
-Once downloaded you can move the data to its final destination onto your new machine.
-
- You can also use Duplicity for regular and incremental backups of your data on the Object Storage platform. Follow our tutorial [How to back up your dedicated server on Object Storage with Duplicity](/tutorials/backup-dedicated-server-s3-duplicity/) for more information.
-
\ No newline at end of file
+ Duplicity can also be used for **incremental backups**. See [How to back up your dedicated server on Object Storage with Duplicity](/tutorials/backup-dedicated-server-s3-duplicity/) for details.
+
diff --git a/tutorials/nextcloud-instantapp/index.mdx b/tutorials/nextcloud-instantapp/index.mdx
index 0c26ec2b52..5464fa881c 100644
--- a/tutorials/nextcloud-instantapp/index.mdx
+++ b/tutorials/nextcloud-instantapp/index.mdx
@@ -9,7 +9,7 @@ tags: apps InstantApp NextCloud
categories:
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2018-08-16
---
diff --git a/tutorials/plausible-analytics-ubuntu/index.mdx b/tutorials/plausible-analytics-ubuntu/index.mdx
index 9bf5bbcbab..ecf3c1873a 100644
--- a/tutorials/plausible-analytics-ubuntu/index.mdx
+++ b/tutorials/plausible-analytics-ubuntu/index.mdx
@@ -10,7 +10,7 @@ hero:
categories:
- instances
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2023-02-21
---
@@ -27,6 +27,8 @@ This tool significantly contributes to the enhancement of site performance, with
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
- An [Instance](/instances/how-to/create-an-instance/) running on Ubuntu Jammy Jellyfish (22.04 LTS)
+- A CPU that supports the SSE 4.2 or NEON instruction sets, required by ClickHouse (a quick check is shown below)
+- At least 2 GB of RAM for stable performance
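+
+You can confirm the instruction-set requirement from a shell on the Instance. On x86 machines the flag appears in `/proc/cpuinfo` (on ARM machines, look for `asimd`, the NEON equivalent):
+```
+grep -m1 -o 'sse4_2' /proc/cpuinfo || echo "SSE 4.2 not reported by this CPU"
+```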
1. Log into your Instance using SSH:
```
@@ -46,23 +48,23 @@ This tool significantly contributes to the enhancement of site performance, with
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
-5. Update the packet manager, and install Docker and its dependencies using `apt`.
+5. Update the package manager, and install Docker and its dependencies using `apt`.
```
apt update && apt install docker-ce docker-ce-cli docker-compose containerd.io docker-buildx-plugin docker-compose-plugin
```
6. Download and unpack the Plausible repository:
```
- curl -L https://github.com/plausible/hosting/archive/master.tar.gz | tar -xz
+ curl -L https://github.com/plausible/community-edition/archive/master.tar.gz | tar -xz
```
If you have `git` installed on your machine, you can also clone the repository using the following command:
```
- git clone https://github.com/plausible/hosting
+ git clone https://github.com/plausible/community-edition
```
7. Enter the Plausible directory:
```
- cd hosting*
+ cd community-edition*
```
Two important files are located in the downloaded folder:
`docker-compose.yml` - installs and orchestrates networking between your Plausible server, Postgres database, Clickhouse database (for stats), and an SMTP server. This file comes preconfigured with values for most setups. You can tweak it if required.
@@ -80,95 +82,5 @@ This tool significantly contributes to the enhancement of site performance, with
docker-compose up -d
```
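+
+   You can check that the containers are up and that Plausible answers locally; the shipped `docker-compose.yml` publishes the service on port 8000:
+   ```
+   docker-compose ps
+   curl -I http://localhost:8000
+   ```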
-## Configuring an Nginx reverse and caching proxy for Plausible
+For more information about Plausible, refer to the [official Plausible documentation](https://plausible.io/docs).
-As Plausible supports only plain HTTP traffic by default, we will configure an Nginx reverse proxy to provide HTTPS with a Let's Encrypt certificate to increase security and cache the analytics script for improved performance.
-
-
- To carry out the following steps you need to have a (sub)domain pointing to your Instance and valid A and/or AAAA records configured for it.
-
-
-1. Install Nginx and Certbot using `apt`. Nginx is a versatile web server that we use as a proxy to protect Plausible from direct access to the application and Certbot is a tool to automate the issuing and renewal of TLS certificates by Let's Encrypt.
- ```
- apt install nginx python3-certbot-nginx
- ```
-2. Create a new Nginx configuration file for Plausible and edit it as follows. In our case, Plausible will be hosted at `plausible.example.com`. Remember to edit this to your own domain name.
- ```
- nano /etc/nginx/sites-available/plausible.example.com.conf
- ```
- ```
- proxy_cache_path /var/cache/nginx/data/jscache levels=1:2 keys_zone=jscache:100m inactive=30d use_temp_path=off max_size=100m;
-
- server {
- listen 80;
- listen [::]:80;
-
- server_name plausible.example.com;
-
- access_log /var/log/nginx/plausible-access.log;
- error_log /var/log/nginx/plausible-error.log;
-
- location / {
- proxy_pass http://127.0.0.1:8000;
- }
-
- location = /js/script.js {
- # Change this if you use a different variant of the script
- proxy_pass http://localhost:8000/js/script.js;
- proxy_set_header Host plausible.example.com;
- proxy_ssl_server_name on;
- proxy_ssl_session_reuse off;
-
-
- # Tiny, negligible performance improvement. Very optional.
- proxy_buffering on;
-
- # Cache the script for 6 hours, as long as plausible.io returns a valid response
- proxy_cache jscache;
- proxy_cache_valid 200 6h;
- proxy_cache_use_stale updating error timeout invalid_header http_500;
-
- # Optional. Adds a header to tell if you got a cache hit or miss
- add_header X-Cache $upstream_cache_status;
- }
-
- location = /api/event {
- proxy_pass http://localhost:8000/api/event;
- proxy_set_header Host plausible.example.com;
- proxy_ssl_server_name on;
- proxy_ssl_session_reuse off;
- proxy_buffering on;
- proxy_http_version 1.1;
-
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Host $host;
- }
- }
- ```
-3. Test the configuration to ensure that it will function correctly:
- ```
- nginx -t
- ```
-4. Enable the server block by symlinking:
- ```
- ln -s /etc/nginx/sites-available/plausible.example.com.conf /etc/nginx/sites-enabled/plausible.example.com.conf
- ```
-5. Restart Nginx to enable the new configuration:
- ```
- systemctl nginx restart
- ```
-6. Run `certbot` to obtain a TLS certificate for your domain name and let the application configure Nginx automatically.
- ```
- certbot
- ```
-
- For more details on using [Certbot](https://certbot.eff.org/), refer to our [dedicated documentation](/tutorials/nginx-reverse-proxy/#adding-tls-to-your-nginx-reverse-proxy-using-lets-encrypt).
-
-7. Open the file `docker-compose.yml` in a text editor and change the line `- 8000:8000` to - `127.0.0.1:8000:8000`. This prevents Plausible from being accessed remotely using HTTP on port 8000 which is a security concern.
-8. Stop the application using `docker-compose stop`. Then bring it up again to apply the new configuration `docker-compose up`.
-9. Open a web browser and point it to `https://plausible.example.com/`. The Plausible registration screen displays. Complete the form to create your account.
-
- During account generation, the snippet to add to your website displays. Add it to your site and Plausible starts running analytics on your visitors.
-
-For more information about Plausible, refer to the [official Plausible documentation](https://plausible.io/docs).
\ No newline at end of file
diff --git a/tutorials/scaleway-slack-community/index.mdx b/tutorials/scaleway-slack-community/index.mdx
index 9af6a802b6..75f91bd02d 100644
--- a/tutorials/scaleway-slack-community/index.mdx
+++ b/tutorials/scaleway-slack-community/index.mdx
@@ -9,7 +9,7 @@ categories:
- compute
tags: Slack forum
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2020-02-12
---
diff --git a/tutorials/setup-k8s-cluster-rancher/index.mdx b/tutorials/setup-k8s-cluster-rancher/index.mdx
index 7307cdf02d..6fa8364a4b 100644
--- a/tutorials/setup-k8s-cluster-rancher/index.mdx
+++ b/tutorials/setup-k8s-cluster-rancher/index.mdx
@@ -1,23 +1,23 @@
---
meta:
- title: Setting up a Kubernetes cluster using Rancher on Ubuntu Bionic Beaver
+ title: Setting up a Kubernetes cluster using Rancher on Ubuntu with Docker
description: Rancher is an open-source container management platform providing a graphical interface that makes container management easier.
content:
- h1: Setting up a Kubernetes cluster using Rancher on Ubuntu Bionic Beaver
+ h1: Setting up a Kubernetes cluster using Rancher on Ubuntu with Docker
paragraph: Rancher is an open-source container management platform providing a graphical interface that makes container management easier.
-tags: Kubernetes Rancher k8s containers
+tags: Kubernetes, Rancher, k8s, containers
categories:
- kubernetes
- instances
- domains-and-dns
dates:
- validation: 2024-08-27
+ validation: 2025-03-06
posted: 2019-08-12
---
Rancher is an open-source container management platform providing a graphical interface that makes container management easier.
-The Rancher UI makes it easy to manage secrets, roles, and permissions. It allows you to scale nodes and pods and set up load balancers without requiring a command line tool or editing hard-to-read YAML files.
+The Rancher UI makes it easy to manage secrets, roles, and permissions. It allows you to scale nodes and pods and set up load balancers without requiring a command-line tool or editing hard-to-read YAML files.
@@ -26,7 +26,7 @@ The Rancher UI makes it easy to manage secrets, roles, and permissions. It allow
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
- [Configured a domain name](/domains-and-dns/quickstart/) (i.e. `rancher.example.com`) pointing to the first Instance
-### Spinning up the required Instances
+## Spinning up the required Instances
1. Click **Instances** in the **Compute** section of the side menu. The [Instances page](https://console.scaleway.com/instance/servers) displays.
2. Click **Create Instance**. The [Instance creation wizard](https://console.scaleway.com/instance/servers/create) displays.
@@ -34,78 +34,84 @@ The Rancher UI makes it easy to manage secrets, roles, and permissions. It allow
4. Click the **InstantApps** tab, and choose the **Docker** image:
-5. Choose a region, type, and name for your Instance (i.e. `rancher1`), then click **Create Instance**.
+5. Choose a region, type, and name for your Instance (e.g., `rancher1`), then click **Create Instance**.
6. Repeat these steps two more times to spin up a total of three Instances running Docker.
-### Installing Rancher
+## Installing Rancher
1. Log into the first Instance (`rancher1`) via [SSH](/instances/how-to/connect-to-instance/).
-2. Launch the following command to fetch the docker image `rancher/rancher` to run in a container with automatic restarting enabled, in case the container fails. Edit the value `rancher.example.com` with the personalized domain name pointing to the Instance to generate a Let's Encrypt certificate automatically:
- ```
+2. Run the following command to fetch the Docker image `rancher/rancher` and run it in a container that restarts automatically if it fails. Replace `rancher.example.com` with the domain name pointing to the first Instance so that a Let's Encrypt certificate can be generated automatically:
+ ```bash
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain rancher.example.com
```
+ This command installs Rancher in a Docker container and automatically configures SSL using Let's Encrypt for secure communication.
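+
+   You can confirm that the container is running and follow its startup logs (the container ID is resolved here by filtering on the image name):
+   ```bash
+   docker ps --filter ancestor=rancher/rancher
+   docker logs -f $(docker ps -q --filter ancestor=rancher/rancher)
+   ```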
-### Configuring Rancher
+## Configuring Rancher
-1. Once Rancher is installed, open a web browser and point it to your Rancher domain (i.e. `https://rancher.example.com`). The following page displays:
+1. Once Rancher is installed, open a web browser and navigate to your Rancher domain (e.g., `https://rancher.example.com`). You will see the Rancher setup page:
-2. Enter a password and its confirmation, and click **Continue** to move forward with the installation.
-3. The (empty) Rancher dashboard displays:
+2. Enter a password and its confirmation, and click **Continue** to proceed with the installation.
+3. The empty Rancher dashboard will display:
-### Creating a cluster
+## Creating a cluster
-1. Click **Add cluster** to configure a new Kubernetes cluster.
-2. The cluster creation page displays. Click **Custom** to deploy the cluster on the already launched Scaleway Instances.
+1. In the Rancher UI, click **Add Cluster** to start configuring your new Kubernetes cluster.
+2. The cluster creation page will appear. Click **Custom** to deploy the cluster on the already launched Scaleway Instances:
-3. Enter a name for the cluster, choose the desired Kubernetes version and network provider, and select **None** as cloud provider.
+3. Name the cluster, choose the desired Kubernetes version, and select **None** for the cloud provider (since this is a custom setup).
-4. Choose the options for the worker nodes. A Kubernetes cluster must have at least one `etcd` and one `control plane`.
- - `etcd` is a key value storage system used by Kubernetes to keep the state of the entire environment. For redundancy, we recommend running an odd number of copies of the etcd (e.g. 1, 3, 5…).
- - The Kubernetes `control plane` maintains a record of all objects (i.e. pods) in a cluster and updates them with the configuration provided in the Rancher admin interface.
- - Kubernetes `workers` run the actual workloads and monitoring tools to ensure the healthiness of the containers. All pod deployments happen on the worker node.
-
- Choose the roles for each of the Instances in the cluster and run the command shown on the page to install the required software and link them with Rancher:
-
+4. Assign roles to each Instance in the cluster:
+   - **Control Plane**: Manages the state and configuration of the cluster.
+   - **etcd**: Stores the state of the entire cluster (run an odd number of etcd nodes, e.g. 3, for redundancy).
+   - **Worker**: Runs your containers/pods and handles the workload.
+   Once the roles are assigned, run the command shown on the page to install the necessary software on each Instance and link it with Rancher.
-
- Once all Instances are ready, click **Done** to initialize the cluster.
-5. Once the cluster is ready, the dashboard displays:
+5. Once all Instances are ready, click **Done** to initialize the cluster.
+6. When the cluster is initialized, the dashboard will display:
-### Deploying a cluster workload
+## Deploying a cluster workload
-The cluster is now ready, and the deployment of the first pod can take place. A [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) is the smallest and simplest execution unit of a Kubernetes application that you can create or deploy.
+Now that the cluster is set up, you can deploy a first pod. A [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) is the smallest deployable unit in Kubernetes.
-1. Head over **Global** in the header bar, select the cluster and click **Default** from the drop-down menu:
+1. In the top navigation bar, click **Global**, select your cluster, then choose **Default** from the drop-down menu:
-2. The clusters dashboard displays. Click **Deploy**:
+2. On the clusters dashboard, click **Deploy**:
-3. Enter the details of the workload:
- - **Name**: A friendly name for the workload.
- - **Docker Image**: Enter [`nginxdemos/hello`](https://hub.docker.com/r/nginxdemos/hello/) to deploy a [Nginx](http://nginx.org/) demo application.
- - Click **Add port** to configure the [port mapping](https://en.wikipedia.org/wiki/Port_forwarding).
- - **Publish the container port**: Set the value to port `80`
- - **Protocol**: Set the value to `TCP`
- - **As a**: Set the Value to `NodePort`
- - **Listening port**: Set the value to port `30000`
-
+3. Enter the details for the workload:
+ - **Name**: A friendly name for your workload.
+ - **Docker Image**: Enter `nginxdemos/hello` to deploy a demo Nginx application.
+ - Under **Port Mapping**, click **Add port** and set the following:
+ - **Publish the container port**: `80`
+ - **Protocol**: `TCP`
+ - **As a**: `NodePort`
+ - **Listening port**: `30000`
-
- Click **Launch** to create the workload.
-4. Once deployed, open a web browser and point it to `http://:30000/`. The Nginx demo application displays:
+4. Click **Launch** to create the workload.
+5. After deployment, you can access the Nginx demo application by visiting `http://<your_instance_ip>:30000/` in your web browser:
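+
+   You can also check the endpoint from the command line, replacing the placeholder with the public IP of one of your Instances:
+   ```bash
+   curl -I http://<your_instance_ip>:30000/
+   ```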
-### Scaling a cluster workload
+### Scaling the cluster workload
-Currently, the Nginx demo application lives in a single pod and is deployed only on one Instance. Rancher provides the possibility to scale your deployment to multiple pods directly from the web interface.
+Currently, the Nginx demo app is running on a single pod. Let's scale it to multiple pods.
-1. From the cluster dashboard, click **…**. Then, click **Edit** in the pop-up menu:
+1. From the cluster dashboard, click the ellipsis (**…**) next to your deployment and select **Edit**:
-2. Edit the Workload type and set the number of scalable deployments to **3**:
+2. Set the number of replicas for the workload to **3** so that it runs on three pods:
-3. Click **Save**. Rancher will now send instructions to Kubernetes to update the workload configuration and to deploy 3 pods running the Nginx demo application in parallel.
+3. Click **Save**. Rancher will update the Kubernetes deployment to create 3 replicas of the pod.
-4. Access the application running on the second Instance by typing: `http://:30000/` in a web browser. The Nginx demo application displays.
+4. To access the application running on the second Instance, visit `http://<your_instance_ip>:30000/` in your browser. The Nginx demo application should display.
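+
+   If you prefer the command line and have `kubectl` configured with the kubeconfig file that Rancher can generate for the cluster, the same scaling operation looks like this (the deployment name `hello` is a placeholder for whatever you named the workload):
+   ```bash
+   kubectl -n default scale deployment hello --replicas=3
+   kubectl -n default get pods
+   ```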
+
+## Security considerations and best practices
+
+- **SSL/TLS**: Ensure your Rancher domain is served with a valid TLS certificate for secure communication. The `--acme-domain` option in the Rancher Docker command automatically handles Let's Encrypt certificates.
+- **Cluster security**: Follow Kubernetes security guidelines for RBAC (Role-Based Access Control) and network policies when deploying to a production environment. For example, configure namespaces, enforce least-privilege access, and use network policies to control traffic between pods (a minimal example follows this list).
+- **Backup & recovery**: Regularly back up your Rancher configuration and Kubernetes data (e.g., etcd snapshots) so that you can restore your cluster in case of failure.
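+
+As a minimal illustration of the network-policy point above, the following applies a default-deny ingress policy to a namespace (the `default` namespace is just an example, and the policy is only enforced if the cluster's network provider supports network policies, e.g. Canal or Calico):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: default
+spec:
+  podSelector: {}       # selects every pod in the namespace
+  policyTypes:
+    - Ingress           # with no ingress rules listed, all ingress traffic is denied
+EOF
+```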
+
+## Further reading
- For more information about [Rancher](https://ranchermanager.docs.rancher.com/v2.9) and [Kubernetes](https://kubernetes.io/docs/home/) refer to the official documentation.
\ No newline at end of file
+For more detailed documentation on Rancher and Kubernetes, refer to the official documentation:
+- [Rancher Documentation](https://ranchermanager.docs.rancher.com/)
+- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
\ No newline at end of file