diff --git a/pages/managed-inference/concepts.mdx b/pages/managed-inference/concepts.mdx
index 8053abf1a9..45a4dab29b 100644
--- a/pages/managed-inference/concepts.mdx
+++ b/pages/managed-inference/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all the concepts related to Managed Inference
tags:
dates:
- validation: 2024-11-29
+ validation: 2025-06-02
categories:
- ai-data
---
diff --git a/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp b/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp
deleted file mode 100644
index 67488eb8cf..0000000000
Binary files a/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp and /dev/null differ
diff --git a/tutorials/ceph-cluster/assets/scaleway_ceph.webp b/tutorials/ceph-cluster/assets/scaleway_ceph.webp
deleted file mode 100644
index 959175bb65..0000000000
Binary files a/tutorials/ceph-cluster/assets/scaleway_ceph.webp and /dev/null differ
diff --git a/tutorials/ceph-cluster/index.mdx b/tutorials/ceph-cluster/index.mdx
deleted file mode 100644
index 0e8aa1f59e..0000000000
--- a/tutorials/ceph-cluster/index.mdx
+++ /dev/null
@@ -1,289 +0,0 @@
----
-meta:
- title: Building your own Ceph distributed storage cluster on dedicated servers
- description: Learn how to set up a Ceph cluster on Scaleway Dedibox and use it as datastore for your VMware ESXi machines.
-content:
- h1: Building your own Ceph distributed storage cluster on dedicated servers
- paragraph: Learn how to set up a Ceph cluster on Scaleway Dedibox and use it as datastore for your VMware ESXi machines.
-categories:
- - object-storage
- - dedibox
-tags: dedicated-servers dedibox Ceph object-storage
-hero: assets/scaleway_ceph.webp
-dates:
- validation: 2023-11-27
- validation_frequency: 18
- posted: 2020-06-29
----
-
-Ceph is an open-source, software-defined storage solution designed to address object, block, and file storage needs. It can handle several exabytes of data, replicating data and ensuring fault tolerance on standard hardware. Because it is self-healing and self-managing, Ceph minimizes administration time and costs.
-
-This tutorial guides you through deploying a three-node [Ceph](https://www.ceph.com) cluster using [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/) running Ubuntu Focal Fossa (20.04 LTS).
-
-
-
-- A Dedibox account logged into the [console](https://console.online.net)
-- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- 3 Dedibox servers running Ubuntu Focal Fossa 20.04 LTS or later
-- An additional admin machine on which to install `ceph-deploy`
-
-## Installing ceph-deploy on the admin machine
-
-`ceph-deploy` simplifies Ceph cluster deployment with a user-friendly command-line interface. Install it on an independent admin machine using the following steps:
-
-1. Connect to the admin machine using SSH:
-
- ```bash
- ssh myuser@my.admin.server.ip
- ```
-
-2. Add the Ceph release key to apt:
-
- ```bash
- wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
- ```
-
-3. Add the Ceph repository to the APT package manager:
-
- ```bash
- echo deb https://eu.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
- ```
-
-4. Update the APT package manager to include Ceph's repository:
-
- ```bash
- sudo apt update
- ```
-
-5. Install `ceph-deploy`:
-
- ```bash
- sudo apt install ceph-deploy
- ```
-
-### Creating a ceph-deploy user
-
-`ceph-deploy` requires a user with passwordless sudo privileges for installing software on storage nodes. Follow these steps to create a dedicated user:
-
-1. Connect to a Ceph node using SSH:
-
- ```bash
- ssh user@ceph-node
- ```
-
-2. Create a user called `ceph-deploy`:
-
- ```bash
- sudo useradd -d /home/ceph-deploy -m ceph-deploy
- ```
-
- - Note: You can choose a different username if you prefer.
-
-3. Configure the password of the `ceph-deploy` user:
-
- ```bash
- sudo passwd ceph-deploy
- ```
-
-4. Add the user to the sudoers configuration:
-
- ```bash
- echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
- sudo chmod 0440 /etc/sudoers.d/ceph-deploy
- ```
-
-5. Install an NTP client on Ceph nodes to avoid time-drift issues:
-
- ```bash
- sudo apt install ntpsec
- ```
-
-6. Install Python for deploying the cluster:
-
- ```bash
- sudo apt install python3-minimal
- ```
-
-7. Repeat the above steps for each of the three nodes.
-
-### Enabling passwordless SSH
-
-Generate an SSH key and distribute the public key to each Ceph node for passwordless authentication:
-
-1. Generate an SSH key pair on the admin node:
-
- ```bash
- ssh-keygen
- ```
-
- - Press Enter to save the key in the default location.
-
-2. Ensure the hostnames of all Ceph nodes can be resolved from the admin node, for example via `/etc/hosts` (see the example after this list).
-
-3. Transfer the public key to each Ceph node:
-
- ```bash
- ssh-copy-id ceph-deploy@ceph-node-a
- ssh-copy-id ceph-deploy@ceph-node-b
- ssh-copy-id ceph-deploy@ceph-node-c
- ```
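-
-If your Ceph node hostnames are not already resolvable through DNS, you can add them to `/etc/hosts` on the admin node. The following is a minimal sketch using placeholder addresses from the 192.0.2.0/24 documentation range; replace them with the real IP addresses and hostnames of your nodes:
-
-```bash
-# Append one entry per Ceph node to /etc/hosts (placeholder addresses and names)
-echo "192.0.2.11 ceph-node-a" | sudo tee -a /etc/hosts
-echo "192.0.2.12 ceph-node-b" | sudo tee -a /etc/hosts
-echo "192.0.2.13 ceph-node-c" | sudo tee -a /etc/hosts
-```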
-
-## Deploying a Ceph cluster
-
-Deploy the Ceph cluster on your machines by following these steps:
-
-1. Create a directory on the admin node for configuration files and keys:
-
- ```bash
- mkdir my-ceph-cluster
- cd my-ceph-cluster
- ```
-
-2. Create the cluster:
-
- ```bash
- ceph-deploy --username ceph-deploy new ceph-node-a
- ```
-
- - Replace `ceph-node-a` with the FQDN of your node.
-
-3. Install Ceph packages on the nodes:
-
- ```bash
- ceph-deploy --username ceph-deploy install ceph-node-a ceph-node-b ceph-node-c
- ```
-
-4. Deploy initial monitors and gather keys:
-
- ```bash
- ceph-deploy --username ceph-deploy mon create-initial
- ```
-
- - Verify generated files using `ls`.
-
-5. Copy the configuration file and admin key to Ceph Nodes:
-
- ```bash
- ceph-deploy --username ceph-deploy admin ceph-node-a ceph-node-b ceph-node-c
- ```
-
-6. Deploy manager daemon on all Ceph nodes:
-
- ```bash
- ceph-deploy --username ceph-deploy mgr create ceph-node-a ceph-node-b ceph-node-c
- ```
-
-7. Configure Object Storage Devices (OSD) on each Ceph node:
-
- ```bash
- ceph-deploy osd create --data /dev/sdb ceph-node-a
- ceph-deploy osd create --data /dev/sdb ceph-node-b
- ceph-deploy osd create --data /dev/sdb ceph-node-c
- ```
-
- - Ensure the device is not in use and does not contain important data.
-
-8. Check the cluster status:
-
- ```bash
- sudo ceph health
- ```
-
- - The cluster should report `HEALTH_OK`.
-
-### Deploying a Ceph Object Gateway (RGW)
-
-Deploy the Ceph Object Gateway (RGW) to access files using Amazon S3-compatible clients:
-
-1. Run the following command on the admin machine:
-
- ```bash
- ceph-deploy --username ceph-deploy rgw create ceph-node-a
- ```
-
- - Note the displayed information about the RGW instance.
-
-2. Modify the port in `/etc/ceph/ceph.conf`:
-
- ```bash
- sudo nano /etc/ceph/ceph.conf
- ```
-
- - Add or modify lines:
-
- ```bash
- [client]
- rgw frontends = civetweb port=80
- ```
-
- - For HTTPS:
-
- ```bash
- [client]
- rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/bundle_keyandcert.pem
- ```
-
-3. Verify the installation by accessing `http://ceph-node-a:7480` (or the port you configured in the previous step) in a web browser.
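-
-   As a quick check from the command line, you can also query the gateway root anonymously. This is a sketch that assumes the gateway still listens on the default port 7480; a small XML `ListAllMyBucketsResult` response indicates the gateway is serving requests:
-
-   ```bash
-   # Anonymous request to the RGW endpoint; an XML response means the gateway is up
-   curl http://ceph-node-a:7480
-   ```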
-
-## Creating Object Storage credentials
-
-On the gateway instance (`ceph-node-a`), run the following command to create a new user:
-
-```bash
-sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
-```
-
-- Note the `access_key` and `secret_key`. Proceed to configure your Object Storage client, e.g., [aws-cli](/object-storage/api-cli/object-storage-aws-cli/).
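-
-The command prints a JSON description of the new user. The credentials are held in the `keys` array, and the output looks roughly like the following (truncated, with placeholder values):
-
-```json
-{
-    "user_id": "johndoe",
-    "display_name": "John Doe",
-    "keys": [
-        {
-            "user": "johndoe",
-            "access_key": "EXAMPLEACCESSKEY",
-            "secret_key": "examplesecretkeyexamplesecretkey0000"
-        }
-    ]
-}
-```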
-
-## Configuring AWS-CLI
-
-Use AWS-CLI to manage objects in your Ceph storage cluster:
-
-1. Install `aws-cli` and the `awscli-plugin-endpoint` plugin:
-
- ```bash
- pip3 install awscli
- pip3 install awscli-plugin-endpoint
- ```
-
-2. Create `~/.aws/config`:
-
- ```ini
- [plugins]
- endpoint = awscli_plugin_endpoint
-
- [default]
- region = default
- s3 =
- endpoint_url = http://ceph-node-a:7480
- signature_version = s3v4
- max_concurrent_requests = 100
- max_queue_size = 1000
- multipart_threshold = 50 MB
- multipart_chunksize = 10 MB
- s3api =
- endpoint_url = http://ceph-node-a:7480
- ```
-
-3. Create `~/.aws/credentials`:
-
- ```ini
- [default]
- aws_access_key_id=
- aws_secret_access_key=
- ```
-
- - Replace the empty values of `aws_access_key_id` and `aws_secret_access_key` with the `access_key` and `secret_key` of the user created earlier.
-
-4. Create a bucket, upload a test file, and check the content:
-
- ```bash
- aws s3 mb s3://MyBucket
- echo "Hello World!" > testfile.txt
- aws s3 cp testfile.txt s3://MyBucket
- aws s3 ls s3://MyBucket
- ```
-
-## Conclusion
-
-You have successfully configured an Amazon S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any Amazon S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/).
\ No newline at end of file
diff --git a/tutorials/get-started-crossplane-kubernetes/index.mdx b/tutorials/get-started-crossplane-kubernetes/index.mdx
index e1f8809fee..55e72c85e5 100644
--- a/tutorials/get-started-crossplane-kubernetes/index.mdx
+++ b/tutorials/get-started-crossplane-kubernetes/index.mdx
@@ -9,7 +9,7 @@ tags: crossplane kubernetes
categories:
- kubernetes
dates:
- validation: 2024-11-25
+ validation: 2025-06-02
posted: 2023-05-05
---
@@ -53,7 +53,7 @@ Run the following `up uxp install` command to install the latest stable version
You should see an output like the following:
```plaintext
- UXP 1.18.0-up.1 installed
+ UXP 1.20.0-up.1 installed
```
## Installing the provider into your Kubernetes cluster
@@ -101,7 +101,7 @@ Run the following `up uxp install` command to install the latest stable version
```
NAME INSTALLED HEALTHY PACKAGE AGE
- provider-scaleway True True xpkg.upbound.io/scaleway/provider-scaleway:v0..0 11s
+ provider-scaleway True Unknown xpkg.upbound.io/scaleway/provider-scaleway:v0.2.0 9s
```
diff --git a/tutorials/install-medusa/index.mdx b/tutorials/install-medusa/index.mdx
index 7a54e2aec5..7913453b48 100644
--- a/tutorials/install-medusa/index.mdx
+++ b/tutorials/install-medusa/index.mdx
@@ -9,7 +9,7 @@ categories:
- instances
tags: ecommerce shopping medusajs
dates:
- validation: 2024-11-25
+ validation: 2025-06-02
posted: 2023-05-10
---
@@ -36,9 +36,8 @@ This tutorial will show you how to install and use MedusaJS, create a new projec
```
apt update
apt install -y ca-certificates curl gnupg
- mkdir -p /etc/apt/keyrings
- curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
- echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
+ curl -fsSL https://deb.nodesource.com/setup_23.x -o nodesource_setup.sh
+ sudo -E bash nodesource_setup.sh
```
4. Update the APT cache and install the required packages for Medusa:
```
diff --git a/tutorials/install-rkhunter/index.mdx b/tutorials/install-rkhunter/index.mdx
index 3d92209515..ee49f6e4c6 100644
--- a/tutorials/install-rkhunter/index.mdx
+++ b/tutorials/install-rkhunter/index.mdx
@@ -1,15 +1,15 @@
---
meta:
- title: Detecting Rootkits and Security Holes with rkhunter on Ubuntu
- description: Detecting Rootkits and Security Holes with RKhunter on Ubuntu
+ title: Detecting rootkits and security holes with rkhunter on Ubuntu
+ description: Detecting rootkits and security holes with rkhunter on Ubuntu
content:
- h1: Detecting Rootkits and Security Holes with rkhunter on Ubuntu
- paragraph: Detecting Rootkits and Security Holes with RKhunter on Ubuntu
+ h1: Detecting rootkits and security holes with rkhunter on Ubuntu
+ paragraph: Detecting rootkits and security holes with rkhunter on Ubuntu
categories:
- compute
tags: Rootkits rkhunter security
dates:
- validation: 2024-11-25
+ validation: 2025-06-02
posted: 2018-10-30
---
diff --git a/tutorials/kubernetes-package-management-helm/index.mdx b/tutorials/kubernetes-package-management-helm/index.mdx
index 8f4461f08c..61e8513fbf 100644
--- a/tutorials/kubernetes-package-management-helm/index.mdx
+++ b/tutorials/kubernetes-package-management-helm/index.mdx
@@ -9,7 +9,7 @@ categories:
- kubernetes
tags: kubernetes helm k8s
dates:
- validation: 2024-11-25
+ validation: 2025-06-02
posted: 2024-05-23
---
@@ -51,7 +51,7 @@ For a complete overview of Helm and its basic concepts, refer to this [Scaleway
kubectl cluster-info
```
You will see cluster information if `kubectl` is installed and configured correctly. If not, make sure to [deploy a Kubernetes Kapsule cluster](/kubernetes/how-to/create-cluster/) first.
-
+
### Installing Helm Client