Commit 049c671

docs(tuto): restore tutorial ceph (#5173)
* docs(tuto): restore tutorial ceph
* docs(tutorials): update tutorial ceph
* Apply suggestions from code review
1 parent e30c80d commit 049c671

3 files changed (+227, -0)

tutorials/ceph-cluster/index.mdx

Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
---
meta:
  title: Building your own Ceph distributed storage cluster on dedicated servers
  description: Learn how to set up a Ceph cluster on Scaleway Dedibox dedicated servers and use it for S3-compatible object storage.
content:
  h1: Building your own Ceph distributed storage cluster on dedicated servers
  paragraph: Learn how to set up a Ceph cluster on Scaleway Dedibox dedicated servers and use it for S3-compatible object storage.
categories:
  - object-storage
  - dedibox
tags: dedicated-servers dedibox Ceph object-storage
hero: assets/scaleway_ceph.webp
dates:
  validation: 2025-06-25
  validation_frequency: 18
  posted: 2020-06-29
---

Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs.

This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.

<Macro id="requirements" />

- A Dedibox account logged into the [console](https://console.online.net)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
  - At least 8 GB of RAM, 4 CPU cores, and one unused data disk (e.g., `/dev/sdb`) for OSDs
  - Network connectivity between the nodes and the admin machine
- An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to the Ceph nodes

## Configure networking and SSH

1. Log into each of the Ceph nodes and the admin machine using SSH.

2. Install software dependencies on all nodes and the admin machine:
```
sudo apt update
sudo apt install -y python3 chrony lvm2 podman
sudo systemctl enable chrony
```

3. Set unique hostnames on each Ceph node:
```
sudo hostnamectl set-hostname ceph-node-a # Repeat for ceph-node-b, ceph-node-c
```

4. Configure `/etc/hosts` on all nodes and the admin machine to resolve hostnames:
```
echo "<node-a-ip> ceph-node-a" | sudo tee -a /etc/hosts
echo "<node-b-ip> ceph-node-b" | sudo tee -a /etc/hosts
echo "<node-c-ip> ceph-node-c" | sudo tee -a /etc/hosts
```

5. Create a deployment user (`cephadm`) on each Ceph node:
```
sudo useradd -m -s /bin/bash cephadm
sudo passwd cephadm
echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
sudo chmod 0440 /etc/sudoers.d/cephadm
```

6. Enable passwordless SSH from the admin machine to the Ceph nodes:
```
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id cephadm@ceph-node-a
ssh-copy-id cephadm@ceph-node-b
ssh-copy-id cephadm@ceph-node-c
```

7. Verify time synchronization on all nodes:
```
chronyc sources
```
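
To confirm that hostname resolution and the `cephadm` user's SSH access work from the admin machine, you can run a quick check such as the one below (a minimal sketch, assuming the hostnames used above; adjust them to your setup):
```
for node in ceph-node-a ceph-node-b ceph-node-c; do
  # BatchMode makes SSH fail instead of prompting for a password;
  # print the remote hostname and whether the clock is NTP-synchronized
  ssh -o BatchMode=yes cephadm@"$node" 'hostname && timedatectl show --property=NTPSynchronized'
done
```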

## Install cephadm on the admin machine

1. Add the Ceph repository for the latest stable release (Squid, at the time of writing). The distribution codename in the repository line must match your Ubuntu release (see the note after this list):
```
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ jammy main'
sudo apt update
```

2. Install `cephadm`:
```
sudo apt install -y cephadm
```

3. Verify the installation:
```
cephadm --version
```
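
Note that the repository line above must match your Ubuntu codename (`jammy` for 22.04, `noble` for 24.04). A minimal sketch that derives the codename automatically, assuming Ceph publishes Squid packages for your release:
```
# Detect the running Ubuntu codename (e.g. noble for 24.04) and add the matching Ceph repository
source /etc/os-release
echo "deb https://download.ceph.com/debian-squid/ ${VERSION_CODENAME} main" | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
```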

## Bootstrap the Ceph cluster

1. Bootstrap the cluster on the admin machine, using the admin node’s IP:
```
sudo cephadm bootstrap \
  --mon-ip <admin-node-ip> \
  --initial-dashboard-user admin \
  --initial-dashboard-password <strong-password> \
  --dashboard-ssl
```

2. Access the Ceph dashboard at `https://<admin-node-ip>:8443` to verify the setup.
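
The `ceph` commands used in the following steps need the Ceph CLI and the cluster keyring, which the bootstrap created on the admin machine. You can either run them from a containerized shell with `sudo cephadm shell`, or install the CLI directly on the host (a minimal sketch, assuming the Ceph apt repository added earlier):
```
# Option 1: open a shell in a container that already ships the ceph CLI and mounts the keyring
sudo cephadm shell

# Option 2: install the ceph CLI on the admin machine itself and check the cluster
sudo cephadm install ceph-common
sudo ceph status
```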

## Add Ceph nodes to the cluster
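
Before the orchestrator can manage the other servers, the cluster's public SSH key (written to `/etc/ceph/ceph.pub` during bootstrap) must be installed on each node. A minimal sketch, assuming root SSH access to the nodes is permitted:
```
# Copy the cluster's SSH key to every node for the orchestrator's SSH user (root by default)
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-a
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-b
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-c
```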

1. Add each Ceph node to the cluster:
```
sudo ceph orch host add ceph-node-a
sudo ceph orch host add ceph-node-b
sudo ceph orch host add ceph-node-c
```

2. Verify the hosts:
```
sudo ceph orch host ls
```

### Deploy Object Storage Daemons (OSDs)

1. List the available disks on each node:
```
sudo ceph orch device ls
```

2. Deploy OSDs on all available unused disks (see the per-device alternative after this list):
```
sudo ceph orch apply osd --all-available-devices
```

3. Verify the OSD deployment:
```
sudo ceph osd tree
```
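
If you prefer to control exactly which disk is used on each node instead of claiming every available device, OSDs can also be created one by one (a minimal sketch, assuming `/dev/sdb` is the unused data disk listed in the requirements):
```
# Create one OSD per node on a specific host:device pair
sudo ceph orch daemon add osd ceph-node-a:/dev/sdb
sudo ceph orch daemon add osd ceph-node-b:/dev/sdb
sudo ceph orch daemon add osd ceph-node-c:/dev/sdb
```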

### Deploy RADOS Gateway (RGW)

1. Deploy a single RGW instance on `ceph-node-a`:
```
sudo ceph orch apply rgw default --placement="count:1 host:ceph-node-a" --port=80
```

2. To use HTTPS (recommended), generate a self-signed certificate:
```
sudo mkdir -p /etc/ceph/private
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ceph/private/rgw.key \
  -out /etc/ceph/private/rgw.crt \
  -subj "/CN=ceph-node-a"
```

3. Redeploy RGW with HTTPS:
```
sudo ceph orch apply rgw default \
  --placement="count:1 host:ceph-node-a" \
  --port=443 \
  --ssl-cert=/etc/ceph/private/rgw.crt \
  --ssl-key=/etc/ceph/private/rgw.key
```

4. Verify RGW by accessing `http://ceph-node-a:80` (or `https://ceph-node-a:443` for HTTPS).
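
As a quick command-line check from the admin machine, an unauthenticated request to the gateway should return an S3-style XML response (a minimal sketch; `-k` skips verification of the self-signed certificate):
```
# Plain HTTP deployment
curl http://ceph-node-a:80

# HTTPS deployment with the self-signed certificate from the previous step
curl -k https://ceph-node-a:443
```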
168+
169+
## Create an RGW user
170+
171+
1. Create a user for S3-compatible access:
172+
```
173+
sudo radosgw-admin user create \
174+
--uid=johndoe \
175+
--display-name="John Doe" \
176+
177+
```
178+
179+
Note the generated `access_key` and `secret_key` from the output.
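
If you need to look the keys up again later, they can be printed at any time (a minimal sketch, assuming the `johndoe` user created above):
```
# Show the user's details, including its S3 access and secret keys
sudo radosgw-admin user info --uid=johndoe
```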

## Configure AWS-CLI for Object Storage

1. Install AWS-CLI on the admin machine or on a client machine (on Ubuntu 24.04, use a virtual environment or `pipx`, as system-wide `pip3` installs are blocked by default):
```
pip3 install awscli awscli-plugin-endpoint
```

2. Create a configuration file `~/.aws/config`:
```
[plugins]
endpoint = awscli_plugin_endpoint

[default]
region = default
s3 =
  endpoint_url = http://ceph-node-a:80
  signature_version = s3v4
s3api =
  endpoint_url = http://ceph-node-a:80
```

For HTTPS, use `https://ceph-node-a:443` as the endpoint URL.

Then create `~/.aws/credentials` with the keys of the RGW user:
```
[default]
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_key>
```

3. Test the setup (a server-side cross-check of the bucket follows this list):
```sh
aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
echo "Hello Ceph!" > testfile.txt
aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
```

4. Verify the cluster health status:
```
sudo ceph -s
```

Ensure the output shows `HEALTH_OK`.
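
As a server-side cross-check of the test bucket you created with AWS-CLI, the RGW admin tooling can list buckets and show their statistics (a minimal sketch, assuming the `mybucket` bucket from the test above):
```
# List all buckets known to the gateway, then show usage statistics for the test bucket
sudo radosgw-admin bucket list
sudo radosgw-admin bucket stats --bucket=mybucket
```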

## Conclusion

You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers on Ubuntu 24.04 LTS. The cluster is managed with `cephadm`, which provides modern orchestration and scalability. For advanced configurations (e.g., multi-zone RGW or monitoring with Prometheus), refer to the official [Ceph documentation](https://docs.ceph.com/en/latest/).
