---
meta:
  title: Building your own Ceph distributed storage cluster on dedicated servers
  description: Learn how to set up a Ceph cluster on Scaleway Dedibox and use it as Amazon S3-compatible Object Storage.
content:
  h1: Building your own Ceph distributed storage cluster on dedicated servers
  paragraph: Learn how to set up a Ceph cluster on Scaleway Dedibox and use it as Amazon S3-compatible Object Storage.
categories:
  - object-storage
  - dedibox
tags: dedicated-servers dedibox Ceph object-storage
hero: assets/scaleway_ceph.webp
dates:
  validation: 2025-06-24
  validation_frequency: 18
  posted: 2020-06-29
---

Ceph is an open-source, software-defined storage solution designed to address object, block, and file storage needs. It replicates data and ensures fault tolerance on standard hardware, scaling to several exabytes. Because it is self-healing and self-managing, it minimizes administration time and costs.

This tutorial guides you through deploying a three-node [Ceph](https://www.ceph.com) cluster using [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/) running Ubuntu Focal Fossa (20.04 LTS).

<Macro id="requirements" />

- A Dedibox account logged into the [console](https://console.online.net)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- 3 Dedibox servers running Ubuntu Focal Fossa 20.04 LTS or later
- An additional admin machine on which to install `ceph-deploy`

## Installing ceph-deploy on the admin machine

`ceph-deploy` simplifies Ceph cluster deployment with a user-friendly command-line interface. Install it on an independent admin machine using the following steps:

1. Connect to the admin machine using SSH:

    ```bash
    ssh user@admin-machine
    ```

    - Replace `user` and `admin-machine` with your username and the address of your admin machine.

2. Add the Ceph release key to apt:

    ```bash
    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    ```

3. Add the Ceph repository to the APT package manager:

    ```bash
    echo deb https://eu.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    ```

4. Update the APT package manager to include Ceph's repository:

    ```bash
    sudo apt update
    ```

5. Install `ceph-deploy`:

    ```bash
    sudo apt install ceph-deploy
    ```

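To confirm the installation succeeded before moving on, you can check the version reported by the tool:

```bash
# Prints the installed ceph-deploy version; fails if the install went wrong
ceph-deploy --version
```
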
### Creating a ceph-deploy user

`ceph-deploy` requires a user with passwordless sudo privileges for installing software on the storage nodes. Follow these steps to create a dedicated user:

1. Connect to a Ceph node using SSH:

    ```bash
    ssh user@ceph-node
    ```

2. Create a user called `ceph-deploy`:

    ```bash
    sudo useradd -d /home/ceph-deploy -m ceph-deploy
    ```

    - Note: You can choose a different username if you prefer; adapt the following commands accordingly.

3. Configure the password of the `ceph-deploy` user:

    ```bash
    sudo passwd ceph-deploy
    ```

4. Add the user to the sudoers configuration (you can verify passwordless sudo with the quick check shown after this list):

    ```bash
    echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
    sudo chmod 0440 /etc/sudoers.d/ceph-deploy
    ```

5. Install an NTP client to avoid time-drift issues between the Ceph nodes:

    ```bash
    sudo apt install ntpsec
    ```

6. Install Python 3, which is required to deploy the cluster (on Ubuntu 20.04, the Python 2 `python-minimal` package is no longer available):

    ```bash
    sudo apt install python3-minimal
    ```

7. Repeat the above steps on each of the three nodes.

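To confirm the sudoers entry works, you can run a non-interactive sudo check as the new user on each node. This is a minimal sketch: `sudo -n` fails instead of prompting, so the message only prints if passwordless sudo is configured correctly.

```bash
# Run on each Ceph node: succeeds silently only if ceph-deploy
# can use sudo without being asked for a password.
sudo su - ceph-deploy -c 'sudo -n true' && echo "passwordless sudo OK"
```
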

### Enabling passwordless SSH

Generate an SSH key on the admin machine and distribute the public key to each Ceph node for passwordless authentication:

1. Generate an SSH key pair on the admin node:

    ```bash
    ssh-keygen
    ```

    - Press Enter to save the key in the default location.

2. Ensure the Ceph node hostnames are configured in `/etc/hosts` on the admin machine (see the example after this list).

3. Transfer the public key to each Ceph node:

    ```bash
    ssh-copy-id ceph-deploy@ceph-node-a
    ssh-copy-id ceph-deploy@ceph-node-b
    ssh-copy-id ceph-deploy@ceph-node-c
    ```
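
For reference, here is a minimal sketch of the name resolution and SSH client setup on the admin machine. The IP addresses below are placeholders; replace them with the private IPs of your own nodes. The optional `~/.ssh/config` entries let `ceph-deploy` connect as the `ceph-deploy` user by default, instead of passing `--username` to every command:

```bash
# Map the node hostnames to their addresses (placeholder IPs)
cat >> /etc/hosts <<'EOF'
10.0.0.1 ceph-node-a
10.0.0.2 ceph-node-b
10.0.0.3 ceph-node-c
EOF

# Optional: make SSH default to the ceph-deploy user for these hosts
cat >> ~/.ssh/config <<'EOF'
Host ceph-node-a ceph-node-b ceph-node-c
    User ceph-deploy
EOF
```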

## Deploying a Ceph cluster

Deploy the Ceph cluster on your machines by following these steps:

1. Create a directory on the admin node for the configuration files and keys generated during deployment:

    ```bash
    mkdir my-ceph-cluster
    cd my-ceph-cluster
    ```

2. Create the cluster:

    ```bash
    ceph-deploy --username ceph-deploy new ceph-node-a
    ```

    - Replace `ceph-node-a` with the FQDN of your node.

3. Install the Ceph packages on the nodes:

    ```bash
    ceph-deploy --username ceph-deploy install ceph-node-a ceph-node-b ceph-node-c
    ```

4. Deploy the initial monitors and gather the keys:

    ```bash
    ceph-deploy --username ceph-deploy mon create-initial
    ```

    - Verify the generated files using `ls`.

5. Copy the configuration file and admin key to the Ceph nodes:

    ```bash
    ceph-deploy --username ceph-deploy admin ceph-node-a ceph-node-b ceph-node-c
    ```

6. Deploy the manager daemon on all Ceph nodes:

    ```bash
    ceph-deploy --username ceph-deploy mgr create ceph-node-a ceph-node-b ceph-node-c
    ```

7. Configure an Object Storage Device (OSD) on each Ceph node:

    ```bash
    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-a
    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-b
    ceph-deploy --username ceph-deploy osd create --data /dev/sdb ceph-node-c
    ```

    - Replace `/dev/sdb` with the device you want to use on each node, and ensure it is not in use and does not contain any important data.

8. Check the cluster status from one of the Ceph nodes:

    ```bash
    sudo ceph health
    ```

    - The cluster should report `HEALTH_OK`.
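
For a more detailed view than the plain health flag, the following read-only commands summarize the overall cluster state and show where each OSD sits in the CRUSH hierarchy; run them on the same node:

```bash
# Overall status: monitors, managers, OSDs, and data usage
sudo ceph -s

# Placement and status of each OSD in the CRUSH hierarchy
sudo ceph osd tree
```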

### Deploying a Ceph Object Gateway (RGW)

Deploy the Ceph Object Gateway (RGW) to access files using Amazon S3-compatible clients:

1. Run the following command on the admin machine:

    ```bash
    ceph-deploy --username ceph-deploy rgw create ceph-node-a
    ```

    - Note the displayed information about the RGW instance.

2. By default, the gateway listens on port `7480`. To change the port, edit `/etc/ceph/ceph.conf` on the gateway node:

    ```bash
    sudo nano /etc/ceph/ceph.conf
    ```

    - Add or modify the following lines to serve on port 80:

    ```bash
    [client]
    rgw frontends = civetweb port=80
    ```

    - For HTTPS, use port 443 in secure mode and point civetweb to a PEM file containing both the certificate and the key:

    ```bash
    [client]
    rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/bundle_keyandcert.pem
    ```

    - Restart the gateway afterwards to apply the change, as shown after this list.

3. Verify the installation by pointing a web browser at `http://ceph-node-a:7480` (or the port you configured in the previous step).
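
After editing `ceph.conf`, restart the RGW service on the gateway node so the new frontend configuration takes effect. The exact systemd unit name depends on your hostname; the instance is usually named `rgw.<short hostname>`, but listing the units first avoids guessing:

```bash
# Find the exact radosgw unit name on the gateway node
systemctl list-units 'ceph-radosgw@*'

# Restart it; rgw.$(hostname -s) is the usual instance name
sudo systemctl restart ceph-radosgw@rgw.$(hostname -s)
```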

## Creating Object Storage credentials

On the gateway instance (`ceph-node-a`), run the following command to create a new user:

```bash
sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=johndoe@example.com
```

- Note the `access_key` and `secret_key` from the JSON output. You need them to configure your Object Storage client, e.g., [aws-cli](/object-storage/api-cli/object-storage-aws-cli/).
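
If you need to retrieve these credentials later, you can print the user's details again; this only reads existing state and does not create anything:

```bash
# Display the user's keys and quota settings on the gateway node
sudo radosgw-admin user info --uid=johndoe
```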

## Configuring AWS-CLI

Use AWS-CLI to manage objects in your Ceph storage cluster:

1. Install `aws-cli` and the `awscli-plugin-endpoint` plugin:

    ```bash
    pip3 install awscli
    pip3 install awscli-plugin-endpoint
    ```

2. Create `~/.aws/config` with the following content:

    ```ini
    [plugins]
    endpoint = awscli_plugin_endpoint

    [default]
    region = default
    s3 =
        endpoint_url = http://ceph-node-a:7480
        signature_version = s3v4
        max_concurrent_requests = 100
        max_queue_size = 1000
        multipart_threshold = 50 MB
        multipart_chunksize = 10 MB
    s3api =
        endpoint_url = http://ceph-node-a:7480
    ```

    - Adjust `endpoint_url` if you changed the gateway port in the previous section.

3. Create `~/.aws/credentials`:

    ```ini
    [default]
    aws_access_key_id = <ACCESS_KEY>
    aws_secret_access_key = <SECRET_KEY>
    ```

    - Replace `<ACCESS_KEY>` and `<SECRET_KEY>` with the credentials of the user created in the previous section.

4. Create a bucket, upload a test file, and list the bucket's contents. Note that S3 bucket names must be lowercase:

    ```bash
    aws s3 mb s3://my-bucket
    echo "Hello World!" > testfile.txt
    aws s3 cp testfile.txt s3://my-bucket
    aws s3 ls s3://my-bucket
    ```
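
To clean up the test resources afterwards, remove the object first and then the (now empty) bucket:

```bash
aws s3 rm s3://my-bucket/testfile.txt
aws s3 rb s3://my-bucket
```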

## Conclusion

You have successfully configured an Amazon S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any Amazon S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/).