
Commit 05ad36a

Add hacking files
This patch adds a series of files to the `hack` directory to facilitate development. These files include:

- Helper functions
- Script to deploy a local toy Ceph cluster
- Script to create the LVM loopback device inside the OpenShift VM
- OpenStack and Cinder client installation and configuration files
- Manifest for OpenStack with Cinder and Glance using the local Ceph cluster
1 parent 9f3d1ec commit 05ad36a

File tree

12 files changed: +304 -0 lines changed

hack/README.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# Hacking

### Ceph cluster

As described in the [Getting Started Guide](../README.md#getting-started), the
`dev/create-ceph.sh` script can help us create a *toy* Ceph cluster we can use
for development.

### LVM backend

Similar to the script that creates a *toy* Ceph backend, there is also a script
called `dev/create-lvm.sh` that creates an LVM Cinder VG, within the CRC VM,
that can be used by the Cinder LVM backend driver.

### Helpers

If we source `hack/dev/helpers.sh` we'll get a couple of helper functions:

- `crc_login`: To log in to the OpenShift cluster.
- `crc_ssh`: To SSH into the OpenShift VM or to run SSH commands in it.

### SSH OpenShift VM

We can SSH into the OpenShift VM in multiple ways: using `oc debug`, using
`ssh`, or using `virsh console`.

With `oc debug`:

```sh
$ oc get node
NAME                 STATUS   ROLES           AGE   VERSION
crc-p9hmx-master-0   Ready    master,worker   26d   v1.24.0+4f0dd4d

$ oc debug node/crc-p9hmx-master-0

sh-4.4# chroot /host
```

To use `ssh` we can do:

```sh
$ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.crc/machines/crc/id_ecdsa core@`crc ip`

[core@crc-p9hmx-master-0 ~]$
```

Or we can use the helper function defined before to just do `crc_ssh`.

### Containers in VM

The OpenShift VM runs CoreOS and uses [CRI-O](https://cri-o.io/) as the
container runtime, so once we are inside the VM we need to use `crictl`
to interact with the containers:

```sh
[core@crc-p9hmx-master-0 ~]$ sudo crictl ps
```

And its configuration files are under `/etc/containers`.
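
For example, to narrow the list down and follow a container's logs (a sketch; the `cinder` name filter and the container ID are just illustrations):

```sh
# Filter running containers by name regex, then follow the logs of one of them.
[core@crc-p9hmx-master-0 ~]$ sudo crictl ps --name cinder
[core@crc-p9hmx-master-0 ~]$ sudo crictl logs -f <container-id>
```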

hack/dev/admin-rc

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
# Clear any old environment that may conflict.
for key in $( set | awk -F= '/^OS_/ {print $1}' ); do unset "${key}" ; done

export OS_AUTH_TYPE=password
export OS_PASSWORD=12345678
export OS_AUTH_URL=http://keystone-public-openstack.apps-crc.testing
export OS_SYSTEM_SCOPE=all
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export COMPUTE_API_VERSION=1.1
export NOVA_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=default
export OS_IDENTITY_API_VERSION='3'
export OS_USER_DOMAIN_NAME='Default'
export OS_PROJECT_DOMAIN_NAME='Default'
export OS_CACERT="/etc/pki/ca-trust/source/anchors/cm-local-ca.pem"
# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi
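
A quick way to exercise this rc file (a sketch; it assumes the `python-openstackclient` package is installed and the CA certificate referenced by `OS_CACERT` is already in place):

```sh
# Load the admin credentials into the current shell and ask Keystone for the
# service catalog to confirm that authentication works.
source hack/dev/admin-rc
openstack catalog list
```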

hack/dev/ceph/I_AM_A_DEMO

Whitespace-only changes.
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
[client.admin]
key = AQBCtBhj0gM6FRAACq4EGHK6qYqRBSbw4zFavg==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"

hack/dev/ceph/ceph.conf

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
[global]
fsid = 5fe62cc7-0392-4a32-8466-081ce0ea970f
mon initial members = localhost
mon host = v2:192.168.130.1:3300/0
osd crush chooseleaf type = 0
osd journal size = 100
public network = 0.0.0.0/0
cluster network = 0.0.0.0/0
osd pool default size = 1
mon warn on pool no redundancy = false
auth allow insecure global id reclaim = false
osd objectstore = bluestore

[osd.0]
osd data = /var/lib/ceph/osd/ceph-0
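
Once the demo container started by `create-ceph.sh` (below) is running, a quick way to confirm this configuration and the keyrings work together (a sketch, relying on the container being named `ceph` as in that script):

```sh
# The conf and keyrings are bind-mounted from /etc/ceph, and the client reaches
# the monitor at the mon host configured above (192.168.130.1:3300).
sudo podman exec ceph ceph -s
```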

hack/dev/ceph/ceph.mon.keyring

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
[mon.]
key = AQBCtBhjSAiDFhAAiNDfWsKMES1krJAye5sk0Q==
caps mon = "allow *"
[client.admin]
key = AQBCtBhj0gM6FRAACq4EGHK6qYqRBSbw4zFavg==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"

hack/dev/clouds.yaml

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
clouds:
  default:
    auth:
      auth_url: http://keystone-public-openstack.apps-crc.testing
      project_name: admin
      username: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
    verify: false
    identity_api_version: '3'
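
A sketch of how this file might be used, assuming it is copied to one of the standard locations the client searches and that the password (which is deliberately not stored here) is passed explicitly:

```sh
# Make the cloud definition visible to the OpenStack client, then authenticate
# against the "default" cloud; 12345678 matches the password in hack/dev/admin-rc.
mkdir -p ~/.config/openstack
cp hack/dev/clouds.yaml ~/.config/openstack/
openstack --os-cloud default --os-password 12345678 token issue
```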

hack/dev/create-ceph.sh

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
#!/usr/bin/env bash
LOCATION=$(realpath "$(dirname -- "${BASH_SOURCE[0]}")")
sudo cp -R "${LOCATION}/ceph" /etc

# Change Ceph default features (if we want to attach using krbd)
# echo -e "\nrbd default features = 3" | sudo tee -a /etc/ceph/ceph.conf

echo 'Running Ceph Pacific demo cluster'
sudo podman run -d --name ceph --net=host -v /etc/ceph:/etc/ceph:z -v /lib/modules:/lib/modules -e MON_IP=192.168.130.1 -e CEPH_PUBLIC_NETWORK=0.0.0.0/0 -e DEMO_DAEMONS='osd' quay.io/ceph/daemon:latest-pacific demo

sleep 3

sudo podman exec -it ceph bash -c 'ceph osd pool create volumes 4 && ceph osd pool application enable volumes rbd'
sudo podman exec -it ceph bash -c 'ceph osd pool create backups 4 && ceph osd pool application enable backups rbd'
sudo podman exec -it ceph bash -c 'ceph osd pool create images 4 && ceph osd pool application enable images rgw'
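
After the script finishes, a quick sanity check (the demo container can take a bit longer than the 3-second sleep to come up fully):

```sh
# Show the overall cluster status and list the volumes/backups/images pools
# created above.
sudo podman exec ceph ceph -s
sudo podman exec ceph ceph osd pool ls detail
```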

hack/dev/create-lvm.sh

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -ev
set -x

LOCATION=$(realpath "$(dirname -- "${BASH_SOURCE[0]}")")
source "$LOCATION/helpers.sh"

# Enable iSCSI because it always fails to start for some reason, and just to be
# extra sure create the initiator name if it doesn't exist.
crc_ssh 'if [[ ! -e /etc/iscsi/initiatorname.iscsi ]]; then echo InitiatorName=`iscsi-iname` | sudo tee /etc/iscsi/initiatorname.iscsi; fi; if ! systemctl --no-pager status iscsid; then sudo systemctl restart iscsid; fi'

# Multipath fails to start because it doesn't have a configuration, so create
# it and restart the service.
crc_ssh 'if [[ ! -e /etc/multipath.conf ]]; then sudo mpathconf --enable --with_multipathd y --user_friendly_names n --find_multipaths y && sudo systemctl start multipathd; fi'

loopback_file="/var/home/core/cinder-volumes"
echo Creating $loopback_file
crc_ssh "if [[ ! -e $loopback_file ]]; then truncate -s 10G $loopback_file; fi"
crc_ssh "if ! sudo vgdisplay cinder-volumes; then sudo vgcreate cinder-volumes \`sudo losetup --show -f $loopback_file\` && sudo vgscan; fi"

hack/dev/helpers.sh

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

function crc_ssh {
    SSH_PARAMS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.crc/machines/crc/id_ecdsa"
    SSH_REMOTE="core@`crc ip`"
    ssh $SSH_PARAMS $SSH_REMOTE "$@"
}


function crc_login {
    echo Logging in
    oc login -u kubeadmin -p 12345678 https://api.crc.testing:6443
}
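
Typical usage from the repository root looks something like this (assuming a running CRC cluster and the kubeadmin password hard-coded in `crc_login`):

```sh
# Load the helpers into the current shell, log in to the OpenShift API,
# and run an ad-hoc command inside the CRC VM over SSH.
source hack/dev/helpers.sh
crc_login
crc_ssh uptime
```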
