
Commit 3616595

Merge pull request #1793 from stackhpc/tenks-ci-zuul

Add ci-tenks environment and zuul CI

2 parents 6f7ea1f + 0511f0d

37 files changed: +971 -0 lines
Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@
==========
ci-tenks
==========

The ``ci-tenks`` Kayobe environment is used to test seed services.
It is currently a work in progress.

The environment is deployed using ``automated-deployment.sh``. It bootstraps
localhost as a hypervisor for a seed and one controller instance. The seed
provisions the controller with Bifrost.

It currently tests:

* Seed hypervisor host configuration
* Seed VM provisioning
* Seed host configuration
* Pulp deployment
* Pulp container syncing (one container - Bifrost)
* Bifrost overcloud provisioning

In the future it could test:

* Pulp package syncing
* Overcloud host configuration, pulling packages from a local Pulp
* Upgrades (host OS and OpenStack)
* Multi-node OpenStack deployments

  * Multiple controllers
  * Multiple compute nodes (and live migration)
  * Multiple storage nodes (Ceph)

These extensions depend on more SMS hypervisor capacity and improved sync times
for the local Pulp instance.

Prerequisites
=============

* A Rocky Linux 9 or Ubuntu 24.04 (Noble) host
* 16GB of memory
* 4 cores
* No LVM

Setup
=====

The environment is designed to run in CI, but it can also be deployed
manually.

Access the host via SSH. You may wish to start a ``tmux`` session.
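
For example (the user and host names here are placeholders):

.. parsed-literal::

   ssh <user>@<seed-hypervisor-host>
   tmux new -s ci-tenks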

Download the setup script:

.. parsed-literal::

   curl -LO https://raw.githubusercontent.com/stackhpc/stackhpc-kayobe-config/stackhpc/2025.1/etc/kayobe/environments/ci-tenks/automated-deployment.sh

Change the permissions on the script:

.. parsed-literal::

   sudo chmod +x automated-deployment.sh

Acquire the Ansible Vault password for this repository, and store a
copy at ``~/vault-pw``.

.. note::

   The vault password is currently the same as for the ``ci-aio``
   environment.
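
For example, assuming the password has been acquired into a shell variable
named ``VAULT_PW`` (a hypothetical name used for illustration):

.. parsed-literal::

   echo "$VAULT_PW" > ~/vault-pw
   chmod 600 ~/vault-pw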

Run the setup script:

.. parsed-literal::

   ./automated-deployment.sh

doc/source/contributor/environments/index.rst

Lines changed: 1 addition & 0 deletions
@@ -9,4 +9,5 @@ The following Kayobe environments are provided with this configuration:
    ci-aio
    ci-builder
    ci-multinode
+   ci-tenks
    aufn-ceph
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
# CI-Tenks Kayobe Environment

This Kayobe environment is designed for use in CI, primarily to test Seed
service deployment and Bifrost provisioning. It is currently a work in
progress.

The environment is deployed using the `automated-deployment.sh` script. This
script bootstraps localhost as a hypervisor for a Seed and one Controller
instance. The Seed provisions the Controller using Bifrost.
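
A minimal manual run might look like the following, assuming the Ansible
Vault password has already been saved to `~/vault-pw`, where the script
expects to find it:

```bash
chmod +x automated-deployment.sh
./automated-deployment.sh
```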

### Current Tests

The environment currently tests the following:

* Seed Hypervisor host configuration
* Seed VM provisioning
* Seed host configuration
* Pulp deployment
* Pulp container syncing (one container - Bifrost)
* Bifrost Overcloud provisioning

### Future Enhancements

Potential future tests include:

* Pulp package syncing
* Overcloud host configuration, pulling packages from a local Pulp instance
* Full OpenStack service deployment (AIO or otherwise)
* Upgrades (host OS and OpenStack)
* Multi-node OpenStack deployments:
  * Multiple Controllers
  * Multiple Compute nodes (including live migration)
  * Multiple Storage nodes (e.g., Ceph)

These enhancements depend on increased SMS hypervisor capacity and improved
synchronization times for the local Pulp instance.
Lines changed: 121 additions & 0 deletions
@@ -0,0 +1,121 @@
#!/bin/bash

###########################################
# STACKHPC-KAYOBE-CONFIG ci-tenks VERSION #
###########################################

# Script for a full deployment.

set -eu

BASE_PATH=~
KAYOBE_BRANCH=stackhpc/2025.1
KAYOBE_CONFIG_REF=${KAYOBE_CONFIG_REF:-stackhpc/2025.1}
KAYOBE_ENVIRONMENT=${KAYOBE_ENVIRONMENT:-ci-tenks}

if [[ ! -f $BASE_PATH/vault-pw ]]; then
    echo "Vault password file not found at $BASE_PATH/vault-pw"
    exit 1
fi

export KAYOBE_VAULT_PASSWORD=$(cat $BASE_PATH/vault-pw)

# Install git, tmux and Python dependencies.
if which dnf >/dev/null 2>&1; then
    sudo dnf -y install git tmux python3.12
else
    sudo apt update
    sudo apt -y install git tmux gcc libffi-dev python3-dev python-is-python3 python3-pip python3.12-venv
fi

# Disable the firewall.
sudo systemctl is-enabled firewalld && sudo systemctl stop firewalld && sudo systemctl disable firewalld || true

# Disable SELinux both immediately and permanently.
if which setenforce >/dev/null 2>&1; then
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
fi

# Prevent sudo from performing DNS queries.
echo 'Defaults !fqdn' | sudo tee /etc/sudoers.d/no-fqdn

# Clone repositories.
cd $BASE_PATH
mkdir -p src
pushd src
if [[ ! -d kayobe-config ]]; then
    git clone https://github.com/stackhpc/stackhpc-kayobe-config kayobe-config
    pushd kayobe-config
    git checkout $KAYOBE_CONFIG_REF
    popd
fi
[[ -d kayobe ]] || git clone https://github.com/stackhpc/kayobe.git -b $KAYOBE_BRANCH
[[ -d kayobe/tenks ]] || (cd kayobe && git clone https://opendev.org/openstack/tenks.git)
popd

# Create a Kayobe virtualenv.
mkdir -p venvs
pushd venvs
if [[ ! -d kayobe ]]; then
    python3.12 -m venv kayobe
fi
# NOTE: Virtualenv's activate and deactivate scripts reference an
# unbound variable.
set +u
source kayobe/bin/activate
set -u
pip install -U pip
pip install -r ../src/kayobe-config/requirements.txt
popd

# Activate the Kayobe environment.
pushd $BASE_PATH/src/kayobe-config
source kayobe-env --environment $KAYOBE_ENVIRONMENT

# Configure host networking (bridge, routes & firewall).
sudo $KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/configure-local-networking.sh

# Bootstrap the Ansible control host.
kayobe control host bootstrap

# Configure the seed hypervisor host.
kayobe seed hypervisor host configure

# Provision the seed VM.
kayobe seed vm provision

# Configure the seed host, and deploy a local registry.
kayobe seed host configure

# Deploy a local Pulp server as a container on the seed VM.
kayobe seed service deploy --tags seed-deploy-containers --kolla-tags none

# Deploying the seed restarts the networking interfaces. Run
# configure-local-networking.sh again to re-add the routes.
sudo $KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/configure-local-networking.sh

# Sync package & container repositories.
# FIXME: The repo sync playbook takes around 30 minutes (tested on Ubuntu).
# For now we skip it and go straight to provisioning. Once we have a local
# package mirror, we can probably add it back in and at least get to host
# configuration.
#kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-sync.yml
#kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-publish.yml
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter=bifrost
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-publish.yml -e stackhpc_pulp_images_kolla_filter=bifrost

# Re-run the full deployment task to set up bifrost_deploy etc. using the
# newly-populated Pulp repository.
kayobe seed service deploy

# NOTE: Make sure to use ./tenks, since just 'tenks' will install via PyPI.
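# Tenks creates libvirt VMs on this host that stand in for bare metal
# overcloud nodes, which Bifrost then inspects and provisions.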
(export TENKS_CONFIG_PATH=$KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/tenks.yml && \
 export KAYOBE_CONFIG_SOURCE_PATH=$BASE_PATH/src/kayobe-config && \
 export KAYOBE_VENV_PATH=$BASE_PATH/venvs/kayobe && \
 cd $BASE_PATH/src/kayobe && \
 ./dev/tenks-deploy-overcloud.sh ./tenks)

# Inspect and provision the overcloud hardware.
kayobe overcloud inventory discover
kayobe overcloud hardware inspect
kayobe overcloud provision
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
---
###############################################################################
# Cephadm deployment configuration.

# List of additional cephadm commands to run before deployment.
# cephadm_commands:
#   - "config set global osd_pool_default_size {{ [3, groups['osds'] | length] | min }}"
#   - "config set global osd_pool_default_min_size {{ [3, groups['osds'] | length] | min }}"

# Ceph OSD specification.
cephadm_osd_spec:
  service_type: osd
  service_id: osd_spec_default
  placement:
    host_pattern: "*"
  data_devices:
    all: true

###############################################################################
# Ceph post-deployment configuration.

# List of Ceph erasure coding profiles. See stackhpc.cephadm.ec_profiles role
# for format.
cephadm_ec_profiles: []

# List of Ceph CRUSH rules. See stackhpc.cephadm.crush_rules role for format.
cephadm_crush_rules: []

# List of Ceph pools. See stackhpc.cephadm.pools role for format.
cephadm_pools:
  - name: backups
    application: rbd
    state: present
  - name: images
    application: rbd
    state: present
  - name: volumes
    application: rbd
    state: present
  - name: vms
    application: rbd
    state: present
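
# NOTE: The pools above match the default RBD pool names that Kolla Ansible
# uses for Cinder (volumes, backups), Glance (images) and Nova (vms).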

# List of Cephx keys. See stackhpc.cephadm.keys role for format.
cephadm_keys:
  - name: client.cinder
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images"
      mgr: "profile rbd pool=volumes, profile rbd pool=vms"
    state: present
  - name: client.cinder-backup
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes, profile rbd pool=backups"
      mgr: "profile rbd pool=volumes, profile rbd pool=backups"
    state: present
  - name: client.glance
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=images"
      mgr: "profile rbd pool=images"
    state: present
Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
#!/bin/bash

set -e
set -o pipefail

# This should be run on the seed hypervisor.

# IP addresses on the all-in-one Kayobe cloud network.
# These IP addresses map to those statically configured in
# etc/kayobe/network-allocation.yml and etc/kayobe/networks.yml.
controller_vip=192.168.39.2
seed_hv_ip=192.168.33.4

iface=$(ip route | awk '$1 == "default" {print $5; exit}')

# Private IP address by which the seed hypervisor is accessible in the cloud
# hosting the VM.
seed_hv_private_ip=$(ip a show dev $iface | awk '$1 == "inet" { gsub(/\/[0-9]*/,"",$2); print $2; exit }')

# Forward the following ports to the controller.
# 80: Horizon
# 6080: VNC console
forwarded_ports="80 6080"
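# If forwarding works as intended, Horizon should then be reachable from the
# hosting cloud via the hypervisor's private IP on port 80 (an expectation
# based on the DNAT rules below, not a tested guarantee).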

# Install iptables.
if which dnf >/dev/null 2>&1; then
    sudo dnf -y install iptables
else
    sudo apt update
    sudo apt -y install iptables
fi

# Configure local networking.
# Add bridges for the Kayobe networks.
if ! sudo ip l show brprov >/dev/null 2>&1; then
    sudo ip l add brprov type bridge
    sudo ip l set brprov up
    sudo ip a add $seed_hv_ip/24 dev brprov
fi

if ! sudo ip l show brcloud >/dev/null 2>&1; then
    sudo ip l add brcloud type bridge
    sudo ip l set brcloud up
fi

# On Rocky Linux, bridges without a port are DOWN, which causes network
# configuration to fail. Add dummy interfaces that can be plugged into the
# bridges.
for i in mgmt prov cloud; do
    if ! sudo ip l show dummy-$i >/dev/null 2>&1; then
        sudo ip l add dummy-$i type dummy
    fi
done

# Configure IP routing and NAT to allow the seed VM and overcloud hosts to
# route via this host to the outside world.
sudo iptables -A POSTROUTING -t nat -o $iface -j MASQUERADE
sudo sysctl -w net.ipv4.conf.all.forwarding=1

# FIXME: IP masquerading from the control plane fails without this on Ubuntu.
if ! which dnf >/dev/null 2>&1; then
    sudo modprobe br_netfilter
    echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
fi

# Configure port forwarding from the hypervisor to the Horizon GUI on the
# controller.
sudo iptables -A FORWARD -i $iface -o brprov -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A FORWARD -i brprov -o $iface -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
for port in $forwarded_ports; do
    # Allow new connections.
    sudo iptables -A FORWARD -i $iface -o brcloud -p tcp --syn --dport $port -m conntrack --ctstate NEW -j ACCEPT
    # Destination NAT.
    sudo iptables -t nat -A PREROUTING -i $iface -p tcp --dport $port -j DNAT --to-destination $controller_vip
    # Source NAT.
    sudo iptables -t nat -A POSTROUTING -o brcloud -p tcp --dport $port -d $controller_vip -j SNAT --to-source $seed_hv_private_ip
done

echo
echo "NOTE: The network configuration applied by this script is not"
echo "persistent across reboots."
echo "If you reboot the system, please re-run this script."
