75 changes: 75 additions & 0 deletions doc/source/contributor/environments/ci-tenks.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,75 @@
==========
ci-tenks
==========

The ``ci-tenks`` Kayobe environment is used to test seed services.
It is currently a work in progress.

The environment is deployed using ``automated-deployment.sh``. It bootstraps
localhost as a hypervisor for a seed and one controller instance. The seed
provisions the controller with Bifrost.

It currently tests:

* Seed hypervisor host configuration
* Seed VM provisioning
* Seed host configuration
* Pulp deployment
* Pulp container syncing (one container - Bifrost)
* Bifrost overcloud provisioning

In the future it could test:

* Pulp package syncing
* Overcloud host configuration, pulling packages from a local Pulp
* Upgrades (Host OS and OpenStack)
* Multi-node OpenStack deployments

  * Multiple controllers
  * Multiple compute nodes (and live migration)
  * Multiple storage nodes (Ceph)

These extensions depend on more SMS hypervisor capacity and improved sync times
for the local Pulp instance.

Prerequisites
=============

* A Rocky Linux 9 or Ubuntu 24.04 (Noble) host
* 16GB of memory
* 4 cores
* No LVM

Setup
=====

The environment is designed to run in CI, but it can also be deployed
manually.

Access the host via SSH. You may wish to start a ``tmux`` session.

Download the setup script:

.. parsed-literal::

   curl -LO https://raw.githubusercontent.com/stackhpc/stackhpc-kayobe-config/stackhpc/2025.1/etc/kayobe/environments/ci-tenks/automated-deployment.sh

Make the script executable:

.. parsed-literal::

   chmod +x automated-deployment.sh

Acquire the Ansible Vault password for this repository, and store a
copy at ``~/vault-pw``.

.. note::

   The vault password is currently the same as for the ``ci-aio``
   environment.

Run the setup script:

.. parsed-literal::

   ./automated-deployment.sh
1 change: 1 addition & 0 deletions doc/source/contributor/environments/index.rst
@@ -9,4 +9,5 @@ The following Kayobe environments are provided with this configuration:
ci-aio
ci-builder
ci-multinode
ci-tenks
aufn-ceph
36 changes: 36 additions & 0 deletions etc/kayobe/environments/ci-tenks/README.md
@@ -0,0 +1,36 @@
# CI-Tenks Kayobe Environment

This Kayobe environment is designed for use in CI, primarily to test Seed
service deployment and Bifrost provisioning. It is currently a work in
progress.

The environment is deployed using the `automated-deployment.sh` script. This
script bootstraps the localhost as a hypervisor for a Seed and one Controller
instance. The Seed provisions the Controller using Bifrost.

### Current Tests

The environment currently tests the following:

* Seed Hypervisor host configuration
* Seed VM provisioning
* Seed host configuration
* Pulp deployment
* Pulp container syncing (one container - Bifrost)
* Bifrost Overcloud provisioning

### Future Enhancements

Potential future tests include:

* Pulp package syncing
* Overcloud host configuration, pulling packages from a local Pulp instance
* Full OpenStack service deployment (AIO or otherwise)
* Upgrades (Host OS and OpenStack)
* Multi-node OpenStack deployments:
  * Multiple Controllers
  * Multiple Compute nodes (including live migration)
  * Multiple Storage nodes (e.g., Ceph)

These enhancements depend on increased SMS hypervisor capacity and improved
synchronization times for the local Pulp instance.
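A typical manual run looks like the following (a sketch, assuming the Ansible
Vault password has already been stored at `~/vault-pw` as described in the
contributor documentation):

```shell
# Fetch the ci-tenks deployment script and make it executable.
curl -LO https://raw.githubusercontent.com/stackhpc/stackhpc-kayobe-config/stackhpc/2025.1/etc/kayobe/environments/ci-tenks/automated-deployment.sh
chmod +x automated-deployment.sh
# Run the full deployment (expects the vault password at ~/vault-pw).
./automated-deployment.sh
```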
121 changes: 121 additions & 0 deletions etc/kayobe/environments/ci-tenks/automated-deployment.sh
@@ -0,0 +1,121 @@
#!/bin/bash

###########################################
# STACKHPC-KAYOBE-CONFIG ci-tenks VERSION #
###########################################

# Script for a full deployment.

set -eu

BASE_PATH=~
KAYOBE_BRANCH=stackhpc/2025.1
KAYOBE_CONFIG_REF=${KAYOBE_CONFIG_REF:-stackhpc/2025.1}
KAYOBE_ENVIRONMENT=${KAYOBE_ENVIRONMENT:-ci-tenks}

if [[ ! -f $BASE_PATH/vault-pw ]]; then
    echo "Vault password file not found at $BASE_PATH/vault-pw"
    exit 1
fi

export KAYOBE_VAULT_PASSWORD=$(cat $BASE_PATH/vault-pw)

# Install git, tmux and Python.
if command -v dnf >/dev/null 2>&1; then
    sudo dnf -y install git tmux python3.12
else
    sudo apt update
    sudo apt -y install git tmux gcc libffi-dev python3-dev python-is-python3 python3-pip python3.12-venv
fi

# Disable the firewall.
sudo systemctl is-enabled firewalld && sudo systemctl stop firewalld && sudo systemctl disable firewalld || true

# Disable SELinux both immediately and permanently.
if command -v setenforce >/dev/null 2>&1; then
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
fi

# Prevent sudo from performing DNS queries.
echo 'Defaults !fqdn' | sudo tee /etc/sudoers.d/no-fqdn

# Clone repositories
cd $BASE_PATH
mkdir -p src
pushd src
if [[ ! -d kayobe-config ]]; then
    git clone https://github.com/stackhpc/stackhpc-kayobe-config kayobe-config
    pushd kayobe-config
    git checkout $KAYOBE_CONFIG_REF
    popd
fi
[[ -d kayobe ]] || git clone https://github.com/stackhpc/kayobe.git -b $KAYOBE_BRANCH
[[ -d kayobe/tenks ]] || (cd kayobe && git clone https://opendev.org/openstack/tenks.git)
popd

# Create Kayobe virtualenv
mkdir -p venvs
pushd venvs
if [[ ! -d kayobe ]]; then
    python3.12 -m venv kayobe
fi
# NOTE: Virtualenv's activate and deactivate scripts reference an
# unbound variable.
set +u
source kayobe/bin/activate
set -u
pip install -U pip
pip install -r ../src/kayobe-config/requirements.txt
popd

# Activate environment
pushd $BASE_PATH/src/kayobe-config
source kayobe-env --environment $KAYOBE_ENVIRONMENT

# Configure host networking (bridge, routes & firewall)
sudo $KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/configure-local-networking.sh

# Bootstrap the Ansible control host.
kayobe control host bootstrap

# Configure the seed hypervisor host.
kayobe seed hypervisor host configure

# Provision the seed VM.
kayobe seed vm provision

# Configure the seed host, and deploy a local registry.
kayobe seed host configure

# Deploy local pulp server as a container on the seed VM
kayobe seed service deploy --tags seed-deploy-containers --kolla-tags none

# Deploying the seed restarts the networking interfaces, so run
# configure-local-networking.sh again to re-add the routes.
sudo $KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/configure-local-networking.sh

# Sync package & container repositories.
# FIXME: repo sync playbook takes around 30 minutes (tested on ubuntu).
# for now we should skip it and just get to provisioning. Once we have a local
# package mirror, we can probably add it back in and at least get to host
# configuration.
#kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-sync.yml
#kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-repo-publish.yml
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-sync.yml -e stackhpc_pulp_images_kolla_filter=bifrost
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/pulp-container-publish.yml -e stackhpc_pulp_images_kolla_filter=bifrost

# Re-run full task to set up bifrost_deploy etc. using newly-populated pulp repo
kayobe seed service deploy

# NOTE: Make sure to use ./tenks, since just 'tenks' will install via PyPI.
(export TENKS_CONFIG_PATH=$KAYOBE_CONFIG_PATH/environments/$KAYOBE_ENVIRONMENT/tenks.yml && \
export KAYOBE_CONFIG_SOURCE_PATH=$BASE_PATH/src/kayobe-config && \
export KAYOBE_VENV_PATH=$BASE_PATH/venvs/kayobe && \
cd $BASE_PATH/src/kayobe && \
./dev/tenks-deploy-overcloud.sh ./tenks)

# Inspect and provision the overcloud hardware:
kayobe overcloud inventory discover
kayobe overcloud hardware inspect
kayobe overcloud provision
63 changes: 63 additions & 0 deletions etc/kayobe/environments/ci-tenks/cephadm.yml
@@ -0,0 +1,63 @@
---
###############################################################################
# Cephadm deployment configuration.

# List of additional cephadm commands to run before deployment
# cephadm_commands:
# - "config set global osd_pool_default_size {{ [3, groups['osds'] | length] | min }}"
# - "config set global osd_pool_default_min_size {{ [3, groups['osds'] | length] | min }}"

# Ceph OSD specification.
cephadm_osd_spec:
  service_type: osd
  service_id: osd_spec_default
  placement:
    host_pattern: "*"
  data_devices:
    all: true

###############################################################################
# Ceph post-deployment configuration.

# List of Ceph erasure coding profiles. See stackhpc.cephadm.ec_profiles role
# for format.
cephadm_ec_profiles: []

# List of Ceph CRUSH rules. See stackhpc.cephadm.crush_rules role for format.
cephadm_crush_rules: []

# List of Ceph pools. See stackhpc.cephadm.pools role for format.
cephadm_pools:
  - name: backups
    application: rbd
    state: present
  - name: images
    application: rbd
    state: present
  - name: volumes
    application: rbd
    state: present
  - name: vms
    application: rbd
    state: present

# List of Cephx keys. See stackhpc.cephadm.keys role for format.
cephadm_keys:
  - name: client.cinder
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images"
      mgr: "profile rbd pool=volumes, profile rbd pool=vms"
    state: present
  - name: client.cinder-backup
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes, profile rbd pool=backups"
      mgr: "profile rbd pool=volumes, profile rbd pool=backups"
    state: present
  - name: client.glance
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=images"
      mgr: "profile rbd pool=images"
    state: present
81 changes: 81 additions & 0 deletions etc/kayobe/environments/ci-tenks/configure-local-networking.sh
@@ -0,0 +1,81 @@
#!/bin/bash

set -e
set -o pipefail

# This should be run on the seed hypervisor.

# IP addresses on the all-in-one Kayobe cloud network.
# These IP addresses map to those statically configured in
# etc/kayobe/network-allocation.yml and etc/kayobe/networks.yml.
controller_vip=192.168.39.2
seed_hv_ip=192.168.33.4

iface=$(ip route | awk '$1 == "default" {print $5; exit}')

# Private IP address by which the seed hypervisor is accessible in the cloud
# hosting the VM.
seed_hv_private_ip=$(ip a show dev $iface | awk '$1 == "inet" { gsub(/\/[0-9]*/,"",$2); print $2; exit }')

# Forward the following ports to the controller.
# 80: Horizon
# 6080: VNC console
forwarded_ports="80 6080"

# Install iptables.
if command -v dnf >/dev/null 2>&1; then
    sudo dnf -y install iptables
else
    sudo apt update
    sudo apt -y install iptables
fi

# Configure local networking.
# Add bridges for the Kayobe networks.
if ! sudo ip l show brprov >/dev/null 2>&1; then
    sudo ip l add brprov type bridge
    sudo ip l set brprov up
    sudo ip a add $seed_hv_ip/24 dev brprov
fi

if ! sudo ip l show brcloud >/dev/null 2>&1; then
    sudo ip l add brcloud type bridge
    sudo ip l set brcloud up
fi

# On Rocky Linux, bridges without a port are DOWN, which causes network
# configuration to fail. Add a dummy interface and plug it into the bridge.
for i in prov cloud; do
    if ! sudo ip l show dummy-$i >/dev/null 2>&1; then
        sudo ip l add dummy-$i type dummy
    fi
    sudo ip l set dummy-$i up
    sudo ip l set dummy-$i master br$i
done

# Configure IP routing and NAT to allow the seed VM and overcloud hosts to
# route via this route to the outside world.
sudo iptables -A POSTROUTING -t nat -o $iface -j MASQUERADE
sudo sysctl -w net.ipv4.conf.all.forwarding=1

# FIXME: IP MASQUERADE from control plane fails without this on Ubuntu.
if ! command -v dnf >/dev/null 2>&1; then
    sudo modprobe br_netfilter
    echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
fi

# Configure port forwarding from the hypervisor to the Horizon GUI on the
# controller.
sudo iptables -A FORWARD -i $iface -o brprov -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A FORWARD -i brprov -o $iface -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
for port in $forwarded_ports; do
    # Allow new connections.
    sudo iptables -A FORWARD -i $iface -o brcloud -p tcp --syn --dport $port -m conntrack --ctstate NEW -j ACCEPT
    # Destination NAT.
    sudo iptables -t nat -A PREROUTING -i $iface -p tcp --dport $port -j DNAT --to-destination $controller_vip
    # Source NAT.
    sudo iptables -t nat -A POSTROUTING -o brcloud -p tcp --dport $port -d $controller_vip -j SNAT --to-source $seed_hv_private_ip
done

echo
echo "NOTE: The network configuration applied by this script is not"
echo "persistent across reboots."
echo "If you reboot the system, please re-run this script."