Add s390x integration tests #638


Draft · wants to merge 8 commits into base: main
11 changes: 9 additions & 2 deletions .github/workflows/integration_test.yaml
@@ -42,7 +42,9 @@ jobs:
_, runner, task, variant = job.split(":")
# Example: "test_charm.py"
task = task.removeprefix("tests/spread/")
if runner.endswith("-arm"):
if "s390x" in runner:
architecture = "s390x"
elif runner.endswith("-arm"):
architecture = "arm64"
else:
architecture = "amd64"
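Pulled out of the workflow, the matrix logic above is a small pure function and can be sanity-checked standalone (runner labels taken from this PR):

```python
def runner_architecture(runner: str) -> str:
    """Map a GitHub runner label to a charm architecture.

    Mirrors the workflow change above: s390x is matched by substring because
    IS-hosted labels (e.g. "self-hosted-linux-s390x-noble-edge") embed the
    architecture mid-label rather than as a suffix.
    """
    if "s390x" in runner:
        return "s390x"
    if runner.endswith("-arm"):
        return "arm64"
    return "amd64"
```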
@@ -86,15 +88,20 @@ jobs:
runs-on: ${{ matrix.job.runner }}
timeout-minutes: 217 # Sum of steps `timeout-minutes` + 5
steps:
- name: Free up disk space
- name: (GitHub hosted) Free up disk space
timeout-minutes: 1
if: ${{ !contains(matrix.job.runner, 'self-hosted') }}
run: |
printf '\nDisk usage before cleanup\n'
df --human-readable
# Based on https://github.com/actions/runner-images/issues/2840#issuecomment-790492173
rm -r /opt/hostedtoolcache/
printf '\nDisk usage after cleanup\n'
df --human-readable
- name: (IS hosted) Disk usage
timeout-minutes: 1
if: ${{ contains(matrix.job.runner, 'self-hosted') }}
run: df --human-readable
- name: Checkout
timeout-minutes: 3
uses: actions/checkout@v4
2 changes: 1 addition & 1 deletion concierge.yaml
@@ -4,7 +4,7 @@ juju:
providers:
microk8s:
enable: true
bootstrap: true
bootstrap: false
addons:
- dns
- hostpath-storage
56 changes: 52 additions & 4 deletions spread.yaml
@@ -90,13 +90,29 @@ backends:
AWS_SECRET_KEY: '$(HOST: echo $AWS_SECRET_KEY)'
GCP_ACCESS_KEY: '$(HOST: echo $GCP_ACCESS_KEY)'
GCP_SECRET_KEY: '$(HOST: echo $GCP_SECRET_KEY)'
DOCKERHUB_MIRROR: '$(HOST: echo $DOCKERHUB_MIRROR)'
systems:
- ubuntu-24.04:
username: runner
- ubuntu-24.04-arm:
username: runner
variants:
- -juju29
- self-hosted-linux-s390x-noble-edge:
username: ubuntu
environment:
# Several Python packages (e.g. cryptography, bcrypt) do not publish s390x wheels
# on PyPI, so they must be built from source, which requires pkg-config and a
# Rust toolchain. (These packages are built while installing poetry or the
# integration test Python dependencies.)
CONCIERGE_EXTRA_DEBS: pipx,pkg-config,rustup
# The microk8s snap for s390x is only available on the edge risk level
# IS-hosted GitHub runners do not work with the strictly confined microk8s snap:
# https://github.com/canonical/microk8s/issues/5082
# https://chat.canonical.com/canonical/pl/i6yydsx5ifrepp56khq5fq5dke
CONCIERGE_MICROK8S_CHANNEL: latest/edge
variants:
- -juju29

suites:
tests/spread/:
@@ -113,10 +129,42 @@ prepare: |
snap refresh --hold
chown -R root:root "$SPREAD_PATH"
cd "$SPREAD_PATH"
snap install --classic concierge
# Install via snap once https://github.com/canonical/concierge/pull/81 is released
go install github.com/canonical/concierge@latest

if [[ -n "$DOCKERHUB_MIRROR" ]]
then
# Running on IS-hosted runner; configure microk8s to use Docker Hub mirror
# Run before concierge prepare because of https://github.com/canonical/concierge/issues/75

snap install microk8s --channel "$CONCIERGE_MICROK8S_CHANNEL" --classic

# Wait for microk8s to populate iptables
# https://chat.canonical.com/canonical/pl/jo5cg6wqjjrudqd5ybj6hhttee
microk8s status --wait-ready

tee /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml << EOF
server = "$DOCKERHUB_MIRROR"
[host."${DOCKERHUB_MIRROR#'https://'}"]
capabilities = ["pull", "resolve"]
EOF
microk8s stop
microk8s start
fi
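The `[host."…"]` key in the heredoc is derived with the shell expansion `${DOCKERHUB_MIRROR#'https://'}`. The same prefix strip in Python, shown with a hypothetical mirror URL:

```python
def mirror_host(mirror_url: str) -> str:
    """Strip a leading "https://", as ${DOCKERHUB_MIRROR#'https://'} does in
    the prepare script, to produce the containerd hosts.toml host key."""
    prefix = "https://"
    return mirror_url[len(prefix):] if mirror_url.startswith(prefix) else mirror_url
```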

# Install charmcraft & pipx on the lxd-vm backend and pipx on IS-hosted runners
~/go/bin/concierge prepare --trace

microk8s config | juju add-k8s my-k8s --client
juju bootstrap my-k8s concierge-microk8s
juju add-model testing

# Install charmcraft & pipx (on lxd-vm backend)
concierge prepare --trace
if [[ $SPREAD_SYSTEM == *"s390x"* ]]
then
rustup set profile minimal
# TODO add renovate comment for rust version
rustup default 1.88.0
fi

pipx install tox poetry
prepare-each: |
@@ -128,7 +176,7 @@ prepare-each: |
poetry add --lock --group integration juju@^2
fi
# `concierge prepare` needs to be run for each spread job in case Juju version changed
concierge prepare --trace
~/go/bin/concierge prepare --trace

# Unable to set constraint on all models because of Juju bug:
# https://bugs.launchpad.net/juju/+bug/2065050
2 changes: 1 addition & 1 deletion tests/integration/backups.py
@@ -21,7 +21,7 @@
logger = logging.getLogger(__name__)

S3_INTEGRATOR = "s3-integrator"
S3_INTEGRATOR_CHANNEL = "latest/stable"
S3_INTEGRATOR_CHANNEL = "1/edge" # Use edge for s390x
MYSQL_APPLICATION_NAME = "mysql-k8s"
TIMEOUT = 10 * 60
SERVER_CONFIG_USER = "serverconfig"
3 changes: 3 additions & 0 deletions tests/integration/markers.py
@@ -18,3 +18,6 @@
arm64_only = pytest.mark.skipif(
architecture.architecture != "arm64", reason="Requires arm64 architecture"
)
s390x_only = pytest.mark.skipif(
architecture.architecture != "s390x", reason="Requires s390x architecture"
)
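A marker defined this way is applied as a decorator. A self-contained sketch (the `architecture` class below is a stand-in for the `tests.integration.architecture` module, whose real value is read from the host):

```python
import pytest


class architecture:  # stand-in for tests.integration.architecture
    architecture = "s390x"


# Same shape as the marker added above.
s390x_only = pytest.mark.skipif(
    architecture.architecture != "s390x", reason="Requires s390x architecture"
)


@s390x_only
def test_needs_s390x():
    assert architecture.architecture == "s390x"
```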
20 changes: 19 additions & 1 deletion tests/integration/test_architecture.py
@@ -55,4 +55,22 @@ async def test_amd_charm_on_arm_host(ops_test: OpsTest) -> None:
)


# TODO: add s390x test
@markers.s390x_only
async def test_amd_charm_on_s390x_host(ops_test: OpsTest) -> None:
"""Tries deploying an amd64 charm on s390x host."""
charm = "./mysql-k8s_ubuntu@22.04-amd64.charm"

await ops_test.model.deploy(
charm,
application_name=APP_NAME,
num_units=1,
config={"profile": "testing"},
resources={"mysql-image": METADATA["resources"]["mysql-image"]["upstream-source"]},
base="ubuntu@22.04",
)

await ops_test.model.wait_for_idle(
apps=[APP_NAME],
status="error",
raise_on_error=False,
)
3 changes: 2 additions & 1 deletion tests/integration/test_backup_aws.py
@@ -29,6 +29,7 @@
host_ip = socket.gethostbyname(socket.gethostname())

S3_INTEGRATOR = "s3-integrator"
S3_INTEGRATOR_CHANNEL = "1/edge" # Use edge for s390x
TIMEOUT = 10 * 60
CLUSTER_ADMIN_PASSWORD = "clusteradminpassword"
SERVER_CONFIG_PASSWORD = "serverconfigpassword"
@@ -102,7 +103,7 @@ async def test_build_and_deploy(ops_test: OpsTest, charm) -> None:

logger.info("Deploying s3-integrator")

await ops_test.model.deploy(S3_INTEGRATOR, channel="stable", base="ubuntu@22.04")
await ops_test.model.deploy(S3_INTEGRATOR, channel=S3_INTEGRATOR_CHANNEL, base="ubuntu@22.04")
await ops_test.model.relate(mysql_application_name, S3_INTEGRATOR)

await ops_test.model.wait_for_idle(
3 changes: 2 additions & 1 deletion tests/integration/test_backup_ceph.py
@@ -33,6 +33,7 @@
host_ip = socket.gethostbyname(socket.gethostname())

S3_INTEGRATOR = "s3-integrator"
S3_INTEGRATOR_CHANNEL = "1/edge" # Use edge for s390x
TIMEOUT = 10 * 60
CLUSTER_ADMIN_PASSWORD = "clusteradminpassword"
SERVER_CONFIG_PASSWORD = "serverconfigpassword"
@@ -158,7 +159,7 @@ async def test_build_and_deploy(ops_test: OpsTest, charm) -> None:

logger.info("Deploying s3-integrator")

await ops_test.model.deploy(S3_INTEGRATOR, channel="stable", base="ubuntu@22.04")
await ops_test.model.deploy(S3_INTEGRATOR, channel=S3_INTEGRATOR_CHANNEL, base="ubuntu@22.04")
await ops_test.model.relate(mysql_application_name, S3_INTEGRATOR)

await ops_test.model.wait_for_idle(
3 changes: 2 additions & 1 deletion tests/integration/test_backup_gcp.py
@@ -29,6 +29,7 @@
host_ip = socket.gethostbyname(socket.gethostname())

S3_INTEGRATOR = "s3-integrator"
S3_INTEGRATOR_CHANNEL = "1/edge" # Use edge for s390x
TIMEOUT = 10 * 60
CLUSTER_ADMIN_PASSWORD = "clusteradminpassword"
SERVER_CONFIG_PASSWORD = "serverconfigpassword"
@@ -102,7 +103,7 @@ async def test_build_and_deploy(ops_test: OpsTest, charm) -> None:

logger.info("Deploying s3-integrator")

await ops_test.model.deploy(S3_INTEGRATOR, channel="stable", base="ubuntu@22.04")
await ops_test.model.deploy(S3_INTEGRATOR, channel=S3_INTEGRATOR_CHANNEL, base="ubuntu@22.04")
await ops_test.model.relate(mysql_application_name, S3_INTEGRATOR)

await ops_test.model.wait_for_idle(
8 changes: 3 additions & 5 deletions tests/integration/test_tls.py
@@ -32,8 +32,8 @@

if juju_.has_secrets:
tls_app_name = "self-signed-certificates"
if architecture.architecture == "arm64":
tls_channel = "latest/edge"
if architecture.architecture == "s390x":
tls_channel = "1/edge"
else:
tls_channel = "latest/stable"
tls_config = {"ca-common-name": "Test CA"}
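The channel selection above reduces to a one-liner when sketched standalone (ignoring the surrounding `juju_.has_secrets` branch):

```python
def tls_channel_for(arch: str) -> str:
    """Pick the self-signed-certificates channel as the change above does:
    s390x builds are only published on the 1/edge channel."""
    return "1/edge" if arch == "s390x" else "latest/stable"
```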
@@ -123,9 +123,7 @@ async def test_enable_tls(ops_test: OpsTest) -> None:
# Deploy TLS Certificates operator.
logger.info("Deploy TLS operator")
async with ops_test.fast_forward("60s"):
await ops_test.model.deploy(
tls_app_name, channel=tls_channel, config=tls_config, base="ubuntu@22.04"
)
await ops_test.model.deploy(tls_app_name, channel=tls_channel, config=tls_config)
await ops_test.model.wait_for_idle(apps=[tls_app_name], status="active", timeout=15 * 60)

# Relate with TLS charm
9 changes: 0 additions & 9 deletions tests/spread/test_architecture.py/task.yaml

This file was deleted.

9 changes: 0 additions & 9 deletions tests/spread/test_async_replication.py/task.yaml

This file was deleted.

9 changes: 0 additions & 9 deletions tests/spread/test_backup_aws.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_backup_ceph.py/task.yaml

This file was deleted.

9 changes: 0 additions & 9 deletions tests/spread/test_backup_gcp.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_backup_pitr_aws.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_backup_pitr_gcp.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_charm.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_cos_integration_bundle.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_crash_during_setup.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_database.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_k8s_endpoints.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_log_rotation.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_multi_relations.py/task.yaml

This file was deleted.

9 changes: 0 additions & 9 deletions tests/spread/test_mysql_root.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_node_drain.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_replication_data_consistency.py/task.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/spread/test_replication_data_isolation.py/task.yaml

This file was deleted.
