3 changes: 3 additions & 0 deletions .ansible-lint
@@ -0,0 +1,3 @@
---
skip_list:
- galaxy[no-changelog]
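For orientation: `skip_list` suppresses the listed rules entirely. A minimal sketch of an alternative, assuming the same rule id, that would keep the finding visible without failing the run (ansible-lint also supports `warn_list`):

```yaml
---
# Hypothetical alternative .ansible-lint: downgrade instead of suppress.
warn_list:
  - galaxy[no-changelog] # reported as a warning, does not fail the lint run
```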
27 changes: 24 additions & 3 deletions .github/workflows/test.yaml
@@ -1,8 +1,8 @@
---
name: Test

on:
push:

jobs:
build:
runs-on: ubuntu-latest
@@ -16,6 +16,27 @@ jobs:
spelling:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v6
- name: Check spelling
uses: crate-ci/typos@master

markdown-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: markdownlint-cli
uses: DavidAnson/markdownlint-cli2-action@main
with:
globs: "**/*.md"

ansible-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Run ansible-lint
uses: ansible/ansible-lint@main
with:
requirements_file: requirements.yaml
args: "-w role-name"
# for now we need to ignore failures, as ansible-lint throws a lot of errors that need to be fixed over time
continue-on-error: true
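For reference, `-w role-name` downgrades ansible-lint's `role-name` rule from an error to a warning, and `continue-on-error: true` keeps the whole job advisory while the backlog is worked off. A sketch of how the CLI flag could move into the repository config once the job is enforced, assuming ansible-lint's `warn_list` key:

```yaml
---
# Hypothetical .ansible-lint carrying the `-w role-name` downgrade:
skip_list:
  - galaxy[no-changelog]
warn_list:
  - role-name # warn only; all other rules fail the run again
```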
5 changes: 5 additions & 0 deletions .markdownlint.json
@@ -0,0 +1,5 @@
{
"default": true,
"MD013": false,
"MD059": false
}
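MD013 (line length) and MD059 (descriptive link text) are switched off wholesale above. For comparison, a sketch — assuming markdownlint's per-rule options and its YAML config support — that relaxes MD013 instead of disabling it:

```yaml
# Hypothetical .markdownlint.yaml equivalent with a relaxed line limit:
default: true
MD013:
  line_length: 120 # allow lines up to 120 characters instead of failing at 80
MD059: false
```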
3 changes: 0 additions & 3 deletions CONTRIBUTING.md

This file was deleted.

5 changes: 5 additions & 0 deletions Makefile
@@ -7,3 +7,8 @@ test:
test-local:
docker pull metalstack/metal-deployment-base:latest
docker run --rm -it -v $(PWD):/work -w /work metalstack/metal-deployment-base:latest make test

.PHONY: lint
lint:
docker run --rm -v $(PWD):/workdir davidanson/markdownlint-cli2:v0.21.0 "**/*.md"
docker run --rm -v $(PWD):/work -w /work --entrypoint sh pipelinecomponents/ansible-lint:edge -c 'ansible-galaxy install -r requirements.yaml && /entrypoint.sh'
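With this target in place, `make lint` reproduces both CI linters locally: markdownlint-cli2 over all Markdown files, and ansible-lint inside the container, where `ansible-galaxy install -r requirements.yaml` runs first so role dependencies are resolvable — mirroring the `requirements_file` input of the CI job.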
5 changes: 3 additions & 2 deletions control-plane/roles/gardener-operator/README.md
@@ -10,9 +10,10 @@ Check out the Gardener project for further documentation on [gardener.cloud](htt

If you are still using the `gardener` role for setting up the Gardener, please read the following notes for the migration to the Gardener Operator.

<!-- markdownlint-disable-next-line no-blanks-blockquote -->
> [!CAUTION]
> The migration requires a downtime of the Gardener for end-users. The API servers of the end-users are not disrupted.

<!-- markdownlint-disable-next-line no-blanks-blockquote -->
> [!IMPORTANT]
> The migration requires either waiting for Gardener `v1.119` or using a backported feature to `force-redeploy` the existing Gardenlets. If you want to use the backports, please set the following overwrites:
>
@@ -38,7 +39,7 @@ Here are the steps for the migration:
1. ⚠️ If you migrate from a standalone ETCD, it is necessary to explicitly set `gardener_operator_high_availability_control_plane` to `false`. After the initial deployment of the virtual garden has succeeded, you can toggle this field to `true` in order to migrate to an HA control plane (see the sketch after this list). In case you deployed without following this instruction, please repair your ETCD as described in [Recovering Etcd Clusters](https://gardener.cloud/docs/other-components/etcd-druid/recovering-etcd-clusters/).
1. Deploy the roles `gardener-operator`, `gardener-extensions`, `gardener-virtual-garden-access` and `gardener-cloud-profile` (order matters).
- In case the `etcd-druid` does not start reconciling the `ETCD` resource for the virtual garden, you might have to manually add the finalizer `druid.gardener.cloud/etcd-druid` on the `ETCD` resource.
-1. Manually deploy a kubeconfig secret for remote Gardenlet deployment through the Gardener Operator into the Virtual Garden as described [here](https://gardener.cloud/docs/gardener/deployment/deploy_gardenlet_via_operator/#remote-clusters). Delete the old Gardenlet helm chart from the original Gardener cluster and deploy the Gardenlet through the `gardener-gardenlet` role. Don't forget to specify the `gardenClientConnection.gardenClusterAddress` (see https://github.com/gardener/gardener/pull/11996)
+1. Manually deploy a kubeconfig secret for remote Gardenlet deployment through the Gardener Operator into the Virtual Garden as described [here](https://gardener.cloud/docs/gardener/deployment/deploy_gardenlet_via_operator/#remote-clusters). Delete the old Gardenlet helm chart from the original Gardener cluster and deploy the Gardenlet through the `gardener-gardenlet` role. Don't forget to specify the `gardenClientConnection.gardenClusterAddress` (see <https://github.com/gardener/gardener/pull/11996>)
- The gardenlet name needs to be identical to the old name of the initial seed in order to take over the existing resources. Usually, we used the name of the stage for this seed.
1. If you did not take over the existing certificates from the previous Virtual Garden, it might be necessary to run `kubectl --context garden annotate managedseeds -n garden <managed-seed-resource> gardener.cloud/operation=renew-kubeconfig` in order to fix the Gardenlet deployments.
1. Reconcile your shoots; this should result in a stable setup.
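A minimal sketch of the two-phase overwrite from the ETCD step above, assuming the variable is set via inventory group vars:

```yaml
# Phase 1: first deployment of the virtual garden when migrating from a standalone ETCD.
gardener_operator_high_availability_control_plane: false
# Phase 2: only after the initial deployment has succeeded, flip the value
# and deploy again to migrate to an HA control plane:
# gardener_operator_high_availability_control_plane: true
```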
@@ -1,6 +1,6 @@
# gardener-virtual-garden-access

-Creates a managed resource that rotates the token for a valid kubeconfig to access the Virtual Garden as described in https://gardener.cloud/docs/gardener/concepts/operator/#virtual-garden-kubeconfig.
+Creates a managed resource that rotates the token for a valid kubeconfig to access the Virtual Garden as described in <https://gardener.cloud/docs/gardener/concepts/operator/#virtual-garden-kubeconfig>.

## Variables

2 changes: 1 addition & 1 deletion control-plane/roles/isolated-clusters/README.md
@@ -30,7 +30,7 @@ The `control-plane-defaults` folder contains defaults that are used by multiple

| Name | Mandatory | Description |
| ---------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------ |
| isolated_clusters_virtual_garden_kubeconfig | | The kubeconfig to access the virtual garden as a string value. |
| isolated_clusters_ntp_image_name | | The image name of the ntp service for the partition. |
| isolated_clusters_ntp_image_tag | yes | The tag or version of the ntp service container image. |
| isolated_clusters_ntp_namespace | | The namespace to deploy the ntp server to. |
2 changes: 1 addition & 1 deletion control-plane/roles/metal-python/README.md
@@ -19,7 +19,7 @@ None

## Examples

-```
+```yaml
- name: Install metal-python
include_role:
name: metal-roles/control-plane/roles/metal-python
2 changes: 1 addition & 1 deletion control-plane/roles/metal/README.md
@@ -218,7 +218,7 @@ Configuration for metal-apiserver:

| Name | Mandatory | Description |
| ---------------------------------------------- | --------- | -------------------------------------------------------------------- |
| metal_apiserver_auditing_enabled | | Whether or not to configure timescaledb auditing. Default true. |
| metal_apiserver_auditing_timescaledb_host | | The timescaledb host |
| metal_apiserver_auditing_timescaledb_port | | The timescaledb port |
| metal_apiserver_auditing_timescaledb_db | | The timescaledb database name |
2 changes: 1 addition & 1 deletion control-plane/roles/monitoring/README.md
@@ -1,4 +1,4 @@
-****# monitoring
+# monitoring

This role is designed to set up monitoring using Ansible.
The role includes tasks to install and configure the following monitoring tools:
6 changes: 3 additions & 3 deletions control-plane/roles/zitadel/README.md
@@ -7,12 +7,12 @@ Role that deploys, manages, and configures [Zitadel](https://zitadel.com/), an

## UI

-Because `ExternalSecure: true` is set by default, Zitadel is only available over HTTPS. Using Zitadel with HTTP does currently not work due to https://github.com/zitadel/zitadel/issues/11019.
+Because `ExternalSecure: true` is set by default, Zitadel is only available over HTTPS. Using Zitadel with HTTP currently does not work due to <https://github.com/zitadel/zitadel/issues/11019>.

## Other

-- Login image not loading because of csp (https://github.com/zitadel/zitadel/pull/11088)
-- For deploying data automatically through CI, we use https://github.com/metal-stack/zitadel-init
+- Login image not loading because of csp (<https://github.com/zitadel/zitadel/pull/11088>)
+- For deploying data automatically through CI, we use <https://github.com/metal-stack/zitadel-init>

## Variables

2 changes: 1 addition & 1 deletion partition/roles/docker-on-cumulus/README.md
@@ -7,4 +7,4 @@ Installs docker on cumulus in the default vrf.
| Name | Mandatory | Description |
| ------------------------------ | --------- | --------------------------------------------- |
| docker_fluentd_logging_enabled | | Enables fluentd logging for the Docker daemon |
| docker_fluentd_endpoint | | The fluentd endpoint to log to |
2 changes: 1 addition & 1 deletion partition/roles/lvm/README.md
@@ -26,4 +26,4 @@ lvm_lvs:
fstype: ext4
mountpath: /metal-image-cache-sync
opts: --mirrors 1 --type raid1 --nosync
```