
Commit 16ce482

Merge branch 'master' into migrate-metal-image-cache-sync-to-metal-apiserver
2 parents 41c7c66 + e9972a5 commit 16ce482

22 files changed: +95 -63 lines changed

.ansible-lint

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+---
+skip_list:
+  - galaxy[no-changelog]
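The new config tells ansible-lint to skip the `galaxy[no-changelog]` rule across the repository. A minimal local sketch, assuming `ansible-lint` is installed and run from the repository root (where it picks up `.ansible-lint` automatically):

```sh
# Install the galaxy requirements the roles depend on, then lint;
# -w mirrors the CI invocation and only warns on role-name findings.
ansible-galaxy install -r requirements.yaml
ansible-lint -w role-name
```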

.github/workflows/test.yaml

Lines changed: 24 additions & 3 deletions

@@ -1,8 +1,8 @@
+---
 name: Test
 
 on:
   push:
-
 jobs:
   build:
     runs-on: ubuntu-latest
@@ -16,6 +16,27 @@ jobs:
   spelling:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
       - name: Check spelling
-        uses: crate-ci/typos@master
+        uses: crate-ci/typos@master
+
+  markdown-lint:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v6
+      - name: markdownlint-cli
+        uses: DavidAnson/markdownlint-cli2-action@main
+        with:
+          globs: "**/*.md"
+
+  ansible-lint:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v6
+      - name: Run ansible-lint
+        uses: ansible/ansible-lint@main
+        with:
+          requirements_file: requirements.yaml
+          args: "-w role-name"
+        # for now we need to ignore, it throws a lot of errors that need to be fixed over time
+        continue-on-error: true
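The markdown and Ansible checks added here can also be reproduced locally through the new `make lint` target further down. A rough local equivalent of the spelling job, assuming the `typos` CLI is installed (for example via `cargo install typos-cli`):

```sh
# Scan the working tree for common misspellings, as the CI spelling job does.
typos
```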

.markdownlint.json

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+{
+  "default": true,
+  "MD013": false,
+  "MD059": false
+}
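`"default": true` keeps every markdownlint rule enabled except the two switched off here: MD013 (line length) and, in newer markdownlint releases, MD059 (descriptive link text). A local run that picks up this config, mirroring the `make lint` target added below:

```sh
# markdownlint-cli2 discovers .markdownlint.json in the mounted repository root.
docker run --rm -v "$PWD:/workdir" davidanson/markdownlint-cli2:v0.21.0 "**/*.md"
```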

CONTRIBUTING.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

Makefile

Lines changed: 5 additions & 0 deletions

@@ -7,3 +7,8 @@ test:
 test-local:
 	docker pull metalstack/metal-deployment-base:latest
 	docker run --rm -it -v $(PWD):/work -w /work metalstack/metal-deployment-base:latest make test
+
+.PHONY: lint
+lint:
+	docker run --rm -v $(PWD):/workdir davidanson/markdownlint-cli2:v0.21.0 "**/*.md"
+	docker run --rm -v $(PWD):/work --entrypoint bash -w /work --entrypoint sh pipelinecomponents/ansible-lint:edge -c 'ansible-galaxy install -r requirements.yaml && /entrypoint.sh'
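With the target in place, the markdown and Ansible lint checks from CI can be run locally with Docker as the only prerequisite:

```sh
# Runs markdownlint-cli2 and ansible-lint via the containers pinned in the Makefile.
make lint
```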

control-plane/roles/gardener-operator/README.md

Lines changed: 3 additions & 2 deletions

@@ -10,9 +10,10 @@ Check out the Gardener project for further documentation on [gardener.cloud](htt
 
 If you are still using the `gardener` role for setting up the Gardener, please read the following notes for the migration to the Gardener Operator.
 
+<!-- markdownlint-disable-next-line no-blanks-blockquote -->
 > [!CAUTION]
 > The migration requires a downtime of the Gardener for end-users. The API servers of the end-users are not disrupted.
-
+<!-- markdownlint-disable-next-line no-blanks-blockquote -->
 > [!IMPORTANT]
 > For the migration it is required to either wait until Gardener `v1.119` or use a backport feature to `force-redeploy` the existing Gardenlets. If you want to use the backports, please set the following overwrites:
 >
@@ -38,7 +39,7 @@ Here are the steps for the migration:
 1. ️⚠️ If you migrate from a standalone ETCD it is necessary to explicitly set `gardener_operator_high_availability_control_plane` to `false`. After the initial deployment of the virtual garden was successful, you can toggle this field to `true` in order to migrate to HA control plane. In case you deployed this without following this instruction, please repair your ETCD as described in [Recovering Etcd Clusters](https://gardener.cloud/docs/other-components/etcd-druid/recovering-etcd-clusters/).
 1. Deploy the roles `gardener-operator`, `gardener-extensions`, `gardener-virtual-garden-access` and `gardener-cloud-profile` (order matters).
    - In case the `etcd-druid` does not start reconciling the `ETCD` resource for the virtual garden, you might have to manually add the finalizer `druid.gardener.cloud/etcd-druid` on the `ETCD` resource.
-1. Manually deploy a kubeconfig secret for remote Gardenlet deployment through the Gardener Operator into the Virtual Garden as described [here](https://gardener.cloud/docs/gardener/deployment/deploy_gardenlet_via_operator/#remote-clusters). Delete the old Gardenlet helm chart from the original Gardener cluster and deploy the Gardenlet through the `gardener-gardenlet` role. Don't forget to specify the `gardenClientConnection.gardenClusterAddress` (see https://github.com/gardener/gardener/pull/11996)
+1. Manually deploy a kubeconfig secret for remote Gardenlet deployment through the Gardener Operator into the Virtual Garden as described [here](https://gardener.cloud/docs/gardener/deployment/deploy_gardenlet_via_operator/#remote-clusters). Delete the old Gardenlet helm chart from the original Gardener cluster and deploy the Gardenlet through the `gardener-gardenlet` role. Don't forget to specify the `gardenClientConnection.gardenClusterAddress` (see <https://github.com/gardener/gardener/pull/11996>)
    - The gardenlet name needs to be identical with the old name of the initial seed in order to take over the existing resources. Usually, we used the name of the stage for this seed.
 1. If you did not take over the existing certificates from the previous Virtual Garden, it might be necessary to run `kubectl --context garden annotate managedseeds -n garden <managed-seed-resource> gardener.cloud/operation=renew-kubeconfig` in order to fix the Gardenlet deployments.
 1. Reconcile your shoots, this should end up in a stable setup.

control-plane/roles/gardener-virtual-garden-access/README.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # gardener-virtual-garden-access
 
-Creates a managed resource that rotates the token for a valid kubeconfig to access the Virtual Garden as described in https://gardener.cloud/docs/gardener/concepts/operator/#virtual-garden-kubeconfig.
+Creates a managed resource that rotates the token for a valid kubeconfig to access the Virtual Garden as described in <https://gardener.cloud/docs/gardener/concepts/operator/#virtual-garden-kubeconfig>.
 
 ## Variables
 

control-plane/roles/isolated-clusters/README.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ The `control-plane-defaults` folder contains defaults that are used by multiple
 
 | Name | Mandatory | Description |
 | ---------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------ |
-| isolated_clusters_virtual_garden_kubeconfig | | The kubeconfig to access the virtual garden as a string value. |
+| isolated_clusters_virtual_garden_kubeconfig | | The kubeconfig to access the virtual garden as a string value. |
 | isolated_clusters_ntp_image_name | | The image name of the ntp service for the partition. |
 | isolated_clusters_ntp_image_tag | yes | The tag or version of the ntp service container image. |
 | isolated_clusters_ntp_namespace | | The namespace to deploy the ntp server to. |

control-plane/roles/metal-python/README.md

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ None
 
 ## Examples
 
-```
+```yaml
 - name: Install metal-python
   include_role:
     name: metal-roles/control-plane/roles/metal-python

control-plane/roles/metal/README.md

Lines changed: 1 addition & 1 deletion

@@ -218,7 +218,7 @@ Configuration for metal-apiserver:
 
 | Name | Mandatory | Description |
 | ---------------------------------------------- | --------- | -------------------------------------------------------------------- |
-| metal_apiserver_auditing_enabled | | Whether or not to configure timescaledb auditing. Default true. |
+| metal_apiserver_auditing_enabled | | Whether or not to configure timescaledb auditing. Default true. |
 | metal_apiserver_auditing_timescaledb_host | | The timescaledb host |
 | metal_apiserver_auditing_timescaledb_port | | The timescaledb port |
 | metal_apiserver_auditing_timescaledb_db | | The timescaledb database name |
