Commit b95ebf4
Change kubectl user from cluster-admin to system:admin
Updated kubectl commands to use 'system:admin' instead of 'cluster-admin'.
1 parent e7ef765 commit b95ebf4

File tree

4 files changed: +15 additions, -15 deletions

docs/modules/ROOT/pages/framework/backfill_billing.adoc

2 additions, 2 deletions

@@ -13,8 +13,8 @@ In accordance with the https://git.vshn.net/aline.abler/scriptofdoom[scriptofdoom.
 while read -r cronjob rest
 do
 echo $cronjob
-kubectl --as cluster-admin -n syn-appcat create job --from cronjob/$cronjob $cronjob --dry-run -oyaml | yq e '.spec.template.spec.containers[0].args[0] = "appuio-reporting report --timerange 1h --begin=$(date -d \"now -12 hours\" -u +\"%Y-%m-%dT%H:00:00Z\") --repeat-until=$(date -u +\"%Y-%m-%dT%H:00:00Z\")"' | kubectl --as cluster-admin apply -f -
-done <<< "$(kubectl --as cluster-admin -n syn-appcat get cronjobs.batch --no-headers)"
+kubectl --as=system:admin -n syn-appcat create job --from cronjob/$cronjob $cronjob --dry-run -oyaml | yq e '.spec.template.spec.containers[0].args[0] = "appuio-reporting report --timerange 1h --begin=$(date -d \"now -12 hours\" -u +\"%Y-%m-%dT%H:00:00Z\") --repeat-until=$(date -u +\"%Y-%m-%dT%H:00:00Z\")"' | kubectl --as=system:admin apply -f -
+done <<< "$(kubectl --as=system:admin -n syn-appcat get cronjobs.batch --no-headers)"
 ----

 This will loop over all the billing cronjobs in the `syn-appcat` namespace, create a new job from each of them, and replace the args with whatever we want.
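The loop in this hunk can be exercised without a cluster by feeding it canned `get cronjobs --no-headers` output. This is a minimal sketch with made-up cronjob names and schedules; only the `while read -r cronjob rest` pattern comes from the diff above.

```shell
#!/usr/bin/env bash
# Hypothetical sample of what `kubectl --as=system:admin -n syn-appcat get
# cronjobs.batch --no-headers` might print; the names and columns are invented.
sample_output='billing-postgresql   */10 * * * *   False   0   5m   30d
billing-redis        */10 * * * *   False   0   7m   30d'

# Same loop shape as in the hunk: take the first column (the cronjob name)
# and discard the rest of each line.
while read -r cronjob rest
do
  echo "$cronjob"
done <<< "$sample_output"
```

Once the extracted names look right, the real `kubectl create job ... | yq | kubectl apply` pipeline can be swapped back into the loop body.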

docs/modules/ROOT/pages/framework/runbooks/GuaranteedUptimeTarget.adoc

7 additions, 7 deletions

@@ -7,14 +7,14 @@ This alert is based on our SLI Exporter and how we in Appcat measure uptime of o

 == icon:bug[] Steps for Debugging

-There is no obvious reason why it happened, but we can easily check what went wrong. Every "guaranteed_availability" database has at least 2 replicas and a PodDisruptionBudget set to 1. So, if one replica is down, the second one should be up and running. If that failed, it means there is an issue with the database or the node itself.
+There is no obvious reason why it happened, but we can easily check what went wrong. Every "guaranteed_availability" database has at least 2 replicas and a PodDisruptionBudget set to 1. So, if one replica is down, the second one should be up and running. If that failed, it means there is an issue with the database or the node itself.

 .Finding the failed database
 Check the database name and namespace from the alert. There are two relevant namespaces: the claim namespace and the instance namespace. The instance namespace is generated and always has the format "vshn-<service_name(postgresql, redis, ...etc)>-<instance_name>".

 [source,bash]
 ----
-kubectl -n $instanceNamespace get pods
+kubectl -n $instanceNamespace get pods
 kubectl -n $instanceNamespace describe $failing_pod
 kubectl -n $instanceNamespace logs pods/$failing_pod
 ----
@@ -23,9 +23,9 @@ It might also be worth checking for failing Kubernetes Objects and Composites:
 [source,bash]
 ----
 # $instanceNamespace_generated_chars can be obtained like this: `echo vshn-postgresql-my-super-prod-5jfjn | rev | cut -d'-' -f1 | rev` ===> 5jfjn
-kubectl --as cluster-admin get objects | egrep $instanceNamespace_generated_chars
-kubectl --as cluster-admin describe objects $objectname
-kubectl --as cluster-admin describe xvshn[TAB here for specific service] | egrep $instanceNamespace_generated_chars
+kubectl --as=system:admin get objects | egrep $instanceNamespace_generated_chars
+kubectl --as=system:admin describe objects $objectname
+kubectl --as=system:admin describe xvshn[TAB here for specific service] | egrep $instanceNamespace_generated_chars
 ----

 .Check SLI Prober logs
@@ -65,7 +65,7 @@ Possible reasons for failing SLI Prober:

 [source,bash]
 ----
-Details:
+Details:
 OnCall : true
 alertname : vshn-vshnpostgresql-GuaranteedUptimeTarget
@@ -88,4 +88,4 @@ After you receive such an alert by email, you can easily check interesting informat

 * instance namespace: `vshn-postgresql-postgresql-analytics-kxxxa`
 * instanceNamespace_GeneratedChars: `kxxxa`
-* claim namespace: `postgresql-analytics-db`
+* claim namespace: `postgresql-analytics-db`
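The `rev | cut | rev` trick quoted in the hunk above is pure string manipulation and can be tried offline. The namespace below is the runbook's own example; the parameter-expansion variant at the end is an equivalent alternative I am adding, not part of the original runbook.

```shell
#!/usr/bin/env bash
# Extract the generated suffix of an instance namespace, exactly as the
# runbook comment suggests.
instanceNamespace='vshn-postgresql-my-super-prod-5jfjn'

# Reverse the string, take the first '-'-separated field, reverse back.
suffix=$(echo "$instanceNamespace" | rev | cut -d'-' -f1 | rev)
echo "$suffix"    # 5jfjn

# Equivalent with no external tools: bash parameter expansion strips
# everything up to and including the last '-'.
suffix2=${instanceNamespace##*-}
echo "$suffix2"   # 5jfjn
```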

docs/modules/ROOT/pages/service/mariadb/restore.adoc

2 additions, 2 deletions

@@ -7,13 +7,13 @@ To restore a VSHNMariaDB backup the following tools are needed:
 * https://github.com/mfuentesg/ksd[GitHub - mfuentesg/ksd: kubernetes secret decoder]
 * https://k8up.io/[K8up]
 * https://restic.net/[Restic]
-* `alias k=kubectl` && `alias ka='kubectl --as cluster-admin'`
+* `alias k=kubectl` && `alias ka='kubectl --as=system:admin'`

 == Acquiring VSHNMariaDB backup

 Locate the instance namespace of the VSHNMariaDB you want to back up:
 `k -n vshn-test get vshnmariadbs.vshn.appcat.vshn.io vshn-testing -o yaml | grep instanceNamespace` and for convenience switch to the new namespace with `kubens`.
-Depending on the cluster configuration it might be necessary to run all other commands with `kubectl --as cluster-admin`, especially on APPUiO Cloud.
+Depending on the cluster configuration it might be necessary to run all other commands with `kubectl --as=system:admin`, especially on APPUiO Cloud.
 There are two important secrets in the instance namespace:
 * backup-bucket-credentials
 * k8up-repository-password
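ksd's job is essentially to base64-decode the `data` fields of a Secret; when it isn't installed, plain `base64 -d` does the same by hand. The encoded value below is a made-up stand-in, and the `jsonpath` field name in the comment is an assumption — check the actual keys of your Secret.

```shell
#!/usr/bin/env bash
# Made-up Secret value; on a real cluster you would fetch it with e.g.
#   k -n $instanceNamespace get secret k8up-repository-password \
#     -o jsonpath='{.data.password}'
# (field name assumed for illustration).
encoded='cGFzc3dvcmQtMTIz'

# Decode it the way ksd would.
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"   # password-123
```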

docs/modules/ROOT/pages/service/postgresql/runbooks/howto-manual-restore.adoc

4 additions, 4 deletions

@@ -15,7 +15,7 @@ From the said alert the customer namespace can be deduced together with the nam

 [source,bash]
 ----
-kubectl get XVSHNPostgreSQL <name-from-alert> --as cluster-admin
+kubectl get XVSHNPostgreSQL <name-from-alert> --as=system:admin
 ----

 NOTE: The XRD is protected in case the deletion protection is on.
@@ -28,7 +28,7 @@ The instance namespace is hidden from the customer.

 [source,bash]
 ----
-kubectl get XVSHNPostgreSQL <name-from-alert> -o=jsonpath='{.status.instanceNamespace}' --as cluster-admin
+kubectl get XVSHNPostgreSQL <name-from-alert> -o=jsonpath='{.status.instanceNamespace}' --as=system:admin
 ----

 [WARNING]
@@ -142,7 +142,7 @@ In case there is no secret it has to be recreated with the credentials from the
 +
 [source,bash]
 ----
-kubectl get XObjectBucket --as cluster-admin <name>
+kubectl get XObjectBucket --as=system:admin <name>
 ----
 <7> The bucket name
 <8> S3 cloud provider endpoint
@@ -167,4 +167,4 @@ To check the restore process itself use the following command:
 [source,bash]
 ----
 kubectl -n <instance-namespace> logs <pod-name> -f
-----
+----
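When a composite's YAML has already been saved to a file, the `jsonpath` query from the hunks above can be approximated offline with plain text tools. The manifest below is a minimal made-up sample (apiVersion and fields are assumptions) that reuses the example namespace from the runbook.

```shell
#!/usr/bin/env bash
# Minimal made-up XVSHNPostgreSQL snippet; a real object has many more fields.
manifest='apiVersion: vshn.appcat.vshn.io/v1
kind: XVSHNPostgreSQL
status:
  instanceNamespace: vshn-postgresql-postgresql-analytics-kxxxa'

# Same result as -o=jsonpath='{.status.instanceNamespace}', but without
# a cluster: grep the line and print its value column.
ns=$(echo "$manifest" | grep 'instanceNamespace:' | awk '{print $2}')
echo "$ns"   # vshn-postgresql-postgresql-analytics-kxxxa
```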
