
Commit 8eabdb4

KubeArchive: kflux-prd-rh03 (#7294)

Signed-off-by: Hector Martinez <[email protected]>

1 parent f03b748 commit 8eabdb4

File tree

5 files changed: +264 −9 lines changed


argo-cd-apps/base/member/infra-deployments/kubearchive/kubearchive.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -33,8 +33,8 @@ spec:
       # - nameNormalized: kflux-prd-rh02
       #   values.clusterDir: kflux-prd-rh02
       # database is not created here yet
-      # - nameNormalized: kflux-prd-rh03
-      #   values.clusterDir: kflux-prd-rh03
+      - nameNormalized: kflux-prd-rh03
+        values.clusterDir: kflux-prd-rh03
   template:
     metadata:
       name: kubearchive-{{nameNormalized}}
```
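The entry enabled in the generator list above is expanded by Argo CD's ApplicationSet controller into one Application per cluster, substituting `{{nameNormalized}}` into the template. A minimal sketch of that substitution (illustrative only, not Argo CD's actual implementation):

```python
# Sketch of how the ApplicationSet list generator fills in the template.
# The template name comes from the diff above; the expansion logic here is
# a simplification for illustration.
clusters = ["kflux-prd-rh03"]  # nameNormalized values enabled in the generator


def render_app_name(name_normalized: str) -> str:
    """Substitute {{nameNormalized}} into the template's metadata name."""
    return "kubearchive-{{nameNormalized}}".replace(
        "{{nameNormalized}}", name_normalized
    )


print([render_app_name(c) for c in clusters])
```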

argo-cd-apps/overlays/konflux-public-production/delete-applications.yaml

Lines changed: 0 additions & 7 deletions
```diff
@@ -6,13 +6,6 @@ metadata:
   name: tempo
 $patch: delete
 ---
-# KubeArchive not yet ready to go to production
-apiVersion: argoproj.io/v1alpha1
-kind: ApplicationSet
-metadata:
-  name: kubearchive
-$patch: delete
----
 apiVersion: argoproj.io/v1alpha1
 kind: ApplicationSet
 metadata:
```

components/konflux-ui/production/kflux-prd-rh03/kustomization.yaml

Lines changed: 1 addition & 0 deletions
```diff
@@ -12,6 +12,7 @@ configMapGenerator:
     literals:
       - IMPERSONATE=true
       - TEKTON_RESULTS_URL=https://tekton-results-api-service.tekton-results.svc.cluster.local:8080
+      - KUBEARCHIVE_URL=https://kubearchive-api-server.product-kubearchive.svc.cluster.local:8081

 patches:
   - path: add-service-certs-patch.yaml
```

components/kubearchive/production/README.md

Lines changed: 64 additions & 0 deletions
````diff
@@ -11,6 +11,28 @@ via ArgoCD.

 ## DB Secret Paths

+For new clusters the database is created automatically by a Konflux automation and the secret
+is stored under a different path. Older clusters had their databases created using `app-interface`.
+
+Old clusters (`app-interface`):
+
+* stone-stg-rh01
+* stone-stage-p01
+* stone-prd-rh01
+* kflux-prd-rh02
+* stone-prod-p01
+* stone-prod-p02
+* kflux-ocp-p01
+
+New clusters (Konflux Automation):
+
+* kflux-prd-rh03
+* kflux-rhel-p01
+* kflux-osp-p01
+
+### app-interface databases
+
 The paths to the DB secrets are built from
 [App Interface Konflux Namespaces](https://gitlab.cee.redhat.com/service/app-interface/-/tree/master/data/services/stonesoup/namespaces?ref_type=heads).
 For example, the information to build the DB path for the cluster `stone-prod-p01` is defined on the file `stonesoup-prod-private-1.appsrep09ue1.yaml`.
@@ -29,3 +51,45 @@ So the path for `stone-prod-p01` is:
 ```text
 integrations-output/external-resources/appsrep09ue1/stone-prod-p01/stone-prod-p01-kube-archive-rds
 ```
+
+### Konflux Automation
+
+Databases created by the new Konflux Automation use an `ExternalSecret` backed by a Vault instance. The
+`secretStoreRef` should be `appsre-stonesoup-vault`, and the `key` contains the name of the cluster.
+This is an example using the `kflux-prd-rh03` cluster:
+
+```yaml
+---
+apiVersion: external-secrets.io/v1beta1
+kind: ExternalSecret
+metadata:
+  annotations:
+    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
+    argocd.argoproj.io/sync-wave: "-1"
+  name: database-secret
+  namespace: product-kubearchive
+spec:
+  dataFrom:
+    - extract:
+        key: production/platform/terraform/generated/kflux-prd-rh03/kubearchive-database
+  refreshInterval: 1h
+  secretStoreRef:
+    kind: ClusterSecretStore
+    name: appsre-stonesoup-vault
+  target:
+    creationPolicy: Owner
+    deletionPolicy: Delete
+    name: kubearchive-database-credentials
+    template:
+      data:
+        DATABASE_DB: '{{ index . "db.name" }}'
+        DATABASE_KIND: postgresql
+        DATABASE_PASSWORD: '{{ index . "db.password" }}'
+        DATABASE_PORT: "5432"
+        DATABASE_URL: '{{ index . "db.host" }}'
+        DATABASE_USER: '{{ index . "db.user" }}'
+```
+
+To check whether the database was created, ask in [#forum-konflux-infrastructure](https://redhat.enterprise.slack.com/archives/C04F4NE15U1).
+However, you can assume the database and its secret were created successfully. If something goes wrong
+with the `ExternalSecret`, contact the infrastructure team.
````
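The two path conventions described in the README can be captured in a small helper. A sketch in Python: the function names are illustrative, the path formats come from the README itself, and the `-kube-archive-rds` suffix is taken from the single `stone-prod-p01` example, so it may vary per cluster:

```python
def app_interface_db_path(account: str, cluster: str) -> str:
    """Secret path for clusters whose DB was created via app-interface."""
    return ("integrations-output/external-resources/"
            f"{account}/{cluster}/{cluster}-kube-archive-rds")


def konflux_automation_db_key(cluster: str) -> str:
    """Vault key used by the ExternalSecret for automation-created clusters."""
    return f"production/platform/terraform/generated/{cluster}/kubearchive-database"


# The stone-prod-p01 example from the README:
print(app_interface_db_path("appsrep09ue1", "stone-prod-p01"))
# The kflux-prd-rh03 key from the ExternalSecret example:
print(konflux_automation_db_key("kflux-prd-rh03"))
```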
Lines changed: 197 additions & 0 deletions
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - ../base
  - https://github.com/kubearchive/kubearchive/releases/download/v1.2.0/kubearchive.yaml?timeout=90

namespace: product-kubearchive

patches:
  - patch: |-
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: kubearchive-schema-migration
      spec:
        template:
          spec:
            containers:
              - name: migration
                env:
                  - name: KUBEARCHIVE_VERSION
                    value: v1.2.0
  # We don't need the Secret as it will be created by the ExternalSecrets Operator
  - patch: |-
      $patch: delete
      apiVersion: v1
      kind: Secret
      metadata:
        name: kubearchive-database-credentials
        namespace: kubearchive
  - patch: |-
      apiVersion: external-secrets.io/v1beta1
      kind: ExternalSecret
      metadata:
        name: database-secret
      spec:
        secretStoreRef:
          name: appsre-stonesoup-vault
        dataFrom:
          - extract:
              key: production/platform/terraform/generated/kflux-prd-rh03/kubearchive-database
  # These patches add an annotation so an OpenShift service
  # creates the TLS secrets instead of Cert Manager
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: kubearchive-api-server
        namespace: kubearchive
        annotations:
          service.beta.openshift.io/serving-cert-secret-name: kubearchive-api-server-tls
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: kubearchive-operator-webhooks
        namespace: kubearchive
        annotations:
          service.beta.openshift.io/serving-cert-secret-name: kubearchive-operator-tls
  - patch: |-
      apiVersion: admissionregistration.k8s.io/v1
      kind: MutatingWebhookConfiguration
      metadata:
        name: kubearchive-mutating-webhook-configuration
        annotations:
          service.beta.openshift.io/inject-cabundle: "true"
  - patch: |-
      apiVersion: admissionregistration.k8s.io/v1
      kind: ValidatingWebhookConfiguration
      metadata:
        name: kubearchive-validating-webhook-configuration
        annotations:
          service.beta.openshift.io/inject-cabundle: "true"
  # These patches solve Kube Linter problems
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubearchive-api-server
        namespace: kubearchive
      spec:
        template:
          spec:
            containers:
              - name: kubearchive-api-server
                env:
                  - name: KUBEARCHIVE_OTEL_MODE
                    value: enabled
                  - name: OTEL_EXPORTER_OTLP_ENDPOINT
                    value: http://otel-collector:4318
                  - name: AUTH_IMPERSONATE
                    value: "true"
                securityContext:
                  readOnlyRootFilesystem: true
                  runAsNonRoot: true
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubearchive-operator
        namespace: kubearchive
      spec:
        template:
          spec:
            containers:
              - name: manager
                env:
                  - name: KUBEARCHIVE_OTEL_MODE
                    value: enabled
                  - name: OTEL_EXPORTER_OTLP_ENDPOINT
                    value: http://otel-collector:4318
                securityContext:
                  readOnlyRootFilesystem: true
                  runAsNonRoot: true
                ports:
                  - containerPort: 8081
                resources:
                  limits:
                    cpu: 100m
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 512Mi

  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubearchive-sink
        namespace: kubearchive
      spec:
        template:
          spec:
            containers:
              - name: kubearchive-sink
                env:
                  - name: KUBEARCHIVE_OTEL_MODE
                    value: enabled
                  - name: OTEL_EXPORTER_OTLP_ENDPOINT
                    value: http://otel-collector:4318
                securityContext:
                  readOnlyRootFilesystem: true
                  runAsNonRoot: true
                resources:
                  limits:
                    cpu: 200m
                    memory: 128Mi
                  requests:
                    cpu: 200m
                    memory: 128Mi

  # We don't need this CronJob as it is suspended, we can enable it later
  - patch: |-
      $patch: delete
      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: cluster-vacuum
        namespace: kubearchive
  # These patches remove Certificates and Issuer from Cert-Manager
  - patch: |-
      $patch: delete
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: "kubearchive-api-server-certificate"
        namespace: kubearchive
  - patch: |-
      $patch: delete
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: "kubearchive-ca"
        namespace: kubearchive
  - patch: |-
      $patch: delete
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: "kubearchive-ca"
        namespace: kubearchive
  - patch: |-
      $patch: delete
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: "kubearchive"
        namespace: kubearchive
  - patch: |-
      $patch: delete
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: "kubearchive-operator-certificate"
        namespace: kubearchive
```
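The `ExternalSecret` target template in this commit turns Vault keys (`db.name`, `db.host`, `db.user`, `db.password`) into the `DATABASE_*` keys of the generated Secret via Go-template expressions like `{{ index . "db.name" }}`. A minimal Python sketch of that mapping, with made-up sample values (the real ones live in Vault):

```python
# Mimic the ExternalSecret "target.template.data" mapping shown above:
# Vault extract keys -> Secret data keys. Sample values are invented
# for illustration; the real values come from Vault.
vault_data = {
    "db.name": "kubearchive",
    "db.host": "db.example.internal",
    "db.user": "kubearchive",
    "db.password": "s3cret",
}


def render_secret(data: dict) -> dict:
    """Equivalent of the '{{ index . "db.name" }}' template block."""
    return {
        "DATABASE_DB": data["db.name"],
        "DATABASE_KIND": "postgresql",
        "DATABASE_PASSWORD": data["db.password"],
        "DATABASE_PORT": "5432",
        "DATABASE_URL": data["db.host"],
        "DATABASE_USER": data["db.user"],
    }


print(render_secret(vault_data)["DATABASE_URL"])
```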
