Commit d13ccb9
RHIDP-1923 - GKE: Document how RHDH can be installed in GKE
1 parent 35fb30b

File tree: 3 files changed (+4, -280 lines)

modules/installation/proc-deploy-rhdh-instance-gke.adoc
Lines changed: 2 additions & 67 deletions
@@ -1,12 +1,8 @@
-// Module included in the following assemblies:
-//
-// * assemblies/assembly-install-rhdh-gke.adoc
-
 [id="proc-deploy-rhdh-instance-gke.adoc_{context}"]
 = Deploying the {product-short} instance on {gke-short} with the Operator
 You can deploy your {product-short} instance in {gke-short} using the Operator.
-.Prerequisites
 
+.Prerequisites
 * A cluster administrator has installed the {product} Operator.
 * You have subscribed to `registry.redhat.io`. For more information, see https://access.redhat.com/RegistryAuthentication[{company-name} Container Registry Authentication].
 * You have installed `kubectl`. For more information, see https://kubernetes.io/docs/tasks/tools/#kubectl[Install kubectl].
@@ -20,14 +16,7 @@ You can deploy your {product-short} instance in {gke-short} using the Operator.
 You need to create an `A` record with the value equal to the IP address. This can take up to one hour to propagate.
 ====
 
-//* You have an {eks-short} cluster with {aws-short} Application Load Balancer (ALB) add-on installed. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html[Application load balancing on {eks-brand-name}] and https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html[Installing the AWS Load Balancer Controller add-on].
-//* You have configured a domain name for your {product-short} instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html[Configuring Amazon Route 53 as your DNS service] documentation.
-//* You have an entry in the {aws-short} Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
-//* You have set the context to the {eks-short} cluster in your current `kubeconfig`. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html[Creating or updating a kubeconfig file for an Amazon {eks-short} cluster].
-
-
 .Procedure
-
 . Create a ConfigMap named `app-config-rhdh` containing the {product-short} configuration using the following template:
 +
 --
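
The ConfigMap template itself falls outside this hunk. For reference only, not part of the commit, a minimal sketch of what such an `app-config-rhdh` ConfigMap typically contains is shown below; the data key `app-config-rhdh.yaml` and the exact configuration keys are assumptions rather than the commit's actual template, and `<rhdh_dns_name>` is the domain name placeholder used elsewhere in this module:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  # Assumed data key; the commit's full template may use additional settings
  "app-config-rhdh.yaml": |
    app:
      baseUrl: https://<rhdh_dns_name>
    backend:
      baseUrl: https://<rhdh_dns_name>
      cors:
        origin: https://<rhdh_dns_name>
----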
@@ -197,58 +186,4 @@ Wait until the DNS name is responsive, indicating that your {product-short} instance is ready for use.
 .Additional information
 For more information on setting up {gke-short} using Ingress with TLS, see https://github.com/GoogleCloudPlatform/gke-networking-recipes/tree/main/ingress/single-cluster/ingress-https
 
-For more information on setting up {gke-short} with LoadBalancer instead of Ingress, see https://github.com/sumiranchugh/rhdh-gke-poc/tree/main
-
-
-
-////
-. Create an Ingress resource using the following template, ensuring to customize the names as needed:
-+
---
-[source,yaml,subs="attributes+"]
-----
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  # TODO: this the name of your {product-short} Ingress
-  name: my-rhdh
-  annotations:
-    alb.ingress.kubernetes.io/scheme: internet-facing
-
-    alb.ingress.kubernetes.io/target-type: ip
-
-    # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
-    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx
-
-    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-
-    alb.ingress.kubernetes.io/ssl-redirect: '443'
-
-    # TODO: Set your application domain name.
-    external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name>
-
-spec:
-  ingressClassName: alb
-  rules:
-    # TODO: Set your application domain name.
-    - host: <rhdh_dns_name>
-      http:
-        paths:
-          - path: /
-            pathType: Prefix
-            backend:
-              service:
-                # TODO: my-rhdh is the name of your Backstage Custom Resource.
-                # Adjust if you changed it!
-                name: backstage-my-rhdh
-                port:
-                  name: http-backend
-----
-
-In the previous template, replace ` <rhdh_dns_name>` with your {product-short} domain name and update the value of `alb.ingress.kubernetes.io/certificate-arn` with your certificate ARN.
---
-
-.Verification
-
-Wait until the DNS name is responsive, indicating that your {product-short} instance is ready for use.
-////
+For more information on setting up {gke-short} with LoadBalancer instead of Ingress, see https://github.com/sumiranchugh/rhdh-gke-poc/tree/main
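
The gke-networking-recipes link above covers the Ingress-with-TLS path end to end. As rough orientation only, not part of this commit, such a setup on {gke-short} usually pairs the reserved global static IP and DNS `A` record from the prerequisites with a ManagedCertificate and an Ingress. In the sketch below, `my-rhdh-cert` and `my-static-ip` are assumed names, while `backstage-my-rhdh` and `http-backend` reuse the names from the removed example:

[source,yaml]
----
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-rhdh-cert
spec:
  domains:
    - <rhdh_dns_name>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-rhdh
  annotations:
    # Name of the reserved global static external IP address (assumed: my-static-ip)
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    # Attach the Google-managed certificate defined above
    networking.gke.io/managed-certificates: my-rhdh-cert
spec:
  rules:
    - host: <rhdh_dns_name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backstage-my-rhdh
                port:
                  name: http-backend
----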

modules/installation/proc-rhdh-deploy-gke-helm.adoc
Lines changed: 0 additions & 10 deletions
@@ -1,6 +1,3 @@
-// Module included in the following assemblies
-// assembly-install-rhdh-gke.adoc
-
 [id='proc-rhdh-deploy-gke-helm_{context}']
 = Installing {product-short} on {gke-short} with the Helm chart
 
@@ -23,14 +20,7 @@ You need to create an `A` record with the value equal to the IP address. This can take up to one hour to propagate.
 ====
 * You have installed Helm 3 or the latest. For more information, see https://helm.sh/docs/intro/install[Installing Helm].
 
-//* You have an {eks-short} cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html[Application load balancing on Amazon {product-short}] and https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html[Installing the AWS Load Balancer Controller add-on].
-//* You have configured a domain name for your {product-short} instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html[Configuring Amazon Route 53 as your DNS service] documentation.
-//* You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
-//* You have set the context to the {eks-short} cluster in your current `kubeconfig`. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html[Creating or updating a kubeconfig file for an Amazon {eks-short} cluster].
-//* You have installed Helm 3 or the latest. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/helm.html[Using Helm with Amazon {eks-short}].
-
 .Procedure
-
 . Go to your terminal and run the following command to add the Helm chart repository containing the {product-short} chart to your local Helm registry:
 +
 --
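
The command body for this step falls outside the hunk. For reference only, not part of the commit, the Helm flow typically looks like the following sketch; the repository alias, chart repository URL, chart name, release name, and value names are assumptions:

[source,terminal]
----
# Add the chart repository that provides the {product-short} chart (URL assumed)
helm repo add openshift-helm-charts https://charts.openshift.io/

# Install the chart, pointing it at the DNS name configured in the prerequisites (chart and value names assumed)
helm install my-rhdh openshift-helm-charts/redhat-developer-hub --set global.host=<rhdh_dns_name>
----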
modules/installation/proc-rhdh-deploy-gke-operator.adoc
Lines changed: 2 additions & 203 deletions
@@ -1,24 +1,11 @@
-// Module included in the following assemblies
-// assembly-install-rhdh-gke.adoc
-
-// [id='proc-rhdh-deploy-gke-operator_{context}']
-// = Installing {product-short} on {gke-short} with the Operator
-
-// You can install the {product} Operator with or without the Operator Lifecycle Manager (OLM) framework.
-
-// .Additonal resources
-// * For information about the OLM, see link:https://olm.operatorframework.io/docs/[Operator Lifecycle Manager(OLM)] documentation.
-
+[id="proc-rhdh-deploy-gke-operator.adoc_{context}"]
 = Installing the {product-short} Operator with the OLM framework
 
 You can install the {product-short} Operator on {gke-short} using the https://olm.operatorframework.io[Operator Lifecycle Manager (OLM) framework]. Following that, you can proceed to deploy your {product-short} instance in {gke-short}.
 
 For information about the OLM, see link:https://olm.operatorframework.io/docs/[Operator Lifecycle Manager (OLM)] documentation.
 
 .Prerequisites
-
-// TODO: Compare with GKE OLM install prerequisites
-
 * You have subscribed to `registry.redhat.io`. For more information, see https://access.redhat.com/RegistryAuthentication[{company-name} Container Registry Authentication].
 
 * You have installed the Operator Lifecycle Manager (OLM). For more information about installation and troubleshooting, see https://operatorhub.io/how-to-install-an-operator#How-do-I-get-Operator-Lifecycle-Manager?[How do I get Operator Lifecycle Manager?]
@@ -29,26 +16,7 @@ For information about the OLM, see link:https://olm.operatorframework.io/docs/[Operator Lifecycle Manager(OLM)] documentation.
 
 * You have logged in to your Google account and created a https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-autopilot-cluster[GKE Autopilot] or https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster[GKE Standard] cluster.
 
-////
-[TBC] Are these prerequisites required for the operator install procedure or just the deployment procedure?
-
-* You have configured a domain name for your {product-short} instance.
-
-* You have reserved a static external Premium IPv4 Global IP address that is not attached to any VM.
-
-* You have configured the DNS records for your domain name to point to the IP address that have reseved. For more information see https://cloud.google.com/vpc/docs/reserve-static-external-ip-address#reserve_new_static[Reserve a new static external IP address]
-+
-[NOTE]
-You need to create an `A` record with the value equal to the IP address. This can take up to one hour to propagate.
-////
-
-////
-* You have set the context to the {eks-short} cluster in your current `kubeconfig`. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html[Creating or updating a kubeconfig file for an Amazon {eks-short} cluster].
-////
-
-
 .Procedure
-
 . Connect to your GKE cluster using the following command:
 +
 --
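
The command itself falls outside this hunk. For reference only, not part of the commit, connecting `kubectl` to a {gke-short} cluster is typically done with `gcloud`; the cluster name, region, and project values below are placeholders:

[source,terminal]
----
# Fetch cluster credentials and set the current kubeconfig context (all values are placeholders)
gcloud container clusters get-credentials <cluster_name> --region <region> --project <project_id>
----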
@@ -233,173 +201,4 @@ kubectl -n rhdh-operator edit configmap backstage-default-config
 
 .. Save and exit.
 +
-Wait for a few minutes until the changes are automatically applied to the operator pods.
-
-////
-== Installing the {product-short} Operator without the OLM framework
-
-.Prerequisites
-* You have installed the following commands:
-** `git`
-** `make`
-** `sed`
-
-.Procedure
-
-. Clone the Operator repository to your local machine using the following command:
-+
---
-[source,terminal]
-----
-git clone --depth=1 https://github.com/redhat-developer/rhdh-operator.git rhdh-operator && cd rhdh-operator
-----
---
-
-. Run the following command and generate the deployment manifest:
-+
---
-[source,terminal]
-----
-make deployment-manifest
-----
-
-The previous command generates a file named `rhdh-operator-<VERSION>.yaml`, which is updated manually.
---
-
-. Run the following command to apply replacements in the generated deployment manifest:
-+
---
-[source,terminal]
-----
-sed -i "s/backstage-operator/rhdh-operator/g" rhdh-operator-*.yaml
-sed -i "s/backstage-system/rhdh-operator/g" rhdh-operator-*.yaml
-sed -i "s/backstage-controller-manager/rhdh-controller-manager/g" rhdh-operator-*.yaml
-----
---
-
-. Open the generated deployment manifest file in an editor and perform the following steps:
-.. Locate the `db-statefulset.yaml` string and add the `fsGroup` to its `spec.template.spec.securityContext`, as shown in the following example:
-+
---
-[source,yaml]
-----
-db-statefulset.yaml: |
-  apiVersion: apps/v1
-  kind: StatefulSet
---- TRUNCATED ---
-  spec:
---- TRUNCATED ---
-      restartPolicy: Always
-      securityContext:
-        # You can assign any random value as fsGroup
-        fsGroup: 2000
-      serviceAccount: default
-      serviceAccountName: default
---- TRUNCATED ---
-----
---
-
-.. Locate the `deployment.yaml` string and add the `fsGroup` to its specification, as shown in the following example:
-+
---
-[source,yaml]
-----
-deployment.yaml: |
-  apiVersion: apps/v1
-  kind: Deployment
---- TRUNCATED ---
-  spec:
-    securityContext:
-      # You can assign any random value as fsGroup
-      fsGroup: 3000
-    automountServiceAccountToken: false
---- TRUNCATED ---
-----
---
-
-.. Locate the `service.yaml` string and change the `type` to `NodePort` as follows:
-+
---
-[source,yaml]
-----
-service.yaml: |
-  apiVersion: v1
-  kind: Service
-  spec:
-    # NodePort is required for the ALB to route to the Service
-    type: NodePort
---- TRUNCATED ---
-----
---
-
-.. Replace the default images with the images that are pulled from the {company-name} Ecosystem:
-+
---
-[source,terminal,subs="attributes+"]
-----
-sed -i "s#gcr.io/kubebuilder/kube-rbac-proxy:.*#registry.redhat.io/openshift4/ose-kube-rbac-proxy:v{ocp-version}#g" rhdh-operator-*.yaml
-
-sed -i "s#(quay.io/janus-idp/operator:.*|quay.io/rhdh-community/operator:.*)#registry.redhat.io/rhdh/rhdh-rhel9-operator:{product-version}#g" rhdh-operator-*.yaml
-
-sed -i "s#quay.io/janus-idp/backstage-showcase:.*#registry.redhat.io/rhdh/rhdh-hub-rhel9:{product-version}#g" rhdh-operator-*.yaml
-
-sed -i "s#quay.io/fedora/postgresql-15:.*#registry.redhat.io/rhel9/postgresql-15:latest#g" rhdh-operator-*.yaml
-----
---
-
-. Add the image pull secret to the manifest in the Deployment resource as follows:
-+
---
-[source,yaml]
-----
---- TRUNCATED ---
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  labels:
-    app.kubernetes.io/component: manager
-    app.kubernetes.io/created-by: rhdh-operator
-    app.kubernetes.io/instance: controller-manager
-    app.kubernetes.io/managed-by: kustomize
-    app.kubernetes.io/name: deployment
-    app.kubernetes.io/part-of: rhdh-operator
-    control-plane: controller-manager
-  name: rhdh-controller-manager
-  namespace: rhdh-operator
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      control-plane: controller-manager
-  template:
-    metadata:
-      annotations:
-        kubectl.kubernetes.io/default-container: manager
-      labels:
-        control-plane: controller-manager
-    spec:
-      imagePullSecrets:
-        - name: rhdh-pull-secret
---- TRUNCATED ---
-----
---
-
-. Apply the manifest to deploy the operator using the following command:
-+
---
-[source,terminal]
-----
-kubectl apply -f rhdh-operator-VERSION.yaml
-----
---
-
-. Run the following command to verify that the Operator is running:
-+
---
-[source,terminal]
-----
-kubectl -n rhdh-operator get pods -w
-----
---
-////
+Wait for a few minutes until the changes are automatically applied to the operator pods.
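
As a usage note, the rollout that this retained line describes can be watched with the same verification command that appeared in the removed section:

[source,terminal]
----
# Watch the operator pods until they are recreated and report Running
kubectl -n rhdh-operator get pods -w
----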
