Commit c451611
Merge pull request #65487 from michaelryanmcneill/fix/OSDOCS-7728
OSDOCS-7728: Addressing a few small items identified during secondary review
2 parents 9cdcb57 + a020d74 commit c451611

File tree

1 file changed: +44 -62 lines changed
cloud_experts_tutorials/cloud-experts-aws-load-balancer-operator.adoc

Lines changed: 44 additions & 62 deletions
@@ -20,39 +20,37 @@ toc::[]
 
 include::snippets/mobb-support-statement.adoc[leveloffset=+1]
 
-
 [TIP]
 ====
-Load Balancers created by the AWS Load Balancer (ALB) Operator cannot be used for xref:../networking/routes/route-configuration.adoc#route-configuration[{product-title} Routes], and should only be used for individual services or Ingress that does not need the full layer 7 capabilties of a ROSA route.
+Load Balancers created by the AWS Load Balancer Operator cannot be used for xref:../networking/routes/route-configuration.adoc#route-configuration[OpenShift Routes], and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
 ====
 
-link:https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/[AWS Load Balancer (ALB)Controller] is a Kubernetes controller that manages Elastic Load Balancing v2 (ELBv2) for a Kubernetes cluster.
-
-* It satisfies Kubernetes link:https://kubernetes.io/docs/concepts/services-networking/ingress/[Ingress and service resources] by provisioning link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html[Application Load Balancers (ALB)] and
-https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html[Network Load Balancers (NLB)].
+The link:https://kubernetes-sigs.github.io/aws-load-balancer-controller/[AWS Load Balancer Controller] manages AWS Elastic Load Balancers for a {product-title} (ROSA) cluster. The controller provisions link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html[AWS Application Load Balancers (ALB)] when you create Kubernetes Ingress resources and link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html[AWS Network Load Balancers (NLB)] when implementing Kubernetes Service resources with a type of LoadBalancer.
 
-Compared with default AWS In Tree Provider, this controller is actively developed with advanced annotations for both ALB and NLB. Some advanced use cases are:
+Compared with the default AWS in-tree load balancer provider, this controller is developed with advanced annotations for both ALBs and NLBs. Some advanced use cases are:
 
-* Using native Kubernetes Ingress with ALB
-* Integrate ALB with web application firewall (WAF)
-* Specify NLB source IP ranges
-* Specify NLB internal IP address
+* Using native Kubernetes Ingress objects with ALBs
+* Integrate ALBs with the AWS Web Application Firewall (WAF) service
+* Specify custom NLB source IP ranges
+* Specify custom NLB internal IP addresses
 
-link:https://github.com/openshift/aws-load-balancer-operator[ALB Operator] is used to used to install, manage and configure an instance of `aws-load-balancer-controller` in a OpenShift cluster.
+The link:https://github.com/openshift/aws-load-balancer-operator[AWS Load Balancer Operator] is used to install, manage, and configure an instance of `aws-load-balancer-controller` in a ROSA cluster.
 
-.Prerequisites
+[id="prerequisites_{context}"]
+== Prerequisites
 
 [NOTE]
 ====
-ALB requires a multi-AZ cluster, three public subnets split across three AZs in the same VPC as the cluster, and is not suitable for most PrivateLink clusters.
+AWS ALBs require a multi-AZ cluster, as well as three public subnets split across three AZs in the same VPC as the cluster. This makes ALBs unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.
 ====
 
 * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[A multi-AZ ROSA classic cluster]
 * BYO VPC cluster
 * AWS CLI
 * OC CLI
 
-.Environment
+[id="environment_{context}"]
+=== Environment
 
 * Prepare the environment variables:
 +
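The environment preparation step relies on several exported shell variables. A small portable guard can fail fast when one of them is unset; `require_env` is an illustrative helper name, not part of the tutorial:

```shell
#!/bin/sh
# Hypothetical helper, not part of the tutorial: fail fast when a
# required variable such as ROSA_CLUSTER_NAME has not been set.
require_env() {
  name="$1"
  eval "val=\${$name:-}"
  if [ -z "$val" ]; then
    echo "missing required variable: $name" >&2
    return 1
  fi
}

ROSA_CLUSTER_NAME="my-cluster"
require_env ROSA_CLUSTER_NAME && echo "environment looks complete"
```

Running the guard before the `echo "Cluster: ..."` sanity check catches a forgotten export early instead of failing later inside an `aws` or `oc` command.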
@@ -68,14 +66,15 @@ $ mkdir -p ${SCRATCH}
 $ echo "Cluster: ${ROSA_CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
 ----
 
-== AWS VPC and subnets
+[id="aws-vpc-subnets_{context}"]
+=== AWS VPC and subnets
 
 [NOTE]
 ====
-This section only applies to BYO VPC clusters, if you let ROSA create your VPCs you can skip to the following Installation section. You can skip this section if you already installed xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[a Multi-AZ ROSA Classic cluster].
+This section only applies to clusters that were deployed into existing VPCs. If you did not deploy your cluster into an existing VPC, skip this section and proceed to the installation section below.
 ====
 
-. Set Variables describing your VPC and Subnets:
+. Set the below variables to the proper values for your ROSA deployment:
 +
 [source,terminal]
 ----
@@ -85,14 +84,14 @@ $ export PRIVATE_SUBNET_IDS=<private-subnets>
 $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
 ----
 +
-. Tag VPC with the cluster name:
+. Add a tag to your cluster's VPC with the cluster name:
 +
 [source,terminal]
 ----
 $ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
 ----
 +
-. Add tags to Public Subnets:
+. Add a tag to your public subnets:
 +
 [source,terminal]
 ----
@@ -102,7 +101,7 @@ $ aws ec2 create-tags \
 --region ${REGION}
 ----
 +
-. Add tags to Private Subnets:
+. Add a tag to your private subnets:
 +
 [source,terminal]
 ----
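The tagging steps above pass subnet IDs to `aws ec2 create-tags`. A sketch of expanding a comma-separated list such as `PUBLIC_SUBNET_IDS` into per-subnet calls; the subnet IDs below are placeholders and the AWS call itself is left commented out:

```shell
#!/bin/sh
# Placeholder values; in the tutorial these come from your own VPC.
PUBLIC_SUBNET_IDS="subnet-aaa111,subnet-bbb222,subnet-ccc333"

# Turn "a,b,c" into "a b c" so each ID can be handled individually.
split_ids() {
  echo "$1" | tr ',' ' '
}

for subnet in $(split_ids "$PUBLIC_SUBNET_IDS"); do
  # Real command (not run in this sketch):
  # aws ec2 create-tags --resources "$subnet" \
  #   --tags Key=kubernetes.io/role/elb,Value='' --region "$REGION"
  echo "would tag $subnet"
done
```

The `kubernetes.io/role/elb` tag (and `kubernetes.io/role/internal-elb` for private subnets) is what the AWS Load Balancer Controller's subnet discovery looks for.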
@@ -112,13 +111,14 @@ $ aws ec2 create-tags \
 --region ${REGION}
 ----
 
+[id="installation_{context}"]
 == Installation
 
-. Create Policy for the ALB Controller:
+. Create an AWS IAM policy for the AWS Load Balancer Controller:
 +
 [NOTE]
 ====
-Policy is from link:https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json[ALB controller policy] plus subnet create tags permission. This is required by the Operator.
+The policy is sourced from link:https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json[the upstream AWS Load Balancer Controller policy] plus permission to create tags on subnets. This is required by the operator to function.
 ====
 +
 [source,terminal]
@@ -138,7 +138,7 @@ fi
 $ echo $POLICY_ARN
 ----
 +
-. Create trust policy for ALB Operator:
+. Create an AWS IAM trust policy for the AWS Load Balancer Operator:
 +
 [source,terminal]
 ----
@@ -163,7 +163,7 @@ $ cat <<EOF > "${SCRATCH}/trust-policy.json"
 EOF
 ----
 +
-. Create Role for ALB Operator:
+. Create an AWS IAM role for the AWS Load Balancer Operator:
 +
 [source,terminal]
 ----
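The trust policy body itself is elided from this diff. For orientation, an IRSA-style trust policy generally has the shape sketched below; the account ID, OIDC endpoint, and the omission of the `Condition` block pinning the service account are simplifications, not the tutorial's exact document:

```shell
#!/bin/sh
# Placeholder values standing in for the tutorial's real variables.
AWS_ACCOUNT_ID="123456789012"
OIDC_ENDPOINT="oidc.example.com/abc123"
SCRATCH="${TMPDIR:-/tmp}"

# Generate a minimal IRSA-style trust policy: the cluster's OIDC
# provider is trusted to call sts:AssumeRoleWithWebIdentity.
cat <<EOF > "${SCRATCH}/trust-policy-sketch.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
EOF

echo "wrote ${SCRATCH}/trust-policy-sketch.json"
```

A production trust policy would additionally scope the `Condition` to the operator's service account subject so that only that workload can assume the role.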
@@ -176,7 +176,7 @@ $ aws iam attach-role-policy --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
 --policy-arn $POLICY_ARN
 ----
 +
-. Create secret for ALB Operator:
+. Create a secret for the AWS Load Balancer Operator to assume our newly created AWS IAM role:
 +
 [source,terminal]
 ----
@@ -194,7 +194,7 @@ stringData:
 EOF
 ----
 +
-. Install Red Hat ALB Operator:
+. Install the Red Hat AWS Load Balancer Operator:
 +
 [source,terminal]
 ----
@@ -222,7 +222,7 @@ spec:
 EOF
 ----
 +
-. Install Red Hat ALB Controller:
+. Deploy an instance of the AWS Load Balancer Controller using the operator:
 +
 [NOTE]
 ====
@@ -242,7 +242,7 @@ spec:
 EOF
 ----
 +
-. Check the Operator and controller pods are both running:
+. Check that the operator and controller pods are both running:
 +
 [source,terminal]
 ----
@@ -258,7 +258,8 @@ aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running
 aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s
 ----
 
-== Validate the deployment with a hello world application
+[id="validating-the-deployment_{context}"]
+== Validating the deployment
 
 . Create a new project:
 +
@@ -274,7 +275,7 @@ $ oc new-project hello-world
 $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
 ----
 +
-. Configure a NodePort service for the ALB to connect to:
+. Configure a NodePort service for the AWS ALB to connect to:
 +
 [source,terminal]
 ----
@@ -295,10 +296,11 @@ spec:
 EOF
 ----
 +
-. Deploy an ALB using the Operator:
+. Deploy an AWS ALB using the AWS Load Balancer Operator:
 +
 [source,terminal]
 ----
+$ cat << EOF | oc apply -f -
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
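The added `cat << EOF | oc apply -f -` line pipes an inline manifest into `oc`. The same heredoc pattern can be exercised locally by piping into any stdin reader; here `grep` stands in for `oc apply -f -`, and the manifest fields are abbreviated placeholders rather than the tutorial's full Ingress:

```shell
#!/bin/sh
# Emit an abbreviated Ingress manifest the same way the tutorial does,
# but pipe it to grep instead of `oc apply -f -` for a local check.
render_ingress() {
  cat << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-sketch
EOF
}

render_ingress | grep 'kind:'
# prints "kind: Ingress"
```

Because the heredoc is unquoted (`EOF`, not `'EOF'`), shell variables inside it are expanded before the manifest reaches `oc`, which is how the tutorial injects values like the service name.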
@@ -321,11 +323,11 @@ spec:
 EOF
 ----
 +
-. Curl the ALB Ingress endpoint to verify the hello world application is accessible:
+. Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
 +
 [NOTE]
 ====
-ALB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host:`, please wait and try again.
+AWS ALB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host`, please wait and try again.
 ====
 +
 [source,terminal]
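The note above says to wait and try again when `curl` cannot yet resolve the load balancer hostname. A generic retry wrapper, sketched with illustrative names and a short delay; a real run would use a longer sleep and a command like `curl -fsS "http://${INGRESS}"`:

```shell
#!/bin/sh
# Retry a command up to N times, sleeping briefly between attempts.
retry() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1   # use something like 30s for real ALB/NLB provisioning
  done
  return 1
}

# Example usage with a command that always succeeds:
retry 3 true && echo "endpoint reachable"
```

The same wrapper applies unchanged to the NLB check later in the tutorial, since both load balancer types need a few minutes before their DNS names resolve.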
@@ -341,7 +343,7 @@ $ curl "http://${INGRESS}"
 Hello OpenShift!
 ----
 
-. Next, deploy an NLB for your hello world application:
+. Deploy an AWS NLB for your hello world application:
 +
 [source,terminal]
 ----
@@ -366,11 +368,11 @@ spec:
 EOF
 ----
 +
-. Test the NLB endpoint:
+. Test the AWS NLB endpoint:
 +
 [NOTE]
 ====
-NLB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host:`, please wait and try again.
+NLB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host`, please wait and try again.
 ====
 +
 [source,terminal]
@@ -386,16 +388,17 @@ $ curl "http://${NLB}"
 Hello OpenShift!
 ----
 
-== Clean up
+[id="cleaning-up_{context}"]
+== Cleaning up
 
 . Delete the hello world application namespace (and all the resources in the namespace):
 +
 [source,terminal]
 ----
-$ oc delete ns hello-world
+$ oc delete project hello-world
 ----
 +
-. Delete the Operator and the AWS roles:
+. Delete the AWS Load Balancer Operator and the AWS IAM roles:
 +
 [source,terminal]
 ----
@@ -407,28 +410,7 @@ $ aws iam delete-role \
 --role-name "${ROSA_CLUSTER_NAME}-alb-operator"
 ----
 +
-. You can delete the policy:
-+
-[source,terminal]
-----
-$ aws iam delete-policy --policy-arn $POLICY_ARN
-----
-
-== Clean up
-
-. Delete the Operator and the AWS roles:
-+
-[source,terminal]
-----
-$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
-aws iam detach-role-policy \
- --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
- --policy-arn $POLICY_ARN
-aws iam delete-role \
- --role-name "${ROSA_CLUSTER_NAME}-alb-operator"
-----
-
-. You can delete the policy:
+. Delete the AWS IAM policy:
 +
 [source,terminal]
 ----
