:_content-type: ASSEMBLY
[id="cloud-experts-using-aws-ack"]
= Tutorial: Using AWS Controllers for Kubernetes on ROSA
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-using-aws-ack

toc::[]

//Mobb content metadata
//Brought into ROSA product docs 2023-09-21
//---
//date: '2022-06-02'
//title: Using AWS Controllers for Kubernetes (ACK) on ROSA
//weight: 1
//tags: ["AWS", "ROSA"]
//authors:
//  - Paul Czarkowski
//  - Connor Wooley
//---

link:https://aws-controllers-k8s.github.io/community/[AWS Controllers for Kubernetes] (ACK) lets you define and use AWS service resources directly from {product-title} (ROSA). With ACK, you can take advantage of AWS-managed services for your applications without needing to define resources outside of the cluster or run services that provide supporting capabilities like databases or message queues within the cluster.

You can install various ACK Operators directly from OperatorHub, which makes it straightforward to start using them with your applications. Each controller is a component of the AWS Controllers for Kubernetes project, which is currently in developer preview.

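For example, you can list the ACK Operators that are currently available in OperatorHub on your cluster. This optional check assumes that the `community-operators` catalog source, which the Subscription later in this tutorial uses, is enabled on your cluster:

[source,terminal]
----
$ oc get packagemanifests -n openshift-marketplace | grep ^ack-
----
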
This tutorial uses the ACK S3 Operator as an example, but it can be adapted for any other ACK Operator in the OperatorHub of your cluster.

== Prerequisites

* A ROSA cluster
* A user account with `cluster-admin` privileges
* Access to the OpenShift CLI (`oc`)
* Access to the ROSA CLI (`rosa`)
* Access to the AWS CLI (`aws`)
* The `jq` command-line JSON processor

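Before you begin, you can optionally confirm that each CLI is authenticated against the intended cluster and AWS account, for example:

[source,terminal]
----
$ oc whoami
$ rosa whoami
$ aws sts get-caller-identity --query Account --output text
----
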
=== Environment Setup

. Configure the following environment variables:
+
[source,terminal]
----
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export ACK_SERVICE=s3
$ export ACK_SERVICE_ACCOUNT=ack-${ACK_SERVICE}-controller
$ export POLICY_ARN=arn:aws:iam::aws:policy/AmazonS3FullAccess
$ export AWS_PAGER=""
$ export SCRATCH="/tmp/${CLUSTER_NAME}/ack"
$ mkdir -p ${SCRATCH}
$ echo "Cluster: ${CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----

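If any of these variables are empty, later steps fail in non-obvious ways, so it is worth checking them before continuing. The following loop is a minimal sketch that assumes a `bash` shell:

[source,terminal]
----
$ for v in CLUSTER_NAME REGION OIDC_ENDPOINT AWS_ACCOUNT_ID; do [ -z "${!v}" ] && echo "WARNING: ${v} is not set"; done
----
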
== Prepare AWS Account

. Create an AWS IAM trust policy for the ACK Operator:
+
[source,terminal]
----
$ cat <<EOF > "${SCRATCH}/trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:ack-system:${ACK_SERVICE_ACCOUNT}"
        }
      },
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
EOF
----
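+
Optionally, print the rendered policy with `jq` to confirm that the environment variables were substituted as expected:
+
[source,terminal]
----
$ jq . "${SCRATCH}/trust-policy.json"
----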

. Create an AWS IAM role for the ACK Operator to assume with the `AmazonS3FullAccess` policy attached:
+
[NOTE]
====
You can find the recommended policy in each project's GitHub repository, for example https://github.com/aws-controllers-k8s/s3-controller/blob/main/config/iam/recommended-policy-arn.
====
+
[source,terminal]
----
$ ROLE_ARN=$(aws iam create-role --role-name "ack-${ACK_SERVICE}-controller" \
  --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
  --query Role.Arn --output text)
$ echo $ROLE_ARN

$ aws iam attach-role-policy --role-name "ack-${ACK_SERVICE}-controller" \
  --policy-arn ${POLICY_ARN}
----
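+
To confirm that the role exists and the policy is attached, you can run, for example:
+
[source,terminal]
----
$ aws iam list-attached-role-policies \
  --role-name "ack-${ACK_SERVICE}-controller"
----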

== Install the ACK S3 Controller

. Create a project to install the ACK S3 Operator into:
+
[source,terminal]
----
$ oc new-project ack-system
----

. Create a file with the ACK S3 Operator configuration:
+
[NOTE]
====
`ACK_WATCH_NAMESPACE` is purposefully left blank so the controller can properly watch all namespaces in the cluster.
====
+
[source,terminal]
----
$ cat <<EOF > "${SCRATCH}/config.txt"
ACK_ENABLE_DEVELOPMENT_LOGGING=true
ACK_LOG_LEVEL=debug
ACK_WATCH_NAMESPACE=
AWS_REGION=${REGION}
AWS_ENDPOINT_URL=
ACK_RESOURCE_TAGS=${CLUSTER_NAME}
ENABLE_LEADER_ELECTION=true
LEADER_ELECTION_NAMESPACE=
EOF
----

. Use the file from the previous step to create a ConfigMap:
+
[source,terminal]
----
$ oc -n ack-system create configmap \
  --from-env-file=${SCRATCH}/config.txt ack-${ACK_SERVICE}-user-config
----
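+
Optionally, confirm that the ConfigMap contains the expected keys:
+
[source,terminal]
----
$ oc -n ack-system get configmap ack-${ACK_SERVICE}-user-config -o yaml
----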

. Install the ACK S3 Operator from OperatorHub:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ack-${ACK_SERVICE}-controller
  namespace: ack-system
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-${ACK_SERVICE}-controller
  namespace: ack-system
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: ack-${ACK_SERVICE}-controller
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
----
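+
The install can take a minute or two to complete. You can watch the Subscription, InstallPlan, and ClusterServiceVersion progress, for example:
+
[source,terminal]
----
$ oc -n ack-system get subscription,installplan,csv
----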

. Annotate the ACK S3 Operator service account with the AWS IAM role to assume and restart the deployment:
+
[source,terminal]
----
$ oc -n ack-system annotate serviceaccount ${ACK_SERVICE_ACCOUNT} \
  eks.amazonaws.com/role-arn=${ROLE_ARN} && \
  oc -n ack-system rollout restart deployment ack-${ACK_SERVICE}-controller
----
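+
You can confirm that the annotation was applied, for example:
+
[source,terminal]
----
$ oc -n ack-system describe serviceaccount ${ACK_SERVICE_ACCOUNT} | grep role-arn
----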

. Verify that the ACK S3 Operator is running:
+
[source,terminal]
----
$ oc -n ack-system get pods
----
+
.Example output
[source,text]
----
NAME                                 READY   STATUS    RESTARTS   AGE
ack-s3-controller-585f6775db-s4lfz   1/1     Running   0          51s
----
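+
If the pod is not in the `Running` state, or if resources later fail to reconcile, check the controller logs, for example:
+
[source,terminal]
----
$ oc -n ack-system logs deployment/ack-${ACK_SERVICE}-controller
----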

== Validating the deployment

. Deploy an S3 bucket resource:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: ${CLUSTER_NAME}-bucket
  namespace: ack-system
spec:
  name: ${CLUSTER_NAME}-bucket
EOF
----
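+
You can also inspect the `Bucket` resource to see whether the controller reconciled it successfully, for example:
+
[source,terminal]
----
$ oc -n ack-system describe bucket.s3.services.k8s.aws/${CLUSTER_NAME}-bucket
----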

. Verify the S3 bucket was created in AWS:
+
[source,terminal]
----
$ aws s3 ls | grep ${CLUSTER_NAME}-bucket
----
+
.Example output
[source,text]
----
2023-10-04 14:51:45 mrmc-test-maz-bucket
----

== Cleaning up

. Delete the S3 bucket resource:
+
[source,terminal]
----
$ oc -n ack-system delete bucket.s3.services.k8s.aws/${CLUSTER_NAME}-bucket
----
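+
After the resource is deleted, the bucket should no longer exist in AWS. You can confirm this with, for example:
+
[source,terminal]
----
$ aws s3 ls | grep ${CLUSTER_NAME}-bucket || echo "bucket deleted"
----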

. Delete the ACK S3 Operator and the AWS IAM role:
+
[source,terminal]
----
$ oc -n ack-system delete subscription ack-${ACK_SERVICE}-controller
$ aws iam detach-role-policy \
  --role-name "ack-${ACK_SERVICE}-controller" \
  --policy-arn ${POLICY_ARN}
$ aws iam delete-role \
  --role-name "ack-${ACK_SERVICE}-controller"
----

. Delete the `ack-system` project:
+
[source,terminal]
----
$ oc delete project ack-system
----