# Tekton CI for Instana Python Tracer

## Basic Tekton setup

### Get a cluster

What you will need:
* Full administrator access
* Enough RAM and CPU on a cluster node to run all the pods of a single `PipelineRun` on a single node.
  Multiple nodes increase the number of parallel `PipelineRun` instances.
  Currently one `PipelineRun` instance is capable of saturating an 8 vCPU - 16 GB RAM worker node.

### Setup Tekton on your cluster

1. Install the latest stable Tekton Pipeline release
```bash
  kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```

2. Install Tekton Dashboard Full (the normal release is read-only and doesn't allow, for example, re-running).

````bash
  kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/release-full.yaml
````

3. Access the dashboard

```bash
kubectl proxy
```

Once the proxy is active, navigate your browser to the [dashboard url](
http://localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/)

### Setup the python-tracer-ci-pipeline

````bash
  kubectl apply --filename task.yaml && kubectl apply --filename pipeline.yaml
````
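The actual definitions live in `task.yaml` and `pipeline.yaml`. For orientation, a minimal Tekton `Pipeline` of the kind applied here might look as follows; the task and workspace names are illustrative, not the real file contents:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: python-tracer-ci-pipeline
spec:
  params:
    - name: revision          # the git revision to test
      type: string
  workspaces:
    - name: python-tracer-ci-pipeline-pvc
  tasks:
    - name: unittest          # illustrative task name
      taskRef:
        name: python-tracer-unittest-task   # assumed to be defined in task.yaml
      params:
        - name: revision
          value: $(params.revision)
      workspaces:
        - name: task-pvc
          workspace: python-tracer-ci-pipeline-pvc
```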

### Run the pipeline manually

#### From the Dashboard
Navigate your browser to the [pipelineruns section of the dashboard](
http://localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/#/pipelineruns)

1. Click `Create`
2. Select the `Namespace` (where the `Pipeline` resource is created, by default it is `default`)
3. Select the `Pipeline` created in `pipeline.yaml`, currently `python-tracer-ci-pipeline`
4. Fill in `Params`. The `revision` should be `master` for the `master` branch
5. Select the `ServiceAccount` set to `default`
6. Optionally, enter a `PipelineRun name`, for example `my-master-test-pipeline`,
   but if you don't, the Dashboard will generate a unique one for you.
7. As long as [the known issue with Tekton Dashboard Workspace binding](
   https://github.com/tektoncd/dashboard/issues/1283) is not resolved,
   you have to go to `YAML Mode` and insert the workspace definition at the end of the file,
   with the exact same indentation:

````yaml
  workspaces:
    - name: python-tracer-ci-pipeline-pvc-$(params.revision)
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Mi

````
8. Click `Create` at the bottom of the page


#### From kubectl CLI
As an alternative to using the Dashboard, you can manually edit `pipelinerun.yaml` and create it with:
````bash
  kubectl apply --filename pipelinerun.yaml
````
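The authoritative contents belong in `pipelinerun.yaml`; a minimal sketch, assuming the pipeline name from above and the workspace naming convention used in the Dashboard instructions, would be:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-master-test-pipeline
spec:
  pipelineRef:
    name: python-tracer-ci-pipeline
  params:
    - name: revision
      value: master
  workspaces:
    - name: python-tracer-ci-pipeline-pvc-master
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Mi
```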

### Clean up PipelineRun and associated PV resources

`PipelineRuns` and workspace `PersistentVolume` resources are kept indefinitely by default,
and repeated runs might exhaust the available resources, therefore they need to be cleaned up either
automatically or manually.

#### Manually from the Dashboard

Navigate to `PipelineRuns`, check the checkbox next to the pipelinerun,
and then click `Delete` in the upper right corner.

#### Manually from the CLI

You can use either `kubectl`
````bash
kubectl get pipelinerun
kubectl delete pipelinerun <selected-pipelinerun-here>
````

or the `tkn` CLI
````bash
tkn pipelinerun list
tkn pipelinerun delete <selected-pipelinerun-here>
````

#### Automatic cleanup with a cronjob

Install and configure the resources from https://github.com/3scale-ops/tekton-pipelinerun-cleaner
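If you would rather not depend on an external project, a home-grown `CronJob` can do a cruder version of the same job. Everything below (image, service account, schedule) is an assumption to be adapted; the service account needs `list` and `delete` rights on `pipelineruns`:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pipelinerun-cleaner
spec:
  schedule: "0 3 * * *"        # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pipelinerun-cleaner   # assumed, with RBAC for pipelineruns
          restartPolicy: Never
          containers:
            - name: cleaner
              image: bitnami/kubectl:latest         # any image shipping kubectl works
              command:
                - /bin/sh
                - -c
                # deletes ALL PipelineRuns; add label or age selection before real use
                - kubectl get pipelinerun -o name | xargs -r kubectl delete
```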


## Integrate with GitHub

### GitHub PR Trigger & PR Check API integration

The GitHub integration requires further Tekton Triggers and Interceptors to be installed
````bash
kubectl apply --filename \
https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml
kubectl apply --filename \
https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml
````
#### Create a ServiceAccount

Our future GitHub PR event listener needs a service account,
`tekton-triggers-eventlistener-serviceaccount`, which authorizes it to
perform the operations specified in the eventlistener `Role` and `ClusterRole`.
Create the service account with the needed role bindings:

````bash
  kubectl apply --filename tekton-triggers-eventlistener-serviceaccount.yaml
````
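Schematically, the file applied above contains the `ServiceAccount` plus its bindings; the `ClusterRole` name below follows the Tekton Triggers documentation conventions, and the actual file may differ:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-triggers-eventlistener-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-triggers-eventlistener-binding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-eventlistener-serviceaccount
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-eventlistener-roles   # name assumed from Tekton docs
```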

#### Create the Secret for the GitHub repository webhook

In order to authorize the incoming webhooks into our cluster, we need to share
a secret between our webhook listener and the GitHub repo.
Generate a long, strong, randomly generated token and put it into `github-interceptor-secret.yaml`.
Create the secret resource:
````bash
  kubectl apply --filename github-interceptor-secret.yaml
````
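One way to produce such a token (assuming `openssl` is available) is:

```shell
# 32 random bytes, hex encoded: a 64 character secret token
openssl rand -hex 32
```

Paste the resulting value into the secret field of `github-interceptor-secret.yaml` before applying it.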

#### Create the Task and token to report PR Check status to GitHub

The GitHub PR specific Tekton pipeline needs to send data to report the `PR Check Status`.
That [GitHub API](https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28#create-a-commit-status
) requires authentication, and therefore we need a token.
The user who generates the token has to have `Write` access to the target repo,
as part of the organisation. Check the repo access for this repo under
https://github.com/instana/python-sensor/settings/access.

With the proper user:
1. Navigate to https://github.com/settings/tokens
2. Click on the `Generate new token` dropdown and select `Generate new token (classic)`.
3. Fill in `Note` with, for example, `Tekton commit status`.
4. If you set an expiration, make sure you remember to renew the token after expiry.
5. Under `Select scopes` find `repo` and below that select only the checkbox next to `repo:status` - `Access commit status`,
   then click `Generate token`.
6. Create the kubernetes secret with the token:

````bash
  kubectl create secret generic githubtoken --from-literal token="MY_TOKEN"
````

We also need to make an HTTP POST with the status update data to GitHub.
This is done in a `Task` called `github-set-status`; create it as such:
````bash
  kubectl apply -f github-set-status-task.yaml
````
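For orientation, the core of such a task is a single `curl` step against the commit status endpoint. This sketch is not the contents of `github-set-status-task.yaml`; the parameter names and the token mount path are assumptions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: github-set-status
spec:
  params:
    - name: SHA              # commit to report on (assumed parameter name)
      type: string
    - name: STATE            # pending | success | failure | error
      type: string
  steps:
    - name: set-status
      image: curlimages/curl      # any image shipping curl
      script: |
        curl --request POST \
          --header "Authorization: Bearer $(cat /etc/github/token)" \
          --header "Accept: application/vnd.github+json" \
          --data '{"state": "$(params.STATE)", "context": "tekton-ci"}' \
          https://api.github.com/repos/instana/python-sensor/statuses/$(params.SHA)
      volumeMounts:
        - name: githubtoken
          mountPath: /etc/github
  volumes:
    - name: githubtoken
      secret:
        secretName: githubtoken   # the secret created above
```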

#### Create the GitHub PR pipeline

Create the new pipeline, which executes the previously created `python-tracer-ci-pipeline`,
wrapped in GitHub Check status reporting tasks. As long as [Pipelines in Pipelines](
https://tekton.dev/docs/pipelines/pipelines-in-pipelines/) remains an
unimplemented `alpha` feature in Tekton,
we will need [yq](https://github.com/mikefarah/yq) (at least `4.0`)
to pull the tasks from our previous `python-tracer-ci-pipeline` into the
new pipeline `github-pr-python-tracer-ci-pipeline`.

````bash
  (cat github-pr-pipeline.yaml.part && yq '{"a": {"b": .spec.tasks}}' pipeline.yaml | tail --lines=+3) | kubectl apply -f -
````
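For clarity about the `yq` trick: the filter wraps `.spec.tasks` under two dummy keys only so that the list gets re-indented, and `tail --lines=+3` then throws those two wrapper lines away. With an illustrative task entry, the intermediate output looks like:

```yaml
# Output of: yq '{"a": {"b": .spec.tasks}}' pipeline.yaml
a:
  b:
    - name: unittest                      # illustrative entry, one per task
      taskRef:
        name: python-tracer-unittest-task
# `tail --lines=+3` drops the `a:` and `b:` lines above, leaving the task
# list indented by four spaces, ready to be appended after the `tasks:`
# key at the end of github-pr-pipeline.yaml.part.
```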

#### Create the GitHub PR Event Listener, TriggerTemplate and TriggerBinding

Once the new GitHub specific pipeline is created, we need a listener which starts
a new `PipelineRun` based on GitHub events.

````bash
  kubectl apply --filename github-pr-eventlistener.yaml
````
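The file applied above follows the standard Tekton Triggers wiring. As a sketch (the binding and template names, the secret key, and the interceptor parameters are assumptions; the authoritative version is `github-pr-eventlistener.yaml` itself):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-pr-eventlistener
spec:
  serviceAccountName: tekton-triggers-eventlistener-serviceaccount
  triggers:
    - name: github-pr-trigger
      interceptors:
        - ref:
            name: github                 # verifies the webhook HMAC signature
          params:
            - name: secretRef
              value:
                secretName: github-interceptor-secret   # assumed secret name
                secretKey: secretToken                  # assumed key
            - name: eventTypes
              value: ["pull_request"]    # ping events are rejected here
      bindings:
        - ref: github-pr-binding           # TriggerBinding: payload -> params
      template:
        ref: github-pr-trigger-template    # TriggerTemplate: creates the PipelineRun
```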

After this, ensure that there is a pod and a service created:

````bash
  kubectl get pod | grep -i el-github-pr-eventlistener
  kubectl get svc | grep -i el-github-pr-eventlistener
````

Do not continue if either of these is missing.

#### Create the Ingress for the GitHub Webhook to come through

You will need an ingress controller for this.
On IKS you might want to read these resources:
* [managed ingress](https://cloud.ibm.com/docs/containers?topic=containers-managed-ingress-about)
* Or the unmanaged [ingress controller howto](
https://github.com/IBM-Cloud/iks-ingress-controller/blob/master/docs/installation.md
).

1. Check the available `ingressclass` resources on your cluster

````bash
  kubectl get ingressclass
````

* On `IKS` it will be `public-iks-k8s-nginx`.
* On `EKS` with the `ALB` ingress controller, it might just be `alb`.
* On a self-hosted [nginx controller](https://kubernetes.github.io/ingress-nginx/deploy/)
  it might just be `nginx`.

Edit and save the value of `ingressClassName:` in `github-webhook-ingress.yaml`.
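For orientation, the shape of that ingress is roughly the following; the backend service name matches the `el-github-pr-eventlistener` service checked earlier, while the host and port are assumptions to verify against the actual file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-webhook-ingress
spec:
  ingressClassName: public-iks-k8s-nginx   # adjust to your ingressclass
  rules:
    - host: <INGRESS_DOMAIN_NAME>          # your ingress domain, see step 2
      http:
        paths:
          - path: /hooks
            pathType: Prefix
            backend:
              service:
                name: el-github-pr-eventlistener
                port:
                  number: 8080             # default EventListener port
```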

2. Find out your Ingress domain or subdomain name.

* On `IKS`, go to `Clusters`, select your cluster, and then click `Overview`.
  The domain name is listed under `Ingress subdomain`.

and create the resource:

````bash
  kubectl apply --filename github-webhook-ingress.yaml
````

Make sure that you can use the ingress with the `/hooks` path via `https`:
````bash
  curl https://<INGRESS_DOMAIN_NAME>/hooks
````

At this point it should respond with:
```json
  {
    "eventListener":"github-pr-eventlistener",
    "namespace":"default",
    "eventListenerUID":"",
    "errorMessage":"Invalid event body format : unexpected end of JSON input"
  }
```

#### Setup the webhook on GitHub

In the GitHub repo go to `Settings` -> `Webhooks` and click `Add Webhook`.
The fields we need to set are:
* `Payload URL`: `https://<INGRESS_DOMAIN_NAME>/hooks`
* `Content type`: application/json
* `Secret`: XXXXXXX (the secret token from github-interceptor-secret.yaml)

Under `SSL verification` select the radio button for `Enable SSL verification`.
Under `Which events would you like to trigger this webhook?` select
the radio button for `Let me select individual events.`, tick the checkbox next to
`Pull requests`, and ensure that the rest are unticked.

Click `Add webhook`.

If the webhook has been set up correctly, then GitHub sends a ping message.
Ensure that the ping is received from GitHub, and that it is filtered out so
a simple ping event does not trigger any `PipelineRun` unnecessarily.

````bash
eventlistener_pod=$(kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep el-github-pr)
kubectl logs "${eventlistener_pod}" | grep 'event type ping is not allowed'
````