# Configuration of Argo Workflow entities

## Prerequisites:

- Minikube
- `kubectl` command-line tool installed and configured to connect to your Kubernetes cluster.
- Helm version 3.x installed.

## Preparation steps:

### 1. Start minikube and install Argo Workflows using Helm:

```bash
minikube start
helm repo add argo https://argoproj.github.io/argo-helm
helm install argo-workflows argo/argo-workflows --namespace argo --create-namespace
```

These commands install Argo Workflows in the `argo` namespace of your Kubernetes cluster, creating the namespace if it does not already exist.
| 19 | + |
| 20 | +### 2. Verify the Installation: |
| 21 | + |
| 22 | +To check if the installation was successful, you can run: |
| 23 | + |
| 24 | +```bash |
| 25 | +kubectl get pods -n argo |
| 26 | +``` |
| 27 | + |
| 28 | +You should see a list of pods running with names prefixed with `workflow-controller` and `argo-server`. |
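
To wait until both deployments are fully rolled out, you can use `kubectl rollout status` (the deployment names below match the ones used later in this guide; adjust them if your Helm release named them differently):

```bash
kubectl rollout status deployment/argo-server -n argo
kubectl rollout status deployment/workflow-controller -n argo
```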

### 3. Patch argo-server authentication

As reported in the official documentation: https://argo-workflows.readthedocs.io/en/latest/quick-start/#patch-argo-server-authentication

The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token to authenticate. For more information, refer to the Argo Server Auth Mode documentation. We will switch the authentication mode to server so that we can bypass the UI login for now:

```bash
kubectl patch deployment \
  argo-server \
  --namespace argo \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
  "server",
  "--auth-mode=server"
]}]'
```
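
To verify that the patch was applied, you can inspect the container arguments:

```bash
kubectl get deployment argo-server -n argo \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```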

### 4. Access Argo Workflows UI (Optional):

Argo Workflows provides a web-based UI for managing and monitoring workflows. To access the UI, you can port-forward the `argo-server` service:

```bash
kubectl port-forward svc/argo-server -n argo 2746:2746
```

Now you can access the Argo Workflows UI by navigating to `http://localhost:2746` in your web browser.

### 5. Add privileges to Argo service accounts

> Adding these privileges to the Argo service accounts is recommended for demo purposes only. **IT IS STRONGLY RECOMMENDED NOT TO REPLICATE THIS CONFIGURATION IN PRODUCTION ENVIRONMENTS.**

These commands bind the `cluster-admin` ClusterRole to the `argo:argo-server` and `argo:default` service accounts. This way, Argo Workflows can manage every kind of resource in every namespace of the cluster.

```bash
kubectl create clusterrolebinding argo-admin-server --clusterrole=cluster-admin --serviceaccount=argo:argo-server
kubectl create clusterrolebinding argo-admin-default --clusterrole=cluster-admin --serviceaccount=argo:default
```

> In production environments it is strongly recommended to create a dedicated role for these service accounts, granting only the verbs required on the resources managed by the workflows, as sketched below.

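For instance, a minimal role for the workflow executor service account might look like this (a sketch only: with the Emissary executor, workflow pods need to create and patch `workflowtaskresults.argoproj.io`; extend the role with whatever resources your own workflow steps touch):

```bash
# Sketch: least-privilege alternative to cluster-admin for the executor
kubectl create role workflow-executor -n argo \
  --verb=create,patch \
  --resource=workflowtaskresults.argoproj.io
kubectl create rolebinding workflow-executor -n argo \
  --role=workflow-executor \
  --serviceaccount=argo:default
```
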
### 6. Prepare secrets required by the pipelines

If your Git repository is private, you can run this command to allow the clone step executed by the pipeline `ci.yaml`:

```bash
kubectl create secret generic github-token -n argo --from-literal=token=.........
```

This command creates the secret that contains the credentials to push the Docker image to the registry:

```bash
export DOCKER_USERNAME=******
export DOCKER_TOKEN=******
kubectl create secret generic docker-config -n argo --from-literal="config.json={\"auths\": {\"https://ghcr.io/\": {\"auth\": \"$(echo -n $DOCKER_USERNAME:$DOCKER_TOKEN|base64)\"}}}"
```
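
To double-check what was stored, you can decode the secret again (note the escaped dot in the jsonpath expression):

```bash
kubectl get secret docker-config -n argo -o jsonpath='{.data.config\.json}' | base64 -d
```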

### 7. Add Argo WorkflowTemplate manifests

```bash
git clone https://github.com/banshee86vr/ephemeral-test-environment.git
cd ephemeral-test-environment/argo/workflow

kubectl apply -f ci.yaml
kubectl apply -f lang/go.yaml
kubectl apply -f cd.yaml
```
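
To confirm that the templates were registered, you can list them (add `-n <namespace>` if you applied them outside your current namespace):

```bash
kubectl get workflowtemplates
```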

## Execution steps:

With all prerequisites met and Argo Workflows successfully deployed and configured, you can dive into the execution steps to start creating and managing workflows.

### 8. Submit the CI pipeline

To submit the CI pipeline, you can send a POST request to the submit endpoint of the [official APIs](https://argo-workflows.readthedocs.io/en/latest/rest-api/):

```bash
<ArgoWorkflow URL>/api/v1/workflows/{namespace}/submit
```
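
For example, with the port-forward from step 4 active, a submission could look like the sketch below. The WorkflowTemplate name `ci` and the parameter names are assumptions: check `ci.yaml` for the real ones.

```bash
curl -s -X POST http://localhost:2746/api/v1/workflows/argo/submit \
  -H 'Content-Type: application/json' \
  -d '{
        "resourceKind": "WorkflowTemplate",
        "resourceName": "ci",
        "submitOptions": {
          "parameters": ["repo=<git repository URL>", "branch=main"]
        }
      }'
```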

Or, alternatively, you can submit the workflow using the UI:

The CI pipeline performs these steps:

1. **Cloning Repository**: Fetches the source code from the Git repository.
2. **Building Application**: Utilizes the GoLang template to compile the Go application.
3. **Building and Pushing Docker Image**: Packages the application into a Docker image and pushes it to the registry.

After all steps have completed, you can check the status of each step:
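
You can also follow the run from the terminal with the Argo CLI, if it is installed (this assumes the workflow was submitted to the `argo` namespace):

```bash
argo list -n argo
argo get @latest -n argo
argo logs @latest -n argo
```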

If all steps have been successfully completed, you can find a new version of the Docker image in your registry.

### 9. Submit the CD pipeline

To submit the CD pipeline, you can likewise send a POST request to the [official APIs](https://argo-workflows.readthedocs.io/en/latest/rest-api/):

```bash
<ArgoWorkflow URL>/api/v1/workflows/{namespace}/submit
```
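
Alternatively, the Argo CLI can submit a run directly from the template. This is a sketch: the template name `cd` and the parameter are assumptions, so check `cd.yaml` for the real ones:

```bash
argo submit -n argo --from workflowtemplate/cd \
  -p environment=demo-pr-request \
  --watch
```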

Or, alternatively, you can submit the workflow using the UI:

The CD pipeline performs these steps:

1. **Preparing an ephemeral environment**: Prepares an ephemeral environment using vCluster, where the user can test the application inside an isolated Kubernetes cluster.
2. **Deploying the application**: Deploys the application's Helm chart on the newly created vCluster.

After all steps have completed, you can check the status of each step:

If all steps have been successfully completed, you can check the status of your application deployed on the newly created vCluster.

### 10. Access the application

To access the application deployed on the vCluster, you can run these commands to list all vClusters and connect to the new one:

```bash
$ vcluster list

 NAME            | CLUSTER  | NAMESPACE       | STATUS  | VERSION | CONNECTED | CREATED                       | AGE     | DISTRO
 ----------------+----------+-----------------+---------+---------+-----------+-------------------------------+---------+--------
 demo-pr-request | minikube | demo-pr-request | Running | 0.19.0  |           | xxxx-xx-xx xx:xx:xx +0100 CET | 1h8m49s | OSS

$ vcluster connect demo-pr-request --namespace demo-pr-request -- kubectl get pod -n demo-pr-request

NAME                                           READY   STATUS    RESTARTS   AGE
demo-pr-request-hello-world-7f6d78645f-bjmjc   1/1     Running   0          7s
```
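
To reach the demo application itself, one quick option is to port-forward it through the vCluster connection. The service name and port below are assumptions based on the pod name above, so adjust them to your Helm chart:

```bash
vcluster connect demo-pr-request --namespace demo-pr-request -- \
  kubectl port-forward svc/demo-pr-request-hello-world -n demo-pr-request 8080:80
```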

As reported [here](https://www.vcluster.com/docs/using-vclusters/access), you can expose the ephemeral vCluster in different ways:

- **Via Ingress**: An Ingress Controller with SSL passthrough support will provide the best user experience, but there is a workaround if this feature is not natively supported.

  - Kubernetes Nginx
  - Traefik Proxy
  - Emissary

  Make sure your ingress controller is installed and healthy on the cluster that will host your virtual clusters. More details [here](https://www.vcluster.com/docs/using-vclusters/access#via-ingress).
- **Via LoadBalancer service**: The easiest way is to pass the `--expose` flag to `vcluster create` to tell vCluster to use a LoadBalancer service (see the example after this list). Whether this works depends on the specific implementation of the host Kubernetes cluster.
- **Via NodePort service**: You can also expose the vCluster via a NodePort service. In this case you have to create a NodePort service and change the `values.yaml` file used for the creation of the vCluster. More details [here](https://www.vcluster.com/docs/using-vclusters/access#via-nodeport-service).
- **From Host Cluster**: In order to access the virtual cluster from within the host cluster, you can directly connect to the vCluster service. Make sure you can access that service and then create a kube config in the following form:

  ```bash
  vcluster connect my-vcluster -n my-vcluster --server=my-vcluster.my-vcluster --insecure --update-current=false
  ```
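
For the LoadBalancer option mentioned above, creating an exposed vCluster could look like this (a sketch: the name and namespace are illustrative):

```bash
vcluster create demo-pr-request --namespace demo-pr-request --expose
```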