Commit a082a97 ("fix: errors in README"), parent cd98c84

5 files changed: +24 / -22 lines

argo/workflow/README.md (24 additions, 22 deletions):

# Configuration of Argo Workflow entities

## Prerequisites

- Minikube
- `kubectl` command-line tool installed and configured to connect to your Kubernetes cluster.
- Helm version `3.x` installed.

## Preparation steps

### 1. Start Minikube and install Argo Workflows using Helm

```bash
minikube start
helm repo add argo https://argoproj.github.io/argo-helm
helm install argo-workflows argo/argo-workflows
```

This command installs Argo Workflows in the default namespace of your Kubernetes cluster.
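
If you would rather install the chart into a dedicated namespace from the start (the verification commands below assume `argo`), a minimal sketch using standard Helm flags is:

```bash
# Install the chart into a dedicated "argo" namespace, creating it if needed.
helm install argo-workflows argo/argo-workflows -n argo --create-namespace
```
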
### 2. Verify the Installation

To check if the installation was successful, you can run:

```bash
kubectl get pods -n argo
```

You should see a list of pods running with names prefixed with `workflow-controller` and `argo-server`.
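
For illustration, a healthy installation looks roughly like this (the pod name suffixes and ages below are placeholders, not real output):

```bash
$ kubectl get pods -n argo
NAME                                   READY   STATUS    RESTARTS   AGE
argo-server-xxxxxxxxxx-xxxxx           1/1     Running   0          60s
workflow-controller-xxxxxxxxxx-xxxxx   1/1     Running   0          60s
```
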
### 3. Patch argo-server authentication

As reported in the official documentation: https://argo-workflows.readthedocs.io/en/latest/quick-start/#patch-argo-server-authentication

The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token to authenticate. For more information, refer to the Argo Server Auth Mode documentation. We will switch the authentication mode to `server` so that we can bypass the UI login for now:

```bash
kubectl patch deployment \
  argo-server \
  --namespace argo \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
  "server",
  "--auth-mode=server"
]}]'
```
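
The patch triggers a rollout of the deployment; a quick, optional way to wait for the new pod (standard `kubectl`, assuming the deployment name used above) is:

```bash
# Block until the patched argo-server deployment has fully rolled out.
kubectl -n argo rollout status deployment/argo-server
```
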
### 4. Access Argo Workflows UI (Optional)

Argo Workflows provides a web-based UI for managing and monitoring workflows. To access the UI, you need to expose it as a service.
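
One simple way to do this locally, assuming the Helm release created a service named `argo-server` in the `argo` namespace, is a port-forward:

```bash
# Forward local port 2746 to the argo-server service inside the cluster.
kubectl -n argo port-forward service/argo-server 2746:2746
```
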
Now you can access the Argo Workflows UI by navigating to `http://localhost:2746`.

### 5. Grant privileges to the Argo service accounts

> Adding these privileges to the Argo service accounts is recommended only for demo purposes. **IT IS STRONGLY NOT RECOMMENDED TO REPLICATE THIS CONFIGURATION IN PRODUCTION ENVIRONMENTS.**

These commands add the `cluster-admin` clusterrole to `argo:argo-server` and `argo:default`. In this way, Argo Workflow can manage every kind of resource in every namespace of the cluster.

```bash
kubectl create clusterrolebinding argo-admin-server --clusterrole=cluster-admin --serviceaccount=argo:argo-server -n argo
kubectl create clusterrolebinding argo-admin-default --clusterrole=cluster-admin --serviceaccount=argo:default -n argo
```
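
To confirm that both bindings were created:

```bash
# List the two clusterrolebindings created above.
kubectl get clusterrolebinding argo-admin-server argo-admin-default
```
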
### 6. Prepare secrets required by the pipelines

In case of a private Git repository, you can run this command to allow the clone step executed by the pipeline `ci.yaml`:

```bash
kubectl create secret generic github-token -n argo --from-literal=token=.........
```

This command creates the secret that contains the credentials to push the Docker image to the registry:

```bash
export DOCKER_USERNAME=******
```
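
A typical way to finish preparing these credentials, assuming a hypothetical secret name `docker-config` and the standard `kubectl create secret docker-registry` flags, is:

```bash
# Hypothetical completion: export the password and store both values
# in a docker-registry secret that the pipeline can mount.
export DOCKER_PASSWORD=******
kubectl create secret docker-registry docker-config -n argo \
  --docker-username="$DOCKER_USERNAME" \
  --docker-password="$DOCKER_PASSWORD"
```
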
### 7. Apply the pipeline manifests

```bash
kubectl apply -f ci.yaml
kubectl apply -f lang/go.yaml
kubectl apply -f cd.yaml
```
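
If these manifests define WorkflowTemplates (an assumption, since the files themselves are not shown), you can confirm they were registered:

```bash
# List the workflow templates available in the argo namespace.
kubectl get workflowtemplates -n argo
```
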
## Execution steps

With all prerequisites met and Argo Workflows successfully deployed and configured, you can dive into the execution steps to start creating and managing workflows.

### 8. Submit the CI pipeline

To submit the CI pipeline, you can use the official APIs:

```
<ArgoWorkflow URL>/api/v1/workflows/{namespace}/submit
```
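
For example, a hypothetical submission with `curl`, assuming the CI manifest defines a WorkflowTemplate named `ci` and the server is reachable on `localhost:2746`:

```bash
# Hypothetical REST call: submit a workflow from the "ci" WorkflowTemplate.
curl -X POST http://localhost:2746/api/v1/workflows/argo/submit \
  -H 'Content-Type: application/json' \
  -d '{"resourceKind": "WorkflowTemplate", "resourceName": "ci"}'
```
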
Alternatively, you can submit the workflow using the UI:

![Submit CI workflow via UI](images/1_ci_submit.png)

The CI pipeline performs these steps:

1. **Cloning the Repository**: Clones the application source code from the Git repository.
2. **Building Application**: Utilizes the GoLang template to compile the Go application.
3. **Building and Pushing Docker Image**: Packages the application into a Docker image and pushes it to the registry.

After the completion of all steps, you can check the correct status of every step:

![CI workflow graph](images/2_ci_graph.png)
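
If you have the `argo` CLI installed (optional; it is not used elsewhere in this guide), you can follow the same status from the terminal:

```bash
# Show the status and node tree of the most recently started workflow.
argo get -n argo @latest
```
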
If all steps have been completed, you can find a new version of the Docker image in your registry.

### 9. Submit the CD pipeline

The CD pipeline performs these steps:

1. **Preparing an ephemeral environment**: Prepares an ephemeral environment using vCluster, where the user can test the application inside an isolated Kubernetes cluster.
2. **Deploy the application**: Deploys the application Helm chart on the newly created vCluster.

After the completion of all steps, you can check the correct status of every step:

![CD workflow graph](images/4_cd_graph.png)

If all steps have been completed, you can check the status of your application deployed on the newly created vCluster.

### 10. Access the application

To check how to access the application deployed on vCluster, you can run these commands to list all vClusters and to access one:

```bash
$ vcluster list
...
$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
demo-pr-request-hello-world-7f6d78645f-bjmjc   1/1     Running   0          7s
```

As reported [here](https://www.vcluster.com/docs/using-vclusters/access), you can expose the ephemeral vCluster in different ways.

- **Via Ingress**: An Ingress Controller with SSL passthrough support will provide the best user experience, but there is a workaround if this feature is not natively supported.
  - Emissary

  Make sure your ingress controller is installed and healthy on the cluster that will host your virtual clusters. More details [here](https://www.vcluster.com/docs/using-vclusters/access#via-ingress).

- **Via LoadBalancer service**: The easiest way is to use the flag `--expose` in `vcluster create` to tell vCluster to use a LoadBalancer service (see the sketch at the end of this section). It depends on the specific implementation of the host Kubernetes cluster.
- **Via NodePort service**: You can also expose the vCluster via a NodePort service. In this case, you have to create a NodePort service and change the `values.yaml` file used for the creation of the vCluster. More details [here](https://www.vcluster.com/docs/using-vclusters/access#via-nodeport-service).
- **From Host Cluster**: To access the virtual cluster from within the host cluster, you can directly connect to the vCluster service. Make sure you can access that service and then create a kube config in the following form:

```bash
vcluster connect my-vcluster -n my-vcluster --server=my-vcluster.my-vcluster --insecure --update-current=false
```
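
For the LoadBalancer option mentioned above, a minimal sketch (the cluster name and namespace are placeholders) might be:

```bash
# Create a vCluster whose API server is exposed through a LoadBalancer service.
vcluster create my-vcluster -n my-vcluster --expose
```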