examples/nginx-gateway-fabric/ARTICLE.md
1. Create a Secret to pull the NGF container from the F5 private registry. The secret is based on the contents of the trial JWT from MyF5. If you do not have a trial JWT, you can request one [here](https://www.f5.com/trials/free-trial-connectivity-stack-kubernetes).
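   This step can be sketched as follows; the secret name, JWT file name, and namespace here (`nginx-plus-registry-secret`, `nginx-trial.jwt`, `nginx-gateway`) are illustrative assumptions, not values taken from this article:

   ```shell
   # Hypothetical example: create a docker-registry Secret whose username is the
   # JWT contents, following the NGINX private-registry convention. Adjust the
   # names and namespace to match your environment.
   kubectl create secret docker-registry nginx-plus-registry-secret \
     --docker-server=private-registry.nginx.com \
     --docker-username="$(cat nginx-trial.jwt)" \
     --docker-password=none \
     -n nginx-gateway
   ```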
1. Create a namespace for Argo Rollouts, and install it using manifests from the project's GitHub repo:
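   The install commands typically look like the following (per the Argo Rollouts documentation; verify against the current release before running):

   ```shell
   # Create the namespace and install the controller from the upstream manifests
   kubectl create namespace argo-rollouts
   kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
   ```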
1. Install the Argo Rollouts CLI using the [instructions](https://argoproj.github.io/argo-rollouts/installation/#kubectl-plugin-installation) for your client platform.
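   For example, on a Linux amd64 client the documented installation is roughly:

   ```shell
   # Download the kubectl plugin binary, make it executable, and put it on PATH
   curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
   chmod +x kubectl-argo-rollouts-linux-amd64
   sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
   ```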
1. Create a `gateway-plugin.yaml` file based on the instructions on the [Argo Rollouts Gateway API plugin](https://rollouts-plugin-trafficrouter-gatewayapi.readthedocs.io/en/latest/installation/#installing-the-plugin-via-https) page.
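   A `gateway-plugin.yaml` built from those instructions is roughly the following ConfigMap; the plugin version and download URL below are assumptions and should be taken from the plugin's release page for your platform:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: argo-rollouts-config  # name expected by the Argo Rollouts controller
     namespace: argo-rollouts
   data:
     trafficRouterPlugins: |-
       - name: "argoproj-labs/gatewayAPI"
         # Hypothetical release URL; substitute the current version and platform
         location: "https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-amd64"
   ```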
1. Install the Gateway API Plugin for Argo Rollouts by applying the yaml file you created in the previous step (for example, `kubectl apply -f gateway-plugin.yaml`):

   ```
   time="YYY" level=info msg="Download complete, it took 7.792426599s"
   ```
***Why do we need two different services that contain the same app selector?***
When implementing a Canary deployment, Argo Rollouts requires two different Kubernetes services: a "stable" service and a "canary" service. The "stable" service directs traffic to the initial deployment of the application. In subsequent (or "canary") deployments, Argo Rollouts transparently configures the "canary" service to use the endpoints exposed by the pods in the new deployment. NGF then uses these two services to split traffic between the stable and canary versions according to the Argo Rollouts rules.
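As a sketch (the names `rollouts-demo`, `rollouts-demo-stable`, `rollouts-demo-canary`, and `rollouts-demo-route` are illustrative, not necessarily the ones this demo uses), the service pair and the HTTPRoute that splits traffic between them might look like:

```yaml
# Both services select the same pods; Argo Rollouts rewrites the canary
# service's endpoints during a rollout.
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo-stable
spec:
  selector:
    app: rollouts-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo-canary
spec:
  selector:
    app: rollouts-demo
  ports:
  - port: 80
---
# The HTTPRoute references both services; the plugin adjusts the weights.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rollouts-demo-route
spec:
  parentRefs:
  - name: rollouts-demo-gateway
  rules:
  - backendRefs:
    - name: rollouts-demo-stable
      port: 80
      weight: 100
    - name: rollouts-demo-canary
      port: 80
      weight: 0
```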
1. The `argo-rollouts` service account needs permission to view and modify HTTPRoutes in addition to its existing permissions. Edit the `argo-rollouts` cluster role to add the following permissions under `rules:`:

   ```yaml
   - apiGroups:
     - gateway.networking.k8s.io
     resources:
     - httproutes
     verbs:
     - get
     - list
     - watch
     - update
     - patch
   ```
1. Additionally, the `argo-rollouts` service account needs permission to create ConfigMaps. Edit the `argo-rollouts` cluster role to add the following permissions:

   ```yaml
   - apiGroups: [""]
     resources:
     - configmaps
     verbs:
     - create
   ```
1. Now we will deploy the `AnalysisTemplate` resource, provided by Argo Rollouts. This resource defines the rules for assessing a deployment's health and how to interpret the resulting data. In this demo, we will use a Prometheus query to check the canary service's upstream pods for the absence of 4xx and 5xx HTTP response codes as an indication of health.
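   A minimal sketch of such a template follows; the metric name, Prometheus address, and query below are illustrative assumptions, not the demo's actual values:

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: AnalysisTemplate
   metadata:
     name: success-rate
   spec:
     metrics:
     - name: success-rate
       interval: 30s
       failureLimit: 1
       # Pass only when no 4xx/5xx responses are observed
       successCondition: len(result) == 0 || result[0] == 0
       provider:
         prometheus:
           address: http://prometheus.monitoring.svc:9090  # assumed address
           query: |
             sum(rate(http_requests_total{service="canary", status=~"[45].."}[1m]))
   ```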
- There are 8 steps to this progressive rollout. Refer to the `rollout.yaml` file to see the configured stages.
- You should see that the rollout is `Progressing`, awaiting a canary rollout.
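   The canary strategy in `rollout.yaml` takes roughly this shape (an eight-step sketch; the service, route names, weights, and pause durations here are assumptions, so consult the actual file for the demo's values):

   ```yaml
   strategy:
     canary:
       stableService: rollouts-demo-stable   # assumed service names
       canaryService: rollouts-demo-canary
       trafficRouting:
         plugins:
           argoproj-labs/gatewayAPI:
             httpRoute: rollouts-demo-route  # assumed HTTPRoute name
       steps:
       - setWeight: 10
       - pause: {duration: 30s}
       - setWeight: 25
       - pause: {duration: 30s}
       - setWeight: 50
       - pause: {duration: 30s}
       - setWeight: 75
       - pause: {duration: 30s}
   ```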
1. Open a new shell window, and set up a port forward to the NGF pod's NGINX container that is hosting the gateway:
   ```shell
   NGINX_POD=`kubectl get pods --selector=app.kubernetes.io/name=rollouts-demo-gateway-nginx -o=name`
   kubectl port-forward $NGINX_POD 8080:80
   ```
> Note: In a production scenario, we would not use a port forward. Rather, we would likely use a `Service` of type `LoadBalancer` to reach the NGF instance. However, this requires additional setup and varies greatly depending on where your cluster is hosted.
1. Wait at least a minute, and you should see something like this:
> Note: You will see a portion of red service responses for about a minute; then the traffic reverts to 100% yellow. Why? Look at the `kubectl argo rollouts` plugin output to see what is going on. You may observe that the AnalysisRun failed at least once, triggering Argo Rollouts to perform an automatic rollback to the last successful rollout version. Cool, right?
## Conclusion
This was only a taste of what can be accomplished with Argo Rollouts and NGINX Gateway Fabric. However, I hope you have witnessed the benefits of adopting progressive delivery patterns using tools such as these. I would encourage you to further explore what you are able to accomplish in your own environments.