From 5db487809640eeb9ead16a2e6ef2581017e76665 Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 13:11:02 -0800 Subject: [PATCH 1/9] Release NGF 1.6.1 --- content/ngf/get-started.md | 8 +++--- content/ngf/how-to/monitoring/prometheus.md | 4 +-- .../traffic-management/advanced-routing.md | 4 +-- .../traffic-management/client-settings.md | 6 ++--- .../request-response-headers.md | 6 ++--- .../ngf/how-to/traffic-management/snippets.md | 6 ++--- .../how-to/upgrade-apps-without-downtime.md | 4 +-- .../ngf/installation/building-the-images.md | 12 ++++----- .../ngf/installation/installing-ngf/helm.md | 12 ++++----- .../installation/installing-ngf/manifests.md | 26 +++++++++---------- content/ngf/installation/nginx-plus-jwt.md | 2 +- content/ngf/overview/gateway-architecture.md | 6 ++--- 12 files changed, 48 insertions(+), 48 deletions(-) diff --git a/content/ngf/get-started.md b/content/ngf/get-started.md index e8cb75fdb..353985895 100644 --- a/content/ngf/get-started.md +++ b/content/ngf/get-started.md @@ -90,7 +90,7 @@ make create-kind-cluster Use `kubectl` to add the API resources for NGINX Gateway Fabric with the following command: ```shell -kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.5.1" | kubectl apply -f - +kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.6.1" | kubectl apply -f - ``` ```text @@ -105,7 +105,7 @@ customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking To use experimental features, you'll need to install the API resources from the experimental channel instead. ```shell -kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.5.1" | kubectl apply -f - +kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.6.1" | kubectl apply -f - ``` {{< /note >}} @@ -121,7 +121,7 @@ helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namesp ``` ```text -Pulled: ghcr.io/nginx/charts/nginx-gateway-fabric:1.5.1 +Pulled: ghcr.io/nginx/charts/nginx-gateway-fabric:1.6.1 Digest: sha256:9bbd1a2fcbfd5407ad6be39f796f582e6263512f1f3a8969b427d39063cc6fee NAME: ngf LAST DEPLOYED: Mon Oct 21 14:45:14 2024 @@ -159,7 +159,7 @@ metadata: labels: app.kubernetes.io/name: nginx-gateway-fabric app.kubernetes.io/instance: ngf - app.kubernetes.io/version: "1.5.1" + app.kubernetes.io/version: "1.6.1" spec: type: NodePort selector: diff --git a/content/ngf/how-to/monitoring/prometheus.md b/content/ngf/how-to/monitoring/prometheus.md index cc4424dd2..180d1cd26 100644 --- a/content/ngf/how-to/monitoring/prometheus.md +++ b/content/ngf/how-to/monitoring/prometheus.md @@ -119,13 +119,13 @@ You can configure monitoring metrics for NGINX Gateway Fabric using Helm or Mani ### Using Helm -If you're setting up NGINX Gateway Fabric with Helm, you can adjust the `metrics.*` parameters to fit your needs. For detailed options and instructions, see the [Helm README](https://github.com/nginx/nginx-gateway-fabric/blob/v1.5.1/charts/nginx-gateway-fabric/README.md). +If you're setting up NGINX Gateway Fabric with Helm, you can adjust the `metrics.*` parameters to fit your needs. For detailed options and instructions, see the [Helm README](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/charts/nginx-gateway-fabric/README.md). 
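For example, a `values.yaml` override for the metrics settings could look like the sketch below. The key names follow the `metrics.*` convention mentioned above but are written here only as an illustration, so confirm the exact fields against the Helm README for your chart version.

```yaml
# values.yaml -- illustrative metrics overrides for the NGINX Gateway Fabric chart.
# The key names (metrics.enable, metrics.port, metrics.secure) are assumptions
# based on the metrics.* convention; verify them against the Helm README.
metrics:
  enable: true   # expose the Prometheus metrics endpoint
  port: 9113     # port the metrics endpoint listens on
  secure: false  # set to true to serve metrics over HTTPS
```

Pass the file to Helm with the `-f` flag, for example `helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values.yaml`.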
--- ### Using Kubernetes manifests -For setups using Kubernetes manifests, change the metrics configuration by editing the NGINX Gateway Fabric manifest that you want to deploy. You can find some examples in the [deploy](https://github.com/nginx/nginx-gateway-fabric/tree/v1.5.1/deploy) directory. +For setups using Kubernetes manifests, change the metrics configuration by editing the NGINX Gateway Fabric manifest that you want to deploy. You can find some examples in the [deploy](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/deploy) directory. --- diff --git a/content/ngf/how-to/traffic-management/advanced-routing.md b/content/ngf/how-to/traffic-management/advanced-routing.md index 29b3240fe..d98df2a21 100644 --- a/content/ngf/how-to/traffic-management/advanced-routing.md +++ b/content/ngf/how-to/traffic-management/advanced-routing.md @@ -45,7 +45,7 @@ The goal is to create a set of rules that will result in client requests being s Begin by deploying the `coffee-v1` and `coffee-v2` applications: ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.0/examples/advanced-routing/coffee.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/advanced-routing/coffee.yaml ``` --- @@ -173,7 +173,7 @@ Let's deploy a different set of applications now called `tea` and `tea-post`. Th ### Deploy the Tea applications ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.0/examples/advanced-routing/tea.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/advanced-routing/tea.yaml ``` --- diff --git a/content/ngf/how-to/traffic-management/client-settings.md b/content/ngf/how-to/traffic-management/client-settings.md index 8f694505d..5af006e39 100644 --- a/content/ngf/how-to/traffic-management/client-settings.md +++ b/content/ngf/how-to/traffic-management/client-settings.md @@ -51,19 +51,19 @@ For all the possible configuration options for `ClientSettingsPolicy`, see the [ - Create the coffee and tea example applications: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/client-settings-policy/app.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/client-settings-policy/app.yaml ``` - Create a Gateway: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/client-settings-policy/gateway.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/client-settings-policy/gateway.yaml ``` - Create HTTPRoutes for the coffee and tea applications: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/client-settings-policy/httproutes.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/client-settings-policy/httproutes.yaml ``` - Test the configuration: diff --git a/content/ngf/how-to/traffic-management/request-response-headers.md b/content/ngf/how-to/traffic-management/request-response-headers.md index d884b4049..14931fae5 100644 --- a/content/ngf/how-to/traffic-management/request-response-headers.md +++ b/content/ngf/how-to/traffic-management/request-response-headers.md @@ -67,7 +67,7 @@ This examples demonstrates how to configure traffic routing for a simple echo se Begin by deploying the example application `headers`. 
It is a simple application that returns the request headers which will be modified later. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.0/examples/http-request-header-filter/headers.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/http-request-header-filter/headers.yaml ``` This will create the headers Service and a Deployment with one Pod. Run the following command to verify the resources were created: @@ -179,7 +179,7 @@ kubectl delete httproutes.gateway.networking.k8s.io headers ``` ```shell -kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.0/examples/http-request-header-filter/headers.yaml +kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/http-request-header-filter/headers.yaml ``` --- @@ -195,7 +195,7 @@ Begin by configuring an application with custom headers and a simple HTTPRoute. Begin by deploying the example application `headers`. It is a simple application that adds response headers that will be modified later. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.0/examples/http-response-header-filter/headers.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/http-response-header-filter/headers.yaml ``` This will create the headers Service and a Deployment with one Pod. Run the following command to verify the resources were created: diff --git a/content/ngf/how-to/traffic-management/snippets.md b/content/ngf/how-to/traffic-management/snippets.md index fe7821d21..6958c097e 100644 --- a/content/ngf/how-to/traffic-management/snippets.md +++ b/content/ngf/how-to/traffic-management/snippets.md @@ -75,19 +75,19 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter - Create the coffee and tea example applications: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/snippets-filter/app.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/snippets-filter/app.yaml ``` - Create a Gateway: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/snippets-filter/gateway.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/snippets-filter/gateway.yaml ``` - Create HTTPRoutes for the coffee and tea applications: ```yaml - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/examples/snippets-filter/httproutes.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/snippets-filter/httproutes.yaml ``` - Test the configuration: diff --git a/content/ngf/how-to/upgrade-apps-without-downtime.md b/content/ngf/how-to/upgrade-apps-without-downtime.md index 3ec8a38a1..6c0865ca8 100644 --- a/content/ngf/how-to/upgrade-apps-without-downtime.md +++ b/content/ngf/how-to/upgrade-apps-without-downtime.md @@ -66,7 +66,7 @@ For example, an application can be exposed using a routing rule like below: port: 80 ``` -{{< note >}} See the [Cafe example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.5.1/examples/cafe-example) for a basic example. {{< /note >}} +{{< note >}} See the [Cafe example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/cafe-example) for a basic example. 
{{< /note >}} The upgrade methods in the next sections cover: @@ -137,4 +137,4 @@ By updating the rule you can further increase the share of traffic the new versi weight: 1 ``` -See the [Traffic splitting example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.5.1/examples/traffic-splitting) from our repository. +See the [Traffic splitting example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/traffic-splitting) from our repository. diff --git a/content/ngf/installation/building-the-images.md b/content/ngf/installation/building-the-images.md index 5acd5e8b1..5386eda7e 100644 --- a/content/ngf/installation/building-the-images.md +++ b/content/ngf/installation/building-the-images.md @@ -32,7 +32,7 @@ If building the NGINX Plus image, you will also need a valid NGINX Plus license 1. Clone the repo and change into the `nginx-gateway-fabric` directory: ```shell - git clone https://github.com/nginx/nginx-gateway-fabric.git --branch v1.5.1 + git clone https://github.com/nginx/nginx-gateway-fabric.git --branch v1.6.1 cd nginx-gateway-fabric ``` @@ -68,20 +68,20 @@ If building the NGINX Plus image, you will also need a valid NGINX Plus license ``` Set the `PREFIX` variable to the name of the registry you'd like to push the image to. By default, the images will be - named `nginx-gateway-fabric:1.5.1` and `nginx-gateway-fabric/nginx:1.5.1` or `nginx-gateway-fabric/nginx-plus:1.5.1`. + named `nginx-gateway-fabric:1.6.1` and `nginx-gateway-fabric/nginx:1.6.1` or `nginx-gateway-fabric/nginx-plus:1.6.1`. 1. Push the images to your container registry: ```shell - docker push myregistry.example.com/nginx-gateway-fabric:1.5.1 - docker push myregistry.example.com/nginx-gateway-fabric/nginx:1.5.1 + docker push myregistry.example.com/nginx-gateway-fabric:1.6.1 + docker push myregistry.example.com/nginx-gateway-fabric/nginx:1.6.1 ``` or ```shell - docker push myregistry.example.com/nginx-gateway-fabric:1.5.1 - docker push myregistry.example.com/nginx-gateway-fabric/nginx-plus:1.5.1 + docker push myregistry.example.com/nginx-gateway-fabric:1.6.1 + docker push myregistry.example.com/nginx-gateway-fabric/nginx-plus:1.6.1 ``` Make sure to substitute `myregistry.example.com/nginx-gateway-fabric` with your registry. diff --git a/content/ngf/installation/installing-ngf/helm.md b/content/ngf/installation/installing-ngf/helm.md index 9fc0edc2b..17b782f4c 100644 --- a/content/ngf/installation/installing-ngf/helm.md +++ b/content/ngf/installation/installing-ngf/helm.md @@ -174,7 +174,7 @@ helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namesp #### Examples -You can find several examples of configuration options of the `values.yaml` file in the [helm examples](https://github.com/nginx/nginx-gateway-fabric/tree/v1.5.1/examples/helm) directory. +You can find several examples of configuration options of the `values.yaml` file in the [helm examples](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/helm) directory. 
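As a quick illustration of the pattern those examples follow, a small `values.yaml` override might look like the sketch below. The field names are placeholders chosen for illustration and can change between releases, so check them against the chart's configuration table before using them.

```yaml
# values.yaml -- illustrative overrides only; confirm the field names against
# the chart's configuration table, as they may differ between chart versions.
service:
  type: NodePort                  # assumed field; the default type is typically LoadBalancer
  annotations:
    example.com/team: platform    # arbitrary example annotation
```

The same `-f values.yaml` flag shown with the installation command above applies these overrides at install or upgrade time.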
--- @@ -201,13 +201,13 @@ To upgrade your Gateway API resources, take the following steps: - To upgrade the Gateway API resources, run: ```shell - kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.5.1" | kubectl apply -f - + kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.6.1" | kubectl apply -f - ``` or, if you installed the from the experimental channel: ```shell - kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.5.1" | kubectl apply -f - + kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.6.1" | kubectl apply -f - ``` --- @@ -223,7 +223,7 @@ To upgrade the CRDs, take the following steps: 2. Upgrade the CRDs: ```shell - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/crds.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/crds.yaml ``` {{}}Ignore the following warning, as it is expected.{{}} @@ -354,7 +354,7 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K ```shell kubectl delete ns nginx-gateway - kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/crds.yaml + kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/crds.yaml ``` 3. **Remove the Gateway API resources:** @@ -366,4 +366,4 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K ## Additional configuration -For a full list of the Helm Chart configuration parameters, read [the NGINX Gateway Fabric Helm Chart](https://github.com/nginx/nginx-gateway-fabric/blob/v1.5.1/charts/nginx-gateway-fabric/README.md#configuration). +For a full list of the Helm Chart configuration parameters, read [the NGINX Gateway Fabric Helm Chart](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/charts/nginx-gateway-fabric/README.md#configuration). diff --git a/content/ngf/installation/installing-ngf/manifests.md b/content/ngf/installation/installing-ngf/manifests.md index 6bb844ac0..a04a237ab 100644 --- a/content/ngf/installation/installing-ngf/manifests.md +++ b/content/ngf/installation/installing-ngf/manifests.md @@ -63,7 +63,7 @@ Deploying NGINX Gateway Fabric with Kubernetes manifests takes only a few steps. #### Stable release ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/crds.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/crds.yaml ``` #### Edge version @@ -85,7 +85,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/ma Deploys NGINX Gateway Fabric with NGINX OSS. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/default/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/default/deploy.yaml ``` {{% /tab %}} @@ -95,7 +95,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1 Deploys NGINX Gateway Fabric with NGINX OSS and an AWS Network Load Balancer service. 
```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/aws-nlb/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/aws-nlb/deploy.yaml ``` {{% /tab %}} @@ -105,7 +105,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1 Deploys NGINX Gateway Fabric with NGINX OSS and `nodeSelector` to deploy on Linux nodes. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/azure/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/azure/deploy.yaml ``` {{% /tab %}} @@ -117,7 +117,7 @@ NGINX Plus Docker registry, and the `imagePullSecretName` is the name of the Sec The NGINX Plus JWT Secret used to run NGINX Plus is also specified in a volume mount and the `--usage-report-secret` parameter. These Secrets are created as part of the [Before you begin](#before-you-begin) section. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/nginx-plus/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/nginx-plus/deploy.yaml ``` {{% /tab %}} @@ -127,7 +127,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1 Deploys NGINX Gateway Fabric with NGINX OSS and experimental features. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/experimental/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/experimental/deploy.yaml ``` {{< note >}} Requires the Gateway APIs installed from the experimental channel. {{< /note >}} @@ -141,7 +141,7 @@ NGINX Plus Docker registry, and the `imagePullSecretName` is the name of the Sec The NGINX Plus JWT Secret used to run NGINX Plus is also specified in a volume mount and the `--usage-report-secret` parameter. These Secrets are created as part of the [Before you begin](#before-you-begin) section. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/nginx-plus-experimental/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/nginx-plus-experimental/deploy.yaml ``` {{< note >}} Requires the Gateway APIs installed from the experimental channel. {{< /note >}} @@ -153,7 +153,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1 Deploys NGINX Gateway Fabric with NGINX OSS using a Service type of `NodePort`. ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/nodeport/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/nodeport/deploy.yaml ``` {{% /tab %}} @@ -163,7 +163,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1 Deploys NGINX Gateway Fabric with NGINX OSS on OpenShift. 
```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/openshift/deploy.yaml +kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/openshift/deploy.yaml ``` {{% /tab %}} @@ -211,13 +211,13 @@ To upgrade NGINX Gateway Fabric and get the latest features and improvements, ta - To upgrade the Gateway API resources, run: ```shell - kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.5.1" | kubectl apply -f - + kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v1.6.1" | kubectl apply -f - ``` or, if you installed the from the experimental channel: ```shell - kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.5.1" | kubectl apply -f - + kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v1.6.1" | kubectl apply -f - ``` 1. **Upgrade NGINX Gateway Fabric CRDs:** @@ -225,7 +225,7 @@ To upgrade NGINX Gateway Fabric and get the latest features and improvements, ta - To upgrade the Custom Resource Definitions (CRDs), run: ```shell - kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/crds.yaml + kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/crds.yaml ``` 1. **Upgrade NGINX Gateway Fabric deployment:** @@ -300,7 +300,7 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K ``` ```shell - kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.5.1/deploy/crds.yaml + kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/deploy/crds.yaml ``` 1. **Remove the Gateway API resources:** diff --git a/content/ngf/installation/nginx-plus-jwt.md b/content/ngf/installation/nginx-plus-jwt.md index bebfea9ea..8e6d8a106 100644 --- a/content/ngf/installation/nginx-plus-jwt.md +++ b/content/ngf/installation/nginx-plus-jwt.md @@ -250,7 +250,7 @@ Replace the contents of `` with the contents of the JWT token itself. You can then pull the image: ```shell -docker pull private-registry.nginx.com/nginx-gateway-fabric/nginx-plus:1.5.1 +docker pull private-registry.nginx.com/nginx-gateway-fabric/nginx-plus:1.6.1 ``` Once you have successfully pulled the image, you can tag it as needed, then push it to a different container registry. diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md index 9c6810e08..25d706890 100644 --- a/content/ngf/overview/gateway-architecture.md +++ b/content/ngf/overview/gateway-architecture.md @@ -24,7 +24,7 @@ NGINX Gateway Fabric is an open source project that provides an implementation o For a list of supported Gateway API resources and features, see the [Gateway API Compatibility]({{< ref "/ngf/overview/gateway-api-compatibility.md" >}}) documentation. -We have more information regarding our [design principles](https://github.com/nginx/nginx-gateway-fabric/blob/v1.5.1/docs/developer/design-principles.md) in the project's GitHub repository. +We have more information regarding our [design principles](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/docs/developer/design-principles.md) in the project's GitHub repository. --- @@ -79,7 +79,7 @@ The following list describes the connections, preceeded by their types in parent 1. 
(HTTPS) - Read: _NGF_ reads the _Kubernetes API_ to get the latest versions of the resources in the cluster. - - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginx/nginx-gateway-fabric/tree/v1.5.1/charts/nginx-gateway-fabric#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_. + - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/charts/nginx-gateway-fabric#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_. 1. (HTTP, HTTPS) _Prometheus_ fetches the `controller-runtime` and NGINX metrics via an HTTP endpoint that _NGF_ exposes (`:9113/metrics` by default). Prometheus is **not** required by NGINX Gateway Fabric, and its endpoint can be turned off. 1. (File I/O) - Write: _NGF_ generates NGINX _configuration_ based on the cluster resources and writes them as `.conf` files to the mounted `nginx-conf` volume, located at `/etc/nginx/conf.d`. It also writes _TLS certificates_ and _keys_ from [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) referenced in the accepted Gateway resource to the `nginx-secrets` volume at the path `/etc/nginx/secrets`. @@ -93,7 +93,7 @@ The following list describes the connections, preceeded by their types in parent 1. (File I/O) - Write: The _NGINX master_ writes to the auxiliary Unix sockets folder, which is located in the `/var/run/nginx` directory. - - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory. This [file](https://github.com/nginx/nginx-gateway-fabric/blob/v1.5.1/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory. + - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory. This [file](https://github.com/nginx/nginx-gateway-fabric/blob/v1.6.1/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory. 1. (File I/O) The _NGINX master_ sends logs to its _stdout_ and _stderr_, which are collected by the container runtime. 1. (File I/O) An _NGINX worker_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime. 1. (Signal) The _NGINX master_ controls the [lifecycle of _NGINX workers_](https://nginx.org/en/docs/control.html#reconfiguration) it creates workers with the new configuration and shutdowns workers with the old configuration. 
From d2ae7cdf0e3194e4d288c438db3f0eafb723cf88 Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:31:34 -0800 Subject: [PATCH 2/9] Add missing docs --- .../upgrade-apps-without-downtime.md | 140 +++++++ .../traffic-management/upstream-settings.md | 395 ++++++++++++++++++ 2 files changed, 535 insertions(+) create mode 100644 content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md create mode 100644 content/ngf/how-to/traffic-management/upstream-settings.md diff --git a/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md b/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md new file mode 100644 index 000000000..cb3a11fba --- /dev/null +++ b/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md @@ -0,0 +1,140 @@ +--- +title: "Upgrade applications without downtime" +toc: true +weight: 300 +type: how-to +product: NGF +docs: "DOCS-1420" +--- + +Learn how to use NGINX Gateway Fabric to upgrade applications without downtime. + +--- + +## Overview + +{{< note >}} See the [Architecture document]({{< relref "/overview/gateway-architecture.md" >}}) to learn more about NGINX Gateway Fabric architecture.{{< /note >}} + +NGINX Gateway Fabric allows upgrading applications without downtime. To understand the upgrade methods, you need to be familiar with the NGINX features that help prevent application downtime: Graceful configuration reloads and upstream server updates. + +--- + +### Graceful configuration reloads + +If a relevant gateway API or built-in Kubernetes resource is changed, NGINX Gateway Fabric will update NGINX by regenerating the NGINX configuration. NGINX Gateway Fabric then sends a reload signal to the master NGINX process to apply the new configuration. + +We call such an operation a "reload", during which client requests are not dropped - which defines it as a graceful reload. + +This process is further explained in the [NGINX configuration documentation](https://nginx.org/en/docs/control.html?#reconfiguration). + +--- + +### Upstream server updates + +Endpoints frequently change during application upgrades: Kubernetes creates pods for the new version of an application and removes the old ones, creating and removing the respective endpoints as well. + +NGINX Gateway Fabric detects changes to endpoints by watching their corresponding [EndpointSlices](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/). + +In an NGINX configuration, a service is represented as an [upstream](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream), and an endpoint as an [upstream server](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server). + +Adding and removing endpoints are two of the most common cases: + +- If an endpoint is added, NGINX Gateway Fabric adds an upstream server to NGINX that corresponds to the endpoint, then reloads NGINX. Next, NGINX will start proxying traffic to that endpoint. +- If an endpoint is removed, NGINX Gateway Fabric removes the corresponding upstream server from NGINX. After a reload, NGINX will stop proxying traffic to that server. However, it will finish proxying any pending requests to that server before switching to another endpoint. + +As long as you have more than one endpoint ready, clients won't experience downtime during upgrades. 
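For example, keeping more than one replica and defining a readiness probe in the Deployment helps ensure that ready endpoints remain available throughout a rollout. The following is a minimal sketch only; the application name, image, and probe path are placeholders.

```yaml
# Minimal Deployment sketch: multiple replicas plus a readiness probe so that
# NGINX always has ready endpoints to proxy to during a rolling update.
# The name, image, and probe path are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1    # placeholder image tag
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /healthz  # placeholder probe path
              port: 80
```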
+ +{{< note >}}It is good practice to configure a [Readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) in the deployment so that a pod can report when it is ready to receive traffic. Note that NGINX Gateway Fabric will not add any endpoint to NGINX that is not ready.{{< /note >}} + +--- + +## Prerequisites + +- You have deployed your application as a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) +- The pods of the deployment belong to a [service](https://kubernetes.io/docs/concepts/services-networking/service/) so that Kubernetes creates an [endpoint](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/endpoints-v1/) for each pod. +- You have exposed the application to the clients via an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) resource that references that service. + +For example, an application can be exposed using a routing rule like below: + +```yaml +- matches: + - path: + type: PathPrefix + value: / + backendRefs: + - name: my-app + port: 80 +``` + +{{< note >}}See the [Cafe example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/cafe-example) for a basic example.{{< /note >}} + +The upgrade methods in the next sections cover: + +- Rolling deployment upgrades +- Blue-green deployments +- Canary releases + +--- + +## Rolling deployment upgrade + +To start a [rolling deployment upgrade](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment), you update the deployment to use the new version tag of the application. As a result, Kubernetes terminates the pods with the old version and create new ones. By default, Kubernetes also ensures that some number of pods always stay available during the upgrade. + +This upgrade will add new upstream servers to NGINX and remove the old ones. As long as the number of pods (ready endpoints) during an upgrade does not reach zero, NGINX will be able to proxy traffic, and therefore prevent any downtime. + +This method does not require you to update the **HTTPRoute**. + +--- + +## Blue-green deployments + +With this method, you deploy a new version of the application (blue version) as a separate deployment, while the old version (green) keeps running and handling client traffic. Next, you switch the traffic from the green version to the blue. If the blue works as expected, you terminate the green. Otherwise, you switch the traffic back to the green. + +There are two ways to switch the traffic: + +- Update the service selector to select the pods of the blue version instead of the green. As a result, NGINX Gateway Fabric removes the green upstream servers from NGINX and adds the blue ones. With this approach, it is not necessary to update the **HTTPRoute**. +- Create a separate service for the blue version and update the backend reference in the **HTTPRoute** to reference this service, which leads to the same result as with the previous option. + +--- + +## Canary releases + +Canary releases involve gradually introducing a new version of your application to a subset of nodes in a controlled manner, splitting the traffic between the old are new (canary) release. This allows for monitoring and testing the new release's performance and reliability before full deployment, helping to identify and address issues without impacting the entire user base. 
+ +To support canary releases, you can implement an approach with two deployments behind the same service (see [Canary deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#canary-deployment) in the Kubernetes documentation). However, this approach lacks precision for defining the traffic split between the old and the canary version. You can greatly influence it by controlling the number of pods (for example, four pods of the old version and one pod of the canary). However, note that NGINX Gateway Fabric uses [`random two least_conn`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#random) load balancing method, which doesn't guarantee an exact split based on the number of pods (80/20 in the given example). + +A more flexible and precise way to implement canary releases is to configure a traffic split in an **HTTPRoute**. In this case, you create a separate deployment for the new version with a separate service. For example, for the rule below, NGINX will proxy 95% of the traffic to the old version endpoints and only 5% to the new ones. + +```yaml +- matches: + - path: + type: PathPrefix + value: / + backendRefs: + - name: my-app-old + port: 80 + weight: 95 + - name: my-app-new + port: 80 + weight: 5 +``` + +{{< note >}}Every request coming from the same client won't necessarily be sent to the same backend. NGINX will independently split each request among the backend references.{{< /note >}} + +By updating the rule you can further increase the share of traffic the new version gets and finally completely switch to the new version: + +```yaml +- matches: + - path: + type: PathPrefix + value: / + backendRefs: + - name: my-app-old + port: 80 + weight: 0 + - name: my-app-new + port: 80 + weight: 1 +``` + +See the [Traffic splitting example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/traffic-splitting) from our repository. diff --git a/content/ngf/how-to/traffic-management/upstream-settings.md b/content/ngf/how-to/traffic-management/upstream-settings.md new file mode 100644 index 000000000..e53b11a56 --- /dev/null +++ b/content/ngf/how-to/traffic-management/upstream-settings.md @@ -0,0 +1,395 @@ +--- +title: "Upstream Settings Policy API" +weight: 900 +toc: true +docs: "DOCS-000" +--- + +Learn how to use the `UpstreamSettingsPolicy` API. + +## Overview + +The `UpstreamSettingsPolicy` API allows Application Developers to configure the behavior of a connection between NGINX and the upstream applications. + +The settings in `UpstreamSettingsPolicy` correspond to the following NGINX directives: + +- [`zone`]() +- [`keepalive`]() +- [`keepalive_requests`]() +- [`keepalive_time`]() +- [`keepalive_timeout`]() + +`UpstreamSettingsPolicy` is a [Direct Policy Attachment](https://gateway-api.sigs.k8s.io/reference/policy-attachment/) that can be applied to one or more services in the same namespace as the policy. +`UpstreamSettingsPolicies` can only be applied to HTTP or gRPC services, in other words, services that are referenced by an HTTPRoute or GRPCRoute. + +See the [custom policies]({{< relref "overview/custom-policies.md" >}}) document for more information on policies. + +This guide will show you how to use the `UpstreamSettingsPolicy` API to configure the upstream zone size and keepalives for your applications. + +For all the possible configuration options for `UpstreamSettingsPolicy`, see the [API reference]({{< relref "reference/api.md" >}}). 
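To give a sense of the shape of the resource before the walkthrough below, here is a combined sketch that sets an upstream zone size and enables keepalive connections for a single Service. The field names (`zoneSize`, `keepAlive.connections`, `targetRefs`) are written from the general shape of the API and should be verified against the API reference linked above.

```yaml
# UpstreamSettingsPolicy sketch: sets the upstream zone size and enables
# keepalive connections for the "coffee" Service used later in this guide.
# Verify the field names (zoneSize, keepAlive.connections) against the API reference.
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: coffee-upstream-settings
spec:
  zoneSize: 1m
  keepAlive:
    connections: 32
  targetRefs:
    - group: core
      kind: Service
      name: coffee
```

The walkthrough below creates equivalent policies step by step and shows how to confirm the resulting `zone` and `keepalive` directives in the generated NGINX configuration.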
+ +--- + +## Before you begin + +- [Install]({{< relref "/ngf/installation/" >}}) NGINX Gateway Fabric. +- Save the public IP address and port of NGINX Gateway Fabric into shell variables: + + ```text + GW_IP=XXX.YYY.ZZZ.III + GW_PORT= + ``` + +- Lookup the name of the NGINX Gateway Fabric pod and save into shell variable: + + ```text + NGF_POD_NAME= + ``` + + {{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}} + +--- + +## Setup + +Create the `coffee` and `tea` example applications: + +```yaml +kubectl apply -f - < 80/TCP 23h +service/tea ClusterIP 10.244.0.15 80/TCP 23h + +NAME READY STATUS RESTARTS AGE +pod/coffee-676c9f8944-n9g6n 1/1 Running 0 23h +pod/tea-6fbfdcb95d-cf84d 1/1 Running 0 23h +``` + +Create a Gateway: + +```yaml +kubectl apply -f - < +``` + +Next, verify that the policy has been applied to the `coffee` and `tea` upstreams by inspecting the NGINX configuration: + +```shell +kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T +``` + +You should see the `zone` directive in the `coffee` and `tea` upstreams both specify the size `1m`: + +```text +upstream default_coffee_80 { + random two least_conn; + zone default_coffee_80 1m; + + server 10.244.0.14:8080; +} + +upstream default_tea_80 { + random two least_conn; + zone default_tea_80 1m; + + server 10.244.0.15:8080; +} +``` + +## Enable keepalive connections + +To enable keepalive connections for the `coffee` service, create the following `UpstreamSettingsPolicy`: + +```yaml +kubectl apply -f - < +``` + +Next, verify that the policy has been applied to the `coffee` upstreams, by inspecting the NGINX configuration: + +```shell +kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T +``` + +You should see that the `coffee` upstream has the `keepalive` directive set to 32: + +```text +upstream default_coffee_80 { + random two least_conn; + zone default_coffee_80 1m; + + server 10.244.0.14:8080; + keepalive 32; +} +``` + +Notice, that the `tea` upstream does not contain the `keepalive` directive, since the `upstream-keepalives` policy does not target the `tea` service: + +```text +upstream default_tea_80 { + random two least_conn; + zone default_tea_80 1m; + + server 10.244.0.15:8080; +} +``` + +--- + +## Further reading + +- [Custom policies]({{< relref "overview/custom-policies.md" >}}): learn about how NGINX Gateway Fabric custom policies work. +- [API reference]({{< relref "reference/api.md" >}}): all configuration fields for the `UpstreamSettingsPolicy` API. From c84b94520a0d0f06787b7104d4c8dec4896d7829 Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:35:10 -0800 Subject: [PATCH 3/9] Update API --- content/ngf/reference/api.md | 327 ++++++++++++++++++++++++++++++++++- 1 file changed, 326 insertions(+), 1 deletion(-) diff --git a/content/ngf/reference/api.md b/content/ngf/reference/api.md index 74ea08d3c..77eafa4ac 100644 --- a/content/ngf/reference/api.md +++ b/content/ngf/reference/api.md @@ -10,6 +10,9 @@ NGINX Gateway API Reference
  • gateway.nginx.org/v1alpha1
  • +
  • +gateway.nginx.org/v1alpha2 +
  • gateway.nginx.org/v1alpha1

    @@ -683,6 +686,7 @@ UpstreamKeepAlive

    TargetRefs identifies API object(s) to apply the policy to. Objects must be in the same namespace as the policy. Support: Service

    +

    TargetRefs must be distinct. The name field must be unique for all targetRef entries in the UpstreamSettingsPolicy.

    @@ -1825,7 +1829,8 @@ and the status of the SnippetsFilter with respect to each controller.

    (Appears on: Telemetry, -Tracing) +Tracing, +Tracing)

    SpanAttribute is a key value pair to be added to a tracing span.

    @@ -2290,6 +2295,326 @@ UpstreamKeepAlive

    TargetRefs identifies API object(s) to apply the policy to. Objects must be in the same namespace as the policy. Support: Service

    +

    TargetRefs must be distinct. The name field must be unique for all targetRef entries in the UpstreamSettingsPolicy.

    + + + + +
    +

    gateway.nginx.org/v1alpha2

    +

    +

    Package v1alpha2 contains API Schema definitions for the +gateway.nginx.org API group.

    +

    +Resource Types: + +

    ObservabilityPolicy + +

    +

    +

    ObservabilityPolicy is a Direct Attached Policy. It provides a way to configure observability settings for +the NGINX Gateway Fabric data plane. Used in conjunction with the NginxProxy CRD that is attached to the +GatewayClass parametersRef.

    +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    +apiVersion
    +string
    + +gateway.nginx.org/v1alpha2 + +
    +kind
    +string +
    ObservabilityPolicy
    +metadata
    + + +Kubernetes meta/v1.ObjectMeta + + +
    +Refer to the Kubernetes API documentation for the fields of the +metadata field. +
    +spec
    + + +ObservabilityPolicySpec + + +
    +

    Spec defines the desired state of the ObservabilityPolicy.

    +
    +
    + + + + + + + + + +
    +tracing
    + + +Tracing + + +
    +(Optional) +

    Tracing allows for enabling and configuring tracing.

    +
    +targetRefs
    + + +[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference + + +
    +

    TargetRefs identifies the API object(s) to apply the policy to. +Objects must be in the same namespace as the policy. +Support: HTTPRoute, GRPCRoute.

    +

    TargetRefs must be distinct. This means that the multi-part key defined by kind and name must +be unique across all targetRef entries in the ObservabilityPolicy.

    +
    +
    +status
    + + +sigs.k8s.io/gateway-api/apis/v1alpha2.PolicyStatus + + +
    +

    Status defines the state of the ObservabilityPolicy.

    +
    +

    ObservabilityPolicySpec + +

    +

    +(Appears on: +ObservabilityPolicy) +

    +

    +

    ObservabilityPolicySpec defines the desired state of the ObservabilityPolicy.

    +

    + + + + + + + + + + + + + + + + + +
    FieldDescription
    +tracing
    + + +Tracing + + +
    +(Optional) +

    Tracing allows for enabling and configuring tracing.

    +
    +targetRefs
    + + +[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference + + +
    +

    TargetRefs identifies the API object(s) to apply the policy to. +Objects must be in the same namespace as the policy. +Support: HTTPRoute, GRPCRoute.

    +

    TargetRefs must be distinct. This means that the multi-part key defined by kind and name must +be unique across all targetRef entries in the ObservabilityPolicy.

    +
    +

    TraceContext +(string alias)

    +

    +

    +(Appears on: +Tracing) +

    +

    +

    TraceContext specifies how to propagate traceparent/tracestate headers.

    +

    + + + + + + + + + + + + + + + + +
    ValueDescription

    "extract"

    TraceContextExtract uses an existing trace context from the request, so that the identifiers +of a trace and the parent span are inherited from the incoming request.

    +

    "ignore"

    TraceContextIgnore skips context headers processing.

    +

    "inject"

    TraceContextInject adds a new context to the request, overwriting existing headers, if any.

    +

    "propagate"

    TraceContextPropagate updates the existing context (combines extract and inject).

    +
    +

    TraceStrategy +(string alias)

    +

    +

    +(Appears on: +Tracing) +

    +

    +

    TraceStrategy defines the tracing strategy.

    +

    + + + + + + + + + + + + +
    ValueDescription

    "parent"

    TraceStrategyParent enables tracing and only records spans if the parent span was sampled.

    +

    "ratio"

    TraceStrategyRatio enables ratio-based tracing, defaulting to 100% sampling rate.

    +
    +

    Tracing + +

    +

    +(Appears on: +ObservabilityPolicySpec) +

    +

    +

    Tracing allows for enabling and configuring OpenTelemetry tracing.

    +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + From 15dc652f32e9f7524e541abe48e4264d4e815f2d Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:44:21 -0800 Subject: [PATCH 4/9] more changes --- content/ngf/how-to/monitoring/prometheus.md | 2 +- content/ngf/how-to/monitoring/tracing.md | 10 +++------- content/ngf/overview/nginx-plus.md | 2 +- 3 files changed, 5 insertions(+), 9 deletions(-) diff --git a/content/ngf/how-to/monitoring/prometheus.md b/content/ngf/how-to/monitoring/prometheus.md index 180d1cd26..e1e38684e 100644 --- a/content/ngf/how-to/monitoring/prometheus.md +++ b/content/ngf/how-to/monitoring/prometheus.md @@ -83,7 +83,7 @@ NGINX Gateway Fabric provides a variety of metrics for monitoring and analyzing ### NGINX/NGINX Plus metrics -NGINX metrics cover specific NGINX operations such as the total number of accepted client connections. For a complete list of available NGINX/NGINX Plus metrics, refer to the [NGINX Prometheus Exporter developer docs](https://github.com/nginxinc/nginx-prometheus-exporter#exported-metrics). +NGINX metrics cover specific NGINX operations such as the total number of accepted client connections. For a complete list of available NGINX/NGINX Plus metrics, refer to the [NGINX Prometheus Exporter developer docs](https://github.com/nginx/nginx-prometheus-exporter#exported-metrics). These metrics use the `nginx_gateway_fabric` namespace and include the `class` label, indicating the NGINX Gateway class. For example, `nginx_gateway_fabric_connections_accepted{class="nginx"}`. diff --git a/content/ngf/how-to/monitoring/tracing.md b/content/ngf/how-to/monitoring/tracing.md index 8f42df39d..9113354f4 100644 --- a/content/ngf/how-to/monitoring/tracing.md +++ b/content/ngf/how-to/monitoring/tracing.md @@ -15,7 +15,7 @@ This guide explains how to enable tracing on HTTPRoutes in NGINX Gateway Fabric NGINX Gateway Fabric supports tracing using [OpenTelemetry](https://opentelemetry.io/). -The official [NGINX OpenTelemetry Module](https://github.com/nginxinc/nginx-otel) instruments the NGINX data plane to export traces to a configured collector. Tracing data can be used with an OpenTelemetry Protocol (OTLP) exporter, such as the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector). +The official [NGINX OpenTelemetry Module](https://github.com/nginxinc/nginx-otel) instruments the NGINX data plane to export traces to a configured collector. Tracing data can be used with an OpenTelemetry Protocol (OTLP) exporter, such as the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector). This collector can then export data to one or more upstream collectors like [Jaeger](https://www.jaegertracing.io/), [DataDog](https://docs.datadoghq.com/tracing/), and many others. This is called the [Agent model](https://opentelemetry.io/docs/collector/deployment/agent/). @@ -64,8 +64,6 @@ kubectl port-forward -n tracing svc/jaeger 16686:16686 & Visit [http://127.0.0.1:16686](http://127.0.0.1:16686) to view the dashboard. ---- - ## Enable tracing To enable tracing, you must configure two resources: @@ -104,7 +102,7 @@ The span attribute will be added to all tracing spans. 
To install: ```shell -helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values.yaml +helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values.yaml ``` You should see the following configuration: @@ -179,8 +177,6 @@ Save the public IP address and port of NGINX Gateway Fabric into shell variables You can now create the application, route, and tracing policy. ---- - ### Create the application and route Create the basic **coffee** application: @@ -283,7 +279,7 @@ To enable tracing for the coffee HTTPRoute, create the following policy: ```yaml kubectl apply -f - <}}) shows real-time metrics and information about your server infrastructure. - **Dynamic upstream configuration**: NGINX Plus can dynamically reconfigure upstream servers when applications in Kubernetes scale up and down, preventing the need for an NGINX reload. - **Support**: With an NGINX Plus license, you can take advantage of full [support](https://my.f5.com/manage/s/article/K000140156/) from NGINX, Inc. From d11b649ca90421f96a0caf1e76600c5bdf9f1fdd Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:50:19 -0800 Subject: [PATCH 5/9] more changes --- .../traffic-management/upgrade-apps-without-downtime.md | 2 +- content/ngf/how-to/traffic-management/upstream-settings.md | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md b/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md index cb3a11fba..80c959de4 100644 --- a/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md +++ b/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md @@ -13,7 +13,7 @@ Learn how to use NGINX Gateway Fabric to upgrade applications without downtime. ## Overview -{{< note >}} See the [Architecture document]({{< relref "/overview/gateway-architecture.md" >}}) to learn more about NGINX Gateway Fabric architecture.{{< /note >}} +{{< note >}} See the [Architecture document]({{< relref "/ngf/overview/gateway-architecture.md" >}}) to learn more about NGINX Gateway Fabric architecture.{{< /note >}} NGINX Gateway Fabric allows upgrading applications without downtime. To understand the upgrade methods, you need to be familiar with the NGINX features that help prevent application downtime: Graceful configuration reloads and upstream server updates. diff --git a/content/ngf/how-to/traffic-management/upstream-settings.md b/content/ngf/how-to/traffic-management/upstream-settings.md index e53b11a56..23b97f491 100644 --- a/content/ngf/how-to/traffic-management/upstream-settings.md +++ b/content/ngf/how-to/traffic-management/upstream-settings.md @@ -26,7 +26,7 @@ See the [custom policies]({{< relref "overview/custom-policies.md" >}}) document This guide will show you how to use the `UpstreamSettingsPolicy` API to configure the upstream zone size and keepalives for your applications. -For all the possible configuration options for `UpstreamSettingsPolicy`, see the [API reference]({{< relref "reference/api.md" >}}). +For all the possible configuration options for `UpstreamSettingsPolicy`, see the [API reference]({{< relref "/ngf/reference/api.md" >}}). --- @@ -391,5 +391,5 @@ upstream default_tea_80 { ## Further reading -- [Custom policies]({{< relref "overview/custom-policies.md" >}}): learn about how NGINX Gateway Fabric custom policies work. 
-- [API reference]({{< relref "reference/api.md" >}}): all configuration fields for the `UpstreamSettingsPolicy` API. +- [Custom policies]({{< relref "/ngf/overview/custom-policies.md" >}}): learn about how NGINX Gateway Fabric custom policies work. +- [API reference]({{< relref "/ngf/reference/api.md" >}}): all configuration fields for the `UpstreamSettingsPolicy` API. From 7777bcc3d93f47aaf66209dcf663aef16be51f2e Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:52:23 -0800 Subject: [PATCH 6/9] fix link --- content/ngf/how-to/traffic-management/upstream-settings.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ngf/how-to/traffic-management/upstream-settings.md b/content/ngf/how-to/traffic-management/upstream-settings.md index 23b97f491..0106fd2cd 100644 --- a/content/ngf/how-to/traffic-management/upstream-settings.md +++ b/content/ngf/how-to/traffic-management/upstream-settings.md @@ -22,7 +22,7 @@ The settings in `UpstreamSettingsPolicy` correspond to the following NGINX direc `UpstreamSettingsPolicy` is a [Direct Policy Attachment](https://gateway-api.sigs.k8s.io/reference/policy-attachment/) that can be applied to one or more services in the same namespace as the policy. `UpstreamSettingsPolicies` can only be applied to HTTP or gRPC services, in other words, services that are referenced by an HTTPRoute or GRPCRoute. -See the [custom policies]({{< relref "overview/custom-policies.md" >}}) document for more information on policies. +See the [custom policies]({{< relref "/ngf/overview/custom-policies.md" >}}) document for more information on policies. This guide will show you how to use the `UpstreamSettingsPolicy` API to configure the upstream zone size and keepalives for your applications. From 34e336d018ac079459f1f50caed5563676809e97 Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 14:56:58 -0800 Subject: [PATCH 7/9] dashes --- content/ngf/how-to/monitoring/tracing.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/content/ngf/how-to/monitoring/tracing.md b/content/ngf/how-to/monitoring/tracing.md index 9113354f4..2d22f1e22 100644 --- a/content/ngf/how-to/monitoring/tracing.md +++ b/content/ngf/how-to/monitoring/tracing.md @@ -64,6 +64,8 @@ kubectl port-forward -n tracing svc/jaeger 16686:16686 & Visit [http://127.0.0.1:16686](http://127.0.0.1:16686) to view the dashboard. +--- + ## Enable tracing To enable tracing, you must configure two resources: @@ -177,6 +179,8 @@ Save the public IP address and port of NGINX Gateway Fabric into shell variables You can now create the application, route, and tracing policy. 
+--- + ### Create the application and route Create the basic **coffee** application: From c43cd730fe0c697193c1f667b2bd667304b5c7a7 Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 15:07:18 -0800 Subject: [PATCH 8/9] fixes --- content/ngf/get-started.md | 38 ++--- content/ngf/how-to/monitoring/prometheus.md | 50 +++---- content/ngf/how-to/monitoring/tracing.md | 48 +++--- .../traffic-management/advanced-routing.md | 11 +- .../traffic-management/client-settings.md | 18 +-- .../request-response-headers.md | 16 +- .../ngf/how-to/traffic-management/snippets.md | 9 +- .../upgrade-apps-without-downtime.md | 140 ------------------ .../how-to/upgrade-apps-without-downtime.md | 46 +++--- .../ngf/installation/building-the-images.md | 13 +- .../ngf/installation/installing-ngf/helm.md | 12 +- .../installation/installing-ngf/manifests.md | 1 + content/ngf/installation/nginx-plus-jwt.md | 8 +- content/ngf/overview/gateway-architecture.md | 2 +- 14 files changed, 137 insertions(+), 275 deletions(-) delete mode 100644 content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md diff --git a/content/ngf/get-started.md b/content/ngf/get-started.md index 353985895..d54afedb2 100644 --- a/content/ngf/get-started.md +++ b/content/ngf/get-started.md @@ -36,14 +36,14 @@ Create the file _cluster-config.yaml_ with the following contents, noting the hi apiVersion: kind.x-k8s.io/v1alpha4 kind: Cluster nodes: -- role: control-plane - extraPortMappings: - - containerPort: 31437 - hostPort: 8080 - protocol: TCP - - containerPort: 31438 - hostPort: 8443 - protocol: TCP + - role: control-plane + extraPortMappings: + - containerPort: 31437 + hostPort: 8080 + protocol: TCP + - containerPort: 31438 + hostPort: 8443 + protocol: TCP ``` {{< warning >}} @@ -73,7 +73,7 @@ Thanks for using kind! 😊 ``` {{< note >}} -If you have cloned [the NGINX Gateway Fabric repository](https://github.com/nginx/nginx-gateway-fabric/tree/main), you can also create a kind cluster from the root folder with the following *make* command: +If you have cloned [the NGINX Gateway Fabric repository](https://github.com/nginx/nginx-gateway-fabric/tree/main), you can also create a kind cluster from the root folder with the following _make_ command: ```shell make create-kind-cluster @@ -166,16 +166,16 @@ spec: app.kubernetes.io/name: nginx-gateway-fabric app.kubernetes.io/instance: ngf ports: - - name: http - port: 80 - protocol: TCP - targetPort: 80 - nodePort: 31437 - - name: https - port: 443 - protocol: TCP - targetPort: 443 - nodePort: 31438 + - name: http + port: 80 + protocol: TCP + targetPort: 80 + nodePort: 31437 + - name: https + port: 443 + protocol: TCP + targetPort: 443 + nodePort: 31438 ``` Apply it using `kubectl`: diff --git a/content/ngf/how-to/monitoring/prometheus.md b/content/ngf/how-to/monitoring/prometheus.md index e1e38684e..c50d8a8d8 100644 --- a/content/ngf/how-to/monitoring/prometheus.md +++ b/content/ngf/how-to/monitoring/prometheus.md @@ -136,18 +136,18 @@ If you need to disable metrics: 1. Set the `-metrics-disable` [command-line argument]({{< ref "/ngf/reference/cli-help.md">}}) to `true` in the NGINX Gateway Fabric Pod's configuration. Remove any other `-metrics-*` arguments. 2. In the Pod template for NGINX Gateway Fabric, delete the metrics port entry from the container ports list: - ```yaml - - name: metrics - containerPort: 9113 - ``` + ```yaml + - name: metrics + containerPort: 9113 + ``` 3. 
Also, remove the following annotations from the NGINX Gateway Fabric Pod template: - ```yaml - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9113" - ``` + ```yaml + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "9113" + ``` #### Changing the default port @@ -156,19 +156,19 @@ To change the default port for metrics: 1. Update the `-metrics-port` [command-line argument]({{< ref "/ngf/reference/cli-help.md">}}) in the NGINX Gateway Fabric Pod's configuration to your chosen port number. 2. In the Pod template, change the metrics port entry to reflect the new port: - ```yaml - - name: metrics - containerPort: - ``` + ```yaml + - name: metrics + containerPort: + ``` 3. Modify the `prometheus.io/port` annotation in the Pod template to match the new port: - ```yaml - annotations: - <...> - prometheus.io/port: "" - <...> - ``` + ```yaml + annotations: + <...> + prometheus.io/port: "" + <...> + ``` --- @@ -180,9 +180,9 @@ For enhanced security with HTTPS: 2. Add an HTTPS scheme annotation to the Pod template: - ```yaml - annotations: - <...> - prometheus.io/scheme: "https" - <...> - ``` + ```yaml + annotations: + <...> + prometheus.io/scheme: "https" + <...> + ``` diff --git a/content/ngf/how-to/monitoring/tracing.md b/content/ngf/how-to/monitoring/tracing.md index 2d22f1e22..ed7635670 100644 --- a/content/ngf/how-to/monitoring/tracing.md +++ b/content/ngf/how-to/monitoring/tracing.md @@ -123,8 +123,8 @@ spec: exporter: endpoint: otel-collector.tracing.svc:4317 spanAttributes: - - key: cluster-attribute-key - value: cluster-attribute-value + - key: cluster-attribute-key + value: cluster-attribute-value ``` ```shell @@ -144,24 +144,24 @@ spec: name: ngf-proxy-config status: conditions: - - lastTransitionTime: "2024-05-22T15:18:35Z" - message: GatewayClass is accepted - observedGeneration: 1 - reason: Accepted - status: "True" - type: Accepted - - lastTransitionTime: "2024-05-22T15:18:35Z" - message: Gateway API CRD versions are supported - observedGeneration: 1 - reason: SupportedVersion - status: "True" - type: SupportedVersion - - lastTransitionTime: "2024-05-22T15:18:35Z" - message: parametersRef resource is resolved - observedGeneration: 1 - reason: ResolvedRefs - status: "True" - type: ResolvedRefs + - lastTransitionTime: "2024-05-22T15:18:35Z" + message: GatewayClass is accepted + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + - lastTransitionTime: "2024-05-22T15:18:35Z" + message: Gateway API CRD versions are supported + observedGeneration: 1 + reason: SupportedVersion + status: "True" + type: SupportedVersion + - lastTransitionTime: "2024-05-22T15:18:35Z" + message: parametersRef resource is resolved + observedGeneration: 1 + reason: ResolvedRefs + status: "True" + type: ResolvedRefs ``` If you already have NGINX Gateway Fabric installed, then you can create the `NginxProxy` resource and link it to the GatewayClass `parametersRef`: @@ -172,10 +172,10 @@ kubectl edit gatewayclasses.gateway.networking.k8s.io nginx Save the public IP address and port of NGINX Gateway Fabric into shell variables: - ```text - GW_IP=XXX.YYY.ZZZ.III - GW_PORT= - ``` +```text +GW_IP=XXX.YYY.ZZZ.III +GW_PORT= +``` You can now create the application, route, and tracing policy. 
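For convenience, these values can also be pulled straight from the cluster instead of being typed by hand. The following is a minimal sketch only: the `nginx-gateway` namespace, the `ngf-nginx-gateway-fabric` Service name, and the assumption of a LoadBalancer Service are not taken from this patch, so adjust them to match your installation.

```shell
# Sketch: assumes a Helm release named "ngf" in the "nginx-gateway" namespace,
# exposed through a LoadBalancer Service; adjust names to your environment.
GW_IP=$(kubectl get svc -n nginx-gateway ngf-nginx-gateway-fabric \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
GW_PORT=$(kubectl get svc -n nginx-gateway ngf-nginx-gateway-fabric \
  -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
echo "Gateway reachable at $GW_IP:$GW_PORT"
```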
diff --git a/content/ngf/how-to/traffic-management/advanced-routing.md b/content/ngf/how-to/traffic-management/advanced-routing.md index d98df2a21..743fd824b 100644 --- a/content/ngf/how-to/traffic-management/advanced-routing.md +++ b/content/ngf/how-to/traffic-management/advanced-routing.md @@ -21,7 +21,6 @@ The following image shows the traffic flow that we will be creating with these r The goal is to create a set of rules that will result in client requests being sent to specific backends based on the request attributes. In this diagram, we have two versions of the `coffee` service. Traffic for v1 needs to be directed to the old application, while traffic for v2 needs to be directed towards the new application. We also have two `tea` services, one that handles GET operations and one that handles POST operations. Both the `tea` and `coffee` applications share the same Gateway. - --- ## Before you begin @@ -29,10 +28,10 @@ The goal is to create a set of rules that will result in client requests being s - [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric. - Save the public IP address and port of NGINX Gateway Fabric into shell variables: - ```text - GW_IP=XXX.YYY.ZZZ.III - GW_PORT= - ``` + ```text + GW_IP=XXX.YYY.ZZZ.III + GW_PORT= + ``` {{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for. {{< /note >}} @@ -117,7 +116,7 @@ This HTTPRoute has a few important properties: - The `parentRefs` references the gateway resource that we created, and specifically defines the `http` listener to attach to, via the `sectionName` field. - `cafe.example.com` is the hostname that is matched for all requests to the backends defined in this HTTPRoute. - The first rule defines that all requests with the path prefix `/coffee` and no other matching conditions are sent to the `coffee-v1` Service. -- The second rule defines two matching conditions. If *either* of these conditions match, requests are forwarded to the `coffee-v2` Service: +- The second rule defines two matching conditions. 
If _either_ of these conditions match, requests are forwarded to the `coffee-v2` Service: - Request with the path prefix `/coffee` and header `version=v2` - Request with the path prefix `/coffee` and the query parameter `TEST=v2` diff --git a/content/ngf/how-to/traffic-management/client-settings.md b/content/ngf/how-to/traffic-management/client-settings.md index 5af006e39..615bca682 100644 --- a/content/ngf/how-to/traffic-management/client-settings.md +++ b/content/ngf/how-to/traffic-management/client-settings.md @@ -17,11 +17,11 @@ The `ClientSettingsPolicy` API allows Cluster Operators and Application Develope The settings in `ClientSettingsPolicy` correspond to the following NGINX directives: -- [`client_max_body_size`]() -- [`client_body_timeout`]() -- [`keepalive_requests`]() -- [`keepalive_time`]() -- [`keepalive_timeout`]() +- [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) +- [`client_body_timeout`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout) +- [`keepalive_requests`](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests) +- [`keepalive_time`](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_time) +- [`keepalive_timeout`](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) `ClientSettingsPolicy` is an [Inherited PolicyAttachment](https://gateway-api.sigs.k8s.io/reference/policy-attachment/) that can be applied to a Gateway, HTTPRoute, or GRPCRoute in the same namespace as the `ClientSettingsPolicy`. @@ -41,7 +41,7 @@ For all the possible configuration options for `ClientSettingsPolicy`, see the [ - [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric. - Save the public IP address and port of NGINX Gateway Fabric into shell variables: - ```text + ```text GW_IP=XXX.YYY.ZZZ.III GW_PORT= ``` @@ -58,13 +58,13 @@ For all the possible configuration options for `ClientSettingsPolicy`, see the [ ```yaml kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/client-settings-policy/gateway.yaml - ``` + ``` - Create HTTPRoutes for the coffee and tea applications: ```yaml kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/client-settings-policy/httproutes.yaml - ``` + ``` - Test the configuration: @@ -87,7 +87,7 @@ For all the possible configuration options for `ClientSettingsPolicy`, see the [ ```shell curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea - ``` + ``` This request should receive a response from the tea Pod: diff --git a/content/ngf/how-to/traffic-management/request-response-headers.md b/content/ngf/how-to/traffic-management/request-response-headers.md index 14931fae5..e6e9ad209 100644 --- a/content/ngf/how-to/traffic-management/request-response-headers.md +++ b/content/ngf/how-to/traffic-management/request-response-headers.md @@ -22,10 +22,10 @@ This guide describes how to configure the headers application to modify the head - [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric. 
- Save the public IP address and port of NGINX Gateway Fabric into shell variables: - ```text - GW_IP=XXX.YYY.ZZZ.III - GW_PORT= - ``` + ```text + GW_IP=XXX.YYY.ZZZ.III + GW_PORT= + ``` {{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for .{{< /note >}} @@ -132,9 +132,9 @@ This HTTPRoute has a few important properties: - The `match` rule defines that all requests with the path prefix `/headers` are sent to the `headers` Service. - It has a `RequestHeaderModifier` filter defined for the path prefix `/headers`. This filter: - 1. Sets the value for header `My-Overwrite-Header` to `this-is-the-only-value`. - 1. Appends the value `compress` to the `Accept-Encoding` header and `this-is-an-appended-value` to the `My-Cool-header`. - 1. Removes `User-Agent` header. + 1. Sets the value for header `My-Overwrite-Header` to `this-is-the-only-value`. + 1. Appends the value `compress` to the `Accept-Encoding` header and `this-is-an-appended-value` to the `My-Cool-header`. + 1. Removes `User-Agent` header. --- @@ -352,7 +352,7 @@ X-Header-Set: overwritten-value ok ``` -In the output above you can notice the modified response headers as the `X-Header-Unmodified` remains unchanged as we did not include it in the filter and `X-Header-Remove` header is absent. The header `X-Header-Add` gets appended with the new value and `X-Header-Set` gets overwritten to `overwritten-value` as defined in the *HttpRoute*. +In the output above you can notice the modified response headers as the `X-Header-Unmodified` remains unchanged as we did not include it in the filter and `X-Header-Remove` header is absent. The header `X-Header-Add` gets appended with the new value and `X-Header-Set` gets overwritten to `overwritten-value` as defined in the _HttpRoute_. --- diff --git a/content/ngf/how-to/traffic-management/snippets.md b/content/ngf/how-to/traffic-management/snippets.md index 6958c097e..bf995b7c6 100644 --- a/content/ngf/how-to/traffic-management/snippets.md +++ b/content/ngf/how-to/traffic-management/snippets.md @@ -59,13 +59,14 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter ## Setup - To enable Snippets, [install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric with these modifications: + - Using Helm: set the `nginxGateway.snippetsFilters.enable=true` Helm value. - Using Kubernetes manifests: set the `--snippets-filters` flag in the nginx-gateway container argument, add `snippetsfilters` to the RBAC rules with verbs `list` and `watch`, and add `snippetsfilters/status` to the RBAC rules with verb `update`. See this [example manifest](https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/main/deploy/snippets-filters/deploy.yaml) for clarification. 
- Save the public IP address and port of NGINX Gateway Fabric into shell variables: - ```text + ```text GW_IP= GW_PORT= ``` @@ -82,13 +83,13 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter ```yaml kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/snippets-filter/gateway.yaml - ``` + ``` - Create HTTPRoutes for the coffee and tea applications: ```yaml kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.1/examples/snippets-filter/httproutes.yaml - ``` + ``` - Test the configuration: @@ -111,7 +112,7 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter ```shell curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea - ``` + ``` This request should receive a response from the tea Pod: diff --git a/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md b/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md deleted file mode 100644 index 80c959de4..000000000 --- a/content/ngf/how-to/traffic-management/upgrade-apps-without-downtime.md +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: "Upgrade applications without downtime" -toc: true -weight: 300 -type: how-to -product: NGF -docs: "DOCS-1420" ---- - -Learn how to use NGINX Gateway Fabric to upgrade applications without downtime. - ---- - -## Overview - -{{< note >}} See the [Architecture document]({{< relref "/ngf/overview/gateway-architecture.md" >}}) to learn more about NGINX Gateway Fabric architecture.{{< /note >}} - -NGINX Gateway Fabric allows upgrading applications without downtime. To understand the upgrade methods, you need to be familiar with the NGINX features that help prevent application downtime: Graceful configuration reloads and upstream server updates. - ---- - -### Graceful configuration reloads - -If a relevant gateway API or built-in Kubernetes resource is changed, NGINX Gateway Fabric will update NGINX by regenerating the NGINX configuration. NGINX Gateway Fabric then sends a reload signal to the master NGINX process to apply the new configuration. - -We call such an operation a "reload", during which client requests are not dropped - which defines it as a graceful reload. - -This process is further explained in the [NGINX configuration documentation](https://nginx.org/en/docs/control.html?#reconfiguration). - ---- - -### Upstream server updates - -Endpoints frequently change during application upgrades: Kubernetes creates pods for the new version of an application and removes the old ones, creating and removing the respective endpoints as well. - -NGINX Gateway Fabric detects changes to endpoints by watching their corresponding [EndpointSlices](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/). - -In an NGINX configuration, a service is represented as an [upstream](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream), and an endpoint as an [upstream server](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server). - -Adding and removing endpoints are two of the most common cases: - -- If an endpoint is added, NGINX Gateway Fabric adds an upstream server to NGINX that corresponds to the endpoint, then reloads NGINX. Next, NGINX will start proxying traffic to that endpoint. -- If an endpoint is removed, NGINX Gateway Fabric removes the corresponding upstream server from NGINX. After a reload, NGINX will stop proxying traffic to that server. 
However, it will finish proxying any pending requests to that server before switching to another endpoint. - -As long as you have more than one endpoint ready, clients won't experience downtime during upgrades. - -{{< note >}}It is good practice to configure a [Readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) in the deployment so that a pod can report when it is ready to receive traffic. Note that NGINX Gateway Fabric will not add any endpoint to NGINX that is not ready.{{< /note >}} - ---- - -## Prerequisites - -- You have deployed your application as a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) -- The pods of the deployment belong to a [service](https://kubernetes.io/docs/concepts/services-networking/service/) so that Kubernetes creates an [endpoint](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/endpoints-v1/) for each pod. -- You have exposed the application to the clients via an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) resource that references that service. - -For example, an application can be exposed using a routing rule like below: - -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app - port: 80 -``` - -{{< note >}}See the [Cafe example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/cafe-example) for a basic example.{{< /note >}} - -The upgrade methods in the next sections cover: - -- Rolling deployment upgrades -- Blue-green deployments -- Canary releases - ---- - -## Rolling deployment upgrade - -To start a [rolling deployment upgrade](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment), you update the deployment to use the new version tag of the application. As a result, Kubernetes terminates the pods with the old version and create new ones. By default, Kubernetes also ensures that some number of pods always stay available during the upgrade. - -This upgrade will add new upstream servers to NGINX and remove the old ones. As long as the number of pods (ready endpoints) during an upgrade does not reach zero, NGINX will be able to proxy traffic, and therefore prevent any downtime. - -This method does not require you to update the **HTTPRoute**. - ---- - -## Blue-green deployments - -With this method, you deploy a new version of the application (blue version) as a separate deployment, while the old version (green) keeps running and handling client traffic. Next, you switch the traffic from the green version to the blue. If the blue works as expected, you terminate the green. Otherwise, you switch the traffic back to the green. - -There are two ways to switch the traffic: - -- Update the service selector to select the pods of the blue version instead of the green. As a result, NGINX Gateway Fabric removes the green upstream servers from NGINX and adds the blue ones. With this approach, it is not necessary to update the **HTTPRoute**. -- Create a separate service for the blue version and update the backend reference in the **HTTPRoute** to reference this service, which leads to the same result as with the previous option. - ---- - -## Canary releases - -Canary releases involve gradually introducing a new version of your application to a subset of nodes in a controlled manner, splitting the traffic between the old are new (canary) release. 
This allows for monitoring and testing the new release's performance and reliability before full deployment, helping to identify and address issues without impacting the entire user base. - -To support canary releases, you can implement an approach with two deployments behind the same service (see [Canary deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#canary-deployment) in the Kubernetes documentation). However, this approach lacks precision for defining the traffic split between the old and the canary version. You can greatly influence it by controlling the number of pods (for example, four pods of the old version and one pod of the canary). However, note that NGINX Gateway Fabric uses [`random two least_conn`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#random) load balancing method, which doesn't guarantee an exact split based on the number of pods (80/20 in the given example). - -A more flexible and precise way to implement canary releases is to configure a traffic split in an **HTTPRoute**. In this case, you create a separate deployment for the new version with a separate service. For example, for the rule below, NGINX will proxy 95% of the traffic to the old version endpoints and only 5% to the new ones. - -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app-old - port: 80 - weight: 95 - - name: my-app-new - port: 80 - weight: 5 -``` - -{{< note >}}Every request coming from the same client won't necessarily be sent to the same backend. NGINX will independently split each request among the backend references.{{< /note >}} - -By updating the rule you can further increase the share of traffic the new version gets and finally completely switch to the new version: - -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app-old - port: 80 - weight: 0 - - name: my-app-new - port: 80 - weight: 1 -``` - -See the [Traffic splitting example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/traffic-splitting) from our repository. diff --git a/content/ngf/how-to/upgrade-apps-without-downtime.md b/content/ngf/how-to/upgrade-apps-without-downtime.md index 6c0865ca8..e959a0f82 100644 --- a/content/ngf/how-to/upgrade-apps-without-downtime.md +++ b/content/ngf/how-to/upgrade-apps-without-downtime.md @@ -58,12 +58,12 @@ For example, an application can be exposed using a routing rule like below: ```yaml - matches: - - path: - type: PathPrefix - value: / + - path: + type: PathPrefix + value: / backendRefs: - - name: my-app - port: 80 + - name: my-app + port: 80 ``` {{< note >}} See the [Cafe example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/cafe-example) for a basic example. {{< /note >}} @@ -107,16 +107,16 @@ A more flexible and precise way to implement canary releases is to configure a t ```yaml - matches: - - path: - type: PathPrefix - value: / + - path: + type: PathPrefix + value: / backendRefs: - - name: my-app-old - port: 80 - weight: 95 - - name: my-app-new - port: 80 - weight: 5 + - name: my-app-old + port: 80 + weight: 95 + - name: my-app-new + port: 80 + weight: 5 ``` {{< note >}} Every request coming from the same client won't necessarily be sent to the same backend. NGINX will independently split each request among the backend references. 
{{< /note >}} @@ -125,16 +125,16 @@ By updating the rule you can further increase the share of traffic the new versi ```yaml - matches: - - path: - type: PathPrefix - value: / + - path: + type: PathPrefix + value: / backendRefs: - - name: my-app-old - port: 80 - weight: 0 - - name: my-app-new - port: 80 - weight: 1 + - name: my-app-old + port: 80 + weight: 0 + - name: my-app-new + port: 80 + weight: 1 ``` See the [Traffic splitting example](https://github.com/nginx/nginx-gateway-fabric/tree/v1.6.1/examples/traffic-splitting) from our repository. diff --git a/content/ngf/installation/building-the-images.md b/content/ngf/installation/building-the-images.md index 5386eda7e..3975aec8c 100644 --- a/content/ngf/installation/building-the-images.md +++ b/content/ngf/installation/building-the-images.md @@ -37,17 +37,18 @@ If building the NGINX Plus image, you will also need a valid NGINX Plus license ``` 1. Build the images: + - To build both the NGINX Gateway Fabric and NGINX images: - ```makefile - make PREFIX=myregistry.example.com/nginx-gateway-fabric build-prod-images - ``` + ```makefile + make PREFIX=myregistry.example.com/nginx-gateway-fabric build-prod-images + ``` - To build both the NGINX Gateway Fabric and NGINX Plus images: - ```makefile - make PREFIX=myregistry.example.com/nginx-gateway-fabric build-prod-images-with-plus - ``` + ```makefile + make PREFIX=myregistry.example.com/nginx-gateway-fabric build-prod-images-with-plus + ``` - To build just the NGINX Gateway Fabric image: diff --git a/content/ngf/installation/installing-ngf/helm.md b/content/ngf/installation/installing-ngf/helm.md index 17b782f4c..3d15c0869 100644 --- a/content/ngf/installation/installing-ngf/helm.md +++ b/content/ngf/installation/installing-ngf/helm.md @@ -24,6 +24,7 @@ To complete this guide, you'll need to install: - [Helm 3.0 or later](https://helm.sh/docs/intro/install/), for deploying and managing applications on Kubernetes. {{< important >}} If you’d like to use NGINX Plus, some additional setup is also required: {{}} +
    NGINX Plus JWT setup @@ -129,13 +130,13 @@ helm install ngf . --set nginx.image.repository=private-registry.nginx.com/nginx {{}} - `ngf` is the name of the release, and can be changed to any name you want. This name is added as a prefix to the Deployment name. +`ngf` is the name of the release, and can be changed to any name you want. This name is added as a prefix to the Deployment name. - To wait for the Deployment to be ready, you can either add the `--wait` flag to the `helm install` command, or run the following after installing: +To wait for the Deployment to be ready, you can either add the `--wait` flag to the `helm install` command, or run the following after installing: - ```shell - kubectl wait --timeout=5m -n nginx-gateway deployment/ngf-nginx-gateway-fabric --for=condition=Available - ``` +```shell +kubectl wait --timeout=5m -n nginx-gateway deployment/ngf-nginx-gateway-fabric --for=condition=Available +``` --- @@ -361,7 +362,6 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K - {{< include "/ngf/installation/uninstall-gateway-api-resources.md" >}} - --- ## Additional configuration diff --git a/content/ngf/installation/installing-ngf/manifests.md b/content/ngf/installation/installing-ngf/manifests.md index a04a237ab..9726af98c 100644 --- a/content/ngf/installation/installing-ngf/manifests.md +++ b/content/ngf/installation/installing-ngf/manifests.md @@ -23,6 +23,7 @@ To complete this guide, you'll need to install: - [kubectl](https://kubernetes.io/docs/tasks/tools/), a command-line interface for managing Kubernetes clusters. {{< important >}} If you’d like to use NGINX Plus, some additional setup is also required: {{}} +
    NGINX Plus JWT setup diff --git a/content/ngf/installation/nginx-plus-jwt.md b/content/ngf/installation/nginx-plus-jwt.md index 8e6d8a106..6f456cb00 100644 --- a/content/ngf/installation/nginx-plus-jwt.md +++ b/content/ngf/installation/nginx-plus-jwt.md @@ -159,10 +159,10 @@ You also need to define the proper volume mount to mount the Secrets to the ngin - name: nginx-plus-usage-certs projected: sources: - - secret: - name: nim-ca - - secret: - name: nim-client + - secret: + name: nim-ca + - secret: + name: nim-client ``` and the following volume mounts to the `nginx` container: diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md index 25d706890..8cb041f4e 100644 --- a/content/ngf/overview/gateway-architecture.md +++ b/content/ngf/overview/gateway-architecture.md @@ -89,7 +89,7 @@ The following list describes the connections, preceeded by their types in parent 1. (Signal) To reload NGINX, _NGF_ sends the [reload signal](https://nginx.org/en/docs/control.html) to the **NGINX master**. 1. (File I/O) - Write: The _NGINX master_ writes its PID to the `nginx.pid` file stored in the `nginx-run` volume. - - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload. These files, certificates, and keys are stored in the `nginx-conf` and `nginx-secrets` volumes that are mounted to both the `nginx-gateway` and `nginx` containers. + - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload. These files, certificates, and keys are stored in the `nginx-conf` and `nginx-secrets` volumes that are mounted to both the `nginx-gateway` and `nginx` containers. 1. (File I/O) - Write: The _NGINX master_ writes to the auxiliary Unix sockets folder, which is located in the `/var/run/nginx` directory. From 87f4b87a1e6a2c496ef299d1d6ff3816d85457db Mon Sep 17 00:00:00 2001 From: Luca Comellini Date: Thu, 6 Feb 2025 16:01:27 -0800 Subject: [PATCH 9/9] add type and product --- content/ngf/how-to/traffic-management/upstream-settings.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/ngf/how-to/traffic-management/upstream-settings.md b/content/ngf/how-to/traffic-management/upstream-settings.md index 0106fd2cd..08c815cc0 100644 --- a/content/ngf/how-to/traffic-management/upstream-settings.md +++ b/content/ngf/how-to/traffic-management/upstream-settings.md @@ -2,6 +2,8 @@ title: "Upstream Settings Policy API" weight: 900 toc: true +type: how-to +product: NGF docs: "DOCS-000" ---
| Field | Type | Description |
|-------|------|-------------|
| `strategy` | `TraceStrategy` | Strategy defines whether tracing is ratio-based or parent-based. |
| `ratio` | `int32` (optional) | Ratio is the percentage of traffic that should be sampled, as an integer from 0 to 100. By default, 100% of HTTP requests are traced. Not applicable for parent-based tracing. If ratio is set to 0, tracing is disabled. |
| `context` | `TraceContext` (optional) | Context specifies how to propagate traceparent/tracestate headers. Default: https://nginx.org/en/docs/ngx_otel_module.html#otel_trace_context |
| `spanName` | `string` (optional) | SpanName defines the name of the OTel span. By default, it is the name of the location for a request. If specified, it applies to all locations created for a route. Format: all `"` characters must be escaped, the name must not contain `$`, and it must not end with an unescaped `\`. Examples of invalid names: `some-$value`, `quoted-"value"-name`, `unescaped\`. |
| `spanAttributes` | `[]SpanAttribute` (optional) | SpanAttributes are custom key/value attributes that are added to each span. |
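Taken together, these fields shape the `tracing` block of an NGINX Gateway Fabric `ObservabilityPolicy`. The manifest below is a minimal sketch rather than a definitive example: the `gateway.nginx.org/v1alpha2` API version, the policy name, the targeted HTTPRoute, and the attribute values are assumptions based on the tracing guide this patch touches, so verify them against the API reference before use.

```yaml
# Sketch only: names and values are illustrative, not taken from this patch.
apiVersion: gateway.nginx.org/v1alpha2
kind: ObservabilityPolicy
metadata:
  name: coffee                  # illustrative policy name
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: coffee              # assumed route from the tracing guide
  tracing:
    strategy: ratio             # ratio-based sampling; the alternative is parent-based
    ratio: 75                   # sample 75% of requests; 0 disables tracing
    spanAttributes:
      - key: coffee-location    # custom key/value pair added to each span
        value: span-attribute-value
```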