diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 47978539f..09f9c2eea 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -1,35 +1,31 @@
 ### Proposed changes

-Write a clear and concise description that helps reviewers understand the purpose and impact of your changes. Use the
-following format:
+[//]: # "Write a clear and concise description of what the pull request changes."
+[//]: # "You can use our Commit messages guidance for this."
+[//]: # "https://github.com/nginx/documentation/blob/main/documentation/git-conventions.md#commit-messages"

-Problem: Give a brief overview of the problem or feature being addressed.
+[//]: # "First, explain what was changed, and why. This should be most of the detail."
+[//]: # "Then how the changes were made, such as referring to existing styles and conventions."
+[//]: # "Finish by noting anything beyond the scope of the PR changes that may be affected."

-Solution: Explain the approach you took to implement the solution, highlighting any significant design decisions or
-considerations.
+[//]: # "Include information on testing if relevant and non-obvious from the deployment preview."
+[//]: # "For expediency, you can use screenshots to show small before and after examples."

-Testing: Describe any testing that you did.
+[//]: # "If the changes were defined by a GitHub issue, reference it using keywords."
+[//]: # "https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/using-keywords-in-issues-and-pull-requests"

-Please focus on (optional): If you any specific areas where you would like reviewers to focus their attention or provide
-specific feedback, add them here.
-
-If this PR addresses an [issue](https://github.com/nginx/documentation/issues) on GitHub, ensure that you link to it here:
-
-Closes #ISSUE
+[//]: # "Do not link to any internal, non-public resources. This includes internal repository issues or anything in an intranet."
+[//]: # "You can make reference to internal discussions without linking to them: see 'Referencing internal information'."
+[//]: # "https://github.com/nginx/documentation/blob/main/documentation/closed-contributions.md#referencing-internal-information"

 ### Checklist

-Before merging a pull request, run through this checklist and mark each as complete.
+Before sharing this pull request, I completed the following checklist:

-- [ ] I have read the [contributing guidelines](https://github.com/nginx/documentation/blob/main/CONTRIBUTING.md)
-- [ ] I have signed the [F5 Contributor License Agreement (CLA)](https://github.com/f5/.github/blob/main/CLA/cla-markdown.md)
-- [ ] I have rebased my branch onto main
-- [ ] I have ensured my PR is targeting the main branch and pulling from my branch from my own fork
-- [ ] I have ensured that the commit messages adhere to [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary)
-- [ ] I have ensured that documentation content adheres to [the style guide](/documentation/style-guide.md)
-- [ ] If the change involves potentially sensitive changes[^1], I have assessed the possible impact
-- [ ] If applicable, I have added tests that prove my fix is effective or that my feature works
-- [ ] I have ensured that existing tests pass after adding my changes
-- [ ] If applicable, I have updated [`README.md`](/README.md)
+- [ ] I read the [Contributing guidelines](https://github.com/nginx/documentation/blob/main/CONTRIBUTING.md)
+- [ ] My branch adheres to the [Git conventions](https://github.com/nginx/documentation/blob/main/documentation/git-conventions.md)
+- [ ] My content changes adhere to the [F5 NGINX Documentation style guide](https://github.com/nginx/documentation/blob/main/documentation/style-guide.md)
+- [ ] If my changes involve potentially sensitive information[^1], I have assessed the possible impact
+- [ ] I have waited to ensure my changes pass tests, and addressed any discovered issues

-[^1]: Potentially sensitive changes include anything involving code, personally identify information (PII), live URLs or significant amounts of new or revised documentation. Please refer to [our style guide](/documentation/style-guide.md) for guidance about placeholder content.
\ No newline at end of file
+[^1]: Potentially sensitive information includes personally identifiable information (PII), authentication credentials, and live URLs. Refer to the [style guide](https://github.com/nginx/documentation/blob/main/documentation/style-guide.md) for guidance about placeholder content.
diff --git a/.github/workflows/build-push.yml b/.github/workflows/build-push.yml
index aedf4214e..2bca0af56 100644
--- a/.github/workflows/build-push.yml
+++ b/.github/workflows/build-push.yml
@@ -101,6 +101,11 @@ jobs:
                 value: `${{ github.event.client_payload.author }}`,
                 short: true
               },
+              {
+                title: 'Description',
+                value: `${{ github.event.client_payload.description }}`,
+                short: false
+              },
               {
                 title: 'Preview URL',
                 value: `${{ env.PREVIEW_URL }}`,
diff --git a/.github/workflows/ossf_scorecard.yml b/.github/workflows/ossf_scorecard.yml
index 864924b3a..bd9ae35af 100644
--- a/.github/workflows/ossf_scorecard.yml
+++ b/.github/workflows/ossf_scorecard.yml
@@ -56,6 +56,6 @@ jobs:
       # Upload the results to GitHub's code scanning dashboard.
- name: Upload SARIF results to code scanning - uses: github/codeql-action/upload-sarif@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2 + uses: github/codeql-action/upload-sarif@4e828ff8d448a8a6e532957b1811f387a63867e8 # v3.29.4 with: sarif_file: results.sarif diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7a0af1f48..3988f3345 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -41,6 +41,7 @@ To understand how we use Git in this repository, read our [Git conventions](/doc The broad workflow is as follows: - Fork the NGINX repository + - If you're an F5/NGINX user, you can work from a clone - Create a branch - Implement your changes in your branch - Submit a pull request (PR) when your changes are ready for review diff --git a/_banners/eos-acm.md b/_banners/eos-acm.md new file mode 100644 index 000000000..e69f51b6b --- /dev/null +++ b/_banners/eos-acm.md @@ -0,0 +1,8 @@ +{{< banner "warning" "End of Sale Notice:" >}} +
+ F5 NGINX is announcing the End of Sale (EoS) for NGINX Management Suite API Connectivity Manager Module, effective January 1, 2024.
+ <br><br>
+ F5 maintains generous lifecycle policies that allow customers to continue support and receive product updates. Existing API Connectivity Manager Module customers can continue to use the product past the EoS date. License renewals are not available after September 30, 2024.
+ <br><br>
+ See our End of Sale announcement for more details.
+{{< /banner >}}
\ No newline at end of file
diff --git a/_banners/eos-mesh.md b/_banners/eos-mesh.md
index e69f51b6b..f2b11d535 100644
--- a/_banners/eos-mesh.md
+++ b/_banners/eos-mesh.md
@@ -1,8 +1,8 @@
 {{< banner "warning" "End of Sale Notice:" >}}
- <br>
- F5 NGINX is announcing the End of Sale (EoS) for NGINX Management Suite API Connectivity Manager Module, effective January 1, 2024.
- <br><br>
- F5 maintains generous lifecycle policies that allow customers to continue support and receive product updates. Existing API Connectivity Manager Module customers can continue to use the product past the EoS date. License renewals are not available after September 30, 2024.
- <br><br>
- See our End of Sale announcement for more details.
+ <br><br>
+ Commercial support for NGINX Service Mesh is available to customers who currently have active NGINX Microservices Bundle subscriptions. F5 NGINX announced the End of Sale (EoS) for the NGINX Microservices Bundles as of July 1, 2023.
+ <br><br>
+ <br><br>
+ See our End of Sale announcement for more details.
+ <br><br>
{{}} \ No newline at end of file diff --git a/content/agent/installation-upgrade/container-environments/docker-images.md b/content/agent/installation-upgrade/container-environments/docker-images.md index f21c23929..080d405b5 100644 --- a/content/agent/installation-upgrade/container-environments/docker-images.md +++ b/content/agent/installation-upgrade/container-environments/docker-images.md @@ -116,7 +116,7 @@ docker run --name nginx-agent -d nginx-agent ### Enable the gRPC interface -To connect your NGINX Agent container to your NGINX One or NGINX Instance Manager instance, you must enable the gRPC interface. To do this, you must edit the NGINX Agent configuration file, *nginx-agent.conf*. For example: +To connect your NGINX Agent container to your NGINX One Console or NGINX Instance Manager instance, you must enable the gRPC interface. To do this, you must edit the NGINX Agent configuration file, *nginx-agent.conf*. For example: ```yaml server: diff --git a/content/agent/installation-upgrade/installation-oss.md b/content/agent/installation-upgrade/installation-oss.md index ab4016d08..e58b0d6be 100644 --- a/content/agent/installation-upgrade/installation-oss.md +++ b/content/agent/installation-upgrade/installation-oss.md @@ -50,10 +50,10 @@ Before you install NGINX Agent for the first time on your system, you need to se module_hotfixes=true ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo yum install nginx-agent + sudo yum install -y nginx-agent-2.42.0 ``` When prompted to accept the GPG key, verify that the fingerprint matches `8540 A6F1 8833 A80E 9C16 53A4 2FD2 1310 B49F 6B46`, `573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62`, `9E9B E90E ACBC DE69 FE9B 204C BCDC D8A3 8D88 A2B3`, and if so, accept it. @@ -105,11 +105,13 @@ Before you install NGINX Agent for the first time on your system, you need to se | sudo tee /etc/apt/sources.list.d/nginx-agent.list ``` -1. 
To install `nginx-agent`, run the following commands: - +1. To install `nginx-agent` with a specific version (example: 2.42.0): + + Update your package index and install a specific version of the nginx-agent. Replace with your current Ubuntu codename (e.g., jammy, noble). + ```shell sudo apt update - sudo apt install nginx-agent + sudo apt install -y nginx-agent=2.42.0~ ``` 1. Verify the installation: @@ -166,12 +168,13 @@ Before you install NGINX Agent for the first time on your system, you need to se http://packages.nginx.org/nginx-agent/debian/ `lsb_release -cs` agent" \ | sudo tee /etc/apt/sources.list.d/nginx-agent.list ``` -1. To install `nginx-agent`, run the following commands: +1. To install `nginx-agent` with a specific version (example: 2.42.0): + Update your package index and install a specific version of the nginx-agent. Replace with your current Debian codename (e.g., bullseye). + ```shell sudo apt update - sudo apt install nginx-agent - ``` + sudo apt install -y nginx-agent=2.42.0~ + ``` 1. Verify the installation: @@ -229,10 +232,10 @@ Before you install NGINX Agent for the first time on your system, you need to se sudo rpmkeys --import /tmp/nginx_signing.key ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo zypper install nginx-agent + sudo zypper install -y nginx-agent=2.42.0 ``` 1. Verify the installation: @@ -303,10 +306,10 @@ Before you install NGINX Agent for the first time on your system, you need to se sudo mv /tmp/nginx_signing.rsa.pub /etc/apk/keys/ ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo apk add nginx-agent + sudo apk add nginx-agent=2.42.0 ``` 1. Verify the installation: @@ -334,10 +337,10 @@ Before you install NGINX Agent for the first time on your system, you need to se module_hotfixes=true ``` -1. 
To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo dnf install nginx-agent + sudo dnf install -y nginx-agent-2.42.0 ``` 1. When prompted to accept the GPG key, verify that the fingerprint matches @@ -370,10 +373,10 @@ Before you install NGINX Agent for the first time on your system, you need to se module_hotfixes=true ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo yum install nginx-agent + sudo yum install -y nginx-agent-2.42.0 ``` 1. When prompted to accept the GPG key, verify that the fingerprint matches `8540 A6F1 8833 A80E 9C16 53A4 2FD2 1310 B49F 6B46`, `573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62`, `9E9B E90E ACBC DE69 FE9B 204C BCDC D8A3 8D88 A2B3`, and if so, accept it. @@ -396,10 +399,10 @@ Before you install NGINX Agent for the first time on your system, you need to se } ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo pkg install nginx-agent + sudo pkg install nginx-agent-2.42.0 ``` 1. Verify the installation: diff --git a/content/agent/installation-upgrade/installation-plus.md b/content/agent/installation-upgrade/installation-plus.md index add1272dc..744fbde26 100644 --- a/content/agent/installation-upgrade/installation-plus.md +++ b/content/agent/installation-upgrade/installation-plus.md @@ -73,10 +73,10 @@ Before you install NGINX Agent for the first time on your system, you need to se enabled=1 ``` -1. To install `nginx-agent`, run the following command: +1. 
To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo yum install nginx-agent + sudo yum install -y nginx-agent-2.42.0 ``` When prompted to accept the GPG key, verify that the fingerprint matches `8540 A6F1 8833 A80E 9C16 53A4 2FD2 1310 B49F 6B46`, `573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62`, `9E9B E90E ACBC DE69 FE9B 204C BCDC D8A3 8D88 A2B3`, and if so, accept it. @@ -131,11 +131,13 @@ Before you install NGINX Agent for the first time on your system, you need to se | sudo tee /etc/apt/sources.list.d/nginx-agent.list ``` -1. To install `nginx-agent`, run the following commands: +1. To install `nginx-agent` with a specific version (example: 2.42.0): + Update your package index and install a specific version of the nginx-agent. Replace with your current Ubuntu codename (e.g., jammy, noble). + ```shell sudo apt update - sudo apt install nginx-agent + sudo apt install -y nginx-agent=2.42.0~ ``` 1. Verify the installation: @@ -183,12 +185,14 @@ Before you install NGINX Agent for the first time on your system, you need to se Acquire::https::pkgs.nginx.com::SslKey "/etc/ssl/nginx/nginx-repo.key"; ``` -1. To install `nginx-agent`, run the following commands: +1. To install `nginx-agent` with a specific version (example: 2.42.0): + Update your package index and install a specific version of the nginx-agent. Replace with your current Debian codename (e.g., bullseye). + ```shell sudo apt update - sudo apt install nginx-agent - ``` + sudo apt install -y nginx-agent=2.42.0~ + ``` + 1. Verify the installation: @@ -265,10 +269,10 @@ Before you install NGINX Agent for the first time on your system, you need to se sudo rpmkeys --import /tmp/nginx_signing.key ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo zypper install nginx-agent + sudo zypper install -y nginx-agent=2.42.0 ``` 1. 
Verify the installation: @@ -394,10 +398,10 @@ Before you install NGINX Agent for the first time on your system, you need to se enabled=1 ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo dnf install nginx-agent + sudo dnf install -y nginx-agent-2.42.0 ``` 1. When prompted to accept the GPG key, verify that the fingerprint matches `8540 A6F1 8833 A80E 9C16 53A4 2FD2 1310 B49F 6B46`, `573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62`, `9E9B E90E ACBC DE69 FE9B 204C BCDC D8A3 8D88 A2B3`, and if so, accept it. @@ -442,10 +446,10 @@ Before you install NGINX Agent for the first time on your system, you need to se enabled=1 ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo yum install nginx-agent + sudo yum install -y nginx-agent-2.42.0 ``` 1. When prompted to accept the GPG key, verify that the fingerprint matches `8540 A6F1 8833 A80E 9C16 53A4 2FD2 1310 B49F 6B46`, `573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62`, `9E9B E90E ACBC DE69 FE9B 204C BCDC D8A3 8D88 A2B3`, and if so, accept it. @@ -496,10 +500,10 @@ Before you install NGINX Agent for the first time on your system, you need to se SSL_CLIENT_KEY_FILE: "/etc/ssl/nginx/nginx-repo.key" } ``` -1. To install `nginx-agent`, run the following command: +1. To install `nginx-agent` with a specific version (example: 2.42.0): ```shell - sudo pkg install nginx-agent + sudo pkg install nginx-agent-2.42.0 ``` 1. 
Verify the installation: diff --git a/content/agent/installation-upgrade/upgrade.md b/content/agent/installation-upgrade/upgrade.md index f942db01f..71beebb96 100644 --- a/content/agent/installation-upgrade/upgrade.md +++ b/content/agent/installation-upgrade/upgrade.md @@ -60,10 +60,10 @@ To upgrade NGINX Agent to a specific **v2.x version**, follow these steps: sudo apt-get install -y nginx-agent= -o Dpkg::Options::="--force-confold" ``` - Example (to upgrade to version 2.41.1~noble): + Example (to upgrade to version 2.42.0~noble): ```shell - sudo apt-get install -y nginx-agent=2.41.1~noble -o Dpkg::Options::="--force-confold" + sudo apt-get install -y nginx-agent=2.42.0~noble -o Dpkg::Options::="--force-confold" ``` - CentOS, RHEL, RPM-Based @@ -72,10 +72,10 @@ To upgrade NGINX Agent to a specific **v2.x version**, follow these steps: sudo yum install -y nginx-agent- ``` - Example (to upgrade to version `2.41.1`): + Example (to upgrade to version `2.42.0`): ```shell - sudo yum install -y nginx-agent-2.41.1 + sudo yum install -y nginx-agent-2.42.0 ``` 1. Verify the installed version: diff --git a/content/agent/technical-specifications.md b/content/agent/technical-specifications.md index b4273c922..c1fda1546 100644 --- a/content/agent/technical-specifications.md +++ b/content/agent/technical-specifications.md @@ -14,10 +14,10 @@ This document provides technical specifications for NGINX Agent. It includes inf {{< bootstrap-table "table table-striped table-bordered" >}} | NGINX Product | Agent Version | |------------------------------|----------------| -| **NGINX One Console** | 2.x | +| **NGINX One Console** | 3.x | | **NGINX Gateway Fabric** | 3.x | | **NGINX Plus** | 2.x, 3.x | -| **NGINX Ingress Controller** | 2.x | +| **NGINX Ingress Controller** | 2.x, 3.x | | **NGINX Instance Manager** | 2.x | {{< /bootstrap-table >}} @@ -26,16 +26,16 @@ This document provides technical specifications for NGINX Agent. It includes inf NGINX Agent can run in most environments. 
We support the following distributions:

{{< bootstrap-table "table table-striped table-bordered" >}}
-| | AlmaLinux | Alpine Linux | Amazon Linux | Amazon Linux 2 | CentOS | Debian |
-|-|-----------|--------------|--------------|----------------|--------|--------|
-|**Version**|8 <br><br> 9 | 3.16 <br><br> 3.17 <br><br> 3.18 <br><br> 3.19| 2023| LTS| 7.4+| 11 <br><br> 12|
+| | AlmaLinux | Alpine Linux | Amazon Linux | Amazon Linux 2| Debian |
+|-|-----------|--------------|--------------|----------------|--------|
+|**Version**|8 <br><br> 9 <br><br> 10| 3.19 <br><br> 3.20 <br><br> 3.21 <br><br> 3.22| 2023| LTS| 11 <br><br> 12|
|**Architecture**| x86_84 <br><br> aarch64| x86_64 <br><br> aarch64 | x86_64 <br><br> aarch64 | x86_64 <br><br> aarch64 | x86_64 <br><br> aarch64 | x86_64 <br><br> aarch64 |
{{< /bootstrap-table >}}

{{< bootstrap-table "table table-striped table-bordered" >}}
| |FreeBSD | Oracle Linux | Red Hat <br> Enterprise Linux <br> (RHEL) | Rocky Linux | SUSE Linux <br> Enterprise Server <br> (SLES) | Ubuntu |
|-|--------|--------------|---------------------------------|-------------|-------------------------------------|--------|
-|**Version**|13 <br><br> 14|7.4+ <br><br> 8.1+ <br><br> 9|7.4+ <br><br> 8.1+ <br><br> 9.0+|8 <br><br> 9|12 SP5 <br><br> 15 SP2|20.04 LTS <br><br> 22.04 LTS|
+|**Version**|13 <br><br> 14|8.1+ <br><br> 9 <br><br> 10|8.1+ <br><br> 9.0+ <br><br> 10|8 <br><br> 9 <br><br> 10|15 SP2|22.04 LTS <br><br> 24.04 LTS <br><br> 25.04 LTS|
|**Architecture**|amd64|x86_64|x86_64 <br><br> aarch64|x86_64 <br><br> aarch64|x86_64|x86_64 <br><br> 
aarch64| {{< /bootstrap-table >}} diff --git a/content/includes/agent/architecture.md b/content/includes/agent/architecture.md index d09a138be..2dcb6c722 100644 --- a/content/includes/agent/architecture.md +++ b/content/includes/agent/architecture.md @@ -8,7 +8,7 @@ files: The figure shows: - An NGINX instance running on bare metal, virtual machine or container -- NGINX One Cloud Console includes: +- NGINX One Console includes: - Command Server to manage NGINX configurations, push new/updated configuration files remotely, and perform integrity tests. - OpenTelemetry (OTel) Receiver that receives observability data from connected Agent instances. @@ -16,7 +16,7 @@ The figure shows: - An NGINX Agent process running on the NGINX instance. NGINX Agent is responsible for: - Watching, applying, validating, automatically roll back to last good configuration if issues are detected. - - Embedding an OpenTelemetry Collector, collecting metrics from NGINX processes, host system performance data, then securely passing metric data to NGINX One Cloud Console. + - Embedding an OpenTelemetry Collector, collecting metrics from NGINX processes, host system performance data, then securely passing metric data to NGINX One Console. - Collection and monitoring of host metrics (CPU usage, Memory utilization, Disk I/O) by the Agent OTel collector. -- Collected data is made available on NGINX One Cloud Console for monitoring, alerting, troubleshooting, and capacity planning purposes. +- Collected data is made available on NGINX One Console for monitoring, alerting, troubleshooting, and capacity planning purposes. 
diff --git a/content/includes/agent/installation/update-container.md b/content/includes/agent/installation/update-container.md index a974cfbce..f5d3fe10a 100644 --- a/content/includes/agent/installation/update-container.md +++ b/content/includes/agent/installation/update-container.md @@ -14,11 +14,11 @@ wget https://raw.githubusercontent.com/nginx/agent/refs/heads/v3/scripts/package ./upgrade-agent-config.sh --v2-config-file=./nginx-agent-v2.conf --v3-config-file=nginx-agent-v3.conf ``` -If your NGINX Agent container was previously a member of a config sync group, then your NGINX Agent config must be manually updated to add the config sync group label. +If your NGINX Agent container was previously a member of a Config Sync Group, then your NGINX Agent config must be manually updated to add the Config Sync Group label. See [Add Config Sync Group]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}) for more information. ### Rolling back from NGINX Agent v3 to v2 If you need to roll back your environment to NGINX Agent v2, the upgrade process creates a backup of the NGINX Agent v2 config in the file `/etc/nginx-agent/nginx-agent-v2-backup.conf`. -Replace the contents of `/etc/nginx-agent/nginx-agent.conf` with the contents of `/etc/nginx-agent/nginx-agent-v2-backup.conf` and then reinstall an older version of NGINX Agent. \ No newline at end of file +Replace the contents of `/etc/nginx-agent/nginx-agent.conf` with the contents of `/etc/nginx-agent/nginx-agent-v2-backup.conf` and then reinstall an older version of NGINX Agent. 
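The rollback procedure in the hunk above amounts to a single copy. A minimal sketch, using a temporary directory as a stand-in for `/etc/nginx-agent/` so it can run anywhere (the file contents here are illustrative, not real Agent configs):

```shell
# Stand-in for /etc/nginx-agent/; the two files mimic the layout the
# upgrade leaves behind (active v3 config plus a v2 backup).
conf_dir="$(mktemp -d)"
echo "v2 settings" > "${conf_dir}/nginx-agent-v2-backup.conf"
echo "v3 settings" > "${conf_dir}/nginx-agent.conf"

# Restore the v2 backup over the active config, as described above.
cp "${conf_dir}/nginx-agent-v2-backup.conf" "${conf_dir}/nginx-agent.conf"
cat "${conf_dir}/nginx-agent.conf"
```

On a real host the equivalent would be `sudo cp /etc/nginx-agent/nginx-agent-v2-backup.conf /etc/nginx-agent/nginx-agent.conf`, followed by reinstalling an older NGINX Agent version.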
diff --git a/content/includes/agent/installation/update.md b/content/includes/agent/installation/update.md
index 6b6834ae7..4ef634810 100644
--- a/content/includes/agent/installation/update.md
+++ b/content/includes/agent/installation/update.md
@@ -10,12 +10,12 @@ files:
 - `sudo systemctl stop nginx-agent`
-And start it again after the update:
+And start it again after the update or upgrade:
 - `sudo systemctl start nginx-agent`
 {{< /note >}}
-Follow the steps below to update NGINX Agent to the latest version.
+Follow the steps below to update or upgrade NGINX Agent to the latest version. The same steps apply if you are **upgrading from NGINX Agent v2 to NGINX Agent v3**.
 1. Open an SSH connection to the server where you've installed NGINX Agent.
diff --git a/content/includes/agent/tech-specs.md b/content/includes/agent/tech-specs.md
index 129745c66..b141d923b 100644
--- a/content/includes/agent/tech-specs.md
+++ b/content/includes/agent/tech-specs.md
@@ -6,12 +6,27 @@ files:
 ---
 NGINX Agent is designed to operate efficiently on any system that meets the standard
-hardware requirements for running NGINX Plus itself. This ensures compatibility, stability,
+hardware requirements for running NGINX itself. This ensures compatibility, stability,
 and performance aligned with the NGINX core platform:
 ### Supported distributions
-{{< include "nginx-plus/supported-distributions.md" >}}
+{{< bootstrap-table "table table-striped table-bordered" >}}
+| Distribution | Supported on Agent |
+|-------------------------------------|------------------------------------------------------------------------------------------------------------|
+| AlmaLinux | 8 (x86_64, aarch64) <br> 9 (x86_64, aarch64) <br> 10 (x86_64, aarch64) **(new)** |
+| Alpine Linux | 3.19 (x86_64, aarch64) <br> 3.20 (x86_64, aarch64) <br> 3.21 (x86_64, aarch64) <br> 3.22 (x86_64, aarch64) |
+| Amazon Linux | 2023 (x86_64, aarch64) |
+| Amazon Linux 2 | LTS (x86_64, aarch64) |
+| CentOS | **Not supported** |
+| Debian | 11 (x86_64, aarch64) <br> 12 (x86_64, aarch64) |
+| FreeBSD | **Not supported** |
+| Oracle Linux | 8.1+ (x86_64, aarch64) <br> 9 (x86_64) <br> 10 (x86_64) **(new)** |
+| Red Hat Enterprise Linux (RHEL) | 8.1+ (x86_64, aarch64) <br> 9.0+ (x86_64, aarch64) <br> 10.0+ (x86_64, aarch64) **(new)** |
+| Rocky Linux | 8 (x86_64, aarch64) <br> 9 (x86_64, aarch64) <br> 10 (x86_64, aarch64) **(new)** |
+| SUSE Linux Enterprise Server (SLES) | 15 SP2+ (x86_64) |
+| Ubuntu | 22.04 LTS (x86_64, aarch64) <br> 24.04 LTS (x86_64, aarch64) <br> 25.04 LTS (x86_64, aarch64) **(new)** |
+{{< /bootstrap-table >
}} To see the detailed technical specifications for NGINX Plus, refer to the official [NGINX Plus documentation]({{< ref "/nginx/technical-specs.md" >}}). diff --git a/content/includes/licensing-and-reporting/apply-jwt.md b/content/includes/licensing-and-reporting/apply-jwt.md index febdf7a19..d04116675 100644 --- a/content/includes/licensing-and-reporting/apply-jwt.md +++ b/content/includes/licensing-and-reporting/apply-jwt.md @@ -1,12 +1,15 @@ --- -docs: file: - content/solutions/about-subscription-licenses.md - content/nap-waf/v5/admin-guide/install.md --- -1. Copy the license file to `/etc/nginx/license.jwt` on Linux or `/usr/local/etc/nginx/license.jwt` on FreeBSD for each NGINX Plus instance. -2. Reload NGINX: +1. Copy the license file to: + + - `/etc/nginx/license.jwt` on Linux + - `/usr/local/etc/nginx/license.jwt` on FreeBSD + +1. Reload NGINX: ```shell systemctl reload nginx diff --git a/content/includes/licensing-and-reporting/deploy-jwt-with-csgs.md b/content/includes/licensing-and-reporting/deploy-jwt-with-csgs.md new file mode 100644 index 000000000..913c1862d --- /dev/null +++ b/content/includes/licensing-and-reporting/deploy-jwt-with-csgs.md @@ -0,0 +1,16 @@ +--- +file: + - content/solutions/about-subscription-licenses.md +--- + +1. In the NGINX One Console, go to **Manage > Config Sync Groups**, then select your group. + + If you haven't created a Config Sync Group yet, see [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}) for setup instructions. +2. Select the **Configuration** tab, then choose **Edit Configuration**. +3. Select **Add File**, then choose **New Configuration File**. +4. In the **File name** field, enter: + - On Linux: `/etc/nginx/license.jwt` + - On FreeBSD: `/usr/local/etc/nginx/license.jwt` + The name must be exact. +5. Paste the contents of your JWT license file into the editor. +6. 
Select **Next** to preview the diff, then **Save and Publish** to apply the update. \ No newline at end of file diff --git a/content/includes/licensing-and-reporting/download-certificates-from-myf5.md b/content/includes/licensing-and-reporting/download-certificates-from-myf5.md new file mode 100644 index 000000000..36597020c --- /dev/null +++ b/content/includes/licensing-and-reporting/download-certificates-from-myf5.md @@ -0,0 +1,9 @@ +--- +files: +- content/includes/use-cases/credential-download-instructions.md +--- + +1. Log in to [MyF5](https://my.f5.com/manage/s/). +1. Go to **My Products & Plans > Subscriptions** to see your active subscriptions. +1. Find your NGINX subscription, and select the **Subscription ID** for details. +1. Download the **SSL Certificate** and **Private Key** files from the subscription page. \ No newline at end of file diff --git a/content/includes/licensing-and-reporting/download-jwt-from-myf5.md b/content/includes/licensing-and-reporting/download-jwt-from-myf5.md index af947d320..1ede09099 100644 --- a/content/includes/licensing-and-reporting/download-jwt-from-myf5.md +++ b/content/includes/licensing-and-reporting/download-jwt-from-myf5.md @@ -1,8 +1,19 @@ --- docs: +files: +- content/includes/nim/docker/docker-registry-login.md +- content/includes/use-cases/credential-download-instructions.md +- content/nap-waf/v5/admin-guide/install.md +- content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md +- content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md +- content/nim/admin-guide/add-license.md +- content/nim/deploy/docker/deploy-nginx-plus-and-agent-docker.md +- content/nim/disconnected/add-license-disconnected-deployment.md +- content/solutions/about-subscription-licenses.md +- content/solutions/r33-pre-release-guidance-for-automatic-upgrades.md --- 1. Log in to [MyF5](https://my.f5.com/manage/s/). -2. Go to **My Products & Plans > Subscriptions** to see your active subscriptions. -3. 
Find your NGINX products or services subscription, and select the **Subscription ID** for details. -4. Download the **JSON Web Token** from the subscription page. +1. Go to **My Products & Plans > Subscriptions** to see your active subscriptions. +1. Find your NGINX subscription, and select the **Subscription ID** for details. +1. Download the **JSON Web Token** file from the subscription page. diff --git a/content/includes/nap-waf/config/common/nginx-app-protect-waf-terminology.md b/content/includes/nap-waf/config/common/nginx-app-protect-waf-terminology.md index f1f28bb0c..2630c54ec 100644 --- a/content/includes/nap-waf/config/common/nginx-app-protect-waf-terminology.md +++ b/content/includes/nap-waf/config/common/nginx-app-protect-waf-terminology.md @@ -1,5 +1,8 @@ --- nd-docs: "DOCS-1605" +files: + - content/nap-waf/v5/configuration-guide/configuration.md + - content/nginx-one/glossary.md --- This guide assumes that you have some familiarity with various Layer 7 (L7) Hypertext Transfer Protocol (HTTP) concepts, such as Uniform Resource Identifier (URI)/Uniform Resource Locator (URL), method, header, cookie, status code, request, response, and parameters. @@ -26,4 +29,4 @@ This guide assumes that you have some familiarity with various Layer 7 (L7) Hype |Tuning | Making manual changes to an existing security policy to reduce false positives and increase the policy’s security level. | |URI/URL | The Uniform Resource Identifier (URI) specifies the name of a web object in a request. A Uniform Resource Locator (URL) specifies the location of an object on the Internet. For example, in the web address, `http://www.siterequest.com/index.html`, index.html is the URI, and the URL is `http://www.siterequest.com/index.html`. In NGINX App Protect WAF, the terms URI and URL are used interchangeably. | |Violation | Violations occur when some aspect of a request or response does not comply with the security policy. 
You can configure the blocking settings for any violation in a security policy. When a violation occurs, the system can Alarm or Block a request (blocking is only available when the enforcement mode is set to Blocking). | -{{}} \ No newline at end of file +{{}} diff --git a/content/includes/nginx-one/add-file/new-ssl-bundle.md b/content/includes/nginx-one/add-file/new-ssl-bundle.md index c078213e5..4c948fe52 100644 --- a/content/includes/nginx-one/add-file/new-ssl-bundle.md +++ b/content/includes/nginx-one/add-file/new-ssl-bundle.md @@ -4,8 +4,7 @@ docs: First you can select the toggle to allow NGINX One Console to manage the new certificate or bundle. -In the screen that appears, you can add a certificate name. If you don't add a name, NGINX One will add a name for you, based on the expiration date for the certificate. - +In the screen that appears, you can add a certificate name. If you don't add a name, NGINX One Console will add a name for you, based on the expiration date for the certificate. You can add certificates in the following formats: - **SSL Certificate and Key** diff --git a/content/includes/nginx-one/alert-labels.md b/content/includes/nginx-one/alert-labels.md new file mode 100644 index 000000000..ce062d965 --- /dev/null +++ b/content/includes/nginx-one/alert-labels.md @@ -0,0 +1,27 @@ +--- +files: + - content/nginx-one/secure-your-fleet/secure.md + - content/nginx-one/glossary.md +--- + + +You can configure a variety of NGINX alerts in the F5 Distributed Cloud. If you have access to the [F5 Distributed Cloud]({{< ref "/nginx-one/getting-started.md#confirm-access-to-the-f5-distributed-cloud" >}}), log in and select the **Audit Logs & Alerts** tile. + +Go to **Notifications > Alerts**. Select the gear icon and select **Alert Name > Active Alerts**. You may see one or more of the following alerts in the **Audit Logs & Alerts** Console. 
+ +{{}} + +### Alert Labels + +| **Alertname** | **Description** | **Alert Level** | **Action** | +|--------------------------------|----------------------------------------------------------------------|-----------------|------------------------------------------------------------------------------------------------------------------| +| HighCVENGINX | A high-severity CVE is impacting an NGINX instance | Critical | Review the CVE details in the NGINX One Console. Apply updates or change configurations to resolve the vulnerability. | +| MediumCVENGINX | A medium-severity CVE is impacting an NGINX instance | Major | Review the CVE details in the NGINX One Console. Apply updates or configuration changes as needed. | +| LowCVENGINX | A low-severity CVE is impacting an NGINX instance | Minor | Review the CVE details in the NGINX One Console. Consider updates or configuration changes to maintain security. | +| SecurityRecommendationNGINX | A security recommendation has been found for an NGINX configuration | Critical | Review the configuration issue in the NGINX One Console. Follow the recommendations to secure the instance or Config Sync Group. | +| OptimizationRecommendationNGINX| An optimization recommendation has been found for an NGINX configuration| Major | Review the optimization details in the NGINX One Console. Update the configuration for the instance or Config Sync Group to enhance performance. | +| BestPracticeRecommendationNGINX| A best practice recommendation has been found for an NGINX configuration | Minor | Review the best practice recommendation in the NGINX One Console. Update the configuration for the instance or Config Sync Group to align with industry standards. | +| NGINXOffline | An NGINX instance is now offline | Major | Verify the host is online. Check the NGINX Agent's status on the instance and ensure it is connected to the NGINX One Console. 

| +| NGINXUnavailable | An NGINX instance is now unavailable | Major | Ensure the NGINX Agent and host are active. Verify the NGINX Agent can connect to the NGINX One Console and resolve any network issues. | +| NewNGINX | A new NGINX instance has connected to NGINX One | Minor | Review the instance details in the NGINX One Console. Confirm availability, CVEs, and recommendations to ensure the instance is operational. | +{{}} diff --git a/content/includes/nginx-one/conf/nginx-agent-conf.md b/content/includes/nginx-one/conf/nginx-agent-conf.md index 0fcbc57ea..f45844015 100644 --- a/content/includes/nginx-one/conf/nginx-agent-conf.md +++ b/content/includes/nginx-one/conf/nginx-agent-conf.md @@ -8,7 +8,7 @@ files: ```yaml command: server: - host: "" # Command server host + host: "agent.connect.nginx.com" # Command server host port: 443 # Command server port auth: token: "" # Authentication token for the command server @@ -16,7 +16,4 @@ command: skip_verify: false ``` -Replace the placeholder values: - -- ``: The URL of your NGINX One Console instance, typically https://INSERT_YOUR_TENANT_NAME.console.ves.volterra.io/ . -- ``: Your Data Plane key. +Replace `` with your Data Plane key. 
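The `command` block above lives in the NGINX Agent configuration file, typically `/etc/nginx-agent/nginx-agent.conf`. As a sketch of how it sits alongside other common settings — the `log` and `allowed_directories` values below are illustrative assumptions, not part of this change:

```yaml
log:
  level: info                  # raise to debug while first connecting
  path: /var/log/nginx-agent/
allowed_directories:
  - /etc/nginx                 # directories NGINX Agent may read and write
command:
  server:
    host: "agent.connect.nginx.com"  # Command server host
    port: 443                        # Command server port
  auth:
    token: ""                        # Data Plane key
  tls:
    skip_verify: false
```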
diff --git a/content/includes/nginx-plus/nginx-openid-repo-note.txt b/content/includes/nginx-plus/nginx-openid-repo-note.md similarity index 100% rename from content/includes/nginx-plus/nginx-openid-repo-note.txt rename to content/includes/nginx-plus/nginx-openid-repo-note.md diff --git a/content/includes/nginx-plus/supported-distributions.md b/content/includes/nginx-plus/supported-distributions.md index 86439439b..0c56d90e9 100644 --- a/content/includes/nginx-plus/supported-distributions.md +++ b/content/includes/nginx-plus/supported-distributions.md @@ -3,18 +3,17 @@ docs: --- {{}} -| Distribution | Supported on R33 | Supported on R32 | -|-------------------------------------|-----------------------------------------------|-----------------------------------------------| -| AlmaLinux | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | -| Alpine Linux | 3.17 (x86_64, aarch64) **(deprecated)**
3.18 (x86_64, aarch64)
3.19 (x86_64, aarch64)
3.20 (x86_64, aarch64) **(new)** | 3.16 (x86_64, aarch64) **(deprecated)**
3.17 (x86_64, aarch64)
3.18 (x86_64, aarch64)
3.19 (x86_64, aarch64) | -| Amazon Linux | 2023 (x86_64, aarch64) | 2023 (x86_64, aarch64) | -| Amazon Linux 2 | LTS (x86_64, aarch64) | LTS (x86_64, aarch64) | -| CentOS | **Not supported** | 7.4+ (x86_64) **(deprecated)** | -| Debian | 11 (x86_64, aarch64)
12 (x86_64, aarch64) | 11 (x86_64, aarch64)
12 (x86_64, aarch64) | -| FreeBSD | 13 (amd64)
14 (amd64) | 13 (amd64)
14 (amd64) | -| Oracle Linux | 8.1+ (x86_64, aarch64)
9 (x86_64) | 7.4+ (x86_64) **(deprecated)**
8.1+ (x86_64, aarch64)
9 (x86_64) | -| Red Hat Enterprise Linux (RHEL) | 8.1+ (x86_64, aarch64)
9.0+ (x86_64, aarch64) | 7.4+ (x86_64) **(deprecated)**
8.1+ (x86_64, aarch64)
9.0+ (x86_64, aarch64) | -| Rocky Linux | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | -| SUSE Linux Enterprise Server (SLES) | 12 SP5 (x86_64) **(deprecated)**
15 SP2+ (x86_64) | 12 SP5 (x86_64)
15 SP2+ (x86_64) | -| Ubuntu | 20.04 LTS (x86_64, aarch64)
22.04 LTS (x86_64, aarch64)
24.04 LTS (x86_64, aarch64) | 20.04 LTS (x86_64, aarch64)
22.04 LTS (x86_64, aarch64)
24.04 LTS (x86_64, aarch64 **(new)** | -{{
}} \ No newline at end of file +| Distribution | Supported on R34 | Supported on R33 | +|-------------------------------------|----------------------------------------------------|--------------------------------------------------------| +| AlmaLinux | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | +| Alpine Linux | 3.18 (x86_64, aarch64) **(deprecated)**
3.19 (x86_64, aarch64)
3.20 (x86_64, aarch64)
3.21 (x86_64, aarch64) **(new)** | 3.17 (x86_64, aarch64) **(deprecated)**
3.18 (x86_64, aarch64)
3.19 (x86_64, aarch64)
3.20 (x86_64, aarch64) **(new)** | +| Amazon Linux | 2023 (x86_64, aarch64) | 2023 (x86_64, aarch64) | +| Amazon Linux 2 | LTS (x86_64, aarch64) | LTS (x86_64, aarch64) | +| Debian | 11 (x86_64, aarch64)
12 (x86_64, aarch64) | 11 (x86_64, aarch64)
12 (x86_64, aarch64) | +| FreeBSD | 13 (amd64)
14 (amd64) | 13 (amd64)
14 (amd64) | +| Oracle Linux | 8.1+ (x86_64, aarch64)
9 (x86_64) | 8.1+ (x86_64, aarch64)
9 (x86_64) | +| Red Hat Enterprise Linux (RHEL) | 8.1+ (x86_64, aarch64)
9.0+ (x86_64, aarch64) | 8.1+ (x86_64, aarch64)
9.0+ (x86_64, aarch64) | +| Rocky Linux | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | 8 (x86_64, aarch64)
9 (x86_64, aarch64) | +| SUSE Linux Enterprise Server (SLES) | 15 SP2+ (x86_64) | 12 SP5 (x86_64) **(deprecated)**
15 SP2+ (x86_64) | +| Ubuntu | 20.04 LTS (x86_64, aarch64) **(deprecated)**
22.04 LTS (x86_64, aarch64)
24.04 LTS (x86_64, aarch64) | 20.04 LTS (x86_64, aarch64)
22.04 LTS (x86_64, aarch64)
24.04 LTS (x86_64, aarch64) | +{{}} diff --git a/content/includes/nic/configuration/global-configuration/configmap-resource.md b/content/includes/nic/configuration/global-configuration/configmap-resource.md index 28296e291..b01c18e98 100644 --- a/content/includes/nic/configuration/global-configuration/configmap-resource.md +++ b/content/includes/nic/configuration/global-configuration/configmap-resource.md @@ -80,6 +80,7 @@ For more information, view the [VirtualServer and VirtualServerRoute resources]( |*proxy-buffering* | Enables or disables [buffering of responses](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) from the proxied server. | *True* | | |*proxy-buffers* | Sets the value of the [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directive. | Depends on the platform. | | |*proxy-buffer-size* | Sets the value of the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size) directives. | Depends on the platform. | | +|*proxy-busy-buffers-size* | Sets the value of the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. | Depends on the platform. | | |*proxy-max-temp-file-size* | Sets the value of the [proxy_max_temp_file_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size) directive. | *1024m* | | |*set-real-ip-from* | Sets the value of the [set_real_ip_from](https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from) directive. | N/A | | |*real-ip-header* | Sets the value of the [real_ip_header](https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header) directive. 
| *X-Real-IP* | | diff --git a/content/includes/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md b/content/includes/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md index 952b8ebb1..29c2598de 100644 --- a/content/includes/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md +++ b/content/includes/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md @@ -108,6 +108,7 @@ The table below summarizes the available annotations. | *nginx.org/proxy-buffering* | *proxy-buffering* | Enables or disables [buffering of responses](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) from the proxied server. | *True* | | | *nginx.org/proxy-buffers* | *proxy-buffers* | Sets the value of the [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directive. | Depends on the platform. | | | *nginx.org/proxy-buffer-size* | *proxy-buffer-size* | Sets the value of the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size) directives. | Depends on the platform. | | +| *nginx.org/proxy-busy-buffers-size* | *proxy-busy-buffers-size* | Sets the value of the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. | Depends on the platform. | | | *nginx.org/proxy-max-temp-file-size* | *proxy-max-temp-file-size* | Sets the value of the [proxy_max_temp_file_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size) directive. | *1024m* | | | *nginx.org/server-tokens* | *server-tokens* | Enables or disables the [server_tokens](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens) directive. 
Additionally, with the NGINX Plus, you can specify a custom string value, including the empty string value, which disables the emission of the “Server” field. | *True* | | | *nginx.org/path-regex* | N/A | Enables regular expression modifiers for Ingress path parameter. This translates to the NGINX [location](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) directive. You can specify one of these values: "case_sensitive", "case_insensitive", or "exact". The annotation is applied to the entire Ingress resource and its paths. While using Master and Minion Ingresses i.e. Mergeable Ingresses, this annotation can be specified on Minion types. The `path-regex` annotation specified on Master is ignored, and has no effect on paths defined on Minions. | N/A | [path-regex](https://github.com/nginx/kubernetes-ingress/tree/v{{< nic-version >}}/examples/ingress-resources/path-regex) | diff --git a/content/includes/nic/configuration/virtualserver-and-virtualserverroute-resources.md b/content/includes/nic/configuration/virtualserver-and-virtualserverroute-resources.md index a129629ed..06509075c 100644 --- a/content/includes/nic/configuration/virtualserver-and-virtualserverroute-resources.md +++ b/content/includes/nic/configuration/virtualserver-and-virtualserverroute-resources.md @@ -371,6 +371,7 @@ tls: |``buffering`` | Enables buffering of responses from the upstream server. See the [proxy_buffering](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) directive. The default is set in the ``proxy-buffering`` ConfigMap key. | ``boolean`` | No | |``buffers`` | Configures the buffers used for reading a response from the upstream server for a single connection. | [buffers](#upstreambuffers) | No | |``buffer-size`` | Sets the size of the buffer used for reading the first part of a response received from the upstream server. See the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) directive. 
The default is set in the ``proxy-buffer-size`` ConfigMap key. | ``string`` | No | +|``busy-buffer-size`` | Limits the total size of buffers that can be busy sending a response to the client while the response is not yet fully read from the upstream server. See the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. The default is set in the ``proxy-busy-buffers-size`` ConfigMap key. | ``string`` | No | |``ntlm`` | Allows proxying requests with NTLM Authentication. See the [ntlm](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm) directive. In order for NTLM authentication to work, it is necessary to enable keepalive connections to upstream servers using the ``keepalive`` field. Note: this feature is supported only in NGINX Plus.| ``boolean`` | No | |``type`` |The type of the upstream. Supported values are ``http`` and ``grpc``. The default is ``http``. For gRPC, it is necessary to enable HTTP/2 in the [ConfigMap](/nginx-ingress-controller/configuration/global-configuration/configmap-resource/#listeners) and configure TLS termination in the VirtualServer. | ``string`` | No | |``backup`` | The name of the backup service of type [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname). This will be used when the primary servers are unavailable. Note: The parameter cannot be used along with the ``random`` , ``hash`` or ``ip_hash`` load balancing methods. 

| ``string`` | No | diff --git a/content/includes/nim/tech-specs/supported-distros.md b/content/includes/nim/tech-specs/supported-distros.md index 7ede72fc3..1abfed6ae 100644 --- a/content/includes/nim/tech-specs/supported-distros.md +++ b/content/includes/nim/tech-specs/supported-distros.md @@ -8,14 +8,12 @@ The following table lists the Linux distributions supported by NGINX Instance Ma {{}} -| Distribution | Version | Architecture | NGINX Instance Manager Support | NGINX App Protect Support | -|-----------------|----------------------------------------|------------------|---------------------------------------------------|----------------------------------------------------| -| Amazon Linux | 2 LTS | x86_64 | Supported | **Support discontinued as of 2.18.0** | -| CentOS | 7.4 and later in the 7.x family | x86_64 | **Support discontinued as of 2.17.0** | Supported | -| Debian | 11
12 | x86_64
x86_64 | Supported
Supported on 2.13.0+ | Supported
Supported | -| Oracle Linux | 7.4 and later in the 7.x family
8.0 and later in the 8.x family | x86_64
x86_64 | Supported
Supported on 2.6.0+ | Supported
Supported | -| RHEL | 7.4 and later in the 7.x family
8.x and later in the 8.x family
9.x and later in the 9.x family | x86_64
x86_64
x86_64 | **Support discontinued as of 2.17.0**
Supported
Supported on 2.6.0+ | Supported
Supported
Supported | -| Ubuntu | 20.04
22.04
24.04 | x86_64
x86_64
x86_64 | Supported
Supported on 2.3.0+
Supported on 2.18.0+ | Supported
Supported
Supported | +| Distribution | Version | Architecture | NGINX Instance Manager Support | NGINX App Protect Support | +|-----------------|----------------------------------------|------------------|-----------------------------------------------------|----------------------------------------------------| +| Debian | 11
12 | x86_64
x86_64 | Supported
Supported | Supported
Supported | +| Oracle Linux | 8.0 and later in the 8.x family | x86_64 | Supported | Supported | +| RHEL | 8.0 and later in the 8.x family
9.0 and later in the 9.x family | x86_64
x86_64 | Supported
Supported | Supported
Supported | +| Ubuntu | 22.04
24.04 | x86_64
x86_64 | Supported
Supported on 2.18.0+ | Supported
Supported | {{
}} diff --git a/content/includes/use-cases/credential-download-instructions.md b/content/includes/use-cases/credential-download-instructions.md new file mode 100644 index 000000000..672bdfb0f --- /dev/null +++ b/content/includes/use-cases/credential-download-instructions.md @@ -0,0 +1,25 @@ +--- +files: +- content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md +- content/nic/installation/nic-images/registry-download.md +--- + +In order to obtain a container image, you will need the JSON Web Token file or SSL certificate and private key files provided with your NGINX Plus subscription. + +These files grant access to the package repository from which the script will download the NGINX Plus package: + +{{< tabs name="product_keys" >}} + +{{% tab name="JSON Web Token" %}} + +{{< include "licensing-and-reporting/download-jwt-from-myf5.md" >}} + +{{% /tab %}} + +{{% tab name="SSL" %}} + +{{< include "licensing-and-reporting/download-certificates-from-myf5.md" >}} + +{{% /tab %}} + +{{< /tabs >}} \ No newline at end of file diff --git a/content/includes/use-cases/docker-registry-instructions.md b/content/includes/use-cases/docker-registry-instructions.md new file mode 100644 index 000000000..5f7e6af73 --- /dev/null +++ b/content/includes/use-cases/docker-registry-instructions.md @@ -0,0 +1,49 @@ +--- +files: +- content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md +- content/nic/installation/nic-images/registry-download.md +--- + +This step describes how to use Docker to communicate with the F5 Container Registry located at `private-registry.nginx.com`. + +{{< call-out "note" >}} + +The steps provided are for Linux. For Mac or Windows, see the [Docker for Mac](https://docs.docker.com/docker-for-mac/#add-client-certificates) or [Docker for Windows](https://docs.docker.com/docker-for-windows/#how-do-i-add-client-certificates) documentation. 
+ +For more details on Docker Engine security, you can refer to the [Docker Engine Security documentation](https://docs.docker.com/engine/security/). + +{{< /call-out >}} + +{{< tabs name="docker_login" >}} + +{{% tab name="JSON Web Token"%}} + +Open the JSON Web Token file previously downloaded from [MyF5](https://my.f5.com) customer portal (for example, `nginx-repo-12345abc.jwt`) and copy its contents. + +Log in to the Docker registry using the contents of the JSON Web Token file: + +```shell +docker login private-registry.nginx.com --username= --password=none +``` + +{{% /tab %}} + +{{% tab name="SSL" %}} + +Create a directory and copy your certificate and key to this directory: + +```shell +mkdir -p /etc/docker/certs.d/private-registry.nginx.com +cp /etc/docker/certs.d/private-registry.nginx.com/client.cert +cp /etc/docker/certs.d/private-registry.nginx.com/client.key +``` + +Log in to the Docker registry: + +```shell +docker login private-registry.nginx.com +``` + +{{% /tab %}} + +{{< /tabs >}} \ No newline at end of file diff --git a/content/includes/use-cases/monitoring/n1c-dashboard-overview.md b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md index 3018b83d8..050b104d9 100644 --- a/content/includes/use-cases/monitoring/n1c-dashboard-overview.md +++ b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md @@ -15,15 +15,15 @@ Navigating the dashboard: {{}} -**NGINX One dashboard metrics** +**NGINX One Console dashboard metrics** | Metric | Description | Details | |---|---|---| -| **Instance availability** | Understand the operational status of your NGINX instances. | - **Online**: The NGINX instance is actively connected and functioning properly.
- **Offline**: NGINX Agent is connected but the NGINX instance isn't running, isn't installed, or can't communicate with NGINX Agent.
- **Unavailable**: The connection between NGINX Agent and NGINX One has been lost or the instance has been decommissioned.
- **Unknown**: The current state can't be determined at the moment. | +| **Instance availability** | Understand the operational status of your NGINX instances. | - **Online**: The NGINX instance is actively connected and functioning properly.
- **Offline**: NGINX Agent is connected but the NGINX instance isn't running, isn't installed, or can't communicate with NGINX Agent.
- **Unavailable**: The connection between NGINX Agent and NGINX One Console has been lost or the instance has been decommissioned.
- **Unknown**: The current state can't be determined at the moment. | | **NGINX versions by instance** | See which NGINX versions are in use across your instances. | | | **Operating systems** | Find out which operating systems your instances are running on. | | | **Certificates** | Monitor the status of your SSL certificates to know which are expiring soon and which are still valid. | | | **Config recommendations** | Get configuration recommendations to optimize your instances' settings. | | -| **CVEs (Common Vulnerabilities and Exposures)** | Evaluate the severity and number of potential security threats in your instances. | - **Major**: Indicates a high-severity threat that needs immediate attention.
- **Medium**: Implies a moderate threat level.
- **Minor** and **Low**: Represent less critical issues that still require monitoring.
- **Other**: Encompasses any threats that don't fit the standard categories. | +| **CVEs (Common Vulnerabilities and Exposures)** | Evaluate the severity and number of potential security threats in your instances. | - **High**: Indicates a high-severity threat that needs immediate attention. NGINX CVSS score = 7.0-10.0
- **Medium**: Implies a moderate threat level. NGINX CVSS score = 4.0-6.9
- **Low**: Represents less critical issues that still require monitoring. NGINX CVSS score = 0.1-3.9
- **None**: NGINX CVSS score = 0.0| | **CPU utilization** | Track CPU usage trends and pinpoint instances with high CPU demand. | | | **Memory utilization** | Watch memory usage patterns to identify instances using significant memory. | | | **Disk space utilization** | Monitor how much disk space your instances are using and identify those nearing capacity. | | diff --git a/content/mesh/releases/release-notes-0.9.0.md b/content/mesh/releases/release-notes-0.9.0.md index e288cf92f..a9044eecd 100644 --- a/content/mesh/releases/release-notes-0.9.0.md +++ b/content/mesh/releases/release-notes-0.9.0.md @@ -2,7 +2,7 @@ title: Release Notes 0.9.0 draft: false toc: true -description: Release information for F5 GINX Service Mesh, a configurable, low‑latency +description: Release information for F5 NGINX Service Mesh, a configurable, low‑latency infrastructure layer designed to handle a high volume of network‑based interprocess communication among application infrastructure services using application programming interfaces (APIs). Lists of new features and known issues are provided. 
diff --git a/content/ngf/get-started.md index 83bd9029f..aead2c963 100644 --- a/content/ngf/get-started.md +++ b/content/ngf/get-started.md @@ -132,13 +132,76 @@ The YAML code in the following sections can be found in the [cafe-example folder ### Create the application resources -Create the file _cafe.yaml_ with the following contents: - -{{< ghcode `https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/refs/heads/main/examples/cafe-example/cafe.yaml`>}} - -Apply it using `kubectl`: - -```shell +Run the following command to create the file _cafe.yaml_, which is then used to deploy the *coffee* and *tea* applications to your cluster: + +```yaml +cat <<EOF > cafe.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: coffee +spec: + replicas: 1 + selector: + matchLabels: + app: coffee + template: + metadata: + labels: + app: coffee + spec: + containers: + - name: coffee + image: nginxdemos/nginx-hello:plain-text + ports: + - containerPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: coffee +spec: + ports: + - port: 80 + targetPort: 8080 + protocol: TCP + name: http + selector: + app: coffee +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: tea +spec: + replicas: 1 + selector: + matchLabels: + app: tea + template: + metadata: + labels: + app: tea + spec: + containers: + - name: tea + image: nginxdemos/nginx-hello:plain-text + ports: + - containerPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: tea +spec: + ports: + - port: 80 + targetPort: 8080 + protocol: TCP + name: http + selector: + app: tea +EOF kubectl apply -f cafe.yaml ``` @@ -163,13 +226,22 @@ tea-6fbfdcb95d-9lhbj 1/1 Running 0 9s ### Create Gateway and HTTPRoute resources -Create the file _gateway.yaml_ with the following contents: - -{{< ghcode `https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/refs/heads/main/examples/cafe-example/gateway.yaml`>}} - -Apply it using `kubectl`: - -```shell +Run the following command to create
the file _gateway.yaml_, which is then used to deploy a Gateway to your cluster: + +```yaml +cat <<EOF > gateway.yaml +apiVersion: gateway.networking.k8s.io/v1 +kind: Gateway +metadata: + name: gateway +spec: + gatewayClassName: nginx + listeners: + - name: http + port: 80 + protocol: HTTP + hostname: "*.example.com" +EOF kubectl apply -f gateway.yaml ``` @@ -190,13 +262,48 @@ gateway-nginx-66b5d78f8f-4fmtb 1/1 Running 0 13s tea-6fbfdcb95d-9lhbj 1/1 Running 0 31s ``` -Create the file _cafe-routes.yaml_ with the following contents: - -{{< ghcode `https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/refs/heads/main/examples/cafe-example/cafe-routes.yaml`>}} - -Apply it using `kubectl`: - -```shell +Run the following command to create the file _cafe-routes.yaml_. It is then used to deploy two *HTTPRoute* resources in your cluster: one each for _/coffee_ and _/tea_. + +```yaml +cat <<EOF > cafe-routes.yaml +apiVersion: gateway.networking.k8s.io/v1 +kind: HTTPRoute +metadata: + name: coffee +spec: + parentRefs: + - name: gateway + sectionName: http + hostnames: + - "cafe.example.com" + rules: + - matches: + - path: + type: PathPrefix + value: /coffee + backendRefs: + - name: coffee + port: 80 +--- +apiVersion: gateway.networking.k8s.io/v1 +kind: HTTPRoute +metadata: + name: tea +spec: + parentRefs: + - name: gateway + sectionName: http + hostnames: + - "cafe.example.com" + rules: + - matches: + - path: + type: Exact + value: /tea + backendRefs: + - name: tea + port: 80 +EOF kubectl apply -f cafe-routes.yaml ``` diff --git a/content/ngf/reference/permissions.md new file mode 100644 index 000000000..ad95e3bef --- /dev/null +++ b/content/ngf/reference/permissions.md @@ -0,0 +1,110 @@ +--- +title: Permissions +description: NGINX Gateway Fabric permissions required by components. 

+weight: 300 +toc: true +type: reference +product: NGF +--- + +## Overview + +NGINX Gateway Fabric uses a split-plane architecture with three components that require different permissions: + +- **Control Plane**: Manages Kubernetes APIs and data plane deployments. Needs broad API access but handles no user traffic. +- **Data Plane**: Processes user traffic. Requires minimal permissions since configuration comes from control plane via secure gRPC. +- **Certificate Generator**: One-time job that creates TLS certificates for inter-plane communication. + +## Security Context + +All components share these security settings: + +- **User ID**: 101 (non-root) +- **Group ID**: 1001 +- **Capabilities**: All dropped (`drop: ALL`) +- **Root Filesystem**: Read-only except for specific writable volumes +- **Seccomp**: Runtime default profile + +## Control Plane + +Runs as a single container in the `nginx-gateway` deployment. + +**Additional Security Settings:** +- **Privilege Escalation**: Disabled + +**Volumes:** +- Secret mounts for TLS certificates + +**RBAC Permissions:** +- **Secrets, ConfigMaps, Services**: Create, update, delete, list, get, watch +- **Deployments, DaemonSets**: Create, update, delete, list, get, watch +- **ServiceAccounts**: Create, update, delete, list, get, watch +- **Namespaces, Pods**: Get, list, watch +- **Events**: Create, patch +- **EndpointSlices**: List, watch +- **Gateway API resources**: List, watch (read-only) + update status subresources only +- **NGF Custom resources**: Get, list, watch (read-only) + update status subresources only +- **Leases**: Create, get, update (for leader election) +- **CustomResourceDefinitions**: List, watch +- **TokenReviews**: Create (for authentication) + +## Data Plane + +NGINX containers managed by the control plane. No RBAC permissions needed since configuration comes via secure gRPC. 
+ +**Additional Security Settings:** +- **Privilege Escalation**: Disabled +- **Sysctl**: `net.ipv4.ip_unprivileged_port_start=0` (enables binding to ports < 1024) + +**Volumes:** +- EmptyDir volumes for NGINX configuration, runtime files, logs, and cache +- Secret mounts for TLS certificates and the NGINX Plus JWT token +- Projected token mounts for service account authentication + +**Volume Permissions:** +- **EmptyDir**: Read-write (required for NGINX operation) +- **Secret/ConfigMap/Projected**: Read-only + +## Certificate Generator + +Kubernetes Job that creates initial TLS certificates. + +**RBAC Permissions:** +- **Secrets**: Create, update, get (control plane namespace only) + +## Platform-Specific Considerations + +### OpenShift Compatibility + +NGINX Gateway Fabric includes Security Context Constraints (SCCs) for OpenShift: + +**Control Plane SCC:** +- **Privilege Escalation**: Disabled +- **Host Access**: Disabled (network, IPC, PID, ports) +- **User ID Range**: 101-101 (fixed) +- **Group ID Range**: 1001-1001 (fixed) +- **Volumes**: Secret only + +**Data Plane SCC:** +Same restrictions as control plane, plus additional volume types: +- **Additional Volumes**: EmptyDir, ConfigMap, Projected + +### Linux Capabilities + +NGINX Gateway Fabric drops ALL Linux capabilities and adds none, following security best practices. 
+ +**How It Works Without Capabilities:** +- **Process Management**: Standard Unix signals (no elevated privileges needed) +- **Port Binding**: Uses sysctl `net.ipv4.ip_unprivileged_port_start=0` for ports < 1024 +- **File Operations**: Volume mounts provide necessary write access + + +## Security Features + +- **Separation of concerns**: Control plane (API access, no traffic) vs data plane (traffic, no API access) +- **Non-root execution**: All components run as unprivileged user (UID 101) +- **Zero capabilities**: All Linux capabilities dropped +- **Read-only root filesystem**: Prevents runtime modifications +- **Ephemeral storage**: Temporary volumes only, no persistent storage +- **Least privilege RBAC**: Minimal required permissions per component +- **Secure communication**: mTLS-encrypted gRPC (TLS 1.3+) between planes \ No newline at end of file diff --git a/content/nginx-one/_index.md b/content/nginx-one/_index.md index 3cfd13708..b17183e2b 100644 --- a/content/nginx-one/_index.md +++ b/content/nginx-one/_index.md @@ -19,9 +19,10 @@ F5 NGINX One Console makes it easy to manage NGINX instances across locations an [//]: # "You can add a maximum of three cards: any extra will not display." [//]: # "One card will take full width page: two will take half width each. Three will stack like an inverse pyramid." [//]: # "Some examples of content could be the latest release note, the most common install path, and a popular new feature." + {{}} {{}} - {{}} + {{}} Get up and running with NGINX One Console {{}} {{}} @@ -36,12 +37,12 @@ F5 NGINX One Console makes it easy to manage NGINX instances across locations an {{}} Manage one instance or groups of instances. Monitor certificates. Set up metrics. 
{{}} - {{}} - Assign responsibilities with role-based access control - {{}} - {{}} - Manage your NGINX fleet over REST + {{}} + Set up security policies by instance and group {{}} + {{}} + Monitor deployments for CVEs and certificates + {{}} {{}} {{}} @@ -49,6 +50,15 @@ F5 NGINX One Console makes it easy to manage NGINX instances across locations an {{}} {{}} + {{}} + Monitor deployments for CVEs and certificates + {{}} + {{}} + Assign responsibilities with role-based access control + {{}} + {{}} + Manage your NGINX fleet over REST + {{}} {{}} See latest updates: New features, improvements, and bug fixes {{}} @@ -61,33 +71,33 @@ F5 NGINX One Console makes it easy to manage NGINX instances across locations an ## NGINX One components [//]: # "You can add any extra content for the page here, such as additional cards, diagrams or text." -{{< card-layout >}} +{{}} {{< card-section title="Kubernetes Solutions">}} - {{< card title="NGINX Ingress Controller" titleUrl="/nginx-ingress-controller/" brandIcon="NGINX-Ingress-Controller-product-icon">}} + {{< card title="NGINX Ingress Controller" titleUrl="/nginx-ingress-controller/" brandIcon="NGINX-Ingress-Controller-product-icon.png">}} Kubernetes traffic management with API gateway, identity, and observability features. {{}} - {{< card title="NGINX Gateway Fabric" titleUrl="/nginx-gateway-fabric" brandIcon="NGINX-product-icon">}} + {{< card title="NGINX Gateway Fabric" titleUrl="/nginx-gateway-fabric" brandIcon="NGINX-product-icon.png">}} Next generation Kubernetes connectivity using the Gateway API. {{}} {{}} {{< card-section title="Local Console Option">}} - {{< card title="NGINX Instance Manager" titleUrl="/nginx-instance-manager" brandIcon="NGINX-Instance-Manager-product-icon">}} + {{< card title="NGINX Instance Manager" titleUrl="/nginx-instance-manager" brandIcon="NGINX-Instance-Manager-product-icon.png">}} Track and control NGINX Open Source and NGINX Plus instances. 
{{}} {{}} {{< card-section title="Modern App Delivery">}} - {{< card title="NGINX Plus" titleUrl="/nginx" brandIcon="NGINX-Plus-product-icon-RGB">}} + {{< card title="NGINX Plus" titleUrl="/nginx" brandIcon="NGINX-Plus-product-icon-RGB.png">}} The all-in-one load balancer, reverse proxy, web server, content cache, and API gateway. {{}} - {{< card title="NGINX Open Source" titleUrl="https://nginx.org" brandIcon="NGINX-product-icon">}} + {{< card title="NGINX Open Source" titleUrl="https://nginx.org" brandIcon="NGINX-product-icon.png">}} The open source all-in-one load balancer, content cache, and web server {{}} {{}} {{< card-section title="Security">}} - {{< card title="NGINX App Protect WAF" titleUrl="/nginx-app-protect-waf" brandIcon="NGINX-App-Protect-WAF-product-icon">}} + {{< card title="NGINX App Protect WAF" titleUrl="/nginx-app-protect-waf" brandIcon="NGINX-App-Protect-WAF-product-icon.png">}} Lightweight, high-performance, advanced protection against Layer 7 attacks on your apps and APIs. {{}} - {{< card title="NGINX App Protect DoS" titleUrl="/nginx-app-protect-dos" brandIcon="NGINX-App-Protect-DoS-product-icon">}} + {{< card title="NGINX App Protect DoS" titleUrl="/nginx-app-protect-dos" brandIcon="NGINX-App-Protect-DoS-product-icon.png">}} Defend, adapt, and mitigate against Layer 7 denial-of-service attacks on your apps and APIs. {{}} {{}} diff --git a/content/nginx-one/agent/configure-instance-reporting/configuration-overview.md b/content/nginx-one/agent/configure-instance-reporting/configuration-overview.md index 5bfb8e7f9..dcfb1a636 100644 --- a/content/nginx-one/agent/configure-instance-reporting/configuration-overview.md +++ b/content/nginx-one/agent/configure-instance-reporting/configuration-overview.md @@ -49,8 +49,8 @@ sudo docker run \ --env=NGINX_AGENT_LOG_LEVEL=debug \ -d agent ``` -
-NGINX Agent configuration options + +### NGINX Agent configuration options {{< bootstrap-table "table table-striped table-bordered" >}} | **Environment Variable** | **Command-Line Option** | **Description** | **Default Value** | @@ -83,5 +83,4 @@ sudo docker run \ | NGINX_AGENT_COLLECTOR_EXTENSIONS_TLS_CERT | --collector-extensions-health-tls-cert | TLS Certificate file path for communication with OTel health server. | N/A | | NGINX_AGENT_COLLECTOR_EXTENSIONS_TLS_KEY | --collector-extensions-health-tls-key | File path for TLS key used when connecting with OTel health server. | N/A | | NGINX_AGENT_COLLECTOR_PROCESSORS_BATCH_SEND_BATCH_TIMEOUT | --collector-processors-batch-send-batch-timeout | Maximum time duration for sending batch data metrics regardless of size. | 200ms -{{< /bootstrap-table >}} |% -
\ No newline at end of file +{{< /bootstrap-table >}} diff --git a/content/nginx-one/agent/install-upgrade/install-from-github.md b/content/nginx-one/agent/install-upgrade/install-from-github.md index 30795537f..38568021f 100644 --- a/content/nginx-one/agent/install-upgrade/install-from-github.md +++ b/content/nginx-one/agent/install-upgrade/install-from-github.md @@ -54,12 +54,6 @@ Use your system's package manager to install the package. Some examples: sudo apk add nginx-agent-.apk ``` -- FreeBSD - - ```shell - sudo pkg add nginx-agent-.pkg - ``` - ### Manually connect NGINX Agent to NGINX One Console {{< include "agent/installation/manually-connect-to-console" >}} diff --git a/content/nginx-one/agent/install-upgrade/install-from-oss-repo.md b/content/nginx-one/agent/install-upgrade/install-from-oss-repo.md index f362ec8d5..512fd78f9 100644 --- a/content/nginx-one/agent/install-upgrade/install-from-oss-repo.md +++ b/content/nginx-one/agent/install-upgrade/install-from-oss-repo.md @@ -81,14 +81,6 @@ NGINX Agent from the repository. {{< include "/agent/installation/oss/oss-amazon-linux.md" >}} - -
-{{< fa "brands fa-freebsd" >}} Install NGINX Agent on FreeBSD - -### Install NGINX Agent on FreeBSD - -{{< include "/agent/installation/oss/oss-freebsd.md" >}} -
### Manually connect NGINX Agent to NGINX One Console diff --git a/content/nginx-one/agent/install-upgrade/install-from-plus-repo.md b/content/nginx-one/agent/install-upgrade/install-from-plus-repo.md index eb20efab1..ffa7e446a 100644 --- a/content/nginx-one/agent/install-upgrade/install-from-plus-repo.md +++ b/content/nginx-one/agent/install-upgrade/install-from-plus-repo.md @@ -83,15 +83,6 @@ NGINX Agent from the repository. -
-{{< fa "brands fa-freebsd" >}} Install NGINX Agent on FreeBSD - -### Install NGINX Agent on FreeBSD - -{{< include "/agent/installation/plus/plus-freebsd.md" >}} - -
- ### Manually connect NGINX Agent to NGINX One Console {{< include "agent/installation/manually-connect-to-console" >}} diff --git a/content/nginx-one/agent/install-upgrade/uninstall.md b/content/nginx-one/agent/install-upgrade/uninstall.md index 3a7c95aef..a553f81d3 100644 --- a/content/nginx-one/agent/install-upgrade/uninstall.md +++ b/content/nginx-one/agent/install-upgrade/uninstall.md @@ -71,12 +71,3 @@ Complete the following steps on each host where you've installed NGINX Agent {{< include "/agent/installation/uninstall/uninstall-amazon-linux.md" >}} - -
-{{< fa "brands fa-freebsd" >}} Uninstall NGINX Agent on FreeBSD - -### Uninstall NGINX Agent on FreeBSD - -{{< include "/agent/installation/uninstall/uninstall-freebsd.md" >}} - -
diff --git a/content/nginx-one/agent/install-upgrade/update.md b/content/nginx-one/agent/install-upgrade/update.md index a6f5cfd10..37b395af6 100644 --- a/content/nginx-one/agent/install-upgrade/update.md +++ b/content/nginx-one/agent/install-upgrade/update.md @@ -1,5 +1,5 @@ --- -title: Update NGINX Agent +title: Upgrade NGINX Agent toc: true weight: 400 docs: DOCS-000 diff --git a/content/nginx-one/agent/metrics/configure-otel-metrics.md b/content/nginx-one/agent/metrics/configure-otel-metrics.md index 78824f3bc..98ab9fe81 100644 --- a/content/nginx-one/agent/metrics/configure-otel-metrics.md +++ b/content/nginx-one/agent/metrics/configure-otel-metrics.md @@ -27,7 +27,7 @@ You can validate that metrics are successfully exported by using the methods bel - **NGINX One dashboard** - - When an instance has connected to NGINX One Console [See: Connect to NGINX One Console]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}), you should see metrics showing on the NGINX One Dashboard. + - When an instance has connected to NGINX One Console [See: Connect to NGINX One Console]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}), you should see metrics showing on the NGINX One Console Dashboard. 

- **Agent logs**

diff --git a/content/nginx-one/agent/overview/_index.md b/content/nginx-one/agent/overview/_index.md
index 25d1de757..b38b31aa3 100644
--- a/content/nginx-one/agent/overview/_index.md
+++ b/content/nginx-one/agent/overview/_index.md
@@ -1,5 +1,5 @@
 ---
 title: "Overview"
 weight: 100
-url: /nginx-one/agent/install-upgrade/
----
\ No newline at end of file
+url: /nginx-one/agent/overview/
+---
diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md
index 5b3284d5e..3a1598f3f 100644
--- a/content/nginx-one/api/_index.md
+++ b/content/nginx-one/api/_index.md
@@ -1,6 +1,6 @@
 ---
 title: Automate with the NGINX One API
 description:
-weight: 700
+weight: 800
 url: /nginx-one/api
 ---
diff --git a/content/nginx-one/changelog.md b/content/nginx-one/changelog.md
index dd9cf1216..e66dd29f8 100644
--- a/content/nginx-one/changelog.md
+++ b/content/nginx-one/changelog.md
@@ -30,6 +30,24 @@ h2 {
 
 Stay up-to-date with what's new and improved in the F5 NGINX One Console.
 
+## July 15, 2025
+
+### Set up F5 NGINX App Protect WAF security policies
+
+You can now incorporate [NGINX App Protect WAF]({{< ref "/nap-waf/" >}}) in NGINX One Console UI. For details, see [Secure with NGINX App Protect]({{< ref "/nginx-one/nap-integration/" >}}).
+
+In NGINX One Console, you can:
+
+- Toggle between [Default policy bundles]({{< ref "/nap-waf/v5/configuration-guide/configuration/#updating-default-policy-bundles" >}})
+- Set a blocking or transparent [Policy enforcement mode]({{< ref "/nap-waf/v5/configuration-guide/configuration/#policy-enforcement-modes" >}})
+
+### Monitor F5 NGINX Ingress Controller deployments
+
+You can now monitor your NGINX Ingress Controller deployments. For details, see how
+you can [Connect to NGINX One Console]({{< ref "/nginx-one/k8s/add-nic.md" >}}).
+
+Unlike other NGINX instances, when you connect NGINX Ingress Controller to NGINX One Console, access is read-only.
Refer to the [NGINX Ingress Controller]({{< ref "/nic/" >}}) documentation for details on how to modify these instances.
+
 ## July 1, 2025
 
 ### NGINX Agent version 3 support
diff --git a/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md
index 5f4d251b5..31c796b67 100644
--- a/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md
+++ b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md
@@ -73,13 +73,13 @@ For more details, see [About subscription licenses]({{< ref "solutions/about-sub
 ```sh
 sudo docker run \
 --env=NGINX_LICENSE_JWT="YOUR_JWT_HERE" \
---env=NGINX_AGENT_SERVER_GRPCPORT=443 \
---env=NGINX_AGENT_SERVER_HOST=agent.connect.nginx.com \
---env=NGINX_AGENT_SERVER_TOKEN="YOUR_NGINX_ONE_DATA_PLANE_KEY_HERE" \
---env=NGINX_AGENT_TLS_ENABLE=true \
+--env=NGINX_AGENT_COMMAND_SERVER_PORT=443 \
+--env=NGINX_AGENT_COMMAND_SERVER_HOST=agent.connect.nginx.com \
+--env=NGINX_AGENT_COMMAND_AUTH_TOKEN="YOUR_DATA_PLANE_KEY_HERE" \
+--env=NGINX_AGENT_COMMAND_TLS_SKIP_VERIFY=false \
 --restart=always \
 --runtime=runc \
--d private-registry.nginx.com/nginx-plus/agent:
+-d private-registry.nginx.com/nginx-plus/agentv3:
 ```
@@ -90,13 +90,13 @@ To start the container with the `debian` image:
 
 ```sh
 sudo docker run \
 --env=NGINX_LICENSE_JWT="YOUR_JWT_HERE" \
---env=NGINX_AGENT_SERVER_GRPCPORT=443 \
---env=NGINX_AGENT_SERVER_HOST=agent.connect.nginx.com \
---env=NGINX_AGENT_SERVER_TOKEN="YOUR_NGINX_ONE_DATA_PLANE_KEY_HERE" \
---env=NGINX_AGENT_TLS_ENABLE=true \
+--env=NGINX_AGENT_COMMAND_SERVER_PORT=443 \
+--env=NGINX_AGENT_COMMAND_SERVER_HOST=agent.connect.nginx.com \
+--env=NGINX_AGENT_COMMAND_AUTH_TOKEN="YOUR_DATA_PLANE_KEY_HERE" \
+--env=NGINX_AGENT_COMMAND_TLS_SKIP_VERIFY=false \
 --restart=always \
 --runtime=runc \
--d private-registry.nginx.com/nginx-plus/agent:debian
+-d private-registry.nginx.com/nginx-plus/agentv3:debian
 ```
 
{{}}
diff --git a/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md
index c29aec28b..a4f94db76 100644
--- a/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md
+++ b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md
@@ -69,19 +69,38 @@ To set up your other NGINX instances to use the proxy instance to connect to NGI
 2. Open the NGINX Agent configuration file (**/etc/nginx-agent/nginx-agent.conf**) with a text editor.
 3. Add the following configuration. Replace `YOUR_DATA_PLANE_KEY_HERE` with your actual data plane key and `YOUR_PROXY_IP_ADDRESS_HERE` with the IP address of the NGINX proxy instance.
 
-    ```yaml
+    {{< tabs name="Configure NGINX Agent to use the proxy" >}}
+
+    {{%tab name="NGINX Agent 3.x"%}}
+
+    ```yaml
+    command:
+      server:
+        # Replace YOUR_PROXY_IP_ADDRESS_HERE with the IP address of the NGINX proxy instance.
+        host: YOUR_PROXY_IP_ADDRESS_HERE
+        port: 5000
+      auth:
+        # Replace YOUR_DATA_PLANE_KEY_HERE with your NGINX One Console data plane key.
+ token: "YOUR_DATA_PLANE_KEY_HERE" + tls: + skip_verify: False + ``` + + {{%/tab%}} + {{%tab name="NGINX Agent 2.x"%}} + ```yaml server: # Replace YOUR_DATA_PLANE_KEY_HERE with your NGINX One Data Plane Key. token: "YOUR_DATA_PLANE_KEY_HERE" # Replace YOUR_PROXY_IP_ADDRESS_HERE with the IP address of the NGINX proxy instance. host: YOUR_PROXY_IP_ADDRESS_HERE grpcPort: 5000 - command: agent.connect.nginx.com - metrics: agent.connect.nginx.com tls: - enable: true - skip_verify: false - ``` + enable: True + skip_verify: False + ``` + {{%/tab%}} + {{%/tabs%}} 4. Restart NGINX Agent: diff --git a/content/nginx-one/getting-started.md b/content/nginx-one/getting-started.md index 8f5605201..19535d36a 100644 --- a/content/nginx-one/getting-started.md +++ b/content/nginx-one/getting-started.md @@ -176,8 +176,6 @@ The `install` script writes an `nginx-agent.conf` file to the `/etc/nginx-agent/ {{< include "/nginx-one/conf/nginx-agent-conf.md" >}} - - {{}} We recommend keeping `dataplane.status.poll_interval` between `30s` and `60s` in the NGINX Agent config (`/etc/nginx-agent/nginx-agent.conf`). If the interval is set above `60s`, NGINX One Console may report incorrect instance statuses.{{}}
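The poll-interval recommendation above corresponds to a small fragment of `/etc/nginx-agent/nginx-agent.conf`; a minimal sketch, with all other settings omitted:

```yaml
# Keep the status poll interval between 30s and 60s; higher values can
# cause NGINX One Console to report stale instance statuses.
dataplane:
  status:
    poll_interval: 30s
```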
diff --git a/content/nginx-one/glossary.md b/content/nginx-one/glossary.md index 69c264c63..8a6324a99 100644 --- a/content/nginx-one/glossary.md +++ b/content/nginx-one/glossary.md @@ -3,13 +3,13 @@ description: '' nd-docs: DOCS-1396 title: Glossary toc: true -weight: 800 -type: -- reference +weight: 1000 +nd-content-type: reference --- This glossary defines terms used in the F5 NGINX One Console and F5 Distributed Cloud. +## General terms {{}} | Term | Definition | @@ -24,6 +24,14 @@ This glossary defines terms used in the F5 NGINX One Console and F5 Distributed | **Tenant** | A tenant in F5 Distributed Cloud is an entity that owns a specific set of configuration and infrastructure. It is fundamental for isolation, meaning a tenant cannot access objects or infrastructure of other tenants. Tenants can be either individual or enterprise, with the latter allowing multiple users with role-based access control (RBAC). | {{}} +## NGINX App Protect WAF terminology + +{{< include "nap-waf/config/common/nginx-app-protect-waf-terminology.md" >}} + +## NGINX Alerts + +{{< include "/nginx-one/alert-labels.md" >}} + ## Legal notice: Licensing agreements for NGINX products Using NGINX One is subject to our End User Service Agreement (EUSA). For [NGINX Plus]({{< ref "/nginx" >}}), usage is governed by the End User License Agreement (EULA). Open source projects, including [NGINX Agent](https://github.com/nginx/agent) and [NGINX Open Source](https://github.com/nginx/nginx), are covered under their respective licenses. For more details on these licenses, follow the provided links. 
diff --git a/content/nginx-one/k8s/_index.md b/content/nginx-one/k8s/_index.md new file mode 100644 index 000000000..794456588 --- /dev/null +++ b/content/nginx-one/k8s/_index.md @@ -0,0 +1,8 @@ +--- +title: Connect Kubernetes deployments +description: +weight: 700 +url: /nginx-one/k8s +nd-product: NGINX One +--- + diff --git a/content/nginx-one/k8s/add-nic.md b/content/nginx-one/k8s/add-nic.md new file mode 100644 index 000000000..2eaeb6b68 --- /dev/null +++ b/content/nginx-one/k8s/add-nic.md @@ -0,0 +1,173 @@ +--- +title: Connect to NGINX One Console +toc: true +weight: 200 +nd-content-type: how-to +nd-product: NGINX One +--- + +This document explains how to connect F5 NGINX Ingress Controller to F5 NGINX One Console using NGINX Agent. +Connecting NGINX Ingress Controller to NGINX One Console enables centralized monitoring of all controller instances. + +Once connected, you'll see a **read-only** configuration of NGINX Ingress Controller. For each instance, you can review: + +- Read-only configuration file +- Unmanaged SSL/TLS certificates for Control Planes + +## Before you begin + +Before connecting NGINX Ingress Controller to NGINX One Console, you need to create a Kubernetes Secret with the data plane key. Use the following command: + +```shell +kubectl create secret generic dataplane-key \ + --from-literal=dataplane.key= \ + -n +``` + +When you create a Kubernetes Secret, use the same namespace where NGINX Ingress Controller is running. +If you use [`-watch-namespace`]({{< ref "/nic/configuration/global-configuration/command-line-arguments.md#watch-namespace-string" >}}) or [`watch-secret-namespace`]({{< ref "/nic/configuration/global-configuration/command-line-arguments.md#watch-secret-namespace-string" >}}) arguments with NGINX Ingress Controller, +you need to add the dataplane key secret to the watched namespaces. This secret will take approximately 60 - 90 seconds to reload on the pod. 
+
+{{}}
+You can also create a data plane key through the NGINX One Console. Once logged in, select **Manage > Control Planes > Add Control Plane**, and follow the steps shown.
+{{}}
+
+## Deploy NGINX Ingress Controller with NGINX Agent
+
+{{}}
+{{%tab name="Helm"%}}
+
+Upgrade or install NGINX Ingress Controller with the following command to configure NGINX Agent and connect to NGINX One Console:
+
+- For NGINX:
+
+  ```shell
+  helm upgrade --install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} \
+  --set nginxAgent.enable=true \
+  --set nginxAgent.dataplaneKeySecretName= \
+  --set nginxAgent.endpointHost=agent.connect.nginx.com
+  ```
+
+- For NGINX Plus: (This assumes you have pushed the NGINX Ingress Controller image `nginx-plus-ingress` to your private registry `myregistry.example.com`)
+
+  ```shell
+  helm upgrade --install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} \
+  --set controller.image.repository=myregistry.example.com/nginx-plus-ingress \
+  --set controller.nginxplus=true \
+  --set nginxAgent.enable=true \
+  --set nginxAgent.dataplaneKeySecretName= \
+  --set nginxAgent.endpointHost=agent.connect.nginx.com
+  ```
+
+The `dataplaneKeySecretName` is used to authenticate the agent with NGINX One Console. See the [NGINX One Console Docs]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}})
+for instructions on how to generate your data plane key from the NGINX One Console.
+
+Follow the [Installation with Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md" >}}) instructions to deploy NGINX Ingress Controller.
+
+{{%/tab%}}
+{{%tab name="Manifests"%}}
+
+Add the following flag to the Deployment/DaemonSet file of NGINX Ingress Controller:
+
+```yaml
+args:
+- -agent=true
+```
+
+Create a `ConfigMap` with an `nginx-agent.conf` file:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: nginx-agent-config
+  namespace: 
+data:
+  nginx-agent.conf: |-
+    log:
+      # set log level (error, info, debug; default "info")
+      level: info
+      # set log path. if empty, don't log to file.
+      path: ""
+
+    allowed_directories:
+      - /etc/nginx
+      - /usr/lib/nginx/modules
+
+    features:
+      - certificates
+      - connection
+      - metrics
+      - file-watcher
+
+    ## command server settings
+    command:
+      server:
+        host: agent.connect.nginx.com
+        port: 443
+      auth:
+        tokenpath: "/etc/nginx-agent/secrets/dataplane.key"
+      tls:
+        skip_verify: false
+```
+
+Make sure to set the `namespace` in the `nginx-agent-config` ConfigMap to the same namespace as NGINX Ingress Controller.
+Mount the ConfigMap to the Deployment/DaemonSet file of NGINX Ingress Controller:
+
+```yaml
+volumeMounts:
+- name: nginx-agent-config
+  mountPath: /etc/nginx-agent/nginx-agent.conf
+  subPath: nginx-agent.conf
+- name: dataplane-key
+  mountPath: /etc/nginx-agent/secrets
+volumes:
+- name: nginx-agent-config
+  configMap:
+    name: nginx-agent-config
+- name: dataplane-key
+  secret:
+    secretName: ""
+```
+
+Follow the [Installation with Manifests]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) instructions to deploy NGINX Ingress Controller.
+
+{{%/tab%}}
+{{}}
+
+## Verify a connection to NGINX One Console
+
+After deploying NGINX Ingress Controller with NGINX Agent, you can verify the connection to NGINX One Console.
+Log in to your F5 Distributed Cloud Console account. Select **NGINX One > Visit Service**. In the dashboard, go to **Manage > Instances**. You should see your instances listed by name. The instance name matches both the hostname and the pod name.
+ +## Troubleshooting + +If you encounter issues connecting your instances to NGINX One Console, try the following commands: + +Check the NGINX Agent version: + +```shell +kubectl exec -it -n -- nginx-agent -v +``` + +If nginx-agent version is v3, continue with the following steps. +Otherwise, make sure you are using an image that does not include NGINX App Protect. + +Check the NGINX Agent configuration: + +```shell +kubectl exec -it -n -- cat /etc/nginx-agent/nginx-agent.conf +``` + +Check NGINX Agent logs: + +```shell +kubectl exec -it -n -- nginx-agent +``` + +Select the instance associated with your deployment of NGINX Ingress Controller. Under the **Details** tab, you'll see information associated with: + +- Unmanaged SSL/TLS certificates for Control Planes +- Configuration recommendations + +Under the **Configuration** tab, you'll see a **read-only** view of the configuration files. diff --git a/content/nginx-one/k8s/overview.md b/content/nginx-one/k8s/overview.md new file mode 100644 index 000000000..b2da7f2d1 --- /dev/null +++ b/content/nginx-one/k8s/overview.md @@ -0,0 +1,19 @@ +--- +# We use sentence case and present imperative tone +title: "Integrate Kubernetes control planes" +# Weights are assigned in increments of 100: determines sorting order +weight: 100 +# Creates a table of contents and sidebar, useful for large documents +toc: false +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: concept +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX One +--- + +You can now include Kubernetes systems through the [control plane](https://www.f5.com/glossary/control-plane). In related documentation, you can learn how to: + +- Set up a connection to F5 NGINX One Console through a data plane key. +- Review the NGINX Ingress Controller instances that are part of your fleet. 
+ diff --git a/content/nginx-one/nap-integration/_index.md b/content/nginx-one/nap-integration/_index.md new file mode 100644 index 000000000..b21ffdf45 --- /dev/null +++ b/content/nginx-one/nap-integration/_index.md @@ -0,0 +1,6 @@ +--- +title: Secure with NGINX App Protect +description: +weight: 400 +url: /nginx-one/nap-integration +--- diff --git a/content/nginx-one/nap-integration/configure-policy.md b/content/nginx-one/nap-integration/configure-policy.md new file mode 100644 index 000000000..da58de0c8 --- /dev/null +++ b/content/nginx-one/nap-integration/configure-policy.md @@ -0,0 +1,48 @@ +--- +# We use sentence case and present imperative tone +title: "Add and configure a policy" +# Weights are assigned in increments of 100: determines sorting order +weight: 200 +# Creates a table of contents and sidebar, useful for large documents +toc: false +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: how-to +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX One +--- + +This document describes how you can configure a security policy in the F5 NGINX One Console. When you add a policy, NGINX One Console includes several UI-based options and presets, based on NGINX App Protect WAF. + + +If you already know NGINX App Protect WAF, you can go beyond the options available in the UI. + +## Add a policy + +From NGINX One Console, select App Protect > Policies. In the screen that appears, select **Add Policy**. That action opens a screen where you can: + +- In General Settings, name and describe the policy. + - You can also set one of the following enforcement modes: + - Transparent + - Blocking + +For details, see the [Glossary]({{< ref "/nginx-one/glossary.md#nginx-app-protect-waf-terminology" >}}), specifically the entry: **Enforcement mode**. 
You'll see this in the associated configuration file,
+with the `enforcementMode` property.
+
+You can also set a character encoding. The default encoding is `Unicode (utf-8)`. To set a different character encoding, select **Show Advanced Fields** and select the **Application Language** of your choice.
+
+## Configure a policy
+
+With the NGINX One Console user interface, you get a default policy. You can also select **NGINX Strict** for a more rigorous policy:
+
+### Basic configuration and the default policy
+
+{{< include "/nap-waf/concept/basic-config-default-policy.md" >}}
+
+## Save your policy
+
+NGINX One Console includes a Policy JSON section that displays your policy in JSON format. What you configure here is written to your instance of NGINX App Protect WAF.
+
+With the **Edit** option, you can customize this policy. It opens the JSON file in a local editor. When you select **Save Policy**, it saves the latest version of what you've configured. You'll see your new policy under the name you used.
+
+From NGINX One Console, you can review the policies that you've saved, along with their versions. Select **App Protect** > **Policies**. Select the policy that you want to review or modify.
diff --git a/content/nginx-one/nap-integration/deploy-policy.md b/content/nginx-one/nap-integration/deploy-policy.md
new file mode 100644
index 000000000..884c1a86f
--- /dev/null
+++ b/content/nginx-one/nap-integration/deploy-policy.md
@@ -0,0 +1,28 @@
+---
+# We use sentence case and present imperative tone
+title: "Deploy policy"
+# Weights are assigned in increments of 100: determines sorting order
+weight: 400
+# Creates a table of contents and sidebar, useful for large documents
+toc: false
+# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this
+nd-content-type: how-to
+# Intended for internal catalogue and search, case sensitive:
+# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit
+nd-product: NGINX One
+---
+
+After you've set up a policy, it won't do anything until you deploy it to one or more instances or Config Sync Groups.
+
+This page assumes you've created a policy in NGINX One Console that you're ready to deploy.
+
+## Deploy a policy
+
+To deploy a policy from NGINX One Console, take the following steps:
+
+1. Select **App Protect** > **Policies**.
+1. Select the policy that you're ready to deploy.
+1. Select the **Details** tab.
+1. In the **Deploy Policy** window that appears, you can confirm the name of the current policy and the version to deploy. NGINX One Console defaults to the selected policy and latest version.
+1. In the **Target** section, select Instance or Config Sync Group.
+1. In the drop-down menu that appears, select the instance or Config Sync Group available in the current NGINX One Console.
diff --git a/content/nginx-one/nap-integration/overview.md b/content/nginx-one/nap-integration/overview.md new file mode 100644 index 000000000..f3c628333 --- /dev/null +++ b/content/nginx-one/nap-integration/overview.md @@ -0,0 +1,56 @@ +--- +# We use sentence case and present imperative tone +title: "NGINX App Protect integration overview" +# Weights are assigned in increments of 100: determines sorting order +weight: 100 +# Creates a table of contents and sidebar, useful for large documents +toc: false +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: concept +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX One +--- + +You can now integrate the features of F5 NGINX App Protect WAF v4 and v5 in F5 NGINX One Console. NGINX App Protect offers advanced Web Application Firewall (WAF) capabilities. +Through the NGINX One Console UI, you can now set up the [NGINX App Protect WAF]({{< ref "/nap-waf/" >}}) firewall. This solution provides robust security and scalability. + +## Features + +Once you've connected to the NGINX One Console, select **App Protect > Policies**. You can add new policies or edit existing policies, as defined in the [NGINX App Protect WAF Administration Guide]({{< ref "/nap-waf/v5/admin-guide/overview.md" >}}) + +Through the NGINX One Console UI, you can: + +- [Add and configure a policy]({{< ref "/nginx-one/nap-integration/configure-policy.md/" >}}) +- [Review existing policies]({{< ref "/nginx-one/nap-integration/review-policy.md/" >}}) +- [Deploy policies]({{< ref "/nginx-one/nap-integration/deploy-policy.md/" >}}) on instances and Config Sync Groups + +You can also set up policies through the [NGINX One Console API]({{< ref "/nginx-one/nap-integration/security-policy-api.md/" >}}). 
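When you create a policy through the API, the policy JSON must be base64-encoded first. A minimal sketch, assuming a local `policy.json` file (the file name and contents are illustrative):

```shell
# Create a tiny sample policy file (illustrative content), then encode it
# as a single-line base64 string for use in the createNapPolicy request body.
printf '{"policy":{}}' > policy.json
base64 < policy.json | tr -d '\n' > policy.b64
```

The `tr -d '\n'` keeps the output on one line regardless of how your platform's `base64` wraps long strings.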
+
+## Set up NGINX App Protect
+
+You can install and upgrade NGINX App Protect:
+
+Version 4:
+
+- [Install]({{< ref "/nap-waf/v4/admin-guide/install.md" >}})
+- [Upgrade]({{< ref "/nap-waf/v4/admin-guide/upgrade-nap-waf.md" >}})
+
+Version 5:
+
+- [Install]({{< ref "/nap-waf/v5/admin-guide/install.md" >}})
+- [Upgrade]({{< ref "/nap-waf/v5/admin-guide/upgrade-nap-waf.md" >}})
+
+### Container-related configuration requirements
+
+NGINX App Protect WAF Version 5 has specific requirements for configuration with Docker containers:
+
+- Directory associated with the volume, which you may configure in a `docker-compose.yaml` file.
+  - You may set it up with the `volumes` directive with a directory like `/etc/nginx/app_protect_policies`.
+  - You need to set up the container volume so that when the policy bundle is referenced in an `nginx` directive, the file path matches what the container sees.
+  - You also need to include an `app_protect_policy_file` directive, as described in [App Protect Specific Directives]({{< ref "/nap-waf/v5/configuration-guide/configuration.md#app-protect-specific-directives" >}})
+
+  - You'll need to set a policy bundle (in compressed tar format) in a configured `volume`.
+  - Make sure the directory for [NGINX Agent]({{< ref "/agent/configuration/" >}}) includes `/etc/nginx/app_protect_policies`.
+
+When you deploy an NGINX App Protect policy through NGINX One Console, do not also use a plain JSON policy in the same NGINX instance.
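The volume requirements above can be sketched in a `docker-compose.yaml`; the service name, image, and host path here are illustrative, not prescribed:

```yaml
services:
  nginx-app-protect:
    # Illustrative image reference; use your own registry and tag.
    image: your-registry.example.com/nap-waf:latest
    volumes:
      # Mount the host directory holding compressed policy bundles at the
      # same path the container's nginx configuration references, so that
      # app_protect_policy_file resolves inside the container.
      - ./app_protect_policies:/etc/nginx/app_protect_policies
```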
diff --git a/content/nginx-one/nap-integration/review-policy.md b/content/nginx-one/nap-integration/review-policy.md new file mode 100644 index 000000000..5ab824b2e --- /dev/null +++ b/content/nginx-one/nap-integration/review-policy.md @@ -0,0 +1,40 @@ +--- +# We use sentence case and present imperative tone +title: "Review policy" +# Weights are assigned in increments of 100: determines sorting order +weight: 300 +# Creates a table of contents and sidebar, useful for large documents +toc: false +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: how-to +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX One +--- + +Before you implement a policy on an NGINX instance or Config Sync Group, you may want to review it. F5 NGINX One Console creates a policy for your NGINX App Protect WAF system. + +## Review NGINX App Protect policies + +From NGINX One Console, select **App Protect** > **Policies**. Select the name of the policy that you want to review. You'll see the following tabs: + +- Details, which includes: + - Policy Details: Descriptions, status, enforcement type, latest version, and last deployed time. + - Deployments: List of instances and Config Sync Groups where the NGINX App Protect policy is deployed. +- Policy JSON: The policy, in JSON format. With the **Edit** button, you can modify this policy. +- Versions: Policy versions that you've written. You can apply an older policy to your deployments. + +## Modify existing policies + +From the NGINX One Console, you can also manage existing policies. In the Policies screen, identify a policy, and select **Actions**. From the menu that appears, you can: + +- **Edit** an existing policy. +- **Save As** to save an existing policy with a new name. You can use an existing policy as a baseline for further customization. 
+- **Deploy Latest Version** to apply the latest revision of an existing policy to the configured instances and Config Sync Groups. +- **Export** the policy in JSON format. +- **Delete** the policy. Once confirmed, you'll lose all work you've done on that policy. + +{{< note >}} +If you use **Save As** to create a new policy, include the `app_protect_cookie_seed` [directive]({{< ref "/nap-waf/v5/configuration-guide/configuration.md#directives" >}}). +{{< /note >}} + diff --git a/content/nginx-one/nap-integration/security-policy-api.md b/content/nginx-one/nap-integration/security-policy-api.md new file mode 100644 index 000000000..3a9b91d36 --- /dev/null +++ b/content/nginx-one/nap-integration/security-policy-api.md @@ -0,0 +1,26 @@ +--- +title: "Set security policies through the API" +weight: 700 +toc: true +type: reference +product: NGINX One +docs: DOCS-000 +--- + +You can use F5 NGINX One Console API to manage security policies. With our API, you can: + +- [List existing policies]({{< ref "/nginx-one/api/api-reference-guide/#operation/listNapPolicies" >}}) + - You can set parameters to sort policies by type. +- [Create a new policy]({{< ref "/nginx-one/api/api-reference-guide/#operation/createNapPolicy" >}}) + - You need to translate the desired policy.json file to base64 format. +- [Get policy details]({{< ref "/nginx-one/api/api-reference-guide/#operation/getNapPolicy" >}}) + - Returns details of the policy you identified with the policy `object_id`. 
+- [List NGINX App Protect Deployments]({{< ref "/nginx-one/api/api-reference-guide/#operation/listNapPolicyDeployments" >}}) + - The output includes: + - Target of the deployment + - Time of deployment + - Enforcement mode + - Policy version + - Threat campaign + - Attack signature + - Bot signature diff --git a/content/nginx-one/nginx-configs/config-sync-groups/_index.md b/content/nginx-one/nginx-configs/config-sync-groups/_index.md index eaefeaea3..96d90f3e3 100644 --- a/content/nginx-one/nginx-configs/config-sync-groups/_index.md +++ b/content/nginx-one/nginx-configs/config-sync-groups/_index.md @@ -2,5 +2,5 @@ description: title: Change multiple instances with one push weight: 400 -url: /nginx-one/config-sync-groups +url: /nginx-one/nginx-configs/config-sync-groups --- diff --git a/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md index 5db71008b..404d22c23 100644 --- a/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md +++ b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md @@ -81,7 +81,7 @@ When you plan Config Sync Groups, consider the following factors: - **Single Config Sync Group membership**: You can add an instance to only one Config Sync Group. -- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified Config Sync Group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. 
+- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `nginx-agent.conf` (or `agent-dynamic.conf` if you are using NGINX Agent 2.x) file, which contains settings for the NGINX Agent, including the specified Config Sync Group. This file is typically located in `/etc/nginx-agent/` on most systems. - **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same Config Sync Group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. @@ -104,6 +104,28 @@ Any instance that joins the group afterwards inherits that configuration. You can add existing NGINX instances that are already registered with NGINX One to a Config Sync Group. +{{< tabs name="Add existing instance to Config Sync Group" >}} + +{{%tab name="NGINX Agent 3.x"%}} + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/etc/nginx-agent/nginx-agent.conf` file in a text editor. +3. Find or create the `labels` section and set the `config-sync-group` label to the name of the Config Sync Group. + + ``` text + labels: + config-sync-group: + ``` + +4. Restart NGINX Agent: + + ``` shell + sudo systemctl restart nginx-agent + ``` + +{{%/tab%}} +{{%tab name="NGINX Agent 2.x"%}} + 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. 3. At the end of the file, add a new line beginning with `instance_group:`, followed by the Config Sync Group name.
@@ -118,6 +140,9 @@ You can add existing NGINX instances that are already registered with NGINX One sudo systemctl restart nginx-agent ``` +{{%/tab%}} +{{< /tabs >}} + ### Add a new instance to a Config Sync Group {#add-a-new-instance-to-a-config-sync-group} When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it. @@ -185,6 +210,29 @@ For more details on creating and managing data plane keys, see [Create and manag If you need to move an NGINX instance to a different Config Sync Group, follow these steps: +{{< tabs name="Move instance to Config Sync Group" >}} + +{{%tab name="NGINX Agent 3.x"%}} + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/etc/nginx-agent/nginx-agent.conf` file in a text editor. +3. Find the `labels` section and change the `config-sync-group` label to the name of the new Config Sync Group. + + ``` text + labels: + config-sync-group: + ``` + +4. Restart NGINX Agent by running the following command: + + ```shell + sudo systemctl restart nginx-agent + ``` + +{{%/tab%}} +{{%tab name="NGINX Agent 2.x"%}} + + 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. 3. Locate the line that begins with `instance_group:` and change it to the name of the new Config Sync Group. @@ -199,12 +247,39 @@ If you need to move an NGINX instance to a different Config Sync Group, follow t sudo systemctl restart nginx-agent ``` +{{%/tab%}} +{{< /tabs >}} + + If you move an instance with certificates from one Config Sync Group to another, NGINX One adds or removes those certificates from the data plane, to synchronize with the deployed certificates of the group.
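For the NGINX Agent 3.x steps above, the label change can also be scripted instead of edited by hand. The following is a minimal sketch, assuming GNU `sed` and placeholder group names; it operates on a scratch copy rather than the real `/etc/nginx-agent/nginx-agent.conf`:

```shell
# Demo on a scratch copy of the Agent 3.x config file;
# "old-group" and "new-group" are placeholder names.
conf=$(mktemp)
printf 'labels:\n  config-sync-group: old-group\n' > "$conf"

# Point the config-sync-group label at the new Config Sync Group
sed -i 's/^\([[:space:]]*config-sync-group:\).*/\1 new-group/' "$conf"

cat "$conf"
```

On a real instance, you would run the `sed` command against `/etc/nginx-agent/nginx-agent.conf` and then restart NGINX Agent (`sudo systemctl restart nginx-agent`) so the new label takes effect.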
### Remove an instance from a Config Sync Group If you need to remove an NGINX instance from a Config Sync Group without adding it to another group, follow these steps: + +{{< tabs name="Remove instance from Config Sync Group" >}} + +{{%tab name="NGINX Agent 3.x"%}} + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/etc/nginx-agent/nginx-agent.conf` file in a text editor. +3. Find the `labels` section and either remove the `config-sync-group` line or comment it out by adding a `#` at the beginning of the line. + + ```text + labels: + # config-sync-group: + ``` + +4. Restart NGINX Agent: + + ```shell + sudo systemctl restart nginx-agent + ``` + +{{%/tab%}} +{{%tab name="NGINX Agent 2.x"%}} + 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. 3. Locate the line that begins with `instance_group:` and either remove it or comment it out by adding a `#` at the beginning of the line. @@ -219,6 +294,10 @@ If you need to remove an NGINX instance from a Config Sync Group without adding sudo systemctl restart nginx-agent ``` +{{%/tab%}} +{{< /tabs >}} + + By removing or commenting out this line, the instance will no longer be associated with any Config Sync Group.
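The comment-out approach from the NGINX Agent 3.x tab can be sketched the same way, again on a scratch copy with a placeholder group name (assumes GNU `sed`):

```shell
# Demo of commenting out the label so the instance leaves its group;
# on a real instance, edit /etc/nginx-agent/nginx-agent.conf instead.
conf=$(mktemp)
printf 'labels:\n  config-sync-group: demo-group\n' > "$conf"

# Prefix the config-sync-group line with '#'
sed -i 's/^\([[:space:]]*\)\(config-sync-group:.*\)/\1# \2/' "$conf"

cat "$conf"
```

As with the other changes, restart NGINX Agent afterwards (`sudo systemctl restart nginx-agent`) for the removal to take effect.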
## Publish the Config Sync Group configuration {#publish-the-config-sync-group-configuration} diff --git a/content/nginx-one/nginx-configs/staged-configs/_index.md b/content/nginx-one/nginx-configs/staged-configs/_index.md index 1305546f1..ddb5f35fb 100644 --- a/content/nginx-one/nginx-configs/staged-configs/_index.md +++ b/content/nginx-one/nginx-configs/staged-configs/_index.md @@ -2,5 +2,5 @@ description: title: Draft new configurations weight: 400 -url: /nginx-one/staged-configs +url: /nginx-one/nginx-configs/staged-configs --- diff --git a/content/nginx/admin-guide/dynamic-modules/nginx-waf.md b/content/nginx/admin-guide/dynamic-modules/nginx-waf.md index 4564ebfb6..96d2c1a60 100644 --- a/content/nginx/admin-guide/dynamic-modules/nginx-waf.md +++ b/content/nginx/admin-guide/dynamic-modules/nginx-waf.md @@ -11,7 +11,9 @@ type: {{< note >}} The `nginx-plus-module-modsecurity` package is no longer available in the NGINX Plus repository.{{< /note >}} -The ModSecurity WAF module was deprecated since [NGINX Plus Release 29]({{< ref "/nginx/releases.md#r29" >}}), and is no longer available since [NGINX Plus Release 32]({{< ref "/nginx/releases.md#r32" >}}). +NGINX ModSecurity WAF officially reached End-of-Sale status on April 1, 2022 ([NGINX Plus Release 29]({{< ref "/nginx/releases.md#r29" >}})), and End-of-Life status on March 31, 2024 ([NGINX Plus Release 32]({{< ref "/nginx/releases.md#r32" >}})). + +For more details, see [this blog announcement](https://www.f5.com/company/blog/nginx/f5-nginx-modsecurity-waf-transitioning-to-eol). To remove the module, follow the [Uninstalling a Dynamic Module]({{< ref "uninstall.md" >}}) instructions.
diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md index b47f900e3..7bb2a5ffa 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md @@ -88,61 +88,13 @@ where: - the `jq` command is used to format the JSON output for easier reading and requires the [jq](https://jqlang.github.io/jq/) JSON processor to be installed. +### Download your subscription credential files +{{< include "use-cases/credential-download-instructions.md" >}} -### Download the JSON Web Token or NGINX Plus certificate and key {#myf5-download} +### Set up Docker for the F5 Container Registry -Before you get a container image, you should provide the JSON Web Token file or SSL certificate and private key files provided with your NGINX Plus subscription. These files grant access to the package repository from which the script will download the NGINX Plus package: - -{{}} - -{{%tab name="JSON Web Token"%}} -{{< include "licensing-and-reporting/download-jwt-from-myf5.md" >}} -{{% /tab %}} - -{{%tab name="SSL"%}} -1. Log in to the [MyF5](https://my.f5.com) customer portal. -2. Go to **My Products and Plans** > **Subscriptions**. -3. Select the product subscription. -4. Download the **SSL Certificate** and **Private Key** files. -{{% /tab %}} - -{{% /tabs %}} - -### Set up Docker for NGINX Plus container registry - -Set up Docker to communicate with the NGINX Container Registry located at `private-registry.nginx.com`. - -{{}} - -{{%tab name="JSON Web Token"%}} -Open the JSON Web Token file previously downloaded from [MyF5](https://my.f5.com) customer portal (for example, `nginx-repo-12345abc.jwt`) and copy its contents. 
- -Log in to the docker registry using the contents of the JSON Web Token file: - -```shell -docker login private-registry.nginx.com --username= --password=none -``` -{{% /tab %}} - -{{%tab name="SSL"%}} -Create a directory and copy your certificate and key to this directory: - -```shell -mkdir -p /etc/docker/certs.d/private-registry.nginx.com -cp /etc/docker/certs.d/private-registry.nginx.com/client.cert -cp /etc/docker/certs.d/private-registry.nginx.com/client.key -``` -The steps provided are for Linux. For Mac or Windows, see the [Docker for Mac](https://docs.docker.com/docker-for-mac/#add-client-certificates) or [Docker for Windows](https://docs.docker.com/docker-for-windows/#how-do-i-add-client-certificates) documentation. For more details on Docker Engine security, you can refer to the [Docker Engine Security documentation](https://docs.docker.com/engine/security/). - -Log in to the docker registry: - -```shell -docker login private-registry.nginx.com -``` -{{% /tab %}} - -{{% /tabs %}} +{{< include "use-cases/docker-registry-instructions.md" >}} ### Pull the image @@ -192,7 +144,6 @@ For NGINX modules, run: docker pull private-registry.nginx.com/nginx-plus/modules: ``` - {{< include "security/jwt-password-note.md" >}} ### Push the image to your private registry @@ -335,14 +286,14 @@ To generate a custom NGINX Plus image: - no files are copied from the Docker host as a container is created: you can add `COPY` definitions to each Dockerfile, or the image you create can be used as the basis for another image -3. Log in to [MyF5 Customer Portal](https://account.f5.com/myf5) and download your *nginx-repo.crt* and *nginx-repo.key* files. For a trial of NGINX Plus, the files are provided with your trial package. +3. Log in to [MyF5 Customer Portal](https://account.f5.com/myf5). As noted in the [Prerequisites](#prerequisites), download your *nginx-repo.crt*, *nginx-repo.key*, and **JSON Web Token** files.
For a trial of NGINX Plus, the files are provided with your trial package. 4. Copy the files to the directory where the Dockerfile is located. 5. Create a Docker image, for example, `nginxplus` (note the final period in the command). ```shell - docker build --no-cache --secret id=nginx-key,src=nginx-repo.key --secret id=nginx-crt,src=nginx-repo.crt -t nginxplus . + docker build --no-cache --secret id=nginx-key,src=nginx-repo.key --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-jwt,src=license.jwt -t nginxplus . ``` The `--no-cache` option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus. If the Dockerfile was previously used to build an image without the `--no-cache` option, the new image uses the version of NGINX Plus from the previously built image from the Docker cache. diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md index 497ab477d..a16e00a16 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md @@ -38,7 +38,7 @@ To quickly set up an NGINX Plus environment on AWS: /etc/init.d/nginx status ``` -See [NGINX Plus on the AWS Cloud Quick Start](https://aws.amazon.com/quickstart/architecture/nginx-plus/) deployment guide for details. +See [NGINX Plus on the AWS Cloud Quick Start](https://aws.amazon.com/blogs/apn/introducing-a-new-aws-quick-start-nginx-plus-on-the-aws-cloud-in-15-minutes/) deployment guide for details. ## What If I Need Help? 
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md index d014bc302..92092d1fd 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md @@ -16,7 +16,7 @@ See [Single Sign-On With Microsoft AD FS]({{< ref "nginx/deployment-guides/singl This guide explains how to enable single sign-on (SSO) for applications being proxied by F5 NGINX Plus. The solution uses OpenID Connect as the authentication mechanism, with [Microsoft Active Directory Federation Services](https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) (AD FS) as the identity provider (IdP) and NGINX Plus as the relying party. -{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} ## Prerequisites diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/auth0.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/auth0.md index b789fb2f8..91f56a61a 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/auth0.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/auth0.md @@ -16,7 +16,7 @@ See [Single Sign-On With Auth0]({{< ref "nginx/deployment-guides/single-sign-on/ You can use F5 NGINX Plus with [Auth0](https://auth0.com/) and OpenID Connect to enable single sign-on (SSO) for your proxied applications. By following the steps in this guide, you will learn how to set up SSO using OpenID Connect as the authentication mechanism, with Auth0 as the identity provider (IdP), and NGINX Plus as the relying party. 
-{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} ## Prerequisites diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md index 6a1d3fa91..639e915f3 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md @@ -16,7 +16,7 @@ See [Single Sign-On With Amazon Cognito]({{< ref "nginx/deployment-guides/single This guide explains how to enable single sign‑on (SSO) for applications being proxied by F5 NGINX Plus. The solution uses OpenID Connect as the authentication mechanism, with [Amazon Cognito](https://aws.amazon.com/cognito/) as the identity provider (IdP), and NGINX Plus as the relying party. -{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md index 9bfd26e66..a983bd9b2 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md @@ -16,7 +16,7 @@ See [Single Sign-On With Keycloak]({{< ref "nginx/deployment-guides/single-sign- This guide explains how to enable single sign-on (SSO) for applications being proxied by F5 NGINX Plus. The solution uses OpenID Connect as the authentication mechanism, with [Keycloak](https://www.keycloak.org/) as the identity provider (IdP), and NGINX Plus as the relying party. 
-{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/okta.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/okta.md index 6b83eb437..27e9d0048 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/okta.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/okta.md @@ -16,7 +16,7 @@ See [Single Sign-On With Okta]({{< ref "nginx/deployment-guides/single-sign-on/o You can use NGINX Plus with Okta and OpenID Connect to enable single sign-on (SSO) for your proxied applications. By following the steps in this guide, you will learn how to set up SSO using OpenID Connect as the authentication mechanism, with Okta as the identity provider (IdP), and NGINX Plus as the relying party. -{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} ## Prerequisites diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md index e5560f8d5..a021ed537 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md @@ -16,7 +16,7 @@ See [Single Sign-On With OneLogin]({{< ref "nginx/deployment-guides/single-sign- You can use NGINX Plus with [OneLogin](https://www.onelogin.com/) and the OpenID Connect protocol to enable single sign-on (SSO) for your proxied applications. By following the steps in this guide, you will learn how to set up SSO using OpenID Connect as the authentication mechanism, with OneLogin as the identity provider (IdP) and NGINX Plus as the relying party. 
-{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} ## Prerequisites diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md index 3332bb3fb..709ae5dcf 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md @@ -18,7 +18,7 @@ This guide explains how to enable single sign-on (SSO) for applications being pr The instructions in this document apply to both Ping Identity's on‑premises and cloud products, PingFederate and PingOne for Enterprise. -{{< see-also >}}{{< readfile file="includes/nginx-openid-repo-note.txt" markdown="true" >}}{{< /see-also >}} +{{< see-also >}}{{}}{{< /see-also >}} ## Prerequisites diff --git a/content/nginxaas-azure/_index.md b/content/nginxaas-azure/_index.md index 0d8c6a9a9..84acdce9a 100644 --- a/content/nginxaas-azure/_index.md +++ b/content/nginxaas-azure/_index.md @@ -1,11 +1,67 @@ --- title: NGINXaaS for Azure -description: 'NGINX as a Service for Azure is an IaaS offering that is tightly integrated - into Microsoft Azure public cloud and its ecosystem, making applications fast, efficient, - and reliable with full lifecycle management of advanced NGINX traffic services. - - ' +nd-subtitle: Infrastructure-as-a-Service (IaaS) version of NGINX Plus for your Microsoft Azure application stack url: /nginxaas/azure/ +nd-landing-page: true cascade: logo: NGINX-for-Azure-icon.svg +nd-content-type: landing-page +nd-product: N4Azure --- + + +## About + +NGINX as a Service for Azure is an IaaS offering that is tightly integrated +into Microsoft Azure public cloud and its ecosystem, making applications fast, efficient, +and reliable with full lifecycle management of advanced NGINX traffic services. 
+ +## Featured content + +{{}} + {{}} + {{}} + Deploy NGINX as a Service for Azure using the Azure portal, Azure CLI, or Terraform + {{}} + {{}} + Step-by-step guides for several common use cases, including scaling guidance, security controls, and more + {{}} + {{}} + Collect, correlate, and analyze metrics for a thorough understanding of your application's health and behavior + {{}} + {{}} +{{}} + +### Billing + +{{}} + {{}} + + {{}} + See the pricing plans and learn about NGINX Capacity Units (NCUs) + {{}} + {{}} +{{}} + +### Certificates + +{{}} + {{}} + {{}} + Learn to manage SSL/TLS certificates using the Azure portal + {{}} + {{}} +{{}} + +### More information + +{{}} + {{}} + {{}} + Learn about the differences between NGINX as a Service for Azure and NGINX Plus + {{}} + {{}} + See the latest updates: New features, improvements, and bug fixes + {{}} + {{}} +{{}} diff --git a/content/nginxaas-azure/getting-started/nginx-configuration/overview.md b/content/nginxaas-azure/getting-started/nginx-configuration/overview.md index 48ca51ef6..6d3f4910a 100644 --- a/content/nginxaas-azure/getting-started/nginx-configuration/overview.md +++ b/content/nginxaas-azure/getting-started/nginx-configuration/overview.md @@ -74,7 +74,9 @@ Some directives cannot be overridden by the user provided configuration. ## NGINX listen port restrictions -- Due to port restrictions on Azure Load Balancer health probes, ports `19`, `21`, `70`, and `119` are not allowed. The NGINXaaS deployment can listen on all other ports. +- Due to port restrictions on Azure Load Balancer health probes, certain ports are not allowed for the `listen` directive in NGINX configuration.
The following ports are blocked: + - `19`, `21`, `70`, `119` - Azure health probe restricted ports + - `49151`, `49153`, `5140`, `50000`, `54141`, `54779` - reserved ports to support other NGINXaaS features - The [Basic]({{< ref "/nginxaas-azure/billing/overview.md#basic-plan" >}}) plan (and the deprecated Standard (v1) plan) supports a maximum of 5 listen ports in the NGINX configuration. Configurations that specify over 5 unique ports are rejected. diff --git a/content/nginxaas-azure/monitoring/metrics-catalog.md b/content/nginxaas-azure/monitoring/metrics-catalog.md index b31d887c9..8055ce342 100644 --- a/content/nginxaas-azure/monitoring/metrics-catalog.md +++ b/content/nginxaas-azure/monitoring/metrics-catalog.md @@ -34,19 +34,19 @@ The metrics are categorized by the namespace used in Azure Monitor. The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -| --------------------- | -------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | -| ncu.provisioned | | count | The number of successfully provisioned NCUs during the aggregation interval. During scaling events, this may lag behind `ncu.requested` as the system works to achieve the request. Available for Standard plan(s) only. | deployment | -| ncu.requested | | count | The requested number of NCUs during the aggregation interval. Describes the goal state of the system. Available for Standard plans(s) only. | deployment | -| nginxaas.capacity.percentage | | count | The percentage of the deployment's total capacity being used. This can be used to guide scaling your workload. 
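To illustrate the restriction above, a server block like the following stays within the allowed ports, while uncommenting either blocked `listen` line would cause the configuration to be rejected (the ports and server name are examples only):

```nginx
server {
    listen 80;            # allowed
    listen 443 ssl;       # allowed
    server_name example.com;

    # listen 21;          # rejected: Azure health probe restricted port
    # listen 49151;       # rejected: reserved to support NGINXaaS features
}
```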
See [Scaling Guidance]({{< ref "/nginxaas-azure/quickstart/scaling.md#iterative-approach" >}}) for details. Available for Standard plan(s) only. | deployment | -| system.worker_connections | pid process_name | count | The number of nginx worker connections used on the dataplane. This metric is one of the factors which determines the deployment's consumed NCU value. | deployment | -| nginxaas.certificates | name status | count | The number of certificates added to the NGINXaaS deployment dimensioned by the name of the certificate and its status. Refer to [Certificate Health]({{< ref "/nginxaas-azure/getting-started/ssl-tls-certificates/overview.md#monitor-certificates" >}}) to learn more about the status dimension. | deployment | -| nginxaas.maxmind | status | count | The status of any MaxMind license in use for downloading geoip2 databases. Refer to [License Health]({{< ref "/nginxaas-azure/quickstart/geoip2.md#monitoring" >}}) to learn more about the status dimension. | deployment | -| waf.enabled | | count | Current status of Web Application Firewall on the deployment. | deployment | -| ports.used | | count | The number of listen ports used by the deployment during the aggregation interval. | deployment | -| system.listener_backlog.max| listen_addr, file_desc | count | The fullness (expressed as a fraction) of the fullest backlog queue. | deployment | -| system.listener_backlog.length| listen_address, file_desc | count | The number of items in a specific backlog queue, labelled by listen address. | deployment | -| system.listener_backlog.queue_limit| listen_address, file_desc | count | The capacity of a specific backlog queue, labelled by listen address. 
| deployment | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +| --------------------- | --------------------------- | -------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | +| ncu.provisioned | NCU provisioned | | count | The number of successfully provisioned NCUs during the aggregation interval. During scaling events, this may lag behind `ncu.requested` as the system works to achieve the request. Available for Standard plan(s) only. | deployment | +| ncu.requested | NCU requested | | count | The requested number of NCUs during the aggregation interval. Describes the goal state of the system. Available for Standard plan(s) only. | deployment | +| nginxaas.capacity.percentage | NGINXaaS capacity percentage | | count | The percentage of the deployment's total capacity being used. This can be used to guide scaling your workload. See [Scaling Guidance]({{< ref "/nginxaas-azure/quickstart/scaling.md#iterative-approach" >}}) for details. Available for Standard plan(s) only. | deployment | +| system.worker_connections | Worker connections | pid process_name | count | The number of nginx worker connections used on the dataplane. This metric is one of the factors which determines the deployment's consumed NCU value. | deployment | +| nginxaas.certificates | Certificates | name status | count | The number of certificates added to the NGINXaaS deployment dimensioned by the name of the certificate and its status. Refer to [Certificate Health]({{< ref "/nginxaas-azure/getting-started/ssl-tls-certificates/overview.md#monitor-certificates" >}}) to learn more about the status dimension.
| deployment | +| nginxaas.maxmind | MaxMind status | status | count | The status of any MaxMind license in use for downloading geoip2 databases. Refer to [License Health]({{< ref "/nginxaas-azure/quickstart/geoip2.md#monitoring" >}}) to learn more about the status dimension. | deployment | +| waf.enabled | Web application firewall enabled | | count | Current status of Web Application Firewall on the deployment. | deployment | +| ports.used | Ports used | | count | The number of listen ports used by the deployment during the aggregation interval. | deployment | +| system.listener_backlog.max | Max listener backlog | listen_addr, file_desc | count | The fullness (expressed as a fraction) of the fullest backlog queue. | deployment | +| system.listener_backlog.queue_limit | Listener backlog queue limit | listen_address, file_desc | count | The capacity of a specific backlog queue, labelled by listen address. | deployment | +| system.listener_backlog.length | Listener backlog length | listen_address, file_desc | count | The number of items in a specific backlog queue, labelled by listen address. | deployment | {{}} @@ -56,13 +56,13 @@ The metrics are categorized by the namespace used in Azure Monitor. The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|------------------------------|----------------|----------|---------------------------------------------------------------------------------------------------------------|-----------------| -| nginx.conn.accepted | build version | count | Accepted Connections The total number of accepted client connections during the aggregation interval. | deployment | -| nginx.conn.dropped | build version | count | Dropped Connections The total number of dropped client connections during the aggregation interval. | deployment | -| nginx.conn.active | build version | count | Active Connections The average number of active client connections during the aggregation interval.
| deployment | -| nginx.conn.idle | build version | count | Idle Connections The average number of idle client connections during the aggregation interval. | deployment | -| nginx.conn.current | build version | count | Current Connections The average number of active and idle client connections during the aggregation interval. | deployment | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|------------------------------|------------------|----------------|----------|---------------------------------------------------------------------------------------------------------------|-----------------| +| nginx.conn.accepted | Accepted connections | build version | count | Accepted Connections The total number of accepted client connections during the aggregation interval. | deployment | +| nginx.conn.dropped | Dropped connections | build version | count | Dropped Connections The total number of dropped client connections during the aggregation interval. | deployment | +| nginx.conn.active | Active connections | build version | count | Active Connections The average number of active client connections during the aggregation interval. | deployment | +| nginx.conn.idle | Idle connections | build version | count | Idle Connections The average number of idle client connections during the aggregation interval. | deployment | +| nginx.conn.current | Current connections | build version | count | Current Connections The average number of active and idle client connections during the aggregation interval. | deployment | {{}} @@ -70,37 +70,37 @@ The metrics are categorized by the namespace used in Azure Monitor. 
The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| nginx.http.request.count | build version | count | HTTP Requests The total number of HTTP requests during the aggregation interval. | deployment | -| nginx.http.request.current | build version | count | Current Requests The number of current requests during the aggregation interval. | deployment | -| nginx.http.limit_conns.passed | build version limit_conn_zone | count | Limit Conn Zone Passed HTTP Connections The total number of connections that were neither limited nor accounted as limited during the aggregation interval. | limit conn zone | -| nginx.http.limit_conns.rejected | build version limit_conn_zone | count | Limit Conn Zone Rejected HTTP Connections The total number of connections that were rejected during the aggregation interval. | limit conn zone | -| nginx.http.limit_conns.rejected_dry_run| build version limit_conn_zone | count | Limit Conn Zone Rejected HTTP Connections In The Dry Run Mode The total number of connections accounted as rejected in the dry run mode during the aggregation interval. | limit conn zone | -| nginx.http.limit_reqs.passed | build version limit_req_zone | count | Limit Req Zone Passed HTTP Requests Rate The total number of requests that were neither limited nor accounted as limited during the aggregation interval. | limit req zone | -| nginx.http.limit_reqs.delayed | build version limit_req_zone | count | Limit Req Zone Delayed HTTP Requests Rate The total number of requests that were delayed during the aggregation interval. 
| limit req zone | -| nginx.http.limit_reqs.rejected | build version limit_req_zone | count | Limit Req Zone Rejected HTTP Requests Rate The total number of requests that were rejected during the aggregation interval. | limit req zone | -| nginx.http.limit_reqs.delayed_dry_run | build version limit_req_zone | count | Limit Req Zone Delayed HTTP Requests Rate In The Dry Run Mode The total number of requests accounted as delayed in the dry run mode during the aggregation interval. | limit req zone | -| nginx.http.limit_reqs.rejected_dry_run | build version limit_req_zone | count | Limit Req Zone Rejected HTTP Requests Rate In The Dry Run Mode The total number of requests accounted as rejected in the dry run mode during the aggregation interval. | limit req zone | -| plus.http.request.count | build version server_zone | count | Server Zone HTTP Requests The total number of HTTP requests during the aggregation interval. | server zone | -| plus.http.response.count | build version server_zone | count | Server Zone HTTP Responses The total number of HTTP responses during the aggregation interval. | server zone | -| plus.http.status.1xx | build version server_zone | count | Server Zone HTTP 1xx Responses The total number of HTTP responses with a 1xx status code during the aggregation interval. | server zone | -| plus.http.status.2xx | build version server_zone | count | Server Zone HTTP 2xx Responses The total number of HTTP responses with a 2xx status code during the aggregation interval. | server zone | -| plus.http.status.3xx | build version server_zone | count | Server Zone HTTP 3xx Responses The total number of HTTP responses with a 3xx status code during the aggregation interval. | server zone | -| plus.http.status.4xx | build version server_zone | count | Server Zone HTTP 4xx Responses The total number of HTTP responses with a 4xx status code during the aggregation interval. 
| server zone | -| plus.http.status.5xx | build version server_zone | count | Server Zone HTTP 5xx Responses The total number of HTTP responses with a 5xx status code during the aggregation interval. | server zone | -| plus.http.status.processing | build version server_zone | avg | Server Zone Status Processing The number of client requests that are currently being processed. | server zone | -| plus.http.request.bytes_rcvd | build version server_zone | count | Server Zone Bytes Received The total number of bytes received from clients during the aggregation interval. | server zone | -| plus.http.request.bytes_sent | build version server_zone | count | Server Zone Bytes Sent The total number of bytes sent to clients during the aggregation interval. | server zone | -| plus.http.request.count | build version location_zone | count | Location Zone HTTP Requests The total number of HTTP requests during the aggregation interval. | location zone | -| plus.http.response.count | build version location_zone | count | Location Zone HTTP Responses The total number of HTTP responses in the aggregation interval. | location zone | -| plus.http.status.1xx | build version location_zone | count | Location Zone HTTP 1xx Responses The total number of HTTP responses with a 1xx status code during the aggregation interval. | location zone | -| plus.http.status.2xx | build version location_zone | count | Location Zone HTTP 2xx Responses The total number of HTTP responses with a 2xx status code during the aggregation interval. | location zone | -| plus.http.status.3xx | build version location_zone | count | Location Zone HTTP 3xx Responses The total number of HTTP responses with a 3xx status code during the aggregation interval. | location zone | -| plus.http.status.4xx | build version location_zone | count | Location Zone HTTP 4xx Responses The total number of HTTP responses with a 4xx status code during the aggregation interval. 
| location zone | -| plus.http.status.5xx | build version location_zone | count | Location Zone HTTP 5xx Responses The total number of HTTP responses with a 5xx status code during the aggregation interval. | location zone | -| plus.http.request.bytes_rcvd | build version location_zone | count | Location Zone Bytes Received The total number of bytes received from clients during the aggregation interval. | location zone | -| plus.http.request.bytes_sent | build version location_zone | count | Location Zone Bytes Sent The total number of bytes sent to clients during the aggregation interval. | location zone | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| nginx.http.request.count | Total HTTP requests | build version | count | The total number of HTTP requests during the aggregation interval. | deployment | +| nginx.http.request.current | Current HTTP requests | build version | count | The number of current requests during the aggregation interval. | deployment | +| nginx.http.limit_conns.passed | HTTP limit conn passed | build version limit_conn_zone | count | The total number of connections that were neither limited nor accounted as limited during the aggregation interval. | limit conn zone | +| nginx.http.limit_conns.rejected | HTTP limit conn rejected | build version limit_conn_zone | count | The total number of connections that were rejected during the aggregation interval.
| limit conn zone | +| nginx.http.limit_conns.rejected_dry_run | HTTP limit conn rejected dry-run | build version limit_conn_zone | count | The total number of connections accounted as rejected in the dry run mode during the aggregation interval. | limit conn zone | +| nginx.http.limit_reqs.passed | HTTP limit requests passed | build version limit_req_zone | count | The total number of requests that were neither limited nor accounted as limited during the aggregation interval. | limit req zone | +| nginx.http.limit_reqs.delayed | HTTP limit requests delayed | build version limit_req_zone | count | The total number of requests that were delayed during the aggregation interval. | limit req zone | +| nginx.http.limit_reqs.rejected | HTTP limit requests rejected | build version limit_req_zone | count | The total number of requests that were rejected during the aggregation interval. | limit req zone | +| nginx.http.limit_reqs.delayed_dry_run | HTTP limit requests delayed dry-run | build version limit_req_zone | count | The total number of requests accounted as delayed in the dry run mode during the aggregation interval. | limit req zone | +| nginx.http.limit_reqs.rejected_dry_run | HTTP limit requests rejected dry-run | build version limit_req_zone | count | The total number of requests accounted as rejected in the dry run mode during the aggregation interval. | limit req zone | +| plus.http.request.count | Server zone HTTP requests | build version server_zone | count | The total number of HTTP requests during the aggregation interval.
| server zone | +| plus.http.response.count | Server zone HTTP responses | build version server_zone | count | The total number of HTTP responses during the aggregation interval. | server zone | +| plus.http.status.1xx | Server zone HTTP 1xx responses | build version server_zone | count | The total number of HTTP responses with a 1xx status code during the aggregation interval. | server zone | +| plus.http.status.2xx | Server zone HTTP 2xx responses | build version server_zone | count | The total number of HTTP responses with a 2xx status code during the aggregation interval. | server zone | +| plus.http.status.3xx | Server zone HTTP 3xx responses | build version server_zone | count | The total number of HTTP responses with a 3xx status code during the aggregation interval. | server zone | +| plus.http.status.4xx | Server zone HTTP 4xx responses | build version server_zone | count | The total number of HTTP responses with a 4xx status code during the aggregation interval. | server zone | +| plus.http.status.5xx | Server zone HTTP 5xx responses | build version server_zone | count | The total number of HTTP responses with a 5xx status code during the aggregation interval. | server zone | +| plus.http.status.processing | Server zone HTTP status processing | build version server_zone | avg | The number of client requests that are currently being processed. | server zone | +| plus.http.request.bytes_rcvd | Server zone HTTP bytes received | build version server_zone | count | The total number of bytes received from clients during the aggregation interval.
| server zone | +| plus.http.request.bytes_sent | Server zone HTTP bytes sent | build version server_zone | count | The total number of bytes sent to clients during the aggregation interval. | server zone | +| plus.http.request.location_zone.count | Location zone HTTP requests | build version location_zone | count | The total number of HTTP requests during the aggregation interval. | location zone | +| plus.http.response.location_zone.count | Location zone HTTP responses | build version location_zone | count | The total number of HTTP responses in the aggregation interval. | location zone | +| plus.http.status.location_zone.1xx | Location zone HTTP 1xx responses | build version location_zone | count | The total number of HTTP responses with a 1xx status code during the aggregation interval. | location zone | +| plus.http.status.location_zone.2xx | Location zone HTTP 2xx responses | build version location_zone | count | The total number of HTTP responses with a 2xx status code during the aggregation interval. | location zone | +| plus.http.status.location_zone.3xx | Location zone HTTP 3xx responses | build version location_zone | count | The total number of HTTP responses with a 3xx status code during the aggregation interval. | location zone | +| plus.http.status.location_zone.4xx | Location zone HTTP 4xx responses | build version location_zone | count | The total number of HTTP responses with a 4xx status code during the aggregation interval. | location zone | +| plus.http.status.location_zone.5xx | Location zone HTTP 5xx responses | build version location_zone | count | The total number of HTTP responses with a 5xx status code during the aggregation interval.
| location zone | +| plus.http.request.location_zone.bytes_rcvd | Location zone HTTP bytes received | build version location_zone | count | The total number of bytes received from clients during the aggregation interval. | location zone | +| plus.http.request.location_zone.bytes_sent | Location zone HTTP bytes sent | build version location_zone | count | The total number of bytes sent to clients during the aggregation interval. | location zone | {{}} @@ -108,31 +108,31 @@ The metrics are categorized by the namespace used in Azure Monitor. The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| plus.ssl.failed | build version | count | The total number of failed SSL handshakes during the aggregation interval. | deployment | -| plus.ssl.handshakes | build version | count |The total number of successful SSL handshakes during the aggregation interval. | deployment | -| plus.ssl.reuses | build version | count |The total number of session reuses during SSL handshakes in the aggregation interval. | deployment | -| plus.ssl.no_common_protocol | build version | count |The number of SSL handshakes failed because of no common protocol during the aggregation interval. | deployment | -| plus.ssl.no_common_cipher | build version | count |The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | deployment | -| plus.ssl.handshake_timeout | build version | count | The number of SSL handshakes failed because of a timeout during the aggregation interval.
| deployment | -| plus.ssl.peer_rejected_cert | build version | count |The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | deployment | -| plus.ssl.verify_failures.no_cert | build version | count | SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. | deployment | -| plus.ssl.verify_failures.expired_cert | build version | count |SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | deployment | -| plus.ssl.verify_failures.revoked_cert | build version | count |SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | deployment | -| plus.ssl.verify_failures.hostname_mismatch | build version | count |SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. | deployment| -| plus.ssl.verify_failures.other | build version | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | deployment | -| plus.http.ssl.handshakes | build version server_zone | count |The total number of successful SSL handshakes during the aggregation interval. | server zone | -| plus.http.ssl.handshakes.failed | build version server_zone | count | The total number of failed SSL handshakes during the aggregation interval. | server zone | -| plus.http.ssl.session.reuses | build version server_zone | count |The total number of session reuses during SSL handshakes in the aggregation interval. | server zone | -| plus.http.ssl.no_common_protocol | build version server_zone | count |The number of SSL handshakes failed because of no common protocol during the aggregation interval. 
| server zone | -| plus.http.ssl.no_common_cipher | build version server_zone | count |The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | server zone | -| plus.http.ssl.handshake_timeout | build version server_zone | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | server zone | -| plus.http.ssl.peer_rejected_cert | build version server_zone | count |The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | server zone | -| plus.http.ssl.verify_failures.no_cert | build version server_zone | count | SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. | server zone | -| plus.http.ssl.verify_failures.expired_cert | build version server_zone | count |SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | server zone | -| plus.http.ssl.verify_failures.revoked_cert | build version server_zone | count |SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | server zone | -| plus.http.ssl.verify_failures.other | build version server_zone | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. 
| server zone | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| plus.ssl.failed | Failed SSL handshakes | build version | count | The total number of failed SSL handshakes during the aggregation interval. | deployment | +| plus.ssl.handshakes | Successful SSL handshakes | build version | count | The total number of successful SSL handshakes during the aggregation interval. | deployment | +| plus.ssl.reuses | SSL session reuses | build version | count | The total number of session reuses during SSL handshakes in the aggregation interval. | deployment | +| plus.ssl.no_common_protocol | Handshakes failed - no common protocol | build version | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. | deployment | +| plus.ssl.no_common_cipher | Handshakes failed - no shared cipher | build version | count | The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | deployment | +| plus.ssl.handshake_timeout | Handshakes failed - timeout | build version | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | deployment | +| plus.ssl.peer_rejected_cert | Handshakes failed - certificate rejected | build version | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | deployment | +| plus.ssl.verify_failures.no_cert | Cert verify failures - no cert | build version | count | SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. 
| deployment | +| plus.ssl.verify_failures.expired_cert | Cert verify failures - expired cert | build version | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | deployment | +| plus.ssl.verify_failures.revoked_cert | Cert verify failures - revoked cert | build version | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | deployment | +| plus.ssl.verify_failures.hostname_mismatch | Cert verify failures - hostname mismatch | build version | count | SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. | deployment | +| plus.ssl.verify_failures.other | Cert verify failures - other | build version | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | deployment | +| plus.http.ssl.handshakes | HTTP successful SSL handshakes | build version server_zone | count | The total number of successful SSL handshakes during the aggregation interval. | server zone | +| plus.http.ssl.handshakes.failed | HTTP failed SSL handshakes | build version server_zone | count | The total number of failed SSL handshakes during the aggregation interval. | server zone | +| plus.http.ssl.session.reuses | HTTP SSL session reuses | build version server_zone | count | The total number of session reuses during SSL handshakes in the aggregation interval. | server zone | +| plus.http.ssl.no_common_protocol | Handshakes failed - no common protocol | build version server_zone | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. 
| server zone | +| plus.http.ssl.no_common_cipher | Handshakes failed - no shared cipher | build version server_zone | count | The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | server zone | +| plus.http.ssl.handshake_timeout | Handshakes failed - timeout | build version server_zone | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | server zone | +| plus.http.ssl.peer_rejected_cert | Handshakes failed - certificate rejected | build version server_zone | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | server zone | +| plus.http.ssl.verify_failures.no_cert | Verify failures - no certificate | build version server_zone | count | SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. | server zone | +| plus.http.ssl.verify_failures.expired_cert | Verify failures - expired cert | build version server_zone | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | server zone | +| plus.http.ssl.verify_failures.revoked_cert | Verify failures - revoked cert | build version server_zone | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | server zone | +| plus.http.ssl.verify_failures.other | Verify failures - other | build version server_zone | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | server zone | {{}} @@ -140,29 +140,29 @@ The metrics are categorized by the namespace used in Azure Monitor. 
The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| plus.cache.hit.ratio | build version cache_zone | avg | Cache Hit Ratio The average ratio of cache hits to misses during the aggregation interval. | cache zone | -| plus.cache.size | build version cache_zone | avg | Cache Size The average size of the cache during the aggregation interval. | cache zone | -| plus.cache.max_size | build version cache_zone | max | Cache Max Size The max size of the cache during the aggregation interval. | cache zone | -| plus.cache.hit.responses | build version cache_zone | count | The total number of responses that were served from the cache during the aggregation interval. | cache zone | -| plus.cache.hit.bytes | build version cache_zone | count | The total number of bytes served from the cache during the aggregation interval. | cache zone | -| plus.cache.stale.responses | build version cache_zone | count | The total number of responses served from stale cache content during the aggregation interval. | cache zone | -| plus.cache.stale.bytes | build version cache_zone | count | The total number of bytes served from stale cache content during the aggregation interval. | cache zone | -| plus.cache.updating.responses | build version cache_zone | count | The total number of responses served from the cache while the cache is being updated during the aggregation interval. | cache zone | -| plus.cache.updating.bytes | build version cache_zone | count | The total number of bytes served from the cache while the cache is being updated during the aggregation interval. 
| cache zone | -| plus.cache.revalidated.responses | build version cache_zone | count | The total number of cache responses that were successfully revalidated with the origin server during the aggregation interval. | cache zone | -| plus.cache.revalidated.bytes | build version cache_zone | count | The total number of bytes served from the cache after successful revalidation with the origin server during the aggregation interval. | cache zone | -| plus.cache.miss.responses | build version cache_zone | count | The total number of responses that were not served from the cache (cache misses) during the aggregation interval. | cache zone | -| plus.cache.miss.bytes | build version cache_zone | count | The total number of bytes served from the origin server due to cache misses during the aggregation interval. | cache zone | -| plus.cache.expired.responses | build version cache_zone | count | The total number of cache responses that expired and had to be refreshed from the origin server during the aggregation interval. | cache zone | -| plus.cache.expired.bytes | build version cache_zone | count | The total number of bytes served from the cache after expiration and refresh from the origin server during the aggregation interval. | cache zone | -| plus.cache.expired.responses_written | build version cache_zone | count | The total number of expired cache responses that were refreshed and written back to the cache during the aggregation interval. | cache zone | -| plus.cache.expired.bytes_written | build version cache_zone | count | The total number of bytes written back to the cache after expiration and refresh from the origin server during the aggregation interval. | cache zone | -| plus.cache.bypass.responses | build version cache_zone | count | The total number of responses that bypassed the cache during the aggregation interval. 
| cache zone | -| plus.cache.bypass.bytes | build version cache_zone | count | The total number of bytes served by bypassing the cache during the aggregation interval. | cache zone | -| plus.cache.bypass.responses_written | build version cache_zone | count | The total number of responses that bypassed the cache and were written back to the cache during the aggregation interval. | cache zone | -| plus.cache.bypass.bytes_written | build version cache_zone | count | The total number of bytes that bypassed the cache and were written back to the cache during the aggregation interval. | cache zone | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| plus.cache.hit.ratio | Cache hit ratio | build version cache_zone | avg | The average ratio of cache hits to misses during the aggregation interval. | cache zone | +| plus.cache.size | Cache size | build version cache_zone | avg | The average size of the cache during the aggregation interval. | cache zone | +| plus.cache.max_size | Cache max size | build version cache_zone | max | The max size of the cache during the aggregation interval. | cache zone | +| plus.cache.hit.responses | Cache hit responses | build version cache_zone | count | The total number of responses that were served from the cache during the aggregation interval. | cache zone | +| plus.cache.hit.bytes | Cache hit bytes | build version cache_zone | count | The total number of bytes served from the cache during the aggregation interval.
| cache zone | +| plus.cache.stale.responses | Cache stale responses | build version cache_zone | count | The total number of responses served from stale cache content during the aggregation interval. | cache zone | +| plus.cache.stale.bytes | Cache stale bytes | build version cache_zone | count | The total number of bytes served from stale cache content during the aggregation interval. | cache zone | +| plus.cache.updating.responses | Cache updating responses | build version cache_zone | count | The total number of responses served from the cache while the cache is being updated during the aggregation interval. | cache zone | +| plus.cache.updating.bytes | Cache updating bytes | build version cache_zone | count | The total number of bytes served from the cache while the cache is being updated during the aggregation interval. | cache zone | +| plus.cache.revalidated.responses | Cache revalidated responses | build version cache_zone | count | The total number of cache responses that were successfully revalidated with the origin server during the aggregation interval. | cache zone | +| plus.cache.revalidated.bytes | Cache revalidated bytes | build version cache_zone | count | The total number of bytes served from the cache after successful revalidation with the origin server during the aggregation interval. | cache zone | +| plus.cache.miss.responses | Cache miss responses | build version cache_zone | count | The total number of responses that were not served from the cache (cache misses) during the aggregation interval. | cache zone | +| plus.cache.miss.bytes | Cache miss bytes | build version cache_zone | count | The total number of bytes served from the origin server due to cache misses during the aggregation interval. | cache zone | +| plus.cache.expired.responses | Cache expired responses | build version cache_zone | count | The total number of cache responses that expired and had to be refreshed from the origin server during the aggregation interval. 
| cache zone | +| plus.cache.expired.bytes | Cache expired bytes | build version cache_zone | count | The total number of bytes served from the cache after expiration and refresh from the origin server during the aggregation interval. | cache zone | +| plus.cache.expired.responses_written | Cache expired responses written | build version cache_zone | count | The total number of expired cache responses that were refreshed and written back to the cache during the aggregation interval. | cache zone | +| plus.cache.expired.bytes_written | Cache expired bytes written | build version cache_zone | count | The total number of bytes written back to the cache after expiration and refresh from the origin server during the aggregation interval. | cache zone | +| plus.cache.bypass.responses | Cache bypass responses | build version cache_zone | count | The total number of responses that bypassed the cache during the aggregation interval. | cache zone | +| plus.cache.bypass.bytes | Cache bypass bytes | build version cache_zone | count | The total number of bytes served by bypassing the cache during the aggregation interval. | cache zone | +| plus.cache.bypass.responses_written | Cache bypass responses written | build version cache_zone | count | The total number of responses that bypassed the cache and were written back to the cache during the aggregation interval. | cache zone | +| plus.cache.bypass.bytes_written | Cache bypass bytes written | build version cache_zone | count | The total number of bytes that bypassed the cache and were written back to the cache during the aggregation interval. | cache zone | {{}} @@ -170,14 +170,14 @@ The metrics are categorized by the namespace used in Azure Monitor. 
The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| plus.worker.conn.accepted| build version worker_id | count |The total number of client connections accepted by the worker process during the aggregation interval. | worker | -| plus.worker.conn.dropped| build version worker_id | count |The total number of client connections dropped by the worker process during the aggregation interval. | worker | -| plus.worker.conn.active| build version worker_id | count | The current number of active client connections that are currently being handled by the worker process during the aggregation interval. | worker | -| plus.worker.conn.idle| build version worker_id | count |The number of idle client connections that are currently being handled by the worker process during the aggregation interval. | worker | -| plus.worker.http.request.total | build version worker_id | count | The total number of client requests received by the worker process during the aggregation interval. | worker | -| plus.worker.http.request.current | build version worker_id | count | The current number of client requests that are currently being processed by the worker process during the aggregation interval. 
| worker| +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|-------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| plus.worker.conn.accepted | Worker connections accepted | build version worker_id | count | The total number of client connections accepted by the worker process during the aggregation interval. | worker | +| plus.worker.conn.dropped | Worker connections dropped | build version worker_id | count | The total number of client connections dropped by the worker process during the aggregation interval. | worker | +| plus.worker.conn.active | Active worker connections | build version worker_id | count | The number of active client connections currently being handled by the worker process during the aggregation interval. | worker | +| plus.worker.conn.idle | Idle worker connections | build version worker_id | count | The number of idle client connections currently being handled by the worker process during the aggregation interval. | worker | +| plus.worker.http.request.total | Total worker HTTP requests | build version worker_id | count | The total number of client requests received by the worker process during the aggregation interval. | worker | +| plus.worker.http.request.current | Current worker HTTP requests | build version worker_id | count | The number of client requests currently being processed by the worker process during the aggregation interval. | worker | {{}} @@ -185,58 +185,58 @@ The metrics are categorized by the namespace used in Azure Monitor.
The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|-----------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| plus.http.upstream.peers.conn.active | build version upstream peer.address peer.name | count | Upstream Server Active Connections The number of active client connections during the aggregation interval. | upstream server | -| plus.http.upstream.peers.request.count | build version upstream peer.address peer.name | count | Upstream Server HTTP Requests The total number of HTTP requests during the aggregation interval. | upstream server | -| plus.http.upstream.peers.response.count | build version upstream peer.address peer.name | count | Upstream Server HTTP Responses The total number of HTTP responses during the aggregation interval. | upstream server | -| plus.http.upstream.peers.status.1xx | build version upstream peer.address peer.name | count | Upstream Server HTTP 1xx Responses The total number of HTTP responses with a 1xx status code during the aggregation interval. | upstream server | -| plus.http.upstream.peers.status.2xx | build version upstream peer.address peer.name | count | Upstream Server HTTP 2xx Responses The total number of HTTP responses with a 2xx status code during the aggregation interval. | upstream server | -| plus.http.upstream.peers.status.3xx | build version upstream peer.address peer.name | count | Upstream Server HTTP 3xx Responses The total number of HTTP responses with a 3xx status code during the aggregation interval. | upstream server | -| plus.http.upstream.peers.status.4xx | build version upstream peer.address peer.name | count | Upstream Server HTTP 4xx Responses The total number of HTTP responses with a 4xx status code during the aggregation interval. 
| upstream server | -| plus.http.upstream.peers.status.5xx | build version upstream peer.address peer.name | count | Upstream Server HTTP 5xx Responses The total number of HTTP responses with a 5xx status code during the aggregation interval. | upstream server | -| plus.http.upstream.peers.request.bytes_sent | build version upstream peer.address peer.name | count | | upstream server | -| plus.http.upstream.peers.request.bytes_rcvd | build version upstream peer.address peer.name | count | | upstream server | -| plus.http.upstream.peers.state.up | build version upstream peer.address peer.name | boolean | Upstream Server State Up Current state of upstream servers in deployment. If all upstream servers in the deployment are up, then the value will be 1. If any upstream server is not up, then the value will be 0. | upstream peer | -| plus.http.upstream.peers.state.draining | build version upstream peer.address peer.name | boolean | Upstream Server State Draining Current state of upstream servers in deployment. If any of the upstream servers in the deployment are draining, then the value will be 1. If no upstream server is draining, then the value will be 0. | upstream peer | -| plus.http.upstream.peers.state.down | build version upstream peer.address peer.name | boolean | Upstream Server State Down Current state of upstream servers in deployment. If any of the upstream servers in the deployment are down, then the value will be 1. If no upstream server is down, then the value will be 0. | upstream peer | -| plus.http.upstream.peers.state.unavail | build version upstream peer.address peer.name | boolean | Upstream Server State Unavailable Current state of upstream servers in deployment. If any of the upstream servers in the deployment are unavailable, then the value will be 1. If no upstream server is unavailable, then the value will be 0. 
| upstream peer | -| plus.http.upstream.peers.state.checking | build version upstream peer.address peer.name | boolean | Upstream Server State Check Current state of upstream servers in deployment. If any of the upstream servers in the deployment is being checked then the value will be 1. If no upstream server is being checked then the value will be 0. | upstream peer | -| plus.http.upstream.peers.state.unhealthy | build version upstream peer.address peer.name | boolean | Upstream Server State Unhealthy Current state of upstream servers in deployment. If any of the upstream servers in the deployment are unhealthy then the value will be 1. If no upstream server is unhealthy then the value will be 0. | upstream peer | -| plus.http.upstream.peers.fails | build version upstream peer.address peer.name | count | Upstream Server Fails The total number of unsuccessful attempts to communicate with the server during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.unavail | build version upstream peer.address peer.name | count | Upstream Server Unavailable The number of times the server became unavailable for client requests (state “unavail”) due to the number of unsuccessful attempts reaching the [max_fails](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails) threshold during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.health_checks.checks | build version upstream peer.address peer.name | count | Upstream Server Health Checks The total number of [health check](https://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html#health_check) requests made during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.health_checks.fails | build version upstream peer.address peer.name | count | Upstream Server Health Checks Fails The number of failed health checks during the aggregation interval. 
| upstream peer | -| plus.http.upstream.peers.health_checks.unhealthy | build version upstream peer.address peer.name | count | Upstream Server Health Checks Unhealthy How many times the server became unhealthy (state “unhealthy”) during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.health_checks.last_passed | build version upstream peer.address peer.name | boolean | Upstream Server Health Checks Last Pass last_passed (boolean) indicating if the last health check request was successful and passed [tests](https://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html#match). | upstream peer | -| plus.http.upstream.peers.downstart | build version upstream peer.address peer.name | timestamp | Upstream Server Downstart The time when the server became “unavail”, “checking”, or “unhealthy”, as a UTC timestamp. | upstream peer | -| plus.http.upstream.peers.response.time | build version upstream peer.address peer.name | avg | Upstream Server Response Time The average time to get the [full response](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time) from the server during the aggregation interval. 
| upstream server | -| plus.http.upstream.peers.header.time | build version upstream peer.address peer.name | avg | Upstream Server Header Time The average time to get the [response header](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_header_time) from the server | upstream server | -| plus.http.upstream.zombies | build version | avg | Upstream Zombies The current number of servers removed from the group but still processing active client requests | deployment | -| plus.http.upstream.keepalives | build version | count | Upstream Keepalive Connections The current number of idle [keepalive](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) connections | deployment | -| plus.http.upstream.queue.maxsize | build version | avg | Upstream Queue Max Size The maximum number of requests that can be in the queue at the same time | deployment | -| plus.http.upstream.queue.overflows | build version | sum | Upstream Queue Overflows The total number of requests rejected due to the queue overflow | deployment | -| plus.http.upstream.queue.size | build version | avg | Upstream Queue Size The current number of requests in the queue | deployment | -| plus.http.upstream.peers.ssl.handshakes | build version upstream peer.address peer.name | count | The total number of successful SSL handshakes during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.handshakes.failed | build version upstream peer.address peer.name | count |The total number of failed SSL handshakes during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.session.reuses | build version upstream peer.address peer.name | count |The total number of session reuses during SSL handshake in the aggregation interval. 
| upstream peer | -| plus.http.upstream.peers.ssl.no_common_protocol | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.handshake_timeout | build version upstream peer.address peer.name | count |The number of SSL handshakes failed because of a timeout during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.peer_rejected_cert | build version upstream peer.address peer.name | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.verify_failures.expired_cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.verify_failures.revoked_cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.verify_failures.hostname_mismatch | build version upstream peer.address peer.name | count | SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. | upstream peer | -| plus.http.upstream.peers.ssl.verify_failures.other | build version upstream peer.address peer.name | count |SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.handshakes | build version upstream peer.address peer.name | count |The total number of successful SSL handshakes during the aggregation interval. 
| upstream peer | -| plus.stream.upstream.peers.ssl.handshakes.failed | build version upstream peer.address peer.name | count | The total number of failed SSL handshakes during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.session.reuses | build version upstream peer.address peer.name | count | The total number of session reuses during SSL handshake in the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.no_common_protocol | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.handshake_timeout | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.peer_rejected_cert | build version upstream peer.address peer.name | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.verify_failures.expired_cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.verify_failures.revoked_cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | upstream peer | -| plus.stream.upstream.peers.ssl.verify_failures.hostname_mismatch | build version upstream peer.address peer.name | count | SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. 
| upstream peer | -| plus.stream.upstream.peers.ssl.verify_failures.other | build version upstream peer.address peer.name | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | upstream peer | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|-----------------------------------|-------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| plus.http.upstream.peers.conn.active | Upstream active connections | build version upstream peer.address peer.name | count | Upstream Server Active Connections The number of active client connections during the aggregation interval. | upstream server | +| plus.http.upstream.peers.request.count | Upstream HTTP requests | build version upstream peer.address peer.name | count | Upstream Server HTTP Requests The total number of HTTP requests during the aggregation interval. | upstream server | +| plus.http.upstream.peers.response.count | Upstream server HTTP responses | build version upstream peer.address peer.name | count | Upstream Server HTTP Responses The total number of HTTP responses during the aggregation interval. | upstream server | +| plus.http.upstream.peers.status.1xx | Upstream server HTTP 1xx responses | build version upstream peer.address peer.name | count | Upstream Server HTTP 1xx Responses The total number of HTTP responses with a 1xx status code during the aggregation interval. | upstream server | +| plus.http.upstream.peers.status.2xx | Upstream server HTTP 2xx responses | build version upstream peer.address peer.name | count | Upstream Server HTTP 2xx Responses The total number of HTTP responses with a 2xx status code during the aggregation interval. 
| upstream server | +| plus.http.upstream.peers.status.3xx | Upstream server HTTP 3xx responses | build version upstream peer.address peer.name | count | Upstream Server HTTP 3xx Responses The total number of HTTP responses with a 3xx status code during the aggregation interval. | upstream server | +| plus.http.upstream.peers.status.4xx | Upstream server HTTP 4xx responses | build version upstream peer.address peer.name | count | Upstream Server HTTP 4xx Responses The total number of HTTP responses with a 4xx status code during the aggregation interval. | upstream server | +| plus.http.upstream.peers.status.5xx | Upstream server HTTP 5xx responses | build version upstream peer.address peer.name | count | Upstream Server HTTP 5xx Responses The total number of HTTP responses with a 5xx status code during the aggregation interval. | upstream server | +| plus.http.upstream.peers.request.bytes_sent | Upstream server request bytes sent | build version upstream peer.address peer.name | count | The total number of bytes sent in HTTP requests during the aggregation interval. | upstream server | +| plus.http.upstream.peers.request.bytes_rcvd | Upstream server request bytes received | build version upstream peer.address peer.name | count | The total number of bytes received in HTTP requests during the aggregation interval. | upstream server | +| plus.http.upstream.peers.state.up | Upstream server state up | build version upstream peer.address peer.name | boolean | Upstream Server State Up Current state of upstream servers in deployment. If all upstream servers in the deployment are up, then the value will be 1. If any upstream server is not up, then the value will be 0. | upstream peer | +| plus.http.upstream.peers.state.draining | Upstream server state draining | build version upstream peer.address peer.name | boolean | Upstream Server State Draining Current state of upstream servers in deployment. 
If any of the upstream servers in the deployment are draining, then the value will be 1. If no upstream server is draining, then the value will be 0. | upstream peer | +| plus.http.upstream.peers.state.down | Upstream server state down | build version upstream peer.address peer.name | boolean | Upstream Server State Down Current state of upstream servers in deployment. If any of the upstream servers in the deployment are down, then the value will be 1. If no upstream server is down, then the value will be 0. | upstream peer | +| plus.http.upstream.peers.state.unavail | Upstream server state unavailable | build version upstream peer.address peer.name | boolean | Upstream Server State Unavailable Current state of upstream servers in deployment. If any of the upstream servers in the deployment are unavailable, then the value will be 1. If no upstream server is unavailable, then the value will be 0. | upstream peer | +| plus.http.upstream.peers.state.checking | Upstream server state checking | build version upstream peer.address peer.name | boolean | Upstream Server State Check Current state of upstream servers in deployment. If any of the upstream servers in the deployment is being checked then the value will be 1. If no upstream server is being checked then the value will be 0. | upstream peer | +| plus.http.upstream.peers.state.unhealthy | Upstream server state unhealthy | build version upstream peer.address peer.name | boolean | Upstream Server State Unhealthy Current state of upstream servers in deployment. If any of the upstream servers in the deployment are unhealthy then the value will be 1. If no upstream server is unhealthy then the value will be 0. | upstream peer | +| plus.http.upstream.peers.fails | Upstream server fails | build version upstream peer.address peer.name | count | Upstream Server Fails The total number of unsuccessful attempts to communicate with the server during the aggregation interval. 
| upstream peer | +| plus.http.upstream.peers.unavail | Upstream server unavailable | build version upstream peer.address peer.name | count | Upstream Server Unavailable The number of times the server became unavailable for client requests (state “unavail”) due to the number of unsuccessful attempts reaching the [max_fails](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails) threshold during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.health_checks.checks | Upstream server health checks | build version upstream peer.address peer.name | count | Upstream Server Health Checks The total number of [health check](https://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html#health_check) requests made during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.health_checks.fails | Upstream server health checks fails | build version upstream peer.address peer.name | count | Upstream Server Health Checks Fails The number of failed health checks during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.health_checks.unhealthy | Upstream server health checks unhealthy | build version upstream peer.address peer.name | count | Upstream Server Health Checks Unhealthy How many times the server became unhealthy (state “unhealthy”) during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.health_checks.last_passed | Upstream server health checks last pass | build version upstream peer.address peer.name | boolean | Upstream Server Health Checks Last Pass last_passed (boolean) indicating if the last health check request was successful and passed [tests](https://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html#match). 
| upstream peer | +| plus.http.upstream.peers.downstart | Upstream server downstart | build version upstream peer.address peer.name | timestamp | Upstream Server Downstart The time when the server became “unavail”, “checking”, or “unhealthy”, as a UTC timestamp. | upstream peer | +| plus.http.upstream.peers.response.time | Upstream server response time | build version upstream peer.address peer.name | avg | Upstream Server Response Time The average time to get the [full response](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time) from the server during the aggregation interval. | upstream server | +| plus.http.upstream.peers.header.time | Upstream server header time | build version upstream peer.address peer.name | avg | Upstream Server Header Time The average time to get the [response header](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_header_time) from the server | upstream server | +| plus.http.upstream.zombies | Upstream zombies | build version | avg | Upstream Zombies The current number of servers removed from the group but still processing active client requests | deployment | +| plus.http.upstream.keepalives | Upstream keepalive connections | build version | count | Upstream Keepalive Connections The current number of idle [keepalive](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) connections | deployment | +| plus.http.upstream.queue.maxsize | Upstream queue max size | build version | avg | Upstream Queue Max Size The maximum number of requests that can be in the queue at the same time | deployment | +| plus.http.upstream.queue.overflows | Upstream queue overflows | build version | sum | Upstream Queue Overflows The total number of requests rejected due to the queue overflow | deployment | +| plus.http.upstream.queue.size | Upstream queue size | build version | avg | Upstream Queue Size The current number of requests in the queue | deployment | +| 
plus.http.upstream.peers.ssl.handshakes | Upstream SSL handshakes | build version upstream peer.address peer.name | count | The total number of successful SSL handshakes during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.handshakes.failed | Upstream SSL handshakes failed | build version upstream peer.address peer.name | count | The total number of failed SSL handshakes during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.session.reuses | Upstream SSL session reuses | build version upstream peer.address peer.name | count | The total number of session reuses during SSL handshake in the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.no_common_protocol | Upstream SSL no common protocol | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.handshake_timeout | Upstream SSL handshake timeout | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.peer_rejected_cert | SSL handshake failed - rejected cert | build version upstream peer.address peer.name | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.verify_failures.expired_cert | SSL verify failures - expired cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. 
| upstream peer | +| plus.http.upstream.peers.ssl.verify_failures.revoked_cert | SSL verify failures - revoked cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.verify_failures.hostname_mismatch | SSL verify failures - hostname mismatch | build version upstream peer.address peer.name | count | SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. | upstream peer | +| plus.http.upstream.peers.ssl.verify_failures.other | SSL verify failures - other | build version upstream peer.address peer.name | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.handshakes | Stream SSL handshakes total | build version upstream peer.address peer.name | count | The total number of successful SSL handshakes during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.handshakes.failed | Stream SSL handshakes failed | build version upstream peer.address peer.name | count | The total number of failed SSL handshakes during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.session.reuses | Stream SSL session reuses | build version upstream peer.address peer.name | count | The total number of session reuses during SSL handshake in the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.no_common_protocol | Stream SSL no common protocol | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval.
| upstream peer | +| plus.stream.upstream.peers.ssl.handshake_timeout | Stream SSL handshake timeout | build version upstream peer.address peer.name | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.peer_rejected_cert | Stream SSL handshake failed - rejected cert | build version upstream peer.address peer.name | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.verify_failures.expired_cert | Stream verify failure - expired cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.verify_failures.revoked_cert | Stream verify failure - revoked cert | build version upstream peer.address peer.name | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.verify_failures.hostname_mismatch | Stream verify failure - hostname mismatch | build version upstream peer.address peer.name | count | SSL certificate verification errors - server's certificate doesn't match the hostname during the aggregation interval. | upstream peer | +| plus.stream.upstream.peers.ssl.verify_failures.other | Stream verify failure - other | build version upstream peer.address peer.name | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | upstream peer | {{}} @@ -244,15 +244,15 @@ The metrics are categorized by the namespace used in Azure Monitor.
The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| system.cpu| | count | System CPU Utilization. | deployment | -| system.interface.bytes_rcvd| interface | count | System Interface Bytes Received. | deployment | -| system.interface.bytes_sent| interface | count | System Interface Bytes Sent. | deployment | -| system.interface.packets_rcvd| interface | count | System Interface Packets Received. | deployment | -| system.interface.packets_sent| interface | count | System Interface Packets Sent. | deployment | -| system.interface.total_bytes| interface | count | System Interface Total Bytes, sum of bytes_sent and bytes_rcvd. | deployment | -| system.interface.egress_throughput| interface | count | System Interface Egress Throughput, i.e. bytes sent per second| deployment | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| system.cpu| CPU utilization | | count | System CPU Utilization. | deployment | +| system.interface.bytes_rcvd| Interface bytes received | interface | count | System Interface Bytes Received. | deployment | +| system.interface.bytes_sent| Interface bytes sent | interface | count | System Interface Bytes Sent. | deployment | +| system.interface.packets_rcvd| Interface packets received | interface | count | System Interface Packets Received. | deployment | +| system.interface.packets_sent| Interface packets sent | interface | count | System Interface Packets Sent. 
| deployment | +| system.interface.total_bytes| Interface total bytes | interface | count | System Interface Total Bytes, sum of bytes_sent and bytes_rcvd. | deployment | +| system.interface.egress_throughput| Interface egress throughput | interface | count | System Interface Egress Throughput, i.e. bytes sent per second| deployment | {{}} @@ -260,55 +260,55 @@ The metrics are categorized by the namespace used in Azure Monitor. The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|----------------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| -| plus.stream.limit_conns.passed | build, version, limit_conn_zone | count | The total number of connections that were neither limited nor accounted as limited. | limit conn zone | -| plus.stream.limit_conns.rejected | build, version, limit_conn_zone | count | The total number of connections that were rejected. | limit conn zone | -| plus.stream.limit_conns.rejected_dry_run | build, version, limit_conn_zone | count | The total number of connections accounted as rejected in the dry run mode. | limit conn zone | -| plus.stream.request.bytes_rcvd | build, version, server_zone | count | The total number of bytes received from clients. | server zone | -| plus.stream.request.bytes_sent | build, version, server_zone | count | The total number of bytes sent to clients. | server zone | -| plus.stream.status.2xx | build, version, server_zone | count | The total number of sessions completed with status codes “2xx”. | server zone | -| plus.stream.status.4xx | build, version, server_zone | count | The total number of sessions completed with status codes “4xx”. | server zone | -| plus.stream.status.5xx | build, version, server_zone | count | The total number of sessions completed with status codes “5xx”. 
| server zone | -| plus.stream.status.connections | build, version, server_zone | avg | The averge number of connections accepted from clients. | server zone | -| plus.stream.status.discarded | build, version, server_zone | avg | The average number of connections completed without creating a session. | server zone | -| plus.stream.status.processing | build, version, server_zone | avg | The average of client connections that are currently being processed. | server zone | -| plus.stream.upstream.peers.conn.active | build, version, upstream, peer.address, peer.name | count | The current number of connections. | upstream peer | -| plus.stream.upstream.peers.downstart | build, version, upstream, peer.address, peer.name | timestamp | The time when the server became “unavail”, “checking”, or “unhealthy”, in the ISO 8601 format with millisecond resolution. | upstream peer | -| plus.stream.upstream.peers.downtime | build, version, upstream, peer.address, peer.name | count | Total time the server was in the “unavail”, “checking”, and “unhealthy” states. | upstream peer | -| plus.stream.upstream.peers.fails | build, version, upstream, peer.address, peer.name | count | The total number of unsuccessful attempts to communicate with the server. | upstream peer | -| plus.stream.upstream.peers.health_checks.checks | build, version, upstream, peer.address, peer.name | count | The total number of health check requests made. | upstream peer | -| plus.stream.upstream.peers.health_checks.fails | build, version, upstream, peer.address, peer.name | count | The number of failed health checks. | upstream peer | -| plus.stream.upstream.peers.health_checks.last_passed | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating whether the last health check request was successful and passed tests. 
| upstream peer | -| plus.stream.upstream.peers.health_checks.unhealthy | build, version, upstream, peer.address, peer.name | count | How many times the server became unhealthy (state “unhealthy”). | upstream peer | -| plus.stream.upstream.peers.request.bytes_rcvd | build, version, upstream, peer.address, peer.name | count | The total number of bytes received from this server. | upstream peer | -| plus.stream.upstream.peers.request.bytes_sent | build, version, upstream, peer.address, peer.name | count | The total number of bytes sent to this server. | upstream peer | -| plus.stream.upstream.peers.response.time | build, version, upstream, peer.address, peer.name | avg | The average time to receive the last byte of data. | upstream peer | -| plus.stream.upstream.peers.state.checking | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are being checked. | upstream peer | -| plus.stream.upstream.peers.state.down | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are down. | upstream peer | -| plus.stream.upstream.peers.state.draining | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are draining. | upstream peer | -| plus.stream.upstream.peers.state.unavail | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are unavailable. | upstream peer | -| plus.stream.upstream.peers.state.unhealthy | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are unhealthy. | upstream peer | -| plus.stream.upstream.peers.state.up | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if all upstream servers are up. 
| upstream peer | -| plus.stream.upstream.peers.unavail | build, version, upstream, peer.address, peer.name | count | How many times the server became unavailable for client connections (state “unavail”) due to the number of unsuccessful attempts reaching the max_fails threshold. | upstream peer | -| plus.stream.upstream.zombies | build, version | avg | The current number of servers removed from the group but still processing active client connections. | deployment | -| plus.stream.ssl.handshakes| build version server_zone| count | The total number of successful SSL handshakes during the aggregation interval. | server zone| -| plus.stream.ssl.handshakes.failed | build version server_zone | count | SSL Handshakes Failed The total number of failed SSL handshakes during the aggregation interval. | server zone | -| plus.stream.ssl.session.reuses | build version server_zone | count | The total number of session reuses during SSL handshakes in the aggregation interval. | server zone | -| plus.stream.ssl.no_common_protocol | build version server_zone| count |The number of SSL handshakes failed because of no common protocol during the aggregation interval. |server zone | -| plus.stream.ssl.no_common_cipher| build version server_zone | count | The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | server zone | -| plus.stream.ssl.handshake_timeout | build version server_zone | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. | server zone | -| plus.stream.ssl.peer_rejected_cert | build version server_zone | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. 
| server zone | -| plus.stream.ssl.verify_failures.no_cert | build version server_zone | count |SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. |server zone | -| plus.stream.ssl.verify_failures.expired_cert | build version server_zone | count |SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. |server zone | -| plus.stream.ssl.verify_failures.revoked_cert | build version server_zone | count |SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. |server zone | -| plus.stream.ssl.verify_failures.other | build version server_zone | count |SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | server zone | -| plus.stream.zone_sync.status.bytes_in | build, version | count | The number of bytes received by all nodes during the aggregation interval. | deployment | -| plus.stream.zone_sync.status.bytes_out | build, version | count | The number of bytes sent by all nodes during the aggregation interval. | deployment | -| plus.stream.zone_sync.status.msgs_in | build, version | count | The number of messages received by all nodes during the aggregation interval. | deployment | -| plus.stream.zone_sync.status.msgs_out | build, version | count | The number of messages sent by all nodes during the aggregation interval. | deployment | -| plus.stream.zone_sync.zones.records_pending | build, version, shared_memory_zone | avg | The average number of records that need to be sent to the cluster during the aggregation interval. | shared memory zone | -| plus.stream.zone_sync.zones.records_total | build, version, shared_memory_zone | avg | The average number of records stored in the shared memory zone by all nodes during the aggregation interval. 
| shared memory zone | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|----------------------------------------|-------------------------------|-----------------------------|-------|-----------------------------------------------------------------------------------------------------------------------------|---------------| +| plus.stream.limit_conns.passed | Connections passed | build, version, limit_conn_zone | count | The total number of connections that were neither limited nor accounted as limited. | limit conn zone | +| plus.stream.limit_conns.rejected | Connections rejected | build, version, limit_conn_zone | count | The total number of connections that were rejected. | limit conn zone | +| plus.stream.limit_conns.rejected_dry_run | Connections rejected dry run | build, version, limit_conn_zone | count | The total number of connections accounted as rejected in the dry run mode. | limit conn zone | +| plus.stream.request.bytes_rcvd | Request bytes received | build, version, server_zone | count | The total number of bytes received from clients. | server zone | +| plus.stream.request.bytes_sent | Request bytes sent | build, version, server_zone | count | The total number of bytes sent to clients. | server zone | +| plus.stream.status.2xx | Status 2xx | build, version, server_zone | count | The total number of sessions completed with status codes '2xx'. | server zone | +| plus.stream.status.4xx | Status 4xx | build, version, server_zone | count | The total number of sessions completed with status codes '4xx'. | server zone | +| plus.stream.status.5xx | Status 5xx | build, version, server_zone | count | The total number of sessions completed with status codes '5xx'. | server zone | +| plus.stream.status.connections | Accepted connections | build, version, server_zone | avg | The average number of connections accepted from clients. 
| server zone | +| plus.stream.status.discarded | Connections discarded | build, version, server_zone | avg | The average number of connections completed without creating a session. | server zone | +| plus.stream.status.processing | Connections processing | build, version, server_zone | avg | The average number of client connections that are currently being processed. | server zone | +| plus.stream.upstream.peers.conn.active | Upstream active connections | build, version, upstream, peer.address, peer.name | count | The current number of connections. | upstream peer | +| plus.stream.upstream.peers.downstart | Upstream downstart | build, version, upstream, peer.address, peer.name | timestamp | The time when the server became 'unavail', 'checking', or 'unhealthy', in the ISO 8601 format with millisecond resolution. | upstream peer | +| plus.stream.upstream.peers.downtime | Upstream downtime | build, version, upstream, peer.address, peer.name | count | Total time the server was in the 'unavail', 'checking', and 'unhealthy' states. | upstream peer | +| plus.stream.upstream.peers.fails | Upstream fails | build, version, upstream, peer.address, peer.name | count | The total number of unsuccessful attempts to communicate with the server. | upstream peer | +| plus.stream.upstream.peers.health_checks.checks | Upstream health checks | build, version, upstream, peer.address, peer.name | count | The total number of health check requests made. | upstream peer | +| plus.stream.upstream.peers.health_checks.fails | Upstream health checks fails | build, version, upstream, peer.address, peer.name | count | The number of failed health checks. | upstream peer | +| plus.stream.upstream.peers.health_checks.last_passed | Upstream last health check pass | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating whether the last health check request was successful and passed tests. 
| upstream peer | +| plus.stream.upstream.peers.health_checks.unhealthy | Upstream health checks unhealthy | build, version, upstream, peer.address, peer.name | count | How many times the server became unhealthy (state 'unhealthy'). | upstream peer | +| plus.stream.upstream.peers.request.bytes_rcvd | Upstream request bytes received | build, version, upstream, peer.address, peer.name | count | The total number of bytes received from this server. | upstream peer | +| plus.stream.upstream.peers.request.bytes_sent | Upstream request bytes sent | build, version, upstream, peer.address, peer.name | count | The total number of bytes sent to this server. | upstream peer | +| plus.stream.upstream.peers.response.time | Upstream response time | build, version, upstream, peer.address, peer.name | avg | The average time to receive the last byte of data. | upstream peer | +| plus.stream.upstream.peers.state.checking | Upstream state checking | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are being checked. | upstream peer | +| plus.stream.upstream.peers.state.down | Upstream state down | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are down. | upstream peer | +| plus.stream.upstream.peers.state.draining | Upstream state draining | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are draining. | upstream peer | +| plus.stream.upstream.peers.state.unavail | Upstream state unavailable | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are unavailable. | upstream peer | +| plus.stream.upstream.peers.state.unhealthy | Upstream state unhealthy | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if any of the upstream servers are unhealthy. 
| upstream peer | +| plus.stream.upstream.peers.state.up | Upstream state up | build, version, upstream, peer.address, peer.name | boolean | Boolean indicating if all upstream servers are up. | upstream peer | +| plus.stream.upstream.peers.unavail | Upstream unavailable | build, version, upstream, peer.address, peer.name | count | How many times the server became unavailable for client connections (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold. | upstream peer | +| plus.stream.upstream.zombies | Upstream zombies | build, version | avg | The current number of servers removed from the group but still processing active client connections. | deployment | +| plus.stream.ssl.handshakes | Stream SSL handshakes total | build version server_zone | count | The total number of successful SSL handshakes during the aggregation interval. | server zone | +| plus.stream.ssl.handshakes.failed | Stream SSL handshakes failed | build version server_zone | count | The total number of failed SSL handshakes during the aggregation interval. | server zone | +| plus.stream.ssl.session.reuses | Stream SSL session reuses | build version server_zone | count | The total number of session reuses during SSL handshakes in the aggregation interval. | server zone | +| plus.stream.ssl.no_common_protocol | Stream HS failed - no common protocol | build version server_zone | count | The number of SSL handshakes failed because of no common protocol during the aggregation interval. | server zone | +| plus.stream.ssl.no_common_cipher | Stream HS failed - no shared cipher | build version server_zone | count | The number of SSL handshakes failed because of no shared cipher during the aggregation interval. | server zone | +| plus.stream.ssl.handshake_timeout | Stream SSL handshake timeout | build version server_zone | count | The number of SSL handshakes failed because of a timeout during the aggregation interval. 
| server zone | +| plus.stream.ssl.peer_rejected_cert | Stream verify failure - rejected cert | build version server_zone | count | The number of failed SSL handshakes when nginx presented the certificate to the client but it was rejected with a corresponding alert message during the aggregation interval. | server zone | +| plus.stream.ssl.verify_failures.no_cert | Stream verify failure - no cert | build version server_zone | count | SSL certificate verification errors - a client did not provide the required certificate during the aggregation interval. | server zone | +| plus.stream.ssl.verify_failures.expired_cert | Stream verify failure - expired cert | build version server_zone | count | SSL certificate verification errors - an expired or not yet valid certificate was presented by a client during the aggregation interval. | server zone | +| plus.stream.ssl.verify_failures.revoked_cert | Stream verify failure - revoked cert | build version server_zone | count | SSL certificate verification errors - a revoked certificate was presented by a client during the aggregation interval. | server zone | +| plus.stream.ssl.verify_failures.other | Stream SSL verify failure - other | build version server_zone | count | SSL certificate verification errors - other SSL certificate verification errors during the aggregation interval. | server zone | +| plus.stream.zone_sync.status.bytes_in | Zone sync bytes in | build, version | count | The number of bytes received by all nodes during the aggregation interval. | deployment | +| plus.stream.zone_sync.status.bytes_out | Zone sync bytes out | build, version | count | The number of bytes sent by all nodes during the aggregation interval. | deployment | +| plus.stream.zone_sync.status.msgs_in | Zone sync messages in | build, version | count | The number of messages received by all nodes during the aggregation interval. 
| deployment | +| plus.stream.zone_sync.status.msgs_out | Zone sync messages out | build, version | count | The number of messages sent by all nodes during the aggregation interval. | deployment | +| plus.stream.zone_sync.zones.records_pending | Zone sync records pending | build, version, shared_memory_zone | avg | The average number of records that need to be sent to the cluster during the aggregation interval. | shared memory zone | +| plus.stream.zone_sync.zones.records_total | Zone sync records total | build, version, shared_memory_zone | avg | The average number of records stored in the shared memory zone by all nodes during the aggregation interval. | shared memory zone | {{}} @@ -316,18 +316,18 @@ The metrics are categorized by the namespace used in Azure Monitor. The dimensio {{}} -| **Metric** | **Dimensions** | **Type** | **Description** | **Roll-up per** | -|---------------------------------------|--------------------------------|----------|--------------------------------------------------------------------------------------------|-----------------| -| plus.resolvers.requests.name | build, version, resolver_zone | count | The number of requests to resolve names to addresses during the aggregation interval. | resolver zone | -| plus.resolvers.requests.srv | build, version, resolver_zone | count | The number of requests to resolve SRV records during the aggregation interval. | resolver zone | -| plus.resolvers.requests.addr | build, version, resolver_zone | count | The number of requests to resolve addresses to names during the aggregation interval. | resolver zone | -| plus.resolvers.responses.noerror | build, version, resolver_zone | count | The number of successful responses during the aggregation interval. | resolver zone | -| plus.resolvers.responses.formerr | build, version, resolver_zone | count | The number of FORMERR (Format error) responses during the aggregation interval. 
| resolver zone | -| plus.resolvers.responses.servfail | build, version, resolver_zone | count | The number of SERVFAIL (Server failure) responses during the aggregation interval. | resolver zone | -| plus.resolvers.responses.nxdomain | build, version, resolver_zone | count | The number of NXDOMAIN (Host not found) responses during the aggregation interval. | resolver zone | -| plus.resolvers.responses.notimp | build, version, resolver_zone | count | The number of NOTIMP (Unimplemented) responses during the aggregation interval. | resolver zone | -| plus.resolvers.responses.refused | build, version, resolver_zone | count | The number of REFUSED (Operation refused) responses during the aggregation interval. | resolver zone | -| plus.resolvers.responses.timedout | build, version, resolver_zone | count | The number of timed out requests during the aggregation interval. | resolver zone | -| plus.resolvers.responses.unknown | build, version, resolver_zone | count | The number of requests completed with an unknown error during the aggregation interval. | resolver zone | +| **Metric** | **Display Name** | **Dimensions** | **Type** | **Description** | **Roll-up per** | +|---------------------------------------|------------------------------|--------------------------------|----------|--------------------------------------------------------------------------------------------|-----------------| +| plus.resolvers.requests.name | Resolve name requests | build, version, resolver_zone | count | The number of requests to resolve names to addresses during the aggregation interval. | resolver zone | +| plus.resolvers.requests.srv | Resolve SRV requests | build, version, resolver_zone | count | The number of requests to resolve SRV records during the aggregation interval. | resolver zone | +| plus.resolvers.requests.addr | Resolve address requests | build, version, resolver_zone | count | The number of requests to resolve addresses to names during the aggregation interval. 
| resolver zone | +| plus.resolvers.responses.noerror | Successful responses | build, version, resolver_zone | count | The number of successful responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.formerr | FORMERR responses | build, version, resolver_zone | count | The number of FORMERR (Format error) responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.servfail | SERVFAIL responses | build, version, resolver_zone | count | The number of SERVFAIL (Server failure) responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.nxdomain | NXDOMAIN responses | build, version, resolver_zone | count | The number of NXDOMAIN (Host not found) responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.notimp | NOTIMP responses | build, version, resolver_zone | count | The number of NOTIMP (Unimplemented) responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.refused | REFUSED responses | build, version, resolver_zone | count | The number of REFUSED (Operation refused) responses during the aggregation interval. | resolver zone | +| plus.resolvers.responses.timedout | Timed out requests | build, version, resolver_zone | count | The number of timed out requests during the aggregation interval. | resolver zone | +| plus.resolvers.responses.unknown | Unknown error responses | build, version, resolver_zone | count | The number of requests completed with an unknown error during the aggregation interval. 
| resolver zone | {{}} diff --git a/content/nginxaas-azure/monitoring/migrate-to-platform-metrics.md b/content/nginxaas-azure/monitoring/migrate-to-platform-metrics.md index c33b781c7..70de8a906 100644 --- a/content/nginxaas-azure/monitoring/migrate-to-platform-metrics.md +++ b/content/nginxaas-azure/monitoring/migrate-to-platform-metrics.md @@ -2,14 +2,14 @@ title: Migrate from Custom metrics to Platform metrics weight: 1000 toc: true -url: /nginxaas/azure/getting-started/migrate-to-platform-metrics/ +url: /nginxaas/azure/monitoring/migrate-to-platform-metrics/ type: - how-to --- ## Overview -F5 NGINXaaS for Azure previously supported monitoring through [Custom Metrics](https://learn.microsoft.com/en-us/azure/azure-monitor/metrics/metrics-custom-overview), which is a preview feature in Azure. As a preview feature, support for Custom Metrics will be removed in the future. We've added support for Platform Metrics, which is the recommended way to monitor resources in Azure. We strongly recommend switching your deployment's monitoring to Platform Metrics to take advantage of lower latency and better reliability. +F5 NGINXaaS for Azure previously supported monitoring through [Custom Metrics](https://learn.microsoft.com/en-us/azure/azure-monitor/metrics/metrics-custom-overview), which is a preview feature in Azure. Support for Custom Metrics will be removed in the future. We've added support for Platform Metrics, which is the recommended way to monitor resources in Azure. We strongly recommend switching your deployment's monitoring to Platform Metrics to take advantage of lower latency and better reliability. 
## Migration steps diff --git a/content/nginxaas-azure/quickstart/upgrade-channels.md b/content/nginxaas-azure/quickstart/upgrade-channels.md index 5c5360047..5fff67f6c 100644 --- a/content/nginxaas-azure/quickstart/upgrade-channels.md +++ b/content/nginxaas-azure/quickstart/upgrade-channels.md @@ -15,7 +15,7 @@ Maintaining the latest version NGINX Plus, operating system (OS), and other soft {{}} | Channel | Description | |-------------|---------------------------| -| preview | Selecting this channel automatically upgrades your deployment to the latest supported version of NGINX Plus and its dependencies soon after they become available. We recommend using this setting to try out new capabilities in deployments running in your development, testing, and staging environments. | +| preview | Selecting this channel automatically upgrades your deployment to the latest supported version of NGINX Plus and its dependencies soon after they become available. We recommend using this setting to try out new capabilities in deployments running in your development, testing, and staging environments. Avoid using it in your production environment. | | stable | A deployment running on this channel will receive updates on NGINX Plus and its dependencies at a slower rate than the **Preview** channel. We recommend using this setting for production deployments where you might want stable features instead of the latest ones. This is the **default channel** if you do not specify one for your deployment. 
| {{}} diff --git a/content/nic/configuration/global-configuration/configmap-resource.md b/content/nic/configuration/global-configuration/configmap-resource.md index 499f7733a..4d2f4fef3 100644 --- a/content/nic/configuration/global-configuration/configmap-resource.md +++ b/content/nic/configuration/global-configuration/configmap-resource.md @@ -80,6 +80,7 @@ For more information, view the [VirtualServer and VirtualServerRoute resources]( |*proxy-buffering* | Enables or disables [buffering of responses](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) from the proxied server. | *True* | | |*proxy-buffers* | Sets the value of the [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directive. | Depends on the platform. | | |*proxy-buffer-size* | Sets the value of the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size) directives. | Depends on the platform. | | +|*proxy-busy-buffers-size* | Sets the value of the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. | Depends on the platform. | | |*proxy-max-temp-file-size* | Sets the value of the [proxy_max_temp_file_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size) directive. | *1024m* | | |*set-real-ip-from* | Sets the value of the [set_real_ip_from](https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from) directive. | N/A | | |*real-ip-header* | Sets the value of the [real_ip_header](https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header) directive. | *X-Real-IP* | | @@ -235,7 +236,7 @@ If you encounter the error `error [emerg] 13#13: "zone_sync" directive is duplic |*otel-exporter-header-value* | The value of a custom HTTP header to add to telemetry export request. 
`otel-exporter-endpoint` and `otel-exporter-header-name` required. | N/A | *"custom-value"* | |*otel-service-name* | Sets the `service.name` attribute of the OTel resource. `otel-exporter-endpoint` required. | N/A | *"nginx-ingress-controller:nginx"* | | *otel-trace-in-http* | Enables [OpenTelemetry](https://opentelemetry.io) globally (for all Ingress, VirtualServer and VirtualServerRoute resources). Set this to *"false"* to enable OpenTelemetry for individual routes with snippets. `otel-exporter-endpoint` required. | *"false"* | *"true"* | -|*opentracing* | Removed in v5.0.0. Enables [OpenTracing](https://opentracing.io) globally (for all Ingress, VirtualServer and VirtualServerRoute resources). Note: requires the Ingress Controller image with OpenTracing module and a tracer. See the [docs]({{< ref "/nic/installation/integrations/opentracing.md" >}}) for more information. | *False* | | +|*opentracing* | Removed in v5.0.0. Enables [OpenTracing](https://opentracing.io) globally (for all Ingress, VirtualServer and VirtualServerRoute resources). Note: requires the Ingress Controller image with OpenTracing module and a tracer. See the [docs]({{< ref "/nic/logging-and-monitoring/opentracing.md" >}}) for more information. | *False* | | |*opentracing-tracer* | Removed in v5.0.0. Sets the path to the vendor tracer binary plugin. | N/A | | |*opentracing-tracer-config* | Removed in v5.0.0. Sets the tracer configuration in JSON format. | N/A | | |*app-protect-compressed-requests-action* | Sets the *app_protect_compressed_requests_action* [global directive](/nginx-app-protect/configuration/#global-directives). 
| *drop* | | diff --git a/content/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md b/content/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md index b037f3f82..3769c4e52 100644 --- a/content/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md +++ b/content/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md @@ -108,6 +108,7 @@ The table below summarizes the available annotations. | *nginx.org/proxy-buffering* | *proxy-buffering* | Enables or disables [buffering of responses](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) from the proxied server. | *True* | | | *nginx.org/proxy-buffers* | *proxy-buffers* | Sets the value of the [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directive. | Depends on the platform. | | | *nginx.org/proxy-buffer-size* | *proxy-buffer-size* | Sets the value of the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [grpc_buffer_size](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size) directives. | Depends on the platform. | | +| *nginx.org/proxy-busy-buffers-size* | *proxy-busy-buffers-size* | Sets the value of the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. | Depends on the platform. | | | *nginx.org/proxy-max-temp-file-size* | *proxy-max-temp-file-size* | Sets the value of the [proxy_max_temp_file_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size) directive. | *1024m* | | | *nginx.org/server-tokens* | *server-tokens* | Enables or disables the [server_tokens](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens) directive. 
Additionally, with the NGINX Plus, you can specify a custom string value, including the empty string value, which disables the emission of the “Server” field. | *True* | | | *nginx.org/path-regex* | N/A | Enables regular expression modifiers for Ingress path parameter. This translates to the NGINX [location](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) directive. You can specify one of these values: "case_sensitive", "case_insensitive", or "exact". The annotation is applied to the entire Ingress resource and its paths. While using Master and Minion Ingresses i.e. Mergeable Ingresses, this annotation can be specified on Minion types. The `path-regex` annotation specified on Master is ignored, and has no effect on paths defined on Minions. | N/A | [path-regex](https://github.com/nginx/kubernetes-ingress/tree/v{{< nic-version >}}/examples/ingress-resources/path-regex) | diff --git a/content/nic/configuration/policy-resource.md b/content/nic/configuration/policy-resource.md index 00a928774..6b25c28c8 100644 --- a/content/nic/configuration/policy-resource.md +++ b/content/nic/configuration/policy-resource.md @@ -793,7 +793,7 @@ waf: |``securityLog.enable`` | Enables security log. | ``bool`` | No | |``securityLog.apLogConf`` | The [App Protect WAF log conf]({{< ref "/nic/installation/integrations/app-protect-waf/configuration.md#waf-logs" >}}) resource. Accepts an optional namespace. Only works with ``apPolicy``. | ``string`` | No | |``securityLog.apLogBundle`` | The [App Protect WAF log bundle]({{< ref "/nic/installation/integrations/app-protect-waf/configuration.md#waf-bundles" >}}) resource. Only works with ``apBundle``. | ``string`` | No | -|``securityLog.logDest`` | The log destination for the security log. Only accepted variables are ``syslog:server=:``, ``stderr``, ````. | ``string`` | No | +|``securityLog.logDest`` | The log destination for the security log. Only accepted variables are ``syslog:server=<ip-address; localhost; fqdn>:<port>``, ``stderr``, ``<absolute path to file>``.
| ``string`` | No | {{% /table %}} #### WAF Merging Behavior diff --git a/content/nic/configuration/virtualserver-and-virtualserverroute-resources.md b/content/nic/configuration/virtualserver-and-virtualserverroute-resources.md index 31f4d2c3f..df0a4cd48 100644 --- a/content/nic/configuration/virtualserver-and-virtualserverroute-resources.md +++ b/content/nic/configuration/virtualserver-and-virtualserverroute-resources.md @@ -371,6 +371,7 @@ tls: |``buffering`` | Enables buffering of responses from the upstream server. See the [proxy_buffering](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) directive. The default is set in the ``proxy-buffering`` ConfigMap key. | ``boolean`` | No | |``buffers`` | Configures the buffers used for reading a response from the upstream server for a single connection. | [buffers](#upstreambuffers) | No | |``buffer-size`` | Sets the size of the buffer used for reading the first part of a response received from the upstream server. See the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) directive. The default is set in the ``proxy-buffer-size`` ConfigMap key. | ``string`` | No | +|``busy-buffers-size`` | Sets the size of the buffer used for reading a response from the upstream server when the response is larger than the ``buffer-size``. See the [proxy_busy_buffers_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) directive. The default is set in the ``proxy-busy-buffers-size`` ConfigMap key. | ``string`` | No | |``ntlm`` | Allows proxying requests with NTLM Authentication. See the [ntlm](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm) directive. In order for NTLM authentication to work, it is necessary to enable keepalive connections to upstream servers using the ``keepalive`` field. Note: this feature is supported only in NGINX Plus.| ``boolean`` | No | |``type`` |The type of the upstream. 
Supported values are ``http`` and ``grpc``. The default is ``http``. For gRPC, it is necessary to enable HTTP/2 in the [ConfigMap]({{< ref "/nic/configuration/global-configuration/configmap-resource.md#listeners" >}}) and configure TLS termination in the VirtualServer. | ``string`` | No | |``backup`` | The name of the backup service of type [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname). This will be used when the primary servers are unavailable. Note: The parameter cannot be used along with the ``random`` , ``hash`` or ``ip_hash`` load balancing methods. | ``string`` | No | diff --git a/content/nic/installation/build-nginx-ingress-controller.md b/content/nic/installation/build-nginx-ingress-controller.md index 560a46ca4..e89a1ec03 100644 --- a/content/nic/installation/build-nginx-ingress-controller.md +++ b/content/nic/installation/build-nginx-ingress-controller.md @@ -199,5 +199,5 @@ If you prefer not to build your own NGINX Ingress Controller image, you can use **NGINX Plus Ingress Controller**: You have two options for this: -- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image" >}}) topic. -- Use your NGINX Ingress Controller subscription JWT token to get the image. View the [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic. +- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic. +- Use your NGINX Ingress Controller subscription JWT token to get the image. View the [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic. 
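The upstream buffer fields documented above (`buffering`, `buffers`, `buffer-size`, and the new `busy-buffers-size`) can be combined in a single VirtualServer. A minimal sketch under assumed names — the `cafe` host, `tea` upstream, and `tea-svc` service are illustrative only:

```yaml
# Hypothetical VirtualServer sketch showing the upstream buffer fields.
# All resource, host, and service names here are illustrative.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
    buffering: true         # maps to proxy_buffering
    buffers:
      number: 4             # maps to proxy_buffers: 4 buffers...
      size: 8k              # ...of 8k each
    buffer-size: 8k         # maps to proxy_buffer_size
    busy-buffers-size: 16k  # maps to proxy_busy_buffers_size
  routes:
  - path: /tea
    action:
      pass: tea
```

Note that, following the underlying NGINX directive constraints, `busy-buffers-size` should normally be at least as large as `buffer-size` and smaller than the total size of `buffers` minus one buffer (here, 16k against 4 × 8k).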
diff --git a/content/nic/installation/create-license-secret.md b/content/nic/installation/create-license-secret.md index 3b13bc543..4efb08d7a 100644 --- a/content/nic/installation/create-license-secret.md +++ b/content/nic/installation/create-license-secret.md @@ -27,20 +27,21 @@ The JWT is required for validating your subscription and reporting telemetry dat ### Create the Secret -The JWT needs to be configured before deploying NGINX Ingress Controller. The JWT will be stored in a Kubernetes Secret of type `nginx.com/license`, and can be created with the following command. +The JWT needs to be configured before deploying NGINX Ingress Controller. + +It must be stored in a Kubernetes Secret of type `nginx.com/license` in the same namespace as your NGINX Ingress Controller pod(s). + +Create the Secret with the following command: ```shell -kubectl create secret generic license-token --from-file=license.jwt= --type=nginx.com/license -n +kubectl create secret generic license-token --from-file=license.jwt= --type=nginx.com/license -n ``` -You can now delete the downloaded `.jwt` file. -{{< note >}} -The Secret needs to be in the same Namespace as the NGINX Ingress Controller Pod(s). -{{}} +Once created, you can delete the downloaded `.jwt` file. {{< include "/nic/installation/jwt-password-note.md" >}} -### Use the NGINX Plus license Secret +### Add the license Secret to your deployment If using a name other than the default `license-token`, provide the name of this Secret when installing NGINX Ingress Controller: @@ -50,7 +51,7 @@ If using a name other than the default `license-token`, provide the name of this Specify the Secret name using the `controller.mgmt.licenseTokenSecretName` Helm value. -For detailed guidance on creating the Management block via Helm, refer to the [Helm configuration documentation]({{< ref "/nic/installation/installing-nic/installation-with-helm/#configuration" >}}).
+For detailed guidance on creating the Management block with Helm, refer to the [Helm configuration documentation]({{< ref "/nic/installation/installing-nic/installation-with-helm/#configuration" >}}). {{% /tab %}} @@ -129,11 +130,8 @@ Specify the SSL trusted certificate Secret name in the `ssl-trusted-certificate- {{}} -
- Once these Secrets are created and configured, you can now [install NGINX Ingress Controller ]({{< ref "/nic/installation/installing-nic/" >}}). - ## What’s reported and how it’s protected {#telemetry} NGINX Plus reports the following data every hour by default: diff --git a/content/nic/installation/ingress-nginx.md b/content/nic/installation/ingress-nginx.md index faa3d3b04..100e62946 100644 --- a/content/nic/installation/ingress-nginx.md +++ b/content/nic/installation/ingress-nginx.md @@ -1,12 +1,10 @@ --- -nd-docs: DOCS-1469 -doctypes: -- tutorial -tags: -- docs title: Migrate from Ingress-NGINX Controller to NGINX Ingress Controller toc: true -weight: 500 +weight: 700 +nd-content-type: how-to +nd-product: NIC +nd-docs: DOCS-1469 --- This document describes how to migrate from the community-maintained Ingress-NGINX Controller to F5 NGINX Ingress Controller. @@ -464,6 +462,7 @@ This table maps the Ingress-NGINX Controller annotations to NGINX Ingress Contro | [_nginx.ingress.kubernetes.io/proxy-buffering_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffering) | [_nginx.org/proxy-buffering_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_buffering_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) | | [_nginx.ingress.kubernetes.io/proxy-buffers-number_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffers-number) | [_nginx.org/proxy-buffers_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_buffers_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) | | [_nginx.ingress.kubernetes.io/proxy-buffer-size_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffer-size) | [_nginx.org/proxy-buffer-size_]({{< ref 
"/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_buffer_size_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) | +| [_nginx.ingress.kubernetes.io/proxy-busy-buffers-size_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-busy-buffers-size) | [_nginx.org/proxy-busy-buffers-size_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_busy_buffers_size_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) | | [_nginx.ingress.kubernetes.io/proxy-connect-timeout_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts) | [_nginx.org/proxy-connect-timeout_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_connect_timeout_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout) | | [_nginx.ingress.kubernetes.io/proxy-read-timeout_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts) | [_nginx.org/proxy-read-timeout_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_read_timeout_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout) | | [_nginx.ingress.kubernetes.io/proxy-send-timeout_](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts) | [_nginx.org/proxy-send-timeout_]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-annotations.md#general-customization" >}}) | [_proxy_send_timeout_](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout) | diff --git a/content/nic/installation/installing-nic/installation-with-helm.md 
b/content/nic/installation/installing-nic/installation-with-helm.md index 4ad8e1377..e1f79746a 100644 --- a/content/nic/installation/installing-nic/installation-with-helm.md +++ b/content/nic/installation/installing-nic/installation-with-helm.md @@ -9,103 +9,51 @@ nd-docs: DOCS-602 This document explains how to install F5 NGINX Ingress Controller using [Helm](https://helm.sh/). +Following these steps will deploy NGINX Ingress Controller in your Kubernetes cluster with the default configuration. + +The [Helm chart parameters](#helm-chart-parameters) section lists the parameters that can be configured during installation. + ## Before you begin {{< call-out "note" >}} All documentation should only be used with the latest stable release, indicated on [the releases page]({{< ref "/nic/releases.md" >}}) of the GitHub repository. {{< /call-out >}} - A [Kubernetes Version Supported by NGINX Ingress Controller]({{< ref "/nic/technical-specifications.md#supported-kubernetes-versions" >}}) - Helm 3.0+. -- If you’d like to use NGINX Plus: - - Get the NGINX Ingress Controller JWT and [create a license secret]({{< ref "/nic/installation/create-license-secret.md" >}}). - - Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic. - - The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image. - - The [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) topic explains how to push an image to a private Docker registry. - - Update the `controller.image.repository` field of the `values-plus.yaml` accordingly.
- -## Custom Resource Definitions -NGINX Ingress Controller requires custom resource definitions (CRDs) installed in the cluster, which Helm will install. If the CRDs are not installed, NGINX Ingress Controller pods will not become `Ready`. +If you would like to use NGINX Plus, there are a few options. You will also need to update the `controller.image.repository` field of `values-plus.yaml` accordingly. -If you do not use the custom resources that require those CRDs (which corresponds to `controller.enableCustomResources` set to `false` and `controller.appprotect.enable` set to `false` and `controller.appprotectdos.enable` set to `false`), the installation of the CRDs can be skipped by specifying `--skip-crds` for the helm install command. +- [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) +- [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) -### Upgrade the CRDs +## Install the Helm chart using the OCI Registry -{{< call-out "note" >}} If you are running NGINX Ingress Controller v3.x, you should read [Upgrade from NGINX Ingress Controller v3.x to v4.0.0]({{< ref "/nic/installation/installing-nic/upgrade-to-v4.md" >}}) before continuing.
{{< /call-out >}} +Run the following commands to install the chart with the release name _my-release_ (which you can customize): -To upgrade the CRDs, pull the chart sources as described in [Pull the Chart](#pull-the-chart) and then run: - -```shell -kubectl apply -f crds/ -``` +{{< tabs name="registry-chart-versions" >}} -Alternatively, CRDs can be upgraded without pulling the chart by running: +{{% tab name="NGINX Open Source" %}} ```shell -kubectl apply -f https://raw.githubusercontent.com/nginx/kubernetes-ingress/v{{< nic-version >}}/deploy/crds.yaml +helm install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} ``` -In the above command, `v{{< nic-version >}}` represents the version of NGINX Ingress Controller release rather than the Helm chart version. - -{{< call-out "note" >}} The following warning is expected and can be ignored: `Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`. - -Check the [release notes](https://www.github.com/nginx/kubernetes-ingress/releases) for a new release for any special upgrade procedures. -{{< /call-out >}} - -### Uninstall the CRDs - -To remove the CRDs, pull the chart sources as described in [Pull the Chart](#pull-the-chart) and then run: - -```shell -kubectl delete -f crds/ -``` - -{{< call-out "warning" >}} This command will delete all the corresponding custom resources in your cluster across all namespaces. Please ensure there are no custom resources that you want to keep and there are no other NGINX Ingress Controller instances running in the cluster.
{{< /call-out >}} - -## Manage the chart with OCI Registry - -### Install the chart - -Run the following commands to install the chart with the release name my-release (my-release is the name that you choose): - -- For NGINX: - - ```shell - helm install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} - ``` - -- For NGINX Plus: (This assumes you have pushed NGINX Ingress Controller image `nginx-plus-ingress` to your private registry `myregistry.example.com`) - - ```shell - helm install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} --set controller.image.repository=myregistry.example.com/nginx-plus-ingress --set controller.nginxplus=true - ``` - -These commands install the latest `edge` version of NGINX Ingress Controller from GitHub Container Registry. If you prefer using Docker Hub, you can replace `ghcr.io/nginx/charts/nginx-ingress` with `registry-1.docker.io/nginxcharts/nginx-ingress`. +{{% /tab %}} -### Upgrade the chart +{{% tab name="NGINX Plus" %}} -Helm does not upgrade the CRDs during a release upgrade. Before you upgrade a release, see [Upgrade the CRDs](#upgrade-the-crds). - -To upgrade the release `my-release`: +This assumes you have pushed NGINX Ingress Controller image `nginx-plus-ingress` to your private registry `myregistry.example.com`: ```shell -helm upgrade my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} +helm install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} --set controller.image.repository=myregistry.example.com/nginx-plus-ingress --set controller.nginxplus=true ``` -### Uninstall the chart - -To uninstall/delete the release `my-release`: +{{% /tab %}} -```shell -helm uninstall my-release -``` - -The command removes all the Kubernetes components associated with the release and deletes the release. +{{< /tabs >}} -Uninstalling the release does not remove the CRDs. 
To remove the CRDs, see [Uninstall the CRDs](#uninstall-the-crds). -### Edge version +If you'd like to test the latest changes in NGINX Ingress Controller before a new release, you can install the `edge` version, which is built from the `main` branch of the [NGINX Ingress Controller repository](https://github.com/nginx/kubernetes-ingress). -To test the latest changes in NGINX Ingress Controller before a new release, you can install the `edge` version. This version is built from the `main` branch of the NGINX Ingress Controller repository. You can install the `edge` version by specifying the `--version` flag with the value `0.0.0-edge`: ```shell @@ -114,201 +62,79 @@ helm install my-release oci://ghcr.io/nginx/charts/nginx-ingress --version 0.0.0 {{< call-out "warning" >}} The `edge` version is not intended for production use. It is intended for testing and development purposes only. {{< /call-out >}} -## Manage the chart with Sources - -### Pull the chart - -This step is required if you're installing the chart using its sources. It also manages the custom resource definitions (CRDs) which NGINX Ingress Controller requires, and for upgrading or deleting the CRDs. - -1. Pull the chart sources: - - ```shell - helm pull oci://ghcr.io/nginx/charts/nginx-ingress --untar --version {{< nic-helm-version >}} - ``` - -2. Change your working directory to nginx-ingress: - - ```shell - cd nginx-ingress - ``` +## Install the Helm chart from source -### Install the chart +This section covers the steps involved in installing the Helm chart from source, instead of using the registry. -To install the chart with the release name my-release (my-release is the name that you choose): +It also manages the required custom resource definitions (CRDs) for NGINX Ingress Controller. -- For NGINX: - - ```shell - helm install my-release . - ``` - -- For NGINX Plus: - - ```shell - helm install my-release -f values-plus.yaml .
- ``` - -The command deploys the Ingress Controller in your Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation. - -### Upgrade the chart - -Helm does not upgrade the CRDs during a release upgrade. Before you upgrade a release, see [Upgrade the CRDs](#upgrade-the-crds). - -To upgrade the release `my-release`: +Pull the chart sources, which are also required for upgrading or deleting the CRDs: ```shell -helm upgrade my-release . +helm pull oci://ghcr.io/nginx/charts/nginx-ingress --untar --version {{< nic-helm-version >}} ``` -### Uninstall the chart - -To uninstall/delete the release `my-release`: +Change your working directory to nginx-ingress: ```shell -helm uninstall my-release +cd nginx-ingress ``` -The command removes all the Kubernetes components associated with the release and deletes the release. - -Uninstalling the release does not remove the CRDs. To remove the CRDs, see [Uninstall the CRDs](#uninstall-the-crds). - -## Upgrade without downtime +Install the chart with the release name _my-release_ (which you can customize): -### Background {{< tabs name="source-chart-versions" >}} -In NGINX Ingress Controller version 3.1.0, [changes were introduced](https://github.com/nginx/kubernetes-ingress/pull/3606) to Helm resource names, labels and annotations to fit with Helm best practices. -When using Helm to upgrade from a version prior to 3.1.0, certain resources like Deployment, DaemonSet and Service will be recreated due to the aforementioned changes, which will result in downtime. {{% tab name="NGINX Open Source" %}} -Although the advisory is to update all resources in accordance with new naming convention, to avoid downtime follow the steps listed below. - -### Upgrade steps - -{{< call-out "note" >}} The following steps apply to both 2.x and 3.0.x releases.
{{}} - -The steps you should follow depend on the Helm release name: - -{{}} - -{{%tab name="Helm release name is `nginx-ingress`"%}} - -1. Use `kubectl describe` on deployment/daemonset to get the `Selector` value: - - ```shell - kubectl describe deployments -n - ``` - - Copy the key=value under `Selector`, such as: - - ```shell - Selector: app=nginx-ingress-nginx-ingress - ``` - -2. Checkout the latest available tag using `git checkout v{{< nic-version >}}` - -3. Navigate to `/kubernetes-ingress/charts/nginx-ingress` - -4. Update the `selectorLabels: {}` field in the `values.yaml` file located at `/kubernetes-ingress/charts/nginx-ingress` with the copied `Selector` value. - ```shell - selectorLabels: {app: nginx-ingress-nginx-ingress} - ``` - -5. Run `helm upgrade` with following arguments set: - ```shell - --set serviceNameOverride="nginx-ingress-nginx-ingress" - --set controller.name="" - --set fullnameOverride="nginx-ingress-nginx-ingress" - ``` - It could look as follows: - - ```shell - helm upgrade nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress --version 0.19.0 --set controller.kind=deployment/daemonset --set controller.nginxplus=false/true --set controller.image.pullPolicy=Always --set serviceNameOverride="nginx-ingress-nginx-ingress" --set controller.name="" --set fullnameOverride="nginx-ingress-nginx-ingress" -f values.yaml - ``` - -6. Once the upgrade process has finished, use `kubectl describe` on the deployment to verify the change by reviewing its events: - ```shell - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal ScalingReplicaSet 9m11s deployment-controller Scaled up replica set nginx-ingress-nginx-ingress- to 1 - Normal ScalingReplicaSet 101s deployment-controller Scaled up replica set nginx-ingress-nginx-ingress- to 1 - Normal ScalingReplicaSet 98s deployment-controller Scaled down replica set nginx-ingress-nginx-ingress- to 0 from 1 - ``` -{{%/tab%}} - -{{%tab name="Helm release name is not `nginx-ingress`"%}} - -1. 
Use `kubectl describe` on deployment/daemonset to get the `Selector` value: - - ```shell - kubectl describe deployment/daemonset -n - ``` - - Copy the key=value under ```Selector```, such as: - - ```shell - Selector: app=-nginx-ingress - ``` - -2. Checkout the latest available tag using `git checkout v{{< nic-version >}}` - -3. Navigate to `/kubernetes-ingress/charts/nginx-ingress` - -4. Update the `selectorLabels: {}` field in the `values.yaml` file located at `/kubernetes-ingress/charts/nginx-ingress` with the copied `Selector` value. - - ```shell - selectorLabels: {app: -nginx-ingress} - ``` +```shell +helm install my-release . +``` -5. Run `helm upgrade` with following arguments set: +{{% /tab %}} - ```shell - --set serviceNameOverride="-nginx-ingress" - --set controller.name="" - ``` +{{% tab name="NGINX Plus" %}} - It could look as follows: +```shell +helm install my-release . --set controller.image.repository=myregistry.example.com/nginx-plus-ingress --set controller.nginxplus=true +``` - ```shell - helm upgrade test-release oci://ghcr.io/nginx/charts/nginx-ingress --version 0.19.0 --set controller.kind=deployment/daemonset --set controller.nginxplus=false/true --set controller.image.pullPolicy=Always --set serviceNameOverride="test-release-nginx-ingress" --set controller.name="" -f values.yaml - ``` +{{% /tab %}} -6. 
Once the upgrade process has finished, use `kubectl describe` on the deployment to verify the change by reviewing its events: - - ```shell - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal ScalingReplicaSet 9m11s deployment-controller Scaled up replica set test-release-nginx-ingress- to 1 - Normal ScalingReplicaSet 101s deployment-controller Scaled up replica set test-release-nginx-ingress- to 1 - Normal ScalingReplicaSet 98s deployment-controller Scaled down replica set test-release-nginx-ingress- to 0 from 1 - ``` +## Custom Resource Definitions -{{%/tab%}} +When installing the NGINX Ingress Controller chart, Helm will also install the required custom resource definitions (CRDs). -{{}} +If the CRDs are not installed, NGINX Ingress Controller pods will not become _Ready_. +If you do not use the custom resources that require those CRDs (with `controller.enableCustomResources`, `controller.appprotect.enable`, and `controller.appprotectdos.enable` set to `false`), the installation of the CRDs can be skipped by specifying `--skip-crds` in your _helm install_ command. -## Run multiple NGINX Ingress Controllers +{{< call-out "caution" "Running multiple NGINX Ingress Controller instances" >}} -If you are running NGINX Ingress Controller releases in your cluster with custom resources enabled, the releases will share a single version of the CRDs. +If you are running multiple NGINX Ingress Controller releases in your cluster with custom resources enabled, the releases will share a single version of the CRDs. Ensure the NGINX Ingress Controller versions match the version of the CRDs. When uninstalling a release, ensure that you don’t remove the CRDs until there are no other NGINX Ingress Controller releases running in the cluster. The [Run multiple NGINX Ingress Controllers]({{< ref "/nic/installation/run-multiple-ingress-controllers.md" >}}) topic has more details.
-## Configuration +{{< /call-out >}} + +## Helm chart parameters The following tables lists the configurable parameters of the NGINX Ingress Controller chart and their default values. {{}} |Parameter | Description | Default | | --- | --- | --- | -| **controller.name** | The name of the Ingress Controller daemonset or deployment. | Autogenerated | -| **controller.kind** | The kind of the Ingress Controller installation - deployment or daemonset. | deployment | +| **controller.name** | The name of the NGINX Ingress Controller daemonset or deployment. | Autogenerated | +| **controller.kind** | The kind of the NGINX Ingress Controller installation - deployment or daemonset. | deployment | | **controller.annotations** | Allows for setting of `annotations` for deployment or daemonset. | {} | -| **controller.nginxplus** | Deploys the Ingress Controller for NGINX Plus. | false | +| **controller.nginxplus** | Deploys the NGINX Ingress Controller for NGINX Plus. | false | | **controller.mgmt.licenseTokenSecretName** | Configures the secret used in the [license_token](https://nginx.org/en/docs/ngx_mgmt_module.html#license_token) directive. This key assumes the secret is in the Namespace that NGINX Ingress Controller is deployed in. The secret must be of type `nginx.com/license` with the base64 encoded JWT in the `license.jwt` key. | license-token | | **controller.mgmt.enforceInitialReport** | Configures the [enforce_initial_report](https://nginx.org/en/docs/ngx_mgmt_module.html#enforce_initial_report) directive, which enables or disables the 180-day grace period for sending the initial usage report. | false | | **controller.mgmt.usageReport.endpoint** | Configures the endpoint of the [usage_report](https://nginx.org/en/docs/ngx_mgmt_module.html#usage_report) directive. This is used to configure the endpoint NGINX uses to send usage reports to NIM. 
| product.connect.nginx.com | -| **controller.mgmt.usageReport.interval** | Configures the interval of the [usage_report](https://nginx.org/en/docs/ngx_mgmt_module.html#usage_report) directive. This specifies the frequency that usage reports are sent. This field takes an [NGINX time](https://nginx.org/en/docs/syntax.html). | 1h | +| **controller.mgmt.usageReport.interval** | Configures the interval of the [usage_report](https://nginx.org/en/docs/ngx_mgmt_module.html#usage_report) directive. This specifies the frequency that usage reports are sent. Only seconds(s), minutes(m), and hours(h) are allowed and must be between 60s and 24h. | 1h | | **controller.mgmt.usageReport.proxyHost** | Configures the host name of the [proxy](https://nginx.org/en/docs/ngx_mgmt_module.html#proxy) directive with optional port. | N/A | | **controller.mgmt.usageReport.proxyCredentialsSecretName** | Configures the [proxy_username](https://nginx.org/en/docs/ngx_mgmt_module.html#proxy_username) directive as well as the [proxy_password](https://nginx.org/en/docs/ngx_mgmt_module.html#proxy_password) directive using a Kubernetes Opaque Secret. The Secret must contain `username` and `password` fields. | N/A | | **controller.mgmt.sslVerify** | Configures the [ssl_verify](https://nginx.org/en/docs/ngx_mgmt_module.html#ssl_verify) directive, which enables or disables verification of the usage reporting endpoint certificate. | true | @@ -318,20 +144,20 @@ The following tables lists the configurable parameters of the NGINX Ingress Cont | **controller.mgmt.sslCertificateSecretName** | Configures the secret used to create the `ssl_certificate` and `ssl_certificate_key` directives. This key assumes the secret is in the Namespace that NGINX Ingress Controller is deployed in. 
The secret must be of type `kubernetes.io/tls` | N/A | | **controller.mgmt.sslTrustedCertificateSecretName** | Configures the secret used to create the file(s) referenced the in [ssl_trusted_certifcate](https://nginx.org/en/docs/ngx_mgmt_module.html#ssl_trusted_certificate), and [ssl_crl](https://nginx.org/en/docs/ngx_mgmt_module.html#ssl_crl) directives. This key assumes the secret is in the Namespace that NGINX Ingress Controller is deployed in. The secret must be of type `nginx.org/ca`, where the `ca.crt` key contains a base64 encoded trusted cert, and the optional `ca.crl` key can contain a base64 encoded CRL. If the optional `ca.crl` key is supplied, it will configure the NGINX `ssl_crl` directive. | N/A | | **controller.mgmt.configMapName** | Allows changing the name of the MGMT config map. The name should not include a namespace| Autogenerated | -| **controller.nginxReloadTimeout** | The timeout in milliseconds which the Ingress Controller will wait for a successful NGINX reload after a change or at the initial start. | 60000 | -| **controller.hostNetwork** | Enables the Ingress Controller pods to use the host's network namespace. | false | -| **controller.dnsPolicy** | DNS policy for the Ingress Controller pods. | ClusterFirst | +| **controller.nginxReloadTimeout** | The timeout in milliseconds which the NGINX Ingress Controller will wait for a successful NGINX reload after a change or at the initial start. | 60000 | +| **controller.hostNetwork** | Enables the NGINX Ingress Controller pods to use the host's network namespace. | false | +| **controller.dnsPolicy** | DNS policy for the NGINX Ingress Controller pods. | ClusterFirst | | **controller.nginxDebug** | Enables debugging for NGINX. Uses the `nginx-debug` binary. Requires `error-log-level: debug` in the ConfigMap via `controller.config.entries`. | false | -| **controller.logLevel** | The log level of the Ingress Controller. 
| info | -| **controller.logFormat** | The log format of the Ingress Controller. | glog | -| **controller.image.digest** | The image digest of the Ingress Controller. | None | -| **controller.image.repository** | The image repository of the Ingress Controller. | nginx/nginx-ingress | -| **controller.image.tag** | The tag of the Ingress Controller image. | {{< nic-version >}} | -| **controller.image.pullPolicy** | The pull policy for the Ingress Controller image. | IfNotPresent | -| **controller.lifecycle** | The lifecycle of the Ingress Controller pods. | {} | -| **controller.customConfigMap** | The name of the custom ConfigMap used by the Ingress Controller. If set, then the default config is ignored. | "" | -| **controller.config.name** | The name of the ConfigMap used by the Ingress Controller. | Autogenerated | -| **controller.config.annotations** | The annotations of the Ingress Controller configmap. | {} | +| **controller.logLevel** | The log level of the NGINX Ingress Controller. | info | +| **controller.logFormat** | The log format of the NGINX Ingress Controller. | glog | +| **controller.image.digest** | The image digest of the NGINX Ingress Controller. | None | +| **controller.image.repository** | The image repository of the NGINX Ingress Controller. | nginx/nginx-ingress | +| **controller.image.tag** | The tag of the NGINX Ingress Controller image. | {{< nic-version >}} | +| **controller.image.pullPolicy** | The pull policy for the NGINX Ingress Controller image. | IfNotPresent | +| **controller.lifecycle** | The lifecycle of the NGINX Ingress Controller pods. | {} | +| **controller.customConfigMap** | The name of the custom ConfigMap used by the NGINX Ingress Controller. If set, then the default config is ignored. | "" | +| **controller.config.name** | The name of the ConfigMap used by the NGINX Ingress Controller. | Autogenerated | +| **controller.config.annotations** | The annotations of the NGINX Ingress Controller configmap. 
| {} | | **controller.config.entries** | The entries of the ConfigMap for customizing NGINX configuration. See [ConfigMap resource docs]({{< ref "/nic/configuration/global-configuration/configmap-resource.md" >}}) for the list of supported ConfigMap keys. | {} | | **controller.customPorts** | A list of custom ports to expose on the NGINX Ingress Controller pod. Follows the conventional Kubernetes yaml syntax for container ports. | [] | | **controller.defaultTLS.cert** | The base64-encoded TLS certificate for the default HTTPS server. **Note:** It is recommended that you specify your own certificate. Alternatively, omitting the default server secret completely will configure NGINX to reject TLS connections to the default server. | @@ -340,28 +166,28 @@ The following tables lists the configurable parameters of the NGINX Ingress Cont | **controller.wildcardTLS.cert** | The base64-encoded TLS certificate for every Ingress/VirtualServer host that has TLS enabled but no secret specified. If the parameter is not set, for such Ingress/VirtualServer hosts NGINX will break any attempt to establish a TLS connection. | None | | **controller.wildcardTLS.key** | The base64-encoded TLS key for every Ingress/VirtualServer host that has TLS enabled but no secret specified. If the parameter is not set, for such Ingress/VirtualServer hosts NGINX will break any attempt to establish a TLS connection. | None | | **controller.wildcardTLS.secret** | The secret with a TLS certificate and key for every Ingress/VirtualServer host that has TLS enabled but no secret specified. The value must follow the following format: `/`. Used as an alternative to specifying a certificate and key using `controller.wildcardTLS.cert` and `controller.wildcardTLS.key` parameters. | None | -| **controller.nodeSelector** | The node selector for pod assignment for the Ingress Controller pods. | {} | -| **controller.terminationGracePeriodSeconds** | The termination grace period of the Ingress Controller pod. 
| 30 | -| **controller.tolerations** | The tolerations of the Ingress Controller pods. | [] | -| **controller.affinity** | The affinity of the Ingress Controller pods. | {} | -| **controller.topologySpreadConstraints** | The topology spread constraints of the Ingress controller pods. | {} | -| **controller.env** | The additional environment variables to be set on the Ingress Controller pods. | [] | -| **controller.volumes** | The volumes of the Ingress Controller pods. | [] | -| **controller.volumeMounts** | The volumeMounts of the Ingress Controller pods. | [] | -| **controller.initContainers** | InitContainers for the Ingress Controller pods. | [] | -| **controller.extraContainers** | Extra (eg. sidecar) containers for the Ingress Controller pods. | [] | +| **controller.nodeSelector** | The node selector for pod assignment for the NGINX Ingress Controller pods. | {} | +| **controller.terminationGracePeriodSeconds** | The termination grace period of the NGINX Ingress Controller pod. | 30 | +| **controller.tolerations** | The tolerations of the NGINX Ingress Controller pods. | [] | +| **controller.affinity** | The affinity of the NGINX Ingress Controller pods. | {} | +| **controller.topologySpreadConstraints** | The topology spread constraints of the NGINX Ingress Controller pods. | {} | +| **controller.env** | The additional environment variables to be set on the NGINX Ingress Controller pods. | [] | +| **controller.volumes** | The volumes of the NGINX Ingress Controller pods. | [] | +| **controller.volumeMounts** | The volumeMounts of the NGINX Ingress Controller pods. | [] | +| **controller.initContainers** | InitContainers for the NGINX Ingress Controller pods. | [] | +| **controller.extraContainers** | Extra (eg. sidecar) containers for the NGINX Ingress Controller pods. | [] | | **controller.podSecurityContext**| The SecurityContext for Ingress Controller pods. 
| "seccompProfile": {"type": "RuntimeDefault"} | | **controller.securityContext** | The SecurityContext for Ingress Controller container. | {} | | **controller.initContainerSecurityContext** | The SecurityContext for Ingress Controller init container when `readOnlyRootFilesystem` is enabled by either setting `controller.securityContext.readOnlyRootFilesystem` or `controller.readOnlyRootFilesystem` to `true`. | {} | -| **controller.resources** | The resources of the Ingress Controller pods. | requests: cpu=100m,memory=128Mi | +| **controller.resources** | The resources of the NGINX Ingress Controller pods. | requests: cpu=100m,memory=128Mi | | **controller.initContainerResources** | The resources of the init container which is used when `readOnlyRootFilesystem` is enabled by either setting `controller.securityContext.readOnlyRootFilesystem` or `controller.readOnlyRootFilesystem` to `true`. | requests: cpu=100m,memory=128Mi | -| **controller.replicaCount** | The number of replicas of the Ingress Controller deployment. | 1 | -| **controller.ingressClass.name** | A class of the Ingress Controller. An IngressClass resource with the name equal to the class must be deployed. Otherwise, the Ingress Controller will fail to start. The Ingress Controller only processes resources that belong to its class - i.e. have the "ingressClassName" field resource equal to the class. The Ingress Controller processes all the VirtualServer/VirtualServerRoute/TransportServer resources that do not have the "ingressClassName" field for all versions of Kubernetes. | nginx | +| **controller.replicaCount** | The number of replicas of the NGINX Ingress Controller deployment. | 1 | +| **controller.ingressClass.name** | A class of the NGINX Ingress Controller. An IngressClass resource with the name equal to the class must be deployed. Otherwise, the NGINX Ingress Controller will fail to start. The NGINX Ingress Controller only processes resources that belong to its class - i.e.
have the "ingressClassName" field equal to the class. The NGINX Ingress Controller processes all the VirtualServer/VirtualServerRoute/TransportServer resources that do not have the "ingressClassName" field for all versions of Kubernetes. | nginx | | **controller.ingressClass.create** | Creates a new IngressClass object with the name `controller.ingressClass.name`. Set to `false` to use an existing ingressClass created using `kubectl` with the same name. If you use `helm upgrade`, do not change the values from the previous release as helm will delete IngressClass objects managed by helm. If you are upgrading from a release earlier than {{< nic-version >}}, do not set the value to false. | true | | **controller.ingressClass.setAsDefaultIngress** | New Ingresses without an `"ingressClassName"` field specified will be assigned the class specified in `controller.ingressClass.name`. Requires `controller.ingressClass.create`. | false | -| **controller.watchNamespace** | Comma separated list of namespaces the Ingress Controller should watch for resources. By default the Ingress Controller watches all namespaces. Mutually exclusive with `controller.watchNamespaceLabel`. Please note that if configuring multiple namespaces using the Helm cli `--set` option, the string needs to wrapped in double quotes and the commas escaped using a backslash - e.g. `--set controller.watchNamespace="default\,nginx-ingress"`. | "" | -| **controller.watchNamespaceLabel** | Configures the Ingress Controller to watch only those namespaces with label foo=bar. By default the Ingress Controller watches all namespaces. Mutually exclusive with `controller.watchNamespace`. | "" | -| **controller.watchSecretNamespace** | Comma separated list of namespaces the Ingress Controller should watch for resources of type Secret. If this arg is not configured, the Ingress Controller watches the same namespaces for all resources, see `controller.watchNamespace` and `controller.watchNamespaceLabel`.
All namespaces included with this argument must be part of either `controller.watchNamespace` or `controller.watchNamespaceLabel`. Please note that if configuring multiple namespaces using the Helm cli `--set` option, the string needs to wrapped in double quotes and the commas escaped using a backslash - e.g. `--set controller.watchSecretNamespace="default\,nginx-ingress"`. | "" | +| **controller.watchNamespace** | Comma separated list of namespaces the NGINX Ingress Controller should watch for resources. By default the NGINX Ingress Controller watches all namespaces. Mutually exclusive with `controller.watchNamespaceLabel`. Please note that if configuring multiple namespaces using the Helm cli `--set` option, the string needs to be wrapped in double quotes and the commas escaped using a backslash - e.g. `--set controller.watchNamespace="default\,nginx-ingress"`. | "" | +| **controller.watchNamespaceLabel** | Configures the NGINX Ingress Controller to watch only those namespaces with label foo=bar. By default the NGINX Ingress Controller watches all namespaces. Mutually exclusive with `controller.watchNamespace`. | "" | +| **controller.watchSecretNamespace** | Comma separated list of namespaces the NGINX Ingress Controller should watch for resources of type Secret. If this arg is not configured, the NGINX Ingress Controller watches the same namespaces for all resources, see `controller.watchNamespace` and `controller.watchNamespaceLabel`. All namespaces included with this argument must be part of either `controller.watchNamespace` or `controller.watchNamespaceLabel`. Please note that if configuring multiple namespaces using the Helm cli `--set` option, the string needs to be wrapped in double quotes and the commas escaped using a backslash - e.g. `--set controller.watchSecretNamespace="default\,nginx-ingress"`. | "" | | **controller.enableCustomResources** | Enable the custom resources. | true | | **controller.enableOIDC** | Enable OIDC policies.
| false | | **controller.enableTLSPassthrough** | Enable TLS Passthrough on default port 443. Requires `controller.enableCustomResources`. | false | @@ -369,46 +195,46 @@ The following tables lists the configurable parameters of the NGINX Ingress Cont | **controller.enableCertManager** | Enable x509 automated certificate management for VirtualServer resources using cert-manager (cert-manager.io). Requires `controller.enableCustomResources`. | false | | **controller.enableExternalDNS** | Enable integration with ExternalDNS for configuring public DNS entries for VirtualServer resources using [ExternalDNS](https://github.com/kubernetes-sigs/external-dns). Requires `controller.enableCustomResources`. | false | | **controller.globalConfiguration.create** | Creates the GlobalConfiguration custom resource. Requires `controller.enableCustomResources`. | false | -| **controller.globalConfiguration.spec** | The spec of the GlobalConfiguration for defining the global configuration parameters of the Ingress Controller. | {} | +| **controller.globalConfiguration.spec** | The spec of the GlobalConfiguration for defining the global configuration parameters of the NGINX Ingress Controller. | {} | | **controller.enableSnippets** | Enable custom NGINX configuration snippets in Ingress, VirtualServer, VirtualServerRoute and TransportServer resources. | false | -| **controller.healthStatus** | Add a location "/nginx-health" to the default server. The location responds with the 200 status code for any request. Useful for external health-checking of the Ingress Controller. | false | +| **controller.healthStatus** | Add a location "/nginx-health" to the default server. The location responds with the 200 status code for any request. Useful for external health-checking of the NGINX Ingress Controller. | false | | **controller.healthStatusURI** | Sets the URI of health status location in the default server. Requires `controller.healthStatus`. 
| "/nginx-health" | | **controller.nginxStatus.enable** | Enable the NGINX stub_status, or the NGINX Plus API. | true | | **controller.nginxStatus.port** | Set the port where the NGINX stub_status or the NGINX Plus API is exposed. | 8080 | | **controller.nginxStatus.allowCidrs** | Add IP/CIDR blocks to the allow list for NGINX stub_status or the NGINX Plus API. Separate multiple IP/CIDR by commas. | 127.0.0.1,::1 | -| **controller.priorityClassName** | The PriorityClass of the Ingress Controller pods. | None | -| **controller.service.create** | Creates a service to expose the Ingress Controller pods. | true | -| **controller.service.type** | The type of service to create for the Ingress Controller. | LoadBalancer | +| **controller.priorityClassName** | The PriorityClass of the NGINX Ingress Controller pods. | None | +| **controller.service.create** | Creates a service to expose the NGINX Ingress Controller pods. | true | +| **controller.service.type** | The type of service to create for the NGINX Ingress Controller. | LoadBalancer | | **controller.service.externalTrafficPolicy** | The externalTrafficPolicy of the service. The value Local preserves the client source IP. | Local | -| **controller.service.annotations** | The annotations of the Ingress Controller service. | {} | +| **controller.service.annotations** | The annotations of the NGINX Ingress Controller service. | {} | | **controller.service.extraLabels** | The extra labels of the service. | {} | | **controller.service.loadBalancerIP** | The static IP address for the load balancer. Requires `controller.service.type` set to `LoadBalancer`. The cloud provider must support this feature. | "" | -| **controller.service.externalIPs** | The list of external IPs for the Ingress Controller service. | [] | -| **controller.service.clusterIP** | The clusterIP for the Ingress Controller service, autoassigned if not specified. 
| "" | +| **controller.service.externalIPs** | The list of external IPs for the NGINX Ingress Controller service. | [] | +| **controller.service.clusterIP** | The clusterIP for the NGINX Ingress Controller service, autoassigned if not specified. | "" | | **controller.service.loadBalancerSourceRanges** | The IP ranges (CIDR) that are allowed to access the load balancer. Requires `controller.service.type` set to `LoadBalancer`. The cloud provider must support this feature. | [] | | **controller.service.name** | The name of the service. | Autogenerated | -| **controller.service.customPorts** | A list of custom ports to expose through the Ingress Controller service. Follows the conventional Kubernetes yaml syntax for service ports. | [] | -| **controller.service.httpPort.enable** | Enables the HTTP port for the Ingress Controller service. | true | -| **controller.service.httpPort.port** | The HTTP port of the Ingress Controller service. | 80 | +| **controller.service.customPorts** | A list of custom ports to expose through the NGINX Ingress Controller service. Follows the conventional Kubernetes yaml syntax for service ports. | [] | +| **controller.service.httpPort.enable** | Enables the HTTP port for the NGINX Ingress Controller service. | true | +| **controller.service.httpPort.port** | The HTTP port of the NGINX Ingress Controller service. | 80 | | **controller.service.httpPort.nodePort** | The custom NodePort for the HTTP port. Requires `controller.service.type` set to `NodePort`. | "" | -| **controller.service.httpPort.targetPort** | The target port of the HTTP port of the Ingress Controller service. | 80 | -| **controller.service.httpsPort.enable** | Enables the HTTPS port for the Ingress Controller service. | true | -| **controller.service.httpsPort.port** | The HTTPS port of the Ingress Controller service. | 443 | +| **controller.service.httpPort.targetPort** | The target port of the HTTP port of the NGINX Ingress Controller service. 
| 80 | +| **controller.service.httpsPort.enable** | Enables the HTTPS port for the NGINX Ingress Controller service. | true | +| **controller.service.httpsPort.port** | The HTTPS port of the NGINX Ingress Controller service. | 443 | | **controller.service.httpsPort.nodePort** | The custom NodePort for the HTTPS port. Requires `controller.service.type` set to `NodePort`. | "" | -| **controller.service.httpsPort.targetPort** | The target port of the HTTPS port of the Ingress Controller service. | 443 | -| **controller.serviceAccount.annotations** | The annotations of the Ingress Controller service account. | {} | -| **controller.serviceAccount.name** | The name of the service account of the Ingress Controller pods. Used for RBAC. | Autogenerated | +| **controller.service.httpsPort.targetPort** | The target port of the HTTPS port of the NGINX Ingress Controller service. | 443 | +| **controller.serviceAccount.annotations** | The annotations of the NGINX Ingress Controller service account. | {} | +| **controller.serviceAccount.name** | The name of the service account of the NGINX Ingress Controller pods. Used for RBAC. | Autogenerated | | **controller.serviceAccount.imagePullSecretName** | The name of the secret containing docker registry credentials. Secret must exist in the same namespace as the helm release. | "" | | **controller.serviceAccount.imagePullSecretsNames** | The list of secret names containing docker registry credentials. Secret must exist in the same namespace as the helm release. | [] | -| **controller.reportIngressStatus.enable** | Updates the address field in the status of Ingress resources with an external address of the Ingress Controller. You must also specify the source of the external address either through an external service via `controller.reportIngressStatus.externalService`, `controller.reportIngressStatus.ingressLink` or the `external-status-address` entry in the ConfigMap via `controller.config.entries`. 
**Note:** `controller.config.entries.external-status-address` takes precedence over the others. | true | -| **controller.reportIngressStatus.externalService** | Specifies the name of the service with the type LoadBalancer through which the Ingress Controller is exposed externally. The external address of the service is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. `controller.reportIngressStatus.enable` must be set to `true`. The default is autogenerated and enabled when `controller.service.create` is set to `true` and `controller.service.type` is set to `LoadBalancer`. | Autogenerated | -| **controller.reportIngressStatus.ingressLink** | Specifies the name of the IngressLink resource, which exposes the Ingress Controller pods via a BIG-IP system. The IP of the BIG-IP system is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. `controller.reportIngressStatus.enable` must be set to `true`. | "" | +| **controller.reportIngressStatus.enable** | Updates the address field in the status of Ingress resources with an external address of the NGINX Ingress Controller. You must also specify the source of the external address either through an external service via `controller.reportIngressStatus.externalService`, `controller.reportIngressStatus.ingressLink` or the `external-status-address` entry in the ConfigMap via `controller.config.entries`. **Note:** `controller.config.entries.external-status-address` takes precedence over the others. | true | +| **controller.reportIngressStatus.externalService** | Specifies the name of the service with the type LoadBalancer through which the NGINX Ingress Controller is exposed externally. The external address of the service is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. `controller.reportIngressStatus.enable` must be set to `true`. 
The default is autogenerated and enabled when `controller.service.create` is set to `true` and `controller.service.type` is set to `LoadBalancer`. | Autogenerated | +| **controller.reportIngressStatus.ingressLink** | Specifies the name of the IngressLink resource, which exposes the NGINX Ingress Controller pods via a BIG-IP system. The IP of the BIG-IP system is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. `controller.reportIngressStatus.enable` must be set to `true`. | "" | | **controller.reportIngressStatus.enableLeaderElection** | Enable Leader election to avoid multiple replicas of the controller reporting the status of Ingress resources. `controller.reportIngressStatus.enable` must be set to `true`. | true | | **controller.reportIngressStatus.leaderElectionLockName** | Specifies the name of the ConfigMap, within the same namespace as the controller, used as the lock for leader election. controller.reportIngressStatus.enableLeaderElection must be set to true. | Autogenerated | | **controller.reportIngressStatus.annotations** | The annotations of the leader election configmap. | {} | -| **controller.pod.annotations** | The annotations of the Ingress Controller pod. | {} | -| **controller.pod.extraLabels** | The additional extra labels of the Ingress Controller pod. | {} | -| **controller.appprotect.enable** | Enables the App Protect WAF module in the Ingress Controller. | false | +| **controller.pod.annotations** | The annotations of the NGINX Ingress Controller pod. | {} | +| **controller.pod.extraLabels** | The additional extra labels of the NGINX Ingress Controller pod. | {} | +| **controller.appprotect.enable** | Enables the App Protect WAF module in the NGINX Ingress Controller. | false | | **controller.appprotect.v5** | Enables App Protect WAF v5. | false | | **controller.appprotect.volumes** | Volumes for App Protect WAF v5. 
| [{"name": "app-protect-bd-config", "emptyDir": {}},{"name": "app-protect-config", "emptyDir": {}},{"name": "app-protect-bundles", "emptyDir": {}}] | | **controller.appprotect.enforcer.host** | Host that the App Protect WAF v5 Enforcer runs on. | "127.0.0.1" | @@ -423,26 +249,26 @@ The following tables lists the configurable parameters of the NGINX Ingress Cont | **controller.appprotect.configManager.image.digest** | The digest of the App Protect WAF v5 Configuration Manager. Takes precedence over tag if set. | "" | | **controller.appprotect.configManager.image.pullPolicy** | The pull policy for the App Protect WAF v5 Configuration Manager image. | IfNotPresent | | **controller.appprotect.configManager.securityContext** | The security context for App Protect WAF v5 Configuration Manager container. | {"allowPrivilegeEscalation":false,"runAsUser":101,"runAsNonRoot":true,"capabilities":{"drop":["all"]}} | -| **controller.appprotectdos.enable** | Enables the App Protect DoS module in the Ingress Controller. | false | -| **controller.appprotectdos.enable** | Enables the App Protect DoS module in the Ingress Controller. | false | +| **controller.appprotectdos.enable** | Enables the App Protect DoS module in the NGINX Ingress Controller. | false | | **controller.appprotectdos.debug** | Enable debugging for App Protect DoS. | false | | **controller.appprotectdos.maxDaemons** | Max number of ADMD instances. | 1 | | **controller.appprotectdos.maxWorkers** | Max number of nginx processes to support. | Number of CPU cores in the machine | | **controller.appprotectdos.memory** | RAM memory size to consume in MB. | 50% of free RAM in the container or 80MB, the smaller | -| **controller.readyStatus.enable** | Enables the readiness endpoint `"/nginx-ready"`. The endpoint returns a success code when NGINX has loaded all the config after the startup.
This also configures a readiness probe for the Ingress Controller pods that uses the readiness endpoint. | true | +| **controller.readyStatus.enable** | Enables the readiness endpoint `"/nginx-ready"`. The endpoint returns a success code when NGINX has loaded all the config after the startup. This also configures a readiness probe for the NGINX Ingress Controller pods that uses the readiness endpoint. | true | | **controller.readyStatus.port** | The HTTP port for the readiness endpoint. | 8081 | -| **controller.readyStatus.initialDelaySeconds** | The number of seconds after the Ingress Controller pod has started before readiness probes are initiated. | 0 | +| **controller.readyStatus.initialDelaySeconds** | The number of seconds after the NGINX Ingress Controller pod has started before readiness probes are initiated. | 0 | | **controller.enableLatencyMetrics** | Enable collection of latency metrics for upstreams. Requires `prometheus.create`. | false | | **controller.minReadySeconds** | Specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. [docs](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) | 0 | | **controller.autoscaling.enabled** | Enables HorizontalPodAutoscaling. | false | -| **controller.autoscaling.annotations** | The annotations of the Ingress Controller HorizontalPodAutoscaler. | {} | +| **controller.autoscaling.annotations** | The annotations of the NGINX Ingress Controller HorizontalPodAutoscaler. | {} | | **controller.autoscaling.behavior** | Behavior configuration for the HPA. | {} | | **controller.autoscaling.minReplicas** | Minimum number of replicas for the HPA. | 1 | | **controller.autoscaling.maxReplicas** | Maximum number of replicas for the HPA. | 3 | | **controller.autoscaling.targetCPUUtilizationPercentage** | The target CPU utilization percentage. 
| 50 | | **controller.autoscaling.targetMemoryUtilizationPercentage** | The target memory utilization percentage. | 50 | | **controller.podDisruptionBudget.enabled** | Enables PodDisruptionBudget. | false | -| **controller.podDisruptionBudget.annotations** | The annotations of the Ingress Controller pod disruption budget | {} | +| **controller.podDisruptionBudget.annotations** | The annotations of the NGINX Ingress Controller pod disruption budget | {} | | **controller.podDisruptionBudget.minAvailable** | The number of Ingress Controller pods that should be available. This is a mutually exclusive setting with "maxUnavailable". | 0 | | **controller.podDisruptionBudget.maxUnavailable** | The number of Ingress Controller pods that can be unavailable. This is a mutually exclusive setting with "minAvailable". | 0 | | **controller.strategy** | Specifies the strategy used to replace old Pods with new ones. Docs for [Deployment update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy) and [Daemonset update strategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) | {} | @@ -471,20 +297,50 @@ The following tables lists the configurable parameters of the NGINX Ingress Cont | **serviceInsight.secret** | The namespace / name of a Kubernetes TLS Secret. If specified, this secret is used to secure the Service Insight endpoint with TLS connections. | "" | | **serviceNameOverride** | Used to prevent cloud load balancers from being replaced due to service name change during helm upgrades. | "" | | **nginxServiceMesh.enable** | Enable integration with NGINX Service Mesh. See the NGINX Service Mesh docs for more details. Requires `controller.nginxplus`. | false | -| **nginxServiceMesh.enableEgress** | Enable NGINX Service Mesh workloads to route egress traffic through the Ingress Controller. See the NGINX Service Mesh docs for more details. Requires `nginxServiceMesh.enable`. 
| false | -|**nginxAgent.enable** | Enable NGINX Agent to integrate the Security Monitoring and App Protect WAF modules. Requires `controller.appprotect.enable`. | false | -|**nginxAgent.instanceGroup** | Set a custom Instance Group name for the deployment, shown when connected to NGINX Instance Manager. `nginx-ingress.controller.fullname` will be used if not set. | "" | -|**nginxAgent.logLevel** | Log level for NGINX Agent. | "error | -|**nginxAgent.instanceManager.host** | FQDN or IP for connecting to NGINX Ingress Controller. Required when `nginxAgent.enable` is set to `true` | "" | -|**nginxAgent.instanceManager.grpcPort** | Port for connecting to NGINX Ingress Controller. | 443 | -|**nginxAgent.instanceManager.sni** | Server Name Indication for Instance Manager. See the NGINX Agent [docs]({{< ref "/agent/configuration/encrypt-communication.md" >}}) for more details. | "" | -|**nginxAgent.instanceManager.tls.enable** | Enable TLS for Instance Manager connection. | true | -|**nginxAgent.instanceManager.tls.skipVerify** | Skip certification verification for Instance Manager connection. | false | -|**nginxAgent.instanceManager.tls.caSecret** | Name of `nginx.org/ca` secret used for verification of Instance Manager TLS. | "" | -|**nginxAgent.instanceManager.tls.secret** | Name of `kubernetes.io/tls` secret with a TLS certificate and key for using mTLS between NGINX Agent and Instance Manager. See the NGINX Instance Manager [docs]({{< ref "/nim/system-configuration/secure-traffic.md#mutual-client-certificate-authentication-setup-mtls" >}}) and the NGINX Agent [docs]({{< ref "/agent/configuration/encrypt-communication.md" >}}) for more details. | "" | -|**nginxAgent.syslog.host** | Address for NGINX Agent to run syslog listener. | 127.0.0.1 | -|**nginxAgent.syslog.port** | Port for NGINX Agent to run syslog listener. | 1514 | -|**nginxAgent.napMonitoring.collectorBufferSize** | Buffer size for collector. Will contain log lines and parsed log lines. 
| 50000 | -|**nginxAgent.napMonitoring.processorBufferSize** | Buffer size for processor. Will contain log lines and parsed log lines. | 50000 | -|**nginxAgent.customConfigMap** | The name of a custom ConfigMap to use instead of the one provided by default. | "" | +| **nginxServiceMesh.enableEgress** | Enable NGINX Service Mesh workloads to route egress traffic through the NGINX Ingress Controller. See the NGINX Service Mesh docs for more details. Requires `nginxServiceMesh.enable`. | false | +|**nginxAgent.enable** | Enable NGINX Agent 3.x to allow [connecting to NGINX One Console]({{< ref "/nginx-one/k8s/add-nic.md" >}}) or to integrate NGINX Agent 2.x for [Security Monitoring]({{< ref "/nic/tutorials/security-monitoring.md" >}}). | false | +|**nginxAgent.logLevel** | Log level for NGINX Agent. | "error" | +|**nginxAgent.dataplaneKeySecretName** | Name of the Kubernetes Secret containing the Data Plane key used to authenticate to NGINX One Console. Learn more about [connecting to NGINX One Console]({{< ref "/nginx-one/k8s/add-nic.md" >}}). Required when `nginxAgent.enable` is set to `true`. Requires NGINX Agent 3.x. | "" | +|**nginxAgent.endpointHost** | Domain or IP address for the NGINX One Console. Requires NGINX Agent 3.x. | "agent.connect.nginx.com" | +|**nginxAgent.endpointPort** | Port for the NGINX One Console endpoint. Requires NGINX Agent 3.x. | 443 | +|**nginxAgent.tlsSkipVerify** | Skip TLS verification for the NGINX One Console endpoint. Requires NGINX Agent 3.x. | false | +|**nginxAgent.instanceGroup** | Set a custom Instance Group name for the deployment, shown when connected to NGINX Instance Manager. `nginx-ingress.controller.fullname` will be used if not set. Requires NGINX Agent 2.x. | "" | +|**nginxAgent.instanceManager.host** | FQDN or IP address for connecting to NGINX Instance Manager. Required when `nginxAgent.enable` is set to `true`. Requires NGINX Agent 2.x. | "" | +|**nginxAgent.instanceManager.grpcPort** | Port for connecting to NGINX Instance Manager.
Requires NGINX Agent 2.x. | 443 |
+|**nginxAgent.instanceManager.sni** | Server Name Indication for Instance Manager. See the NGINX Agent [docs]({{< ref "/agent/configuration/encrypt-communication.md" >}}) for more details. Requires NGINX Agent 2.x. | "" |
+|**nginxAgent.instanceManager.tls.enable** | Enable TLS for Instance Manager connection. Requires NGINX Agent 2.x. | true |
+|**nginxAgent.instanceManager.tls.skipVerify** | Skip certificate verification for Instance Manager connection. Requires NGINX Agent 2.x. | false |
+|**nginxAgent.instanceManager.tls.caSecret** | Name of `nginx.org/ca` secret used for verification of Instance Manager TLS. Requires NGINX Agent 2.x. | "" |
+|**nginxAgent.instanceManager.tls.secret** | Name of `kubernetes.io/tls` secret with a TLS certificate and key for using mTLS between NGINX Agent and Instance Manager. See the NGINX Instance Manager [docs]({{< ref "/nim/system-configuration/secure-traffic.md#mutual-client-certificate-authentication-setup-mtls" >}}) and the NGINX Agent [docs]({{< ref "/agent/configuration/encrypt-communication.md" >}}) for more details. Requires NGINX Agent 2.x. | "" |
+|**nginxAgent.syslog.host** | Address for NGINX Agent to run syslog listener. Requires NGINX Agent 2.x. | 127.0.0.1 |
+|**nginxAgent.syslog.port** | Port for NGINX Agent to run syslog listener. Requires NGINX Agent 2.x. | 1514 |
+|**nginxAgent.napMonitoring.collectorBufferSize** | Buffer size for collector. Will contain log lines and parsed log lines. Requires NGINX Agent 2.x. | 50000 |
+|**nginxAgent.napMonitoring.processorBufferSize** | Buffer size for processor. Will contain log lines and parsed log lines. Requires NGINX Agent 2.x. | 50000 |
+|**nginxAgent.customConfigMap** | The name of a custom ConfigMap to use instead of the one provided by default. Requires NGINX Agent 2.x. | "" |
{{}}
+
+## Uninstall NGINX Ingress Controller
+
+To uninstall NGINX Ingress Controller, you must first remove the chart.
+ +To remove a release named _my-release_, use the following command: + +```shell +helm uninstall my-release +``` + +The command removes all the Kubernetes components associated with the release and deletes the release. + +Uninstalling the release does not remove the CRDs. To do so, first pull the chart sources: + +```shell +helm pull oci://ghcr.io/nginx/charts/nginx-ingress --untar --version {{< nic-helm-version >}} +``` + +Then use _kubectl_ to delete the CRDs: + +```shell +kubectl delete -f crds/ +``` + +{{< call-out "warning" >}} This command will delete all the corresponding custom resources in your cluster across all namespaces. Please ensure there are no custom resources that you want to keep and there are no other NGINX Ingress Controller instances running in the cluster. {{< /call-out >}} \ No newline at end of file diff --git a/content/nic/installation/installing-nic/installation-with-manifests.md b/content/nic/installation/installing-nic/installation-with-manifests.md index 2f44611f6..7cc5e8372 100644 --- a/content/nic/installation/installing-nic/installation-with-manifests.md +++ b/content/nic/installation/installing-nic/installation-with-manifests.md @@ -2,27 +2,27 @@ title: Installation with Manifests toc: true weight: 200 -type: how-to -product: NIC +nd-content-type: how-to +nd-product: NIC nd-docs: DOCS-603 --- This guide explains how to use Manifests to install F5 NGINX Ingress Controller, then create both common and custom resources and set up role-based access control. -## Before you start +## Before you begin If you are using NGINX Plus, get the NGINX Ingress Controller JWT and [create a license secret]({{< ref "/nic/installation/create-license-secret.md" >}}). ### Get the NGINX Controller Image -{{< note >}} Always use the latest stable release listed on the [releases page]({{< ref "/nic/releases.md" >}}). {{< /note >}} +{{< call-out "note" >}} Always use the latest stable release listed on the [releases page]({{< ref "/nic/releases.md" >}}). 
{{< /call-out >}}

Choose one of the following methods to get the NGINX Ingress Controller image:

- **NGINX Ingress Controller**: Download the image `nginx/nginx-ingress` from [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress).
- **NGINX Plus Ingress Controller**: You have two options for this, both requiring an NGINX Ingress Controller subscription.
-  - Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic.
-  - The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image.
+  - View the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic.
+  - View the [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic.
- **Build your own image**: To build your own image, follow the [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) topic.

### Clone the repository
diff --git a/content/nic/installation/installing-nic/installation-with-operator.md b/content/nic/installation/installing-nic/installation-with-operator.md
index 459d7cfd1..fc659244a 100644
--- a/content/nic/installation/installing-nic/installation-with-operator.md
+++ b/content/nic/installation/installing-nic/installation-with-operator.md
@@ -9,17 +9,17 @@ nd-docs: DOCS-604

This document explains how to install F5 NGINX Ingress Controller using NGINX Ingress Operator.

-## Before you start
+## Before you begin

If you're using NGINX Plus, get the NGINX Ingress Controller JWT and [create a license secret]({{< ref "/nic/installation/create-license-secret.md" >}}).
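As a sketch of that prerequisite step, creating the license Secret is typically a single `kubectl` command. The file name `license.jwt` and the `nginx-ingress` namespace are assumptions — adjust both to match your environment:

```shell
# Create the NGINX Plus license Secret from the JWT downloaded from MyF5.
# Assumes the JWT is saved locally as ./license.jwt and that NGINX Ingress
# Controller will run in the "nginx-ingress" namespace.
kubectl create secret generic license-token \
  --from-file=license.jwt=./license.jwt \
  --type=nginx.com/license \
  -n nginx-ingress
```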
{{< note >}} We recommend the most recent stable version of NGINX Ingress Controller, available on the GitHub repository's [releases page]({{< ref "/nic/releases.md" >}}). {{< /note >}}

1. Make sure you have access to the NGINX Ingress Controller image:
-  - For NGINX Ingress Controller, use the image `nginx/nginx-ingress` from [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress).
-  - For NGINX Plus Ingress Controller, view the [Get the F5 Registry NGINX Ingress Controller image]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic for details on how to pull the image from the F5 Docker registry.
-  - The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image.
-  - The [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) topic explains how to push an image to a private Docker registry.
+  - For NGINX Ingress Controller, use the image `nginx/nginx-ingress` from [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress).
+  - For NGINX Plus Ingress Controller, view the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic for details on how to pull the image from the F5 Docker registry.
+  - The [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic describes how to use your subscription JWT token to get the image.
+  - The [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) topic explains how to push an image to a private Docker registry.
1. Install the NGINX Ingress Operator following the [instructions](https://github.com/nginx/nginx-ingress-helm-operator/blob/main/docs/installation.md).
1.
Create the SecurityContextConstraint as outlined in the ["Getting Started" instructions](https://github.com/nginx/nginx-ingress-helm-operator/blob/main/README.md#getting-started). diff --git a/content/nic/installation/installing-nic/upgrade-to-v4.md b/content/nic/installation/installing-nic/upgrade-to-v4.md deleted file mode 100644 index f63d33fb2..000000000 --- a/content/nic/installation/installing-nic/upgrade-to-v4.md +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: Upgrade from NGINX Ingress Controller v3.x to v4.0.0 -toc: true -weight: 400 -nd-content-type: how-to -nd-product: NIC -nd-docs: DOCS-1862 ---- - -This document explains how to upgrade F5 NGINX Ingress Controller from version v3.x to v4.0.0. - -There are two necessary steps required: updating the `apiVersion` value of custom resources and configuring structured logging. - -For NGINX Plus users, there is a third step to create a Secret for your license. - -{{< call-out "warning" "This upgrade path is intended for 3.x to 4.0.0 only" >}} - -The instructions in this document are intended only for users upgrading from NGINX Ingress Controller 3.x to 4.0.0. Internal changes meant that backwards compability was not possible, requiring extra steps to upgrade. - -From NGINX Ingress Controller v4.0.0 onwards, you can upgrade as normal, based on your installation method: [Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md">}}) or [Manifests]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md">}}). - -{{< /call-out >}} - ---- - -## Update custom resource apiVersion - -If the Helm chart you have been using is `v2.x`, before upgrading to NGINX Ingress Controller 4.0.0 you must update your GlobalConfiguration, Policy and TransportServer resources from `apiVersion: k8s.nginx.org/v1alpha1` to `apiVersion: k8s.nginx.org/v1`. 
- -If the Helm chart you have been using is `v1.0.2` or earlier (NGINX Ingress Controller `v3.3.2`), upgrade to Helm chart `v1.4.2` (NGINX Ingress Controller `v3.7.2`) before updating your GlobalConfiguration, Policy and TransportServer resources. - -The example below shows the change for a Policy resource: you must do the same for all GlobalConfiguration and TransportServer resources. - -{{}} - -{{% comment %}} Keep this left aligned. {{% /comment %}} -{{%tab name="Before"%}} - -```yaml -apiVersion: k8s.nginx.org/v1alpha1 -kind: Policy -metadata: - name: rate-limit-policy -spec: - rateLimit: - rate: 1r/s - key: ${binary_remote_addr} - zoneSize: 10M -``` -{{% /tab %}} - -{{%tab name="After"%}} -```yaml -apiVersion: k8s.nginx.org/v1 -kind: Policy -metadata: - name: rate-limit-policy -spec: - rateLimit: - rate: 1r/s - key: ${binary_remote_addr} - zoneSize: 10M -``` -{{% /tab %}} - -{{}} - -{{< warning >}} -If a *GlobalConfiguration*, *Policy* or *TransportServer* resource is deployed with `apiVersion: k8s.nginx.org/v1alpha1`, it will be **deleted** during the upgrade process. -{{}} - -Once above specified custom resources are moved to `v1` ,please run below `kubectl` commands before upgrading to v4.0.0 Custom Resource Definitions (CRDs) to avoid [this issue](https://github.com/nginx/kubernetes-ingress/issues/7010). - -```shell -kubectl patch customresourcedefinitions transportservers.k8s.nginx.org --subresource='status' --type='merge' -p '{"status":{"storedVersions": ["v1"]}}' -``` - -```shell -kubectl patch customresourcedefinitions globalconfigurations.k8s.nginx.org --subresource='status' --type='merge' -p '{"status":{"storedVersions": ["v1"]}}' -``` - ---- - -## Configure structured logging - -To configure structured logging, you must update your log deployment arguments from an integer to a string. The logs themselves can also be rendered in different formats. - -{{< note >}} These options apply to NGINX Ingress Controller logs, and do not affect NGINX logs. 
{{< /note >}} - -| **Level arguments** | **Format arguments** | -|---------------------|----------------------| -| `trace` | `json` | -| `debug` | `text` | -| `info` | `glog` | -| `warning` | | -| `error` | | -| `fatal` | | - -{{}} - -{{%tab name="Helm"%}} - -The Helm value of `controller.logLevel` has been changed from an integer to a string. - -To change the rendering of the log format, use the `controller.logFormat` key. - -```yaml -controller: - logLevel: info - logFormat: json -``` -{{% /tab %}} - -{{%tab name="Manifests"%}} - -The command line argument `-v` has been replaced with `-log-level`, and takes a string instead of an integer. The argument `-logtostderr` has also been deprecated. - -To change the rendering of the log format, use the `-log-format` argument. - -```yaml -args: - - -log-level=info - - -log-format=json -``` -{{% /tab %}} - -{{}} - ---- - -## Create License secret - -If you're using [NGINX Plus]({{< ref "/nic/overview/nginx-plus.md" >}}) with NGINX Ingress Controller, you should read the [Create License Secret]({{< ref "/nic/installation/create-license-secret.md" >}}) topic to set up your NGINX Plus license. - -The topic also contains guidance for [sending reports to NGINX Instance Manager]({{< ref "/nic/installation/create-license-secret.md#nim">}}), which is necessary for air-gapped environments. - -In prior versions, usage reporting with the cluster connector was required: it is no longer necessary, as it is built into NGINX Plus. 
diff --git a/content/nic/installation/integrations/_index.md b/content/nic/installation/integrations/_index.md index 83943f248..1690f4888 100644 --- a/content/nic/installation/integrations/_index.md +++ b/content/nic/installation/integrations/_index.md @@ -1,6 +1,6 @@ --- title: Integrations description: -weight: 600 +weight: 800 url: /nginx-ingress-controller/installation/integrations --- diff --git a/content/nic/installation/integrations/app-protect-dos/installation.md b/content/nic/installation/integrations/app-protect-dos/installation.md index 75a4439ae..0a50deb65 100644 --- a/content/nic/installation/integrations/app-protect-dos/installation.md +++ b/content/nic/installation/integrations/app-protect-dos/installation.md @@ -226,5 +226,5 @@ For more information, see the [Configuration guide]({{< ref "/nic/installation/i If you prefer not to build your own NGINX Ingress Controller image, you can use pre-built images. Here are your options: -- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic. - - The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image. +- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic. + - The [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic describes how to use your subscription JWT token to get the image. 
diff --git a/content/nic/installation/integrations/app-protect-waf-v5/installation.md b/content/nic/installation/integrations/app-protect-waf-v5/installation.md index 2cdc5964c..680f767fa 100644 --- a/content/nic/installation/integrations/app-protect-waf-v5/installation.md +++ b/content/nic/installation/integrations/app-protect-waf-v5/installation.md @@ -501,5 +501,5 @@ For more information, see the [Configuration guide]({{< ref "/nic/installation/i If you prefer not to build your own NGINX Ingress Controller image, you can use pre-built images. Here are your options: -- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic. -- The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image. +- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic. +- The [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic describes how to use your subscription JWT token to get the image. diff --git a/content/nic/installation/integrations/app-protect-waf/installation.md b/content/nic/installation/integrations/app-protect-waf/installation.md index ed7732450..b4170499d 100644 --- a/content/nic/installation/integrations/app-protect-waf/installation.md +++ b/content/nic/installation/integrations/app-protect-waf/installation.md @@ -217,5 +217,5 @@ For more information, see the [Configuration guide]({{< ref "/nic/installation/i If you prefer not to build your own NGINX Ingress Controller image, you can use pre-built images. 
Here are your options: -- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic. -- The [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}) topic describes how to use your subscription JWT token to get the image. +- Download the image using your NGINX Ingress Controller subscription certificate and key. View the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) topic. +- The [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topic describes how to use your subscription JWT token to get the image. diff --git a/content/nic/installation/integrations/nic-n1-console.md b/content/nic/installation/integrations/nic-n1-console.md deleted file mode 100644 index 8602062a0..000000000 --- a/content/nic/installation/integrations/nic-n1-console.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: Connect NGINX Ingress Controller to NGINX One Console -toc: true -draft: true -weight: 1800 -nd-type: how-to -nd-product: NIC ---- - -This document explains how to connect F5 NGINX Ingress Controller to NGINX One Console using NGINX Agent. - -Connecting NGINX Ingress Controller to NGINX One Console enables centralized monitoring of all controller instances. - -## Deploy NGINX Ingress Controller with NGINX Agent - -{{}} - -{{%tab name="Helm"%}} - -Edit your `values.yaml` file to enable NGINX Agent and configure it to connect to NGINX One Console: -```yaml -nginxAgent: - enable: true - dataplaneKey: "" -``` - - The `dataplaneKey` is used to authenticate the agent with NGINX One Console. 
See the NGINX One Console Docs [here]({{< ref "/nginx-one/getting-started.md#generate-data-plane-key" >}}) to generate your dataplane key from the NGINX One Console. - - -Follow the [Installation with Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md" >}}) instructions to deploy NGINX Ingress Controller. - -{{%/tab%}} - -{{%tab name="Manifests"%}} - -Add the following flag to the deployment/daemonset file of NGINX Ingress Controller: - -```yaml -args: -- -agent=true -``` - -Create a ConfigMap with an `nginx-agent.conf` file: - -```yaml -kind: ConfigMap -apiVersion: v1 -metadata: - name: nginx-agent-config - namespace: -data: - nginx-agent.conf: |- - log: - # set log level (error, info, debug; default "info") - level: info - # set log path. if empty, don't log to file. - path: "" - - allowed_directories: - - /etc/nginx - - /usr/lib/nginx/modules - - features: - - certificates - - connection - - metrics - - file-watcher - - ## command server settings - command: - server: - host: product.connect.nginx.com - port: 443 - auth: - token: "" - tls: - skip_verify: false -``` - -Make sure you set the namespace in the nginx-agent-config to the same namespace as NGINX Ingress Controller. - -Mount the ConfigMap to the deployment/daemonset file of NGINX Ingress Controller: - -```yaml -volumeMounts: -- name: nginx-agent-config - mountPath: /etc/nginx-agent/nginx-agent.conf - subPath: nginx-agent.conf -volumes: -- name: nginx-agent-config - configMap: - name: nginx-agent-config -``` - -Follow the [Installation with Manifests]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) instructions to deploy NGINX Ingress Controller. - -{{%/tab%}} - -{{}} - -## Verify that NGINX Ingress Controller is connected to NGINX One - -After deploying NGINX Ingress Controller with NGINX Agent, you can verify the connection to NGINX One Console. - -Log in to your NGINX One Console account and navigate to the Instances dashboard. 
Your NGINX Ingress Controller instances should appear in the list, where the instance name will be the pod name. - -## Troubleshooting - -If you encounter issues connecting NGINX Ingress Controller to NGINX One Console, try the following steps based on your image type: - -Check the NGINX Agent version: - -```shell -kubectl exec -it -n -- nginx-agent -v -``` - -If nginx-agent version is v3, continue with the following steps. -Otherwise, make sure you are using an image that does not include App Protect. - -Check the NGINX Agent configuration: - -```shell -kubectl exec -it -n -- cat /etc/nginx-agent/nginx-agent.conf -``` - -Check NGINX Agent logs: - -```shell -kubectl exec -it -n -- nginx-agent -``` diff --git a/content/nic/installation/nic-images/add-image-to-cluster.md b/content/nic/installation/nic-images/add-image-to-cluster.md new file mode 100644 index 000000000..f07d77ff4 --- /dev/null +++ b/content/nic/installation/nic-images/add-image-to-cluster.md @@ -0,0 +1,167 @@ +--- +title: Add an NGINX Ingress Controller image to your cluster +toc: true +weight: 150 +nd-content-type: how-to +nd-product: NIC +nd-docs: DOCS-1454 +--- + +This document describes how to add an F5 NGINX Plus Ingress Controller image from the F5 Docker registry into your Kubernetes cluster using a JWT token. 
+
+## Before you begin
+
+To follow these steps, you will need the following prerequisite:
+
+- [Create a license Secret]({{< ref "/nic/installation/create-license-secret.md" >}})
+
+You can also get the NGINX Ingress Controller image using the following alternate methods:
+
+- [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}})
+- [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}})
+- For NGINX Open Source, you can pull the [nginx/nginx-ingress image](https://hub.docker.com/r/nginx/nginx-ingress/) from DockerHub
+
+## Helm deployments
+
+If you are using Helm for deployment, there are two main methods: using a _chart_ or _source_.
+
+### Add the image from chart
+
+The following command installs NGINX Ingress Controller with a Helm chart, passing required arguments using the `set` parameter.
+
+```shell
+helm install my-release -n nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} --set controller.image.repository=private-registry.nginx.com/nginx-ic/nginx-plus-ingress --set controller.image.tag={{< nic-version >}} --set controller.nginxplus=true --set controller.serviceAccount.imagePullSecretName=regcred
+```
+
+You can also use the certificate and key from the MyF5 portal and the Docker registry API to list the available image tags for the repositories, for example:
+
+```shell
+curl https://private-registry.nginx.com/v2/nginx-ic/nginx-plus-ingress/tags/list --key <nginx-repo.key> --cert <nginx-repo.crt> | jq
+```
+```json
+{
+  "name": "nginx-ic/nginx-plus-ingress",
+  "tags": [
+    "{{< nic-version >}}-alpine",
+    "{{< nic-version >}}-alpine-fips",
+    "{{< nic-version >}}-ubi",
+    "{{< nic-version >}}"
+  ]
+}
+```
+
+```shell
+curl https://private-registry.nginx.com/v2/nginx-ic-nap/nginx-plus-ingress/tags/list --key <nginx-repo.key> --cert <nginx-repo.crt> | jq
+```
+```json
+{
+  "name": "nginx-ic-nap/nginx-plus-ingress",
+  "tags": [
+    "{{< nic-version >}}-alpine-fips",
+    "{{< nic-version
>}}-ubi",
+    "{{< nic-version >}}"
+  ]
+}
+```
+
+```shell
+curl https://private-registry.nginx.com/v2/nginx-ic-dos/nginx-plus-ingress/tags/list --key <nginx-repo.key> --cert <nginx-repo.crt> | jq
+```
+```json
+{
+  "name": "nginx-ic-dos/nginx-plus-ingress",
+  "tags": [
+    "{{< nic-version >}}-ubi",
+    "{{< nic-version >}}"
+  ]
+}
+```
+
+The `jq` command was used in these examples to make the JSON output easier to read.
+
+### Add the image from source
+
+The [Installation with Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md#install-the-helm-chart-from-source" >}}) documentation has a section describing how to use sources: these are the unique steps for Docker secrets using JWT tokens.
+
+1. Clone the NGINX [`kubernetes-ingress` repository](https://github.com/nginx/kubernetes-ingress).
+1. Navigate to the `charts/nginx-ingress` folder of your local clone.
+1. Open the `values.yaml` file in an editor.
+
+   You must change a few lines for NGINX Ingress Controller with NGINX Plus to be deployed:
+
+   1. Change the `nginxplus` argument to `true`.
+   1. Change the `repository` argument to the NGINX Ingress Controller image you intend to use.
+   1. Add an argument to `imagePullSecretName` or `imagePullSecretsNames` to allow Docker to pull the image from the private registry.
+
+The following code block shows snippets of the parameters you will need to change, and an example of their contents:
+
+```yaml
+## Deploys the Ingress Controller for NGINX Plus
+nginxplus: true
+## Truncated fields
+## ...
+## ...
+image:
+  ## The image repository for the desired NGINX Ingress Controller image
+  repository: private-registry.nginx.com/nginx-ic/nginx-plus-ingress
+
+  ## The version tag
+  tag: {{< nic-version >}}
+
+serviceAccount:
+  ## The annotations of the service account of the Ingress Controller pods.
+  annotations: {}
+
+## Truncated fields
+## ...
+## ...
+
+  ## The name of the secret containing docker registry credentials.
+  ## Secret must exist in the same namespace as the helm release.
+  ## Note that also imagePullSecretsNames can be used here if multiple secrets need to be set.
+  imagePullSecretName: regcred
+```
+
+With the modified `values.yaml` file, you can now use Helm to install NGINX Ingress Controller, for example:
+
+```shell
+helm install nicdev01 -n nginx-ingress --create-namespace -f values.yaml .
+```
+
+The above command will install NGINX Ingress Controller in the `nginx-ingress` namespace.
+
+If the namespace does not exist, `--create-namespace` will create it. Using `-f values.yaml` tells Helm to use the `values.yaml` file that you modified earlier with the settings you want to apply for your NGINX Ingress Controller deployment.
+
+## Manifest deployment
+
+The page ["Installation with Manifests"]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) explains how to install NGINX Ingress Controller using manifests. The following snippet is an example of a deployment:
+
+```yaml
+spec:
+  serviceAccountName: nginx-ingress
+  imagePullSecrets:
+  - name: regcred
+  automountServiceAccountToken: true
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+  containers:
+  - image: private-registry.nginx.com/nginx-ic/nginx-plus-ingress:{{< nic-version >}}
+    imagePullPolicy: IfNotPresent
+    name: nginx-plus-ingress
+```
+
+The `imagePullSecrets` and `containers.image` lines represent the Kubernetes secret, as well as the registry and version of NGINX Ingress Controller we are going to deploy.
+
+## Download an image for local use
+
+If you need to download an image for local use (such as to push it to a different container registry), use this command:
+
+```shell
+docker login private-registry.nginx.com --username=<JWT Token> --password=none
+```
+
+Replace `<JWT Token>` with the contents of the JWT token itself.
+Once you have successfully pulled the image, you can then tag it as needed.
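+
+After logging in, the pull-and-retag flow might look like the following sketch. The registry name `registry.example.com` is an illustrative placeholder, not a real destination:
+
+```shell
+# Pull the NGINX Plus Ingress Controller image from the F5 registry,
+# then re-tag it and push it to your own registry.
+docker pull private-registry.nginx.com/nginx-ic/nginx-plus-ingress:{{< nic-version >}}
+docker tag private-registry.nginx.com/nginx-ic/nginx-plus-ingress:{{< nic-version >}} registry.example.com/nginx-plus-ingress:{{< nic-version >}}
+docker push registry.example.com/nginx-plus-ingress:{{< nic-version >}}
+```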
+ +{{< include "/nic/installation/jwt-password-note.md" >}} diff --git a/content/nic/installation/nic-images/get-image-using-jwt.md b/content/nic/installation/nic-images/get-image-using-jwt.md deleted file mode 100644 index 1dff0c74b..000000000 --- a/content/nic/installation/nic-images/get-image-using-jwt.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: Get the NGINX Ingress Controller image with JWT -toc: true -weight: 150 -nd-content-type: how-to -nd-product: NIC -nd-docs: DOCS-1454 ---- - -This document describes how to pull the F5 NGINX Plus Ingress Controller image from the F5 Docker registry into your Kubernetes cluster using your JWT token. - -## Overview - -{{< important >}} - -An NGINX Plus subscription certificate and key will not work with the F5 Docker registry. - -For NGINX Ingress Controller, you must have an NGINX Ingress Controller subscription -- download the NGINX Plus Ingress Controller (per instance) JWT access token from [MyF5](https://my.f5.com). - -To list the available image tags using the Docker registry API, you will also need to download the NGINX Plus Ingress Controller (per instance) certificate (`nginx-repo.crt`) and the key (`nginx-repo.key`) from [MyF5](https://my.f5.com). - -{{< /important >}} - -{{< note >}} - -You can also get the image using alternative methods: - -* You can use Docker to pull an NGINX Ingress Controller image with NGINX Plus and push it to your private registry by following the [Get NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/get-registry-image.md" >}}) topic. -* You can follow the [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) topic. - -If you would like to use an NGINX Ingress Controller image with NGINX open source, we provide the image through [DockerHub](https://hub.docker.com/r/nginx/nginx-ingress/). 
- -{{< /note >}} - -## Before You Begin - -You will need the following information from [MyF5](https://my.f5.com) for these steps: - -- A JWT Access Token (per instance) for NGINX Ingress Controller from an active NGINX Ingress Controller subscription. -- The certificate (`nginx-repo.crt`) and key (`nginx-repo.key`) for each NGINX Ingress Controller instance, used to list the available image tags from the Docker registry API. - -## Prepare NGINX Ingress Controller - -1. Choose your desired [NGINX Ingress Controller Image]({{< ref "/nic/technical-specifications.md#images-with-nginx-plus" >}}). -1. Log into the [MyF5 Portal](https://my.f5.com/), navigate to your subscription details, and download the relevant .cert, .key and .JWT files. -1. Create a Kubernetes secret using the JWT token. You should use `cat` to view the contents of the JWT token and store the output for use in later steps. -1. Ensure there are no additional characters or extra whitespace that might have been accidentally added. This will break authorization and prevent the NGINX Ingress Controller image from being downloaded. -1. Modify your deployment (manifest or Helm) to use the Kubernetes secret created in step 3. -1. Deploy NGINX Ingress Controller into your Kubernetes cluster and verify that the installation has been successful. - -## Using the JWT token in a Docker Config Secret - -1. Create a Kubernetes `docker-registry` secret type on the cluster, using the JWT token as the username and `none` for password (as the password is not used). The name of the docker server is `private-registry.nginx.com`. - - ```shell - kubectl create secret docker-registry regcred --docker-server=private-registry.nginx.com --docker-username= --docker-password=none [-n nginx-ingress] - ``` - - It is important that the `--docker-username=` contains the contents of the token and is not pointing to the token itself. 
Ensure that when you copy the contents of the JWT token, there are no additional characters or extra whitespaces. This can invalidate the token and cause 401 errors when trying to authenticate to the registry. - -1. Confirm the details of the created secret by running: - - ```shell - kubectl get secret regcred --output=yaml - ``` - -1. You can now use the newly created Kubernetes secret in Helm and manifest deployments. - -{{< include "/nic/installation/jwt-password-note.md" >}} - ---- - -## Manifest Deployment - -The page ["Installation with Manifests"]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) explains how to install NGINX Ingress Controller using manifests. The following snippet is an example of a deployment: - -```yaml -spec: - serviceAccountName: nginx-ingress - imagePullSecrets: - - name: regcred - automountServiceAccountToken: true - securityContext: - seccompProfile: - type: RuntimeDefault - containers: - - image: private-registry.nginx.com/nginx-ic/nginx-plus-ingress:{{< nic-version >}} - imagePullPolicy: IfNotPresent - name: nginx-plus-ingress -``` - -The `imagePullSecrets` and `containers.image` lines represent the Kubernetes secret, as well as the registry and version of NGINX Ingress Controller we are going to deploy. - ---- - -## Helm Deployment - -If you are using Helm for deployment, there are two main methods: using *sources* or *charts*. - -### Helm Source - -The [Installation with Helm ]({{< ref "/nic/installation/installing-nic/installation-with-helm.md#managing-the-chart-via-sources" >}}) documentation has a section describing how to use sources: these are the unique steps for Docker secrets using JWT tokens. - -1. Clone the NGINX [`kubernetes-ingress` repository](https://github.com/nginx/kubernetes-ingress). -1. Navigate to the `charts/nginx-ingress` folder of your local clone. -1. Open the `values.yaml` file in an editor. 
- - You must change a few lines for NGINX Ingress Controller with NGINX Plus to be deployed. - - 1. Change the `nginxplus` argument to `true`. - 1. Change the `repository` argument to the NGINX Ingress Controller image you intend to use. - 1. Add an argument to `imagePullSecretName` or `imagePullSecretsNames` to allow Docker to pull the image from the private registry. - - The following code block shows snippets of the parameters you will need to change, and an example of their contents: - - ```yaml - ## Deploys the Ingress Controller for NGINX Plus - nginxplus: true - ## Truncated fields - ## ... - ## ... - image: - ## The image repository for the desired NGINX Ingress Controller image - repository: private-registry.nginx.com/nginx-ic/nginx-plus-ingress - - ## The version tag - tag: {{< nic-version >}} - - serviceAccount: - ## The annotations of the service account of the Ingress Controller pods. - annotations: {} - - ## Truncated fields - ## ... - ## ... - - ## The name of the secret containing docker registry credentials. - ## Secret must exist in the same namespace as the helm release. - ## Note that also imagePullSecretsNames can be used here if multiple secrets need to be set. - imagePullSecretName: regcred - ``` - -With the modified `values.yaml` file, you can now use Helm to install NGINX Ingress Controller, for example: - -```shell -helm install nicdev01 -n nginx-ingress --create-namespace -f values.yaml . -``` - -The above command will install NGINX Ingress Controller in the `nginx-ingress` namespace. - -If the namespace does not exist, `--create-namespace` will create it. Using `-f values.yaml` tells Helm to use the `values.yaml` file that you modified earlier with the settings you want to apply for your NGINX Ingress Controller deployment. - - -### Helm Chart - -If you want to install NGINX Ingress Controller using the charts method, the following is an example of using the command line to pass the required arguments using the `--set` parameter.
- -```shell -helm install my-release -n nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} --set controller.image.repository=private-registry.nginx.com/nginx-ic/nginx-plus-ingress --set controller.image.tag={{< nic-version >}} --set controller.nginxplus=true --set controller.serviceAccount.imagePullSecretName=regcred -``` -You can also use the certificate and key from the MyF5 portal and the Docker registry API to list the available image tags for the repositories, for example: - -```shell - $ curl https://private-registry.nginx.com/v2/nginx-ic/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq - - { - "name": "nginx-ic/nginx-plus-ingress", - "tags": [ - "{{< nic-version >}}-alpine", - "{{< nic-version >}}-alpine-fips", - "{{< nic-version >}}-ubi", - "{{< nic-version >}}" - ] - } - - $ curl https://private-registry.nginx.com/v2/nginx-ic-nap/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq - { - "name": "nginx-ic-nap/nginx-plus-ingress", - "tags": [ - "{{< nic-version >}}-alpine-fips", - "{{< nic-version >}}-ubi", - "{{< nic-version >}}" - ] - } - - $ curl https://private-registry.nginx.com/v2/nginx-ic-dos/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq - { - "name": "nginx-ic-dos/nginx-plus-ingress", - "tags": [ - "{{< nic-version >}}-ubi", - "{{< nic-version >}}" - ] - } -``` - ---- - -## Pulling an Image for Local Use - -If you need to pull the image locally and then push it to a different container registry, use this command: - -```shell -docker login private-registry.nginx.com --username=<JWT Token> --password=none -``` - -Replace `<JWT Token>` with the contents of the JWT token itself. -Once you have successfully pulled the image, you can then tag it as needed.
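Stray whitespace in the copied token is the usual cause of the 401 errors mentioned earlier, so it can be worth stripping it before logging in. The following is a rough sketch of that step; the `sanitize_jwt` helper name and the `nginx-repo.jwt` filename are illustrative, not part of the official tooling:

```shell
# Print the contents of a JWT file with every whitespace character removed,
# including the trailing newline that `cat` usually leaves in place.
sanitize_jwt() {
  tr -d '[:space:]' < "$1"
}

# Usage (commented out, since it needs the real token file from MyF5):
# docker login private-registry.nginx.com --username="$(sanitize_jwt nginx-repo.jwt)" --password=none
```

Passing the cleaned value through command substitution this way avoids pasting the token by hand, which is where extra characters usually creep in.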
- -{{< include "/nic/installation/jwt-password-note.md" >}} diff --git a/content/nic/installation/nic-images/get-registry-image.md b/content/nic/installation/nic-images/registry-download.md similarity index 62% rename from content/nic/installation/nic-images/get-registry-image.md rename to content/nic/installation/nic-images/registry-download.md index 9cf8dd08a..c37c00e05 100644 --- a/content/nic/installation/nic-images/get-registry-image.md +++ b/content/nic/installation/nic-images/registry-download.md @@ -1,5 +1,5 @@ --- -title: Get NGINX Ingress Controller from the F5 Registry +title: Download NGINX Ingress Controller from the F5 Registry toc: true weight: 100 nd-content-type: how-to @@ -7,38 +7,37 @@ nd-product: NIC nd-docs: DOCS-605 --- -Learn how to pull an F5 NGINX Plus Ingress Controller image from the official F5 Docker registry and upload it to your private registry. +This page describes how to download an F5 NGINX Plus Ingress Controller image from the official F5 Docker registry. The F5 Registry images include versions with NGINX App Protect WAF and NGINX App Protect DoS. -This guide covers the prerequisites, image tagging, and troubleshooting steps. - ## Before you begin -Before you start, you'll need these installed on your machine: +To follow these steps, you will need the following prerequisites: -- [Docker v18.09 or higher](https://docs.docker.com/engine/release-notes/18.09/). -- An NGINX Ingress Controller subscription. Download both the certificate (*nginx-repo.crt*) and key (*nginx-repo.key*) from the [MyF5 Customer Portal](https://my.f5.com). Keep in mind that an NGINX Plus certificate and key won't work for the steps in this guide.
+- [Docker v18.09 or higher](https://docs.docker.com/engine/release-notes/18.09/) ---- +You can also get the NGINX Ingress Controller image using the following alternate methods: -## Set up Docker for F5 Container Registry +- [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) +- [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) +- For NGINX Open Source, you can pull [an image from DockerHub](https://hub.docker.com/r/nginx/nginx-ingress/) -Start by setting up Docker to communicate with the F5 Container Registry located at `private-registry.nginx.com`. If you're using Linux, follow these steps to create a directory and add your certificate and key: +### Download your subscription credential files -```shell -mkdir -p /etc/docker/certs.d/private-registry.nginx.com -cp <path-to-nginx-repo.crt> /etc/docker/certs.d/private-registry.nginx.com/client.cert -cp <path-to-nginx-repo.key> /etc/docker/certs.d/private-registry.nginx.com/client.key -``` +{{< include "use-cases/credential-download-instructions.md" >}} + +### Set up Docker for the F5 Container Registry -The steps provided are for Linux. For Mac or Windows, consult the [Docker for Mac](https://docs.docker.com/docker-for-mac/#add-client-certificates) or [Docker for Windows](https://docs.docker.com/docker-for-windows/#how-do-i-add-client-certificates) documentation. For more details on Docker Engine security, you can refer to the [Docker Engine Security documentation](https://docs.docker.com/engine/security/). +{{< include "use-cases/docker-registry-instructions.md" >}} ## Pull the image -Next, pull the image you need from `private-registry.nginx.com`. To find the correct image, consult the [Tech Specs guide]({{< ref "/nic/technical-specifications.md#images-with-nginx-plus" >}}). +Identify which image you need using the [Technical specifications]({{< ref "/nic/technical-specifications.md#images-with-nginx-plus" >}}) topic.
-To pull an image, follow these steps. Replace `<version>` with the specific version you need, for example, `{{< nic-version >}}`. +Next, pull the image from `private-registry.nginx.com`. + +Replace `<version>` with the specific version you need, for example, `{{< nic-version >}}`. - For NGINX Plus Ingress Controller, run: @@ -66,7 +65,6 @@ To pull an image, follow these steps. Replace `<version>` with the specific docker pull private-registry.nginx.com/nap/waf-enforcer:<version> ``` - - For NGINX Plus Ingress Controller with NGINX App Protect DoS, run: ```shell @@ -79,10 +77,14 @@ To pull an image, follow these steps. Replace `<version>` with the specific docker pull private-registry.nginx.com/nginx-ic-nap-dos/nginx-plus-ingress:<version> ``` -You can use the Docker registry API to list the available image tags by running the following commands. Replace `<path-to-client.key>` with the location of your client key and `<path-to-client.cert>` with the location of your client certificate. The `jq` command is used to format the JSON output for easier reading. +You can use the Docker registry API to list the available image tags by running the following commands. Replace `<path-to-client.key>` with the location of your client key and `<path-to-client.cert>` with the location of your client certificate. + +The `jq` command is used in these examples to make the JSON output easier to read.
+```shell +curl https://private-registry.nginx.com/v2/nginx-ic/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> +``` ```json -$ curl https://private-registry.nginx.com/v2/nginx-ic/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq { "name": "nginx-ic/nginx-plus-ingress", "tags": [ @@ -92,8 +94,12 @@ $ curl https://private-registry.nginx.com/v2/nginx-ic/nginx-plus-ingress/tags/li "{{< nic-version >}}" ] } +``` -$ curl https://private-registry.nginx.com/v2/nginx-ic-nap/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq +```shell +curl https://private-registry.nginx.com/v2/nginx-ic-nap/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> +``` +```json { "name": "nginx-ic-nap/nginx-plus-ingress", "tags": [ @@ -102,8 +108,12 @@ $ curl https://private-registry.nginx.com/v2/nginx-ic-nap/nginx-plus-ingress/tag "{{< nic-version >}}" ] } +``` -$ curl https://private-registry.nginx.com/v2/nginx-ic-dos/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> | jq +```shell +curl https://private-registry.nginx.com/v2/nginx-ic-dos/nginx-plus-ingress/tags/list --key <path-to-client.key> --cert <path-to-client.cert> +``` +```json { "name": "nginx-ic-dos/nginx-plus-ingress", "tags": [ @@ -165,7 +175,7 @@ After pulling the image, tag it and upload it to your private registry. ## Troubleshooting -If you encounter issues while following this guide, here are solutions to common problems: +If you encounter issues while following this guide, here are some possible solutions: - **Certificate errors** - **Likely Cause**: Incorrect certificate or key location, or using an NGINX Plus certificate. @@ -177,17 +187,8 @@ If you encounter issues while following this guide, here are solutions to common - **Can't pull the image** - **Likely Cause**: Mismatched image name or tag. - - **Solution**: Double-check the image name and tag against the [Tech Specs guide]({{< ref "/nic/technical-specifications.md#images-with-nginx-plus" >}}).
+ - **Solution**: Double-check that the image name and tag match the [Technical specifications]({{< ref "/nic/technical-specifications.md#images-with-nginx-plus" >}}) document. - **Failed to push to private registry** - **Likely Cause**: Not logged into your private registry or incorrect image tagging. - **Solution**: Verify login status and correct image tagging before pushing. Consult the [Docker documentation](https://docs.docker.com/docker-hub/repos/) for more details. - - -## Alternative installation options - -You can also get the NGINX Ingress Controller image using the following alternate methods: - -- [Get the NGINX Ingress Controller image with JWT]({{< ref "/nic/installation/nic-images/get-image-using-jwt.md" >}}). -- [Build NGINX Ingress Controller]({{< ref "/nic/installation/build-nginx-ingress-controller.md" >}}) using the source code from the GitHub repository and your NGINX Plus subscription certificate and key. -- For NGINX Ingress Controller using NGINX OSS, you can pull the [nginx/nginx-ingress image](https://hub.docker.com/r/nginx/nginx-ingress/) from DockerHub. diff --git a/content/nic/installation/run-multiple-ingress-controllers.md b/content/nic/installation/run-multiple-ingress-controllers.md index 98f3417b8..e0d6a6f47 100644 --- a/content/nic/installation/run-multiple-ingress-controllers.md +++ b/content/nic/installation/run-multiple-ingress-controllers.md @@ -1,10 +1,10 @@ --- -nd-docs: DOCS-606 -doctypes: -- '' title: Run multiple NGINX Ingress Controllers toc: true -weight: 400 +weight: 600 +nd-content-type: how-to +nd-product: NIC +nd-docs: DOCS-606 --- This document describes how to run multiple F5 NGINX Ingress Controller instances.
@@ -17,8 +17,6 @@ It explains the following topics: {{< note >}} This document refers to [Ingress]({{< ref "/nic/configuration/ingress-resources/basic-configuration.md" >}}), [VirtualServer]({{< ref "/nic/configuration/virtualserver-and-virtualserverroute-resources.md#virtualserver-specification" >}}), [VirtualServerRoute]({{< ref "/nic/configuration/virtualserver-and-virtualserverroute-resources.md#virtualserverroute-specification" >}}), and [TransportServer]({{< ref "/nic/configuration/transportserver-resource.md" >}}) resources as "configuration resources".{{< /note >}} ---- - ## Ingress class The [IngressClass](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class) resource allows multiple Ingress Controllers to operate in the same cluster. It also allows developers to select which Ingress Controller implementation to use for their Ingress resource. @@ -35,16 +33,12 @@ The default Ingress class of NGINX Ingress Controller is `nginx`, which means th {{< note >}}- If the class of an Ingress resource is not set, Kubernetes will set it to the class of the default Ingress Controller. To make the Ingress Controller the default one, the `ingressclass.kubernetes.io/is-default-class` property must be set on the IngressClass resource. To learn more, see Step 3 *Create an IngressClass resource* of the [Create Common Resources]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md#create-common-resources" >}}) section. - For VirtualServer, VirtualServerRoute, Policy and TransportServer resources, NGINX Ingress Controller will always handle resources with an empty class.{{< /note >}} ---- - ## Run NGINX Ingress Controller and another Ingress Controller It is possible to run NGINX Ingress Controller and an Ingress Controller for another load balancer in the same cluster.
This is often the case if you create your cluster through a cloud provider's managed Kubernetes service that by default might include the Ingress Controller for the HTTP load balancer of the cloud provider, and you want to use NGINX Ingress Controller. To make sure that NGINX Ingress Controller handles specific configuration resources, update those resources with the class set to the value that is configured in NGINX Ingress Controller. By default, this is `nginx`. ---- - ## Run multiple NGINX Ingress Controllers When running NGINX Ingress Controller, you have the following options with regard to which configuration resources it handles: diff --git a/content/nic/installation/upgrade-version.md b/content/nic/installation/upgrade-version.md new file mode 100644 index 000000000..ae5f3982a --- /dev/null +++ b/content/nic/installation/upgrade-version.md @@ -0,0 +1,318 @@ +--- +# We use sentence case and present imperative tone +title: "Upgrade NGINX Ingress Controller" +# Weights are assigned in increments of 100: determines sorting order +weight: 500 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: how-to +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NIC +--- + +This document describes how to upgrade F5 NGINX Ingress Controller when a new version releases. + +It covers the necessary steps for minor versions as well as major versions (such as 3.x to 4.x). + +Many of the nuances in upgrade paths relate to how custom resource definitions (CRDs) are managed.
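Since major upgrades need extra steps, it can help to compare the installed and target major versions before choosing a path. The following is only a sketch under assumed version strings; in practice you might read the installed version from `helm list` output:

```shell
# Print the major component of a "v3.7.2"- or "3.7.2"-style version string.
major_of() {
  v=${1#v}          # drop a leading "v" if present
  printf '%s\n' "${v%%.*}"
}

installed="v3.7.2"  # placeholder: the release currently deployed
target="v4.0.0"     # placeholder: the release you want to move to

if [ "$(major_of "$installed")" != "$(major_of "$target")" ]; then
  echo "major upgrade: review the 3.x to 4.x steps first"
else
  echo "minor upgrade: upgrade the CRDs, then the chart"
fi
```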
+ +## Minor NGINX Ingress Controller upgrades + +### Upgrade NGINX Ingress Controller CRDs + +{{< call-out "note" >}} If you are running NGINX Ingress Controller v3.x, you should read [Upgrade from NGINX Ingress Controller v3.x to v4.0.0]({{< ref "/nic/installation/upgrade-version.md#upgrade-from-3x-to-4x" >}}) before continuing. {{< /call-out >}} + +To upgrade the CRDs, pull the Helm chart source, then use _kubectl apply_: + +```shell +helm pull oci://ghcr.io/nginx/charts/nginx-ingress --untar --version {{< nic-helm-version >}} +kubectl apply -f crds/ +``` + +Alternatively, CRDs can be upgraded without pulling the chart by running: + +```shell +kubectl apply -f https://raw.githubusercontent.com/nginx/kubernetes-ingress/v{{< nic-version >}}/deploy/crds.yaml +``` + +In the above command, `v{{< nic-version >}}` represents the version of the NGINX Ingress Controller release rather than the Helm chart version. + +{{< call-out "note" >}} The following warning is expected and can be ignored: `Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply`. + +Check the [release notes](https://www.github.com/nginx/kubernetes-ingress/releases) of each new release for any special upgrade procedures. +{{< /call-out >}} + +### Upgrade NGINX Ingress Controller charts + +Once the CRDs have been upgraded, you can then upgrade the release chart. + +The command depends on whether you installed the chart from the registry or from source. + +To upgrade a release named _my-release_, use the following command: + +{{< tabs name="upgrade-chart" >}} + +{{% tab name="OCI registry" %}} + +```shell +helm upgrade my-release oci://ghcr.io/nginx/charts/nginx-ingress --version {{< nic-helm-version >}} +``` + +{{% /tab %}} + +{{% tab name="Source" %}} + +```shell +helm upgrade my-release .
+``` + +{{% /tab %}} + +{{< /tabs >}} + +## Upgrade from 3.x to 4.x + +{{< call-out "warning" "This upgrade path is intended for 3.x to 4.0.0 only" >}} + +The instructions in this section are intended only for users upgrading from NGINX Ingress Controller 3.x to 4.0.0. Internal changes meant that backwards compatibility was not possible, requiring extra steps to upgrade. + +{{< /call-out >}} + +This section provides step-by-step instructions for upgrading NGINX Ingress Controller from version v3.x to v4.0.0. + +There are two required steps: + +- Update the `apiVersion` value of custom resources +- Configure structured logging + +If you want to use NGINX Plus, you will also need to follow the [Create a license Secret]({{< ref "/nic/installation/create-license-secret.md" >}}) topic. + +### Update custom resource apiVersion + +If you're using Helm chart version `v2.x`, update your `GlobalConfiguration`, `Policy`, and `TransportServer` resources from `apiVersion: k8s.nginx.org/v1alpha1` to `apiVersion: k8s.nginx.org/v1` before upgrading to NGINX Ingress Controller 4.0.0. + +If the Helm chart you have been using is `v1.0.2` or earlier (NGINX Ingress Controller `v3.3.2`), upgrade to Helm chart `v1.4.2` (NGINX Ingress Controller `v3.7.2`) before updating your GlobalConfiguration, Policy, and TransportServer resources. + +The example below shows the change for a Policy resource: you must do the same for all GlobalConfiguration and TransportServer resources.
+ +{{< tabs name="resource-version-update" >}} + +{{% tab name="Before" %}} + +```yaml +apiVersion: k8s.nginx.org/v1alpha1 +kind: Policy +metadata: + name: rate-limit-policy +spec: + rateLimit: + rate: 1r/s + key: ${binary_remote_addr} + zoneSize: 10M +``` + +{{% /tab %}} + +{{% tab name="After" %}} + +```yaml +apiVersion: k8s.nginx.org/v1 +kind: Policy +metadata: + name: rate-limit-policy +spec: + rateLimit: + rate: 1r/s + key: ${binary_remote_addr} + zoneSize: 10M +``` + +{{% /tab %}} + +{{< /tabs >}} + +{{< call-out "warning" >}} + +If a *GlobalConfiguration*, *Policy* or *TransportServer* resource is deployed with `apiVersion: k8s.nginx.org/v1alpha1`, it will be **deleted** during the upgrade process. + +{{< /call-out >}} + +After you move the custom resources to `v1`, run the following `kubectl` commands before upgrading to the v4.0.0 Custom Resource Definitions (CRDs) to avoid webhook errors caused by leftover `v1alpha1` resources. For details, see [GitHub issue #7010](https://github.com/nginx/kubernetes-ingress/issues/7010). + +```shell +kubectl patch customresourcedefinitions transportservers.k8s.nginx.org --subresource='status' --type='merge' -p '{"status":{"storedVersions": ["v1"]}}' +``` + +```shell +kubectl patch customresourcedefinitions globalconfigurations.k8s.nginx.org --subresource='status' --type='merge' -p '{"status":{"storedVersions": ["v1"]}}' +``` + +### Configure structured logging + +To configure structured logging, you must update your log deployment arguments from an integer to a string. You can also choose different formats for the log output. + +{{< note >}} These options apply to NGINX Ingress Controller logs, and do not affect NGINX logs.
{{< /note >}} + +| **Level arguments** | **Format arguments** | +|---------------------|----------------------| +| `trace` | `json` | +| `debug` | `text` | +| `info` | `glog` | +| `warning` | | +| `error` | | +| `fatal` | | + +{{< tabs name="structured logging" >}} + +{{% tab name="Helm" %}} + +The Helm value `controller.logLevel` is now a string instead of an integer. + +To change the rendering of the log format, use the `controller.logFormat` key. + +```yaml +controller: + logLevel: info + logFormat: json +``` +{{% /tab %}} + +{{% tab name="Manifests" %}} + +The command line argument `-v` has been replaced with `-log-level`, and takes a string instead of an integer. The argument `-logtostderr` has also been deprecated. + +To change the rendering of the log format, use the `-log-format` argument. + +```yaml +args: + - -log-level=info + - -log-format=json +``` +{{% /tab %}} + +{{< /tabs >}} + +### Create License secret + +If you're using [NGINX Plus]({{< ref "/nic/overview/nginx-plus.md" >}}) with NGINX Ingress Controller, you should read the [Create a license Secret]({{< ref "/nic/installation/create-license-secret.md" >}}) topic to set up your NGINX Plus license. + +The topic also contains guidance for [sending reports to NGINX Instance Manager]({{< ref "/nic/installation/create-license-secret.md#nim">}}), which is necessary for air-gapped environments. + +Earlier versions required usage reporting through the cluster connector. This is no longer needed because it's now built into NGINX Plus. + +## Upgrade a version older than v3.1.0 + +Starting in version 3.1.0, NGINX Ingress Controller uses updated Helm resource names, labels, and annotations to follow Helm best practices. [See the changes.](https://github.com/nginx/kubernetes-ingress/pull/3606) + +When you upgrade with Helm from a version earlier than 3.1.0, some resources such as `Deployment`, `DaemonSet`, and `Service` are recreated. This causes downtime. 
+ +To reduce downtime, update all resources to use the new naming convention. The following steps help you do that. + +{{< call-out "note" >}} The following steps apply to both 2.x and 3.0.x releases. {{< /call-out >}} + +The steps you should follow depend on your Helm release name: + +{{< tabs name="upgrade-helm" >}} + +{{% tab name="nginx-ingress" %}} + +Use `kubectl describe` on deployment/daemonset to get the `Selector` value: + +```shell +kubectl describe deployments -n <namespace> +``` + +Copy the key=value under `Selector`, such as: + +```shell +Selector: app=nginx-ingress-nginx-ingress +``` + +Check out the latest available tag using `git checkout v{{< nic-version >}}` + +Go to `/kubernetes-ingress/charts/nginx-ingress` + +Update the `selectorLabels: {}` field in the `values.yaml` file located at `/kubernetes-ingress/charts/nginx-ingress` with the copied `Selector` value. + +```shell +selectorLabels: {app: nginx-ingress-nginx-ingress} +``` + +Run `helm upgrade` with the following arguments set: + +```shell +--set serviceNameOverride="nginx-ingress-nginx-ingress" +--set controller.name="" +--set fullnameOverride="nginx-ingress-nginx-ingress" +``` + +It might look like this: + +```shell +helm upgrade nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress --version 0.19.0 --set controller.kind=deployment/daemonset --set controller.nginxplus=false/true --set controller.image.pullPolicy=Always --set serviceNameOverride="nginx-ingress-nginx-ingress" --set controller.name="" --set fullnameOverride="nginx-ingress-nginx-ingress" -f values.yaml +``` + +Once the upgrade process has finished, use `kubectl describe` on the deployment to verify the change by reviewing its events: + +```text + Type Reason Age From Message +---- ------ ---- ---- ------- +Normal ScalingReplicaSet 9m11s deployment-controller Scaled up replica set nginx-ingress-nginx-ingress- to 1 +Normal ScalingReplicaSet 101s deployment-controller Scaled up replica set nginx-ingress-nginx-ingress- to 1 +Normal ScalingReplicaSet 98s
deployment-controller Scaled down replica set nginx-ingress-nginx-ingress- to 0 from 1 +``` + +{{% /tab %}} + +{{% tab name="Other release names" %}} + +Use `kubectl describe` on deployment/daemonset to get the `Selector` value: + +```shell +kubectl describe deployment/daemonset -n <namespace> +``` + +Copy the key=value under `Selector`, such as: + +```shell +Selector: app=<release-name>-nginx-ingress +``` + +Check out the latest available tag using `git checkout v{{< nic-version >}}` + +Go to `/kubernetes-ingress/charts/nginx-ingress`. + +Update the `selectorLabels: {}` field in the `values.yaml` file located at `/kubernetes-ingress/charts/nginx-ingress` with the copied `Selector` value. + +```shell +selectorLabels: {app: <release-name>-nginx-ingress} +``` + +Run `helm upgrade` with the following arguments set: + +```shell +--set serviceNameOverride="<release-name>-nginx-ingress" +--set controller.name="" +``` + +It might look like this: + +```shell +helm upgrade test-release oci://ghcr.io/nginx/charts/nginx-ingress --version 0.19.0 --set controller.kind=deployment/daemonset --set controller.nginxplus=false/true --set controller.image.pullPolicy=Always --set serviceNameOverride="test-release-nginx-ingress" --set controller.name="" -f values.yaml +``` + +Once the upgrade process has finished, use `kubectl describe` on the deployment to verify the change by reviewing its events: + +```shell +Type Reason Age From Message +---- ------ ---- ---- ------- +Normal ScalingReplicaSet 9m11s deployment-controller Scaled up replica set test-release-nginx-ingress- to 1 +Normal ScalingReplicaSet 101s deployment-controller Scaled up replica set test-release-nginx-ingress- to 1 +Normal ScalingReplicaSet 98s deployment-controller Scaled down replica set test-release-nginx-ingress- to 0 from 1 +``` + +{{% /tab %}} + +{{< /tabs >}} diff --git a/content/nic/logging-and-monitoring/_index.md b/content/nic/logging-and-monitoring/_index.md index c2e745897..cfa385b92 100644 --- a/content/nic/logging-and-monitoring/_index.md +++
b/content/nic/logging-and-monitoring/_index.md @@ -1,9 +1,6 @@ --- -title: Logging And Monitoring +title: Logging and monitoring description: weight: 1500 url: /nginx-ingress-controller/logging-and-monitoring -menu: - docs: - parent: NGINX Ingress Controller --- diff --git a/content/nic/logging-and-monitoring/logging.md b/content/nic/logging-and-monitoring/logging.md index 2407b99a8..97c725a64 100644 --- a/content/nic/logging-and-monitoring/logging.md +++ b/content/nic/logging-and-monitoring/logging.md @@ -1,15 +1,19 @@ --- -title: Logging +title: Logs available from NGINX Ingress Controller toc: true -weight: 1800 +weight: 100 nd-content-type: reference nd-product: NIC nd-docs: DOCS-613 --- -This document gives an overview of logging provided by NGINX Ingress Controller. +This document gives an overview of logging provided by F5 NGINX Ingress Controller. -NGINX Ingress Controller exposes the logs of the Ingress Controller process (the process that generates NGINX configuration and reloads NGINX to apply it) and NGINX access and error logs. All logs are sent to the standard output and error of the NGINX Ingress Controller process. To view the logs, you can execute the `kubectl logs` command for an Ingress Controller pod. For example: +NGINX Ingress Controller exposes the logs of the Ingress Controller process (the process that generates NGINX configuration and reloads NGINX to apply it) and NGINX access and error logs. + +All logs are sent to the standard output and error of the NGINX Ingress Controller process. To view the logs, you can execute the `kubectl logs` command for an Ingress Controller pod. + +For example: ```shell kubectl logs <nginx-ingress-pod> -n nginx-ingress @@ -17,13 +21,17 @@ kubectl logs <nginx-ingress-pod> -n nginx-ingress ## NGINX Ingress Controller Process Logs -The NGINX Ingress Controller process logs are configured through the `-log-level` command-line argument of the NGINX Ingress Controller, which sets the log level. The default value is `info`.
Other options include: `trace`, `debug`, `info`, `warning`, `error` and `fatal`. The value `debug` is useful for troubleshooting: you will be able to see how NGINX Ingress Controller gets updates from the Kubernetes API, generates NGINX configuration and reloads NGINX. +The NGINX Ingress Controller process logs are configured through the `-log-level` command-line argument of the NGINX Ingress Controller, which sets the log level. + +The default value is `info`. Other options include: `trace`, `debug`, `info`, `warning`, `error` and `fatal`. + +The value `debug` is useful for troubleshooting: you will be able to see how NGINX Ingress Controller gets updates from the Kubernetes API, generates NGINX configuration and reloads NGINX. -See also the doc about NGINX Ingress Controller [command-line arguments]({{< ref "/nic/configuration/global-configuration/command-line-arguments.md" >}}). +Read more about NGINX Ingress Controller [command-line arguments]({{< ref "/nic/configuration/global-configuration/command-line-arguments.md" >}}). ## NGINX Logs -The NGINX includes two logs: +NGINX includes two logs: - *Access log*, where NGINX writes information about client requests in the access log right after the request is processed. The access log is configured via the [logging-related]({{< ref "/nic/configuration/global-configuration/configmap-resource.md#logging" >}}) ConfigMap keys: - `log-format` for HTTP and HTTPS traffic. @@ -32,4 +40,4 @@ The NGINX includes two logs: Additionally, you can disable access logging with the `access-log-off` ConfigMap key. - *Error log*, where NGINX writes information about encountered issues of different severity levels. It is configured via the `error-log-level` [ConfigMap key]({{< ref "/nic/configuration/global-configuration.md#configmap-resource#logging" >}}). 
To enable debug logging, set the level to `debug` and also set the `-nginx-debug` [command-line argument]({{< ref "/nic/configuration/global-configuration.md#command-line-arguments" >}}), so that NGINX is started with the debug binary `nginx-debug`. -See also the doc about [NGINX logs]({{< ref "/nginx/admin-guide/monitoring/logging.md" >}}) from NGINX Admin guide. +Read more about [NGINX logs]({{< ref "/nginx/admin-guide/monitoring/logging.md" >}}) in the NGINX Admin guide. diff --git a/content/nic/logging-and-monitoring/opentelemetry.md b/content/nic/logging-and-monitoring/opentelemetry.md new file mode 100644 index 000000000..c86779b21 --- /dev/null +++ b/content/nic/logging-and-monitoring/opentelemetry.md @@ -0,0 +1,105 @@ +--- +# We use sentence case and present imperative tone +title: "Enable OpenTelemetry" +# Weights are assigned in increments of 100: determines sorting order +weight: 300 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: how-to +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NIC +--- + +This topic describes how to enable [OpenTelemetry](https://opentelemetry.io/) for F5 NGINX Ingress Controller using the [native NGINX module](https://nginx.org/en/docs/ngx_otel_module.html). + +## Before you begin + +To complete this guide, you need the following prerequisites: + +- An [NGINX Ingress Controller installation]({{< ref "/nic/installation/" >}}) with OpenTelemetry (v5.1.0+) + +## Load the OpenTelemetry module + +To enable OpenTelemetry, you must first load the module by adding the [_otel-exporter-endpoint_ ConfigMap key]({{< ref "/nic/configuration/global-configuration/configmap-resource.md#modules" >}}), which takes an endpoint argument.
+
+The following is an example of an OpenTelemetry collector running in your cluster as the target for exporting data:
+
+```yaml
+otel-exporter-endpoint: "http://otel-collector.default.svc.cluster.local:4317"
+```
+
+A complete ConfigMap example with all OpenTelemetry options could look as follows:
+
+{{< ghcode "https://raw.githubusercontent.com/nginx/kubernetes-ingress/refs/heads/main/examples/shared-examples/otel/nginx-config.yaml" >}}
+
+## Enable OpenTelemetry
+
+Once you have loaded the module, you can enable OpenTelemetry.
+
+You can configure it globally for all resources, or on a per-resource basis.
+
+### Global
+
+To enable OpenTelemetry for all resources, set the _otel-trace-in-http_ ConfigMap key to `true`:
+
+```yaml
+otel-trace-in-http: "true"
+```
+
+### Per resource
+
+You can configure OpenTelemetry on a per-resource basis in NGINX Ingress Controller.
+
+For this functionality, you must [enable snippets]({{< ref "/nic/configuration/ingress-resources/advanced-configuration-with-snippets.md" >}}) with the `-enable-snippets` command-line argument.
+
+Based on the state of the global configuration, you can selectively enable or disable tracing for each resource.
+
+#### Enable a specific resource or path
+
+With OpenTelemetry **disabled** globally, you can enable it for a specific resource using the server snippet annotation:
+
+```yaml
+nginx.org/server-snippets: |
+  otel_trace on;
+```
+
+You can enable it for specific paths using [Mergeable Ingress resources]({{< ref "/nic/configuration/ingress-resources/cross-namespace-configuration.md" >}}).
+
+Use the location snippet annotation for the paths of a specific Minion Ingress resource:
+
+```yaml
+nginx.org/location-snippets: |
+  otel_trace on;
+```
+
+#### Disable a specific resource or path
+
+With OpenTelemetry **enabled** globally, you can disable it for a specific resource using the server snippet annotation:
+
+```yaml
+nginx.org/server-snippets: |
+  otel_trace off;
+```
+
+You can disable it for specific paths using [Mergeable Ingress resources]({{< ref "/nic/configuration/ingress-resources/cross-namespace-configuration.md" >}}).
+
+Use the location snippet annotation for the paths of a specific Minion Ingress resource:
+
+```yaml
+nginx.org/location-snippets: |
+  otel_trace off;
+```
+
+## Customize OpenTelemetry
+
+{{< call-out "note" >}}
+
+You cannot modify the additional directives in the _otel_exporter_ block using snippets.
+
+{{< /call-out >}}
+
+You can customize OpenTelemetry through the supported [OpenTelemetry module directives](https://nginx.org/en/docs/ngx_otel_module.html).
+
+Use the `location-snippets` ConfigMap keys or annotations to insert those directives into the generated NGINX configuration.
\ No newline at end of file
diff --git a/content/nic/installation/integrations/opentracing.md b/content/nic/logging-and-monitoring/opentracing.md
similarity index 91%
rename from content/nic/installation/integrations/opentracing.md
rename to content/nic/logging-and-monitoring/opentracing.md
index 9b131dc8c..9892313fa 100644
--- a/content/nic/installation/integrations/opentracing.md
+++ b/content/nic/logging-and-monitoring/opentracing.md
@@ -1,18 +1,24 @@
 ---
-nd-docs: DOCS-618
-doctypes:
-- ''
-title: OpenTracing (Deprecated in v5.0.0)
+title: Enable OpenTracing (Removed in v5.0.0)
 toc: true
-weight: 500
+weight: 700
+nd-content-type: how-to
+nd-product: NIC
+nd-docs: DOCS-618
 ---
-OpenTracing support has been deprecated from v5.0.0 of F5 NGINX Ingress Controller.
-
-Learn how to use OpenTracing with F5 NGINX Ingress Controller.
+This topic describes how to use OpenTracing with F5 NGINX Ingress Controller. NGINX Ingress Controller supports [OpenTracing](https://opentracing.io/) with the third-party module [opentracing-contrib/nginx-opentracing](https://github.com/opentracing-contrib/nginx-opentracing).
+{{< call-out "warning" >}}
+
+OpenTracing support was removed in v5.0.0 of NGINX Ingress Controller.
+
+From v5.1.0 onwards, you should follow the guidance in [Enable OpenTelemetry]({{< ref "/nic/logging-and-monitoring/opentelemetry.md" >}}).
+
+{{< /call-out >}}
+
 ## Prerequisites

1. Use an NGINX Ingress Controller image that contains OpenTracing.
diff --git a/content/nic/logging-and-monitoring/prometheus.md b/content/nic/logging-and-monitoring/prometheus.md
index 17f32abe5..0bca69823 100644
--- a/content/nic/logging-and-monitoring/prometheus.md
+++ b/content/nic/logging-and-monitoring/prometheus.md
@@ -1,17 +1,41 @@
 ---
-nd-docs: DOCS-614
-doctypes:
-- concept
-title: Prometheus
+title: Enable Prometheus metrics
 toc: true
-weight: 2000
+weight: 400
+nd-content-type: how-to
+nd-product: NIC
+nd-docs: DOCS-614
 ---
-NGINX Ingress Controller exposes metrics in the [Prometheus](https://prometheus.io/) format. Those include NGINX/NGINX Plus and the Ingress Controller metrics.
+This topic describes how to enable [Prometheus metrics](https://prometheus.io/) for F5 NGINX Ingress Controller.
+
+The exposed metrics include NGINX Ingress Controller data, as well as metrics from NGINX Open Source or NGINX Plus.

## Enabling Metrics

+### Using Helm
+
+To enable Prometheus metrics when using *Helm* to install NGINX Ingress Controller, configure the `prometheus.*` parameters of the Helm chart.
+
+See the [Installation with Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md" >}}) topic.
+
+#### Using ServiceMonitor
+
+When deploying with *Helm*, you can deploy a `Service` and `ServiceMonitor` resource using the `prometheus.service.*` and `prometheus.serviceMonitor.*` parameters.
+When these resources are deployed, Prometheus metrics exposed by NGINX Ingress Controller can be captured and enumerated using a `Prometheus` resource alongside a Prometheus Operator deployment. + +To view metrics captured this way, you will need: + +* A working [Prometheus resource and Prometheus Operator](https://prometheus-operator.dev/docs/getting-started/introduction/) +* The latest ServiceMonitor CRD from the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) repository: + +```shell +LATEST=$(curl -s https://api.github.com/repos/prometheus-operator/prometheus-operator/releases/latest | jq -cr .tag_name) +curl https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/$LATEST/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml | kubectl create -f - +``` + ### Using Manifests + If you're using *Kubernetes manifests* (Deployment or DaemonSet) to install the Ingress Controller, to enable Prometheus metrics: 1. Run the Ingress Controller with the `-enable-prometheus-metrics` [command-line argument]({{< ref "/nic/configuration/global-configuration/command-line-arguments.md" >}}). As a result, the Ingress Controller will expose NGINX or NGINX Plus metrics in the Prometheus format via the path `/metrics` on port `9113` (customizable via the `-prometheus-metrics-listen-port` command-line argument). @@ -32,23 +56,6 @@ If you're using *Kubernetes manifests* (Deployment or DaemonSet) to install the prometheus.io/scheme: http ``` -### Using Helm - -If you're using *Helm* to install the Ingress Controller, to enable Prometheus metrics, configure the `prometheus.*` parameters of the Helm chart. See the [Installation with Helm]({{< ref "/nic/installation/installing-nic/installation-with-helm.md" >}}) doc. - -### Using ServiceMonitor - -When deploying with *Helm*, you can deploy a `Service` and `ServiceMonitor` resource using the `prometheus.service.*` and `prometheus.serviceMonitor.*` parameters. 
-When these resources are deployed, Prometheus metrics exposed by NGINX Ingress Controller can be captured and enumerated using a `Prometheus` resource alongside a Prometheus Operator deployment. - -To view metrics captured this way, the following is required: -* The latest ServiceMonitor CRD from the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) repository: -```shell -LATEST=$(curl -s https://api.github.com/repos/prometheus-operator/prometheus-operator/releases/latest | jq -cr .tag_name) -curl https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/$LATEST/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml | kubectl create -f - -``` -* A working [Prometheus resource and Prometheus Operator](https://prometheus-operator.dev/docs/getting-started/introduction/) - ## Available Metrics The Ingress Controller exports the following metrics: diff --git a/content/nic/logging-and-monitoring/service-insight.md b/content/nic/logging-and-monitoring/service-insight.md index b2908d27c..35b4c89a8 100644 --- a/content/nic/logging-and-monitoring/service-insight.md +++ b/content/nic/logging-and-monitoring/service-insight.md @@ -1,7 +1,7 @@ --- -title: Service Insight +title: Enable Service Insight toc: true -weight: 2100 +weight: 600 nd-content-type: how-to nd-product: NIC nd-docs: DOCS-1180 diff --git a/content/nic/logging-and-monitoring/status-page.md b/content/nic/logging-and-monitoring/status-page.md index 0357f0fb1..8d7c42983 100644 --- a/content/nic/logging-and-monitoring/status-page.md +++ b/content/nic/logging-and-monitoring/status-page.md @@ -1,7 +1,7 @@ --- -title: Status Page +title: View the NGINX status page toc: true -weight: 1900 +weight: 200 nd-content-type: how-to nd-product: NIC nd-docs: DOCS-615 diff --git a/content/nic/releases.md b/content/nic/releases.md index f3480f11a..e7182a11b 100644 --- a/content/nic/releases.md +++ b/content/nic/releases.md @@ -10,13 +10,15 @@ nd-docs: DOCS-616 
08 Jul 2025

-This release includes the ability to configure Rate Limiting for your APIs based on a specific NGINX variable and its value. This allows you more granular control over how frequently specific users access your resources.
+This NGINX Ingress Controller release brings initial connectivity to the NGINX One Console! You can now use NGINX One Console to monitor NGINX instances that are part of your NGINX Ingress Controller cluster. To configure NGINX One Console with NGINX Ingress Controller, see [the NGINX One Console documentation]({{< ref "/nginx-one/k8s/add-nic.md" >}}).

-Lastly, in our previous v5.0.0 release, we removed support for Open Tracing. This release replaces that observability capability with native NGINX Open Telemetry traces, allowing you to monitor the internal traffic of your applications.
+This release also includes the ability to configure Rate Limiting for your APIs based on a specific NGINX variable and its value. This gives you more granular control over how frequently specific users access your resources.
+
+Lastly, in our previous v5.0.0 release, we removed support for OpenTracing. This release replaces that observability capability with native [NGINX OpenTelemetry]({{< ref "/nic/logging-and-monitoring/opentelemetry.md" >}}) traces, allowing you to monitor the internal traffic of your applications.
### Features -- [7642](https://github.com/nginx/kubernetes-ingress/pull/7642) Add OpenTelemetry support -- [7916](https://github.com/nginx/kubernetes-ingress/pull/7916) Add support for Agent V3 +- [7642](https://github.com/nginx/kubernetes-ingress/pull/7642) Add [OpenTelemetry support]({{< ref "/nic/logging-and-monitoring/opentelemetry.md" >}}) +- [7916](https://github.com/nginx/kubernetes-ingress/pull/7916) Add support for NGINX Agent version 3 and NGINX One Console - [7884](https://github.com/nginx/kubernetes-ingress/pull/7884) Tiered rate limits with variables - [7765](https://github.com/nginx/kubernetes-ingress/pull/7765) Add OIDC PKCE configuration through Policy - [7832](https://github.com/nginx/kubernetes-ingress/pull/7832) Add request_method to rate-limit Policy @@ -46,7 +48,7 @@ Lastly, in our previous v5.0.0 release, we removed support for Open Tracing. Thi [GitHub Container](https://github.com/nginx/kubernetes-ingress/pkgs/container/kubernetes-ingress), [Amazon ECR Public Gallery](https://gallery.ecr.aws/nginx/nginx-ingress) or [Quay.io](https://quay.io/repository/nginx/nginx-ingress). - For NGINX Plus, use the 5.1.0 images from the F5 Container registry or build your own image using the 5.1.0 source code -- For Helm, use version 2.2.0 of the chart. +- For Helm, use version 2.2.1 of the chart. ### Supported Platforms @@ -61,17 +63,21 @@ versions: 1.25-1.33. Added support for [NGINX Plus R34]({{< ref "/nginx/releases.md#nginxplusrelease-34-r34" >}}), users needing to use a forward proxy for license verification are now able to make use of the [`proxy`](https://nginx.org/en/docs/ngx_mgmt_module.html#proxy) directives available in F5 NGINX Plus. -{{< important >}} -With the removal of the OpenTracing dynamic module from [NGINX Plus R34](({{< ref "/nginx/releases.md#nginxplusrelease-34-r34" >}}), NGINX Ingress Controller also removes full OpenTracing support. 
This will affect users making use of OpenTracing with the ConfigMap, `server-snippets` & `location-snippets` parameters. Support for tracing with [OpenTelemetry]({{< ref "/nginx/admin-guide/dynamic-modules/opentelemetry.md" >}}) will come in a future release. -{{< /important >}} +{{< call-out "warning" >}} + +With the removal of the OpenTracing dynamic module from [NGINX Plus R34]({{< ref "/nginx/releases.md#nginxplusrelease-34-r34" >}}), NGINX Ingress Controller also removes full OpenTracing support. This will affect users making use of OpenTracing with the ConfigMap, `server-snippets` & `location-snippets` parameters. Support for tracing with [OpenTelemetry]({{< ref "/nginx/admin-guide/dynamic-modules/opentelemetry.md" >}}) will come in a future release. + +{{< /call-out >}} We have extended the rate-limit Policy to allow tiered rate limit groups with JWT claims. This will also allow users to apply different rate limits to their `VirtualServer` or `VirtualServerRoutes` with the value of a JWT claim. See [here](https://github.com/nginx/kubernetes-ingress/tree/v5.0.0/examples/custom-resources/rate-limit-tiered-jwt-claim/) for a working example. We introduced NGINX Plus Zone Sync as a managed service within NGINX Ingress Controller in this release. In previous releases, we had examples using `stream-snippets` for OIDC support, users can now enable `zone-sync` without the need for `snippets`. NGINX Plus Zone Sync is available when utilising two or more replicas, it supports OIDC & rate limiting. -{{< note >}} +{{< call-out "note" >}} + For users who have previously installed OIDC or used the `zone_sync` directive with `stream-snippets`, please see the note in the [Configmap resources]({{< ref "/nic/configuration/global-configuration/configmap-resource.md#zone-sync" >}}) topic to use the new `zone-sync` ConfigMap option. 
-{{< /note >}} + +{{< /call-out >}} Open Source NGINX Ingress Controller architectures `armv7`, `s390x` & `ppc64le` are deprecated and will be removed in the next minor release. @@ -152,10 +158,10 @@ versions: 1.25-1.32. 16 Dec 2024 With added support for [NGINX R33]({{< ref "/nginx/releases.md#nginxplusrelease-33-r33" >}}), deployments of F5 NGINX Ingress Controller using NGINX Plus now require a valid JSON Web Token to run. -Please see the [Upgrading to v4]({{< ref "/nic/installation/installing-nic/upgrade-to-v4.md#create-license-secret" >}}) for full details on setting up your license `Secret`. +For full details on setting up your license `Secret`, see [Upgrading to v4]({{< ref "/nic/installation/upgrade-version.md#upgrade-from-3x-to-4x" >}}). API Version `v1alpha1` of `GlobalConfiguration`, `Policy` and `TransportServer` resources are now deprecated. -Please see [Update custom resource apiVersion]({{< ref "/nic/installation/installing-nic/upgrade-to-v4.md#update-custom-resource-apiversion" >}}) for full details on updating your resources. +For full details on updating your resources, see [Update custom resource apiVersion]({{< ref "/nic/installation/upgrade-version.md#upgrade-from-3x-to-4x" >}}). Updates have been made to our logging library. For a while, F5 NGINX Ingress Controller has been using the [golang/glog](https://github.com/golang/glog). For this release, we have moved to the native golang library [log/slog](https://pkg.go.dev/log/slog). This change was made for these reasons: @@ -199,7 +205,7 @@ For more details on what this feature does, and how to configure it yourself, pl [Amazon ECR Public Gallery](https://gallery.ecr.aws/nginx/nginx-ingress) or [Quay.io](https://quay.io/repository/nginx/nginx-ingress). - For NGINX Plus, use the 4.0.0 images from the F5 Container registry or build your own image using the 4.0.0 source code - For Helm, use version 2.0.0 of the chart. 
-- [Upgrading to v4]({{< ref "/nic/installation/installing-nic/upgrade-to-v4.md" >}}) +- [Upgrading to v4]({{< ref "/nic/installation/upgrade-version.md#upgrade-from-3x-to-4x" >}}) ### Supported Platforms @@ -213,14 +219,14 @@ versions: 1.25-1.32. 25 Nov 2024 -{{< note >}} +{{< call-out "note" >}} In our next major release, `v4.0.0`, the default log library for NGINX Ingress Controller will be changed from `golang/glog` to `log/slog`. This will mean that logs generated by NGINX Ingress Controller will be in a structured format with the option to choose a `string` or `json` output. This will not affect logs generated by NGINX. To ensure backwards compatibility, we will ensure the existing log format, `glog`, will be maintained through a configuration option for the next 3 releases. -{{< /note >}} +{{< /call-out >}} -{{< important >}} +{{< call-out "important" >}} CRD version removal notice. In our next major release, `v4.0.0`, support for the following apiVersions for these listed CRDs will be dropped: 1. 
`k8s.nginx.org/v1alpha1` for `GlobalConfiguration`
@@ -231,7 +237,7 @@ Prior to upgrading, **please ensure** that any of these resources deployed as `a
 If a resource of `kind: GlobalConfiguration`, `kind: Policy` or `kind: TransportServer` is deployed as `apiVersion: k8s.nginx.org/v1alpha1`, these resources will be **deleted** when upgrading from, at least, `v3.4.0` to `v4.0.0`.
 When `v4.0.0` is released, the release notes will contain the required upgrade steps to go from `v3.X.X` to `v4.X.X`.
-{{< /important >}}
+{{< /call-out >}}

### Fixes
- [6838](https://github.com/nginx/kubernetes-ingress/pull/6838) Update oidc_template and conf
@@ -1683,7 +1689,7 @@ We will provide technical support for NGINX Ingress Controller on any Kubernetes
 ### Upgrade

- For NGINX, use the 1.12.1 image from our DockerHub: `nginx/nginx-ingress:1.12.1`, `nginx/nginx-ingress:1.12.1-alpine` or `nginx/nginx-ingress:1.12.1-ubi`
-- For NGINX Plus, use the 1.12.1 image from the F5 Container Registry - see [the documentation here]({{< ref "/nic/installation/nic-images/get-registry-image.md">}})
+- For NGINX Plus, use the 1.12.1 image from the F5 Container Registry - see [the documentation here]({{< ref "/nic/installation/nic-images/registry-download.md">}})
- Alternatively, you can also build your own image using the 1.12.1 source code.
- For Helm, use version 0.10.1 of the chart.

diff --git a/content/nic/technical-specifications.md b/content/nic/technical-specifications.md
index 794d5d818..d71e9852a 100644
--- a/content/nic/technical-specifications.md
+++ b/content/nic/technical-specifications.md
@@ -62,7 +62,7 @@ _NGINX Plus images include NGINX Plus R34._
 #### **F5 Container registry**

-NGINX Plus images are available through the F5 Container registry `private-registry.nginx.com`, explained in the [Get the NGINX Ingress Controller image with JWT]({{}}) and [Get the F5 Registry NGINX Ingress Controller image]({{}}) topics.
+NGINX Plus images are available through the F5 Container registry `private-registry.nginx.com`, explained in the [Download NGINX Ingress Controller from the F5 Registry]({{< ref "/nic/installation/nic-images/registry-download.md" >}}) and [Add an NGINX Ingress Controller image to your cluster]({{< ref "/nic/installation/nic-images/add-image-to-cluster.md" >}}) topics.

{{< bootstrap-table "table table-striped table-bordered table-responsive" >}}
| Name | Base image | Additional modules | F5 Container Registry Image | Architectures |
diff --git a/content/nic/tutorials/security-monitoring.md b/content/nic/tutorials/security-monitoring.md
index 4f2f17340..79ff1f816 100644
--- a/content/nic/tutorials/security-monitoring.md
+++ b/content/nic/tutorials/security-monitoring.md
@@ -58,6 +58,9 @@ If you use custom container images, NGINX Agent must be installed along with NGI
       server:
         host: ""
         grpcPort: 443
+        tls:
+          enable: true
+          skip_verify: false
       features:
       - registration
       - nginx-counting
@@ -79,7 +82,16 @@ If you use custom container images, NGINX Agent must be installed along with NGI
    {{< note >}} The `features` list must not contain `nginx-config-async` or `nginx-ssl-config` as these features can cause conflicts with NGINX Ingress Controller.{{< /note >}}

-3. Follow the [Installation with Manifests]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) instructions to deploy NGINX Ingress Controller with custom resources enabled.
+3. Make sure that the ConfigMap is mounted to the NGINX Ingress Controller pod at `/etc/nginx-agent/nginx-agent.conf` by adding the following to the NGINX Ingress Controller deployment manifest:
+
+   ```yaml
+   volumeMounts:
+   - name: agent-conf
+     mountPath: /etc/nginx-agent/nginx-agent.conf
+     subPath: nginx-agent.conf
+   ```
+
+4. Follow the [Installation with Manifests]({{< ref "/nic/installation/installing-nic/installation-with-manifests.md" >}}) instructions to deploy NGINX Ingress Controller with custom resources enabled.
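The `volumeMounts` entry in step 3 above assumes a matching `volumes` entry in the pod spec that references the NGINX Agent ConfigMap. A minimal sketch (the ConfigMap name `nginx-agent-config` is illustrative and must match the ConfigMap you created):

```yaml
volumes:
- name: agent-conf             # must match the volumeMounts name
  configMap:
    name: nginx-agent-config   # hypothetical: use your Agent ConfigMap's name
```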
{{%/tab%}} diff --git a/content/nim/nginx-app-protect/security-monitoring/set-up-app-protect-instances.md b/content/nim/nginx-app-protect/security-monitoring/set-up-app-protect-instances.md index 2c8408899..2ffcd8014 100644 --- a/content/nim/nginx-app-protect/security-monitoring/set-up-app-protect-instances.md +++ b/content/nim/nginx-app-protect/security-monitoring/set-up-app-protect-instances.md @@ -211,7 +211,6 @@ Take the steps below to update your NGINX App Protect WAF configurations by usin 1. Next, edit the desired configuration file. You will add directives that reference the security policies bundle and enable the NGINX App Protect WAF logs required by the Security Monitoring dashboards. An example configuration is provided below. ```nginx - app_protect_enable on; app_protect_enable on; app_protect_policy_file "/etc/nms/NginxDefaultPolicy.tgz"; app_protect_security_log_enable on; diff --git a/content/nim/nginx-app-protect/setup-waf-config-management.md b/content/nim/nginx-app-protect/setup-waf-config-management.md index 529c3f5ea..5d7751e7e 100644 --- a/content/nim/nginx-app-protect/setup-waf-config-management.md +++ b/content/nim/nginx-app-protect/setup-waf-config-management.md @@ -270,6 +270,185 @@ error when creating the nginx repo retriever - NGINX repo certificates not found If needed, you can also [install the WAF compiler manually](#install-the-waf-compiler). +## Install the WAF compiler in a disconnected environment + +To install the WAF compiler on a system without internet access, complete these steps: + +- **Step 1:** Generate the WAF compiler package on a system that has internet access. +- **Step 2:** Move the generated package to the offline target system and install it. + +{{}} + +{{%tab name="Ubuntu"%}} + +### Install on Ubuntu 24.04, 22.04, and 20.04 + +#### Step 1: On a system with internet access + +Place your `nginx-repo.crt` and `nginx-repo.key` files on this system. 
+```bash +sudo apt-get update -y +sudo mkdir -p /etc/ssl/nginx/ +sudo mv nginx-repo.crt /etc/ssl/nginx/ +sudo mv nginx-repo.key /etc/ssl/nginx/ + +wget -qO - https://cs.nginx.com/static/keys/nginx_signing.key \ + | gpg --dearmor \ + | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null + +printf "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \ +https://pkgs.nginx.com/nms/ubuntu $(lsb_release -cs) nginx-plus\n" | \ +sudo tee /etc/apt/sources.list.d/nms.list + +sudo wget -q -O /etc/apt/apt.conf.d/90pkgs-nginx https://cs.nginx.com/static/files/90pkgs-nginx +mkdir -p compiler && cd compiler +sudo apt-get update +sudo apt-get download nms-nap-compiler-v5.342.0 +cd ../ +mkdir -p compiler/compiler.deps +sudo apt-get install --download-only --reinstall --yes --print-uris nms-nap-compiler-v5.342.0 | grep ^\' | cut -d\' -f2 | xargs -n 1 wget -P ./compiler/compiler.deps +tar -czvf compiler.tar.gz compiler/ +``` + +#### Step 2: On the target (offline) system + +Before running the steps, make sure the OS libraries are up to date, especially `glibc`. +Move the `compiler.tar.gz` file from Step 1 to this system. + +```bash +tar -xzvf compiler.tar.gz +sudo dpkg -i ./compiler/compiler.deps/*.deb +sudo dpkg -i ./compiler/*.deb +``` + +{{%/tab%}} + +{{%tab name="Debian"%}} + +### Install on Debian 11 and 12 + +#### Step 1: On a system with internet access + +Place your `nginx-repo.crt` and `nginx-repo.key` files on this system. 
+```bash +sudo apt-get update -y +sudo mkdir -p /etc/ssl/nginx/ +sudo mv nginx-repo.crt /etc/ssl/nginx/ +sudo mv nginx-repo.key /etc/ssl/nginx/ + +wget -qO - https://cs.nginx.com/static/keys/nginx_signing.key \ + | gpg --dearmor \ + | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null + +printf "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \ +https://pkgs.nginx.com/nms/debian $(lsb_release -cs) nginx-plus\n" | \ +sudo tee /etc/apt/sources.list.d/nms.list + +sudo wget -q -O /etc/apt/apt.conf.d/90pkgs-nginx https://cs.nginx.com/static/files/90pkgs-nginx +mkdir -p compiler && cd compiler +sudo apt-get update +sudo apt-get download nms-nap-compiler-v5.342.0 +cd ../ +mkdir -p compiler/compiler.deps +sudo apt-get install --download-only --reinstall --yes --print-uris nms-nap-compiler-v5.342.0 | grep ^\' | cut -d\' -f2 | xargs -n 1 wget -P ./compiler/compiler.deps +tar -czvf compiler.tar.gz compiler/ +``` + +#### Step 2: On the target (offline) system + +Before running the steps, make sure the OS libraries are up to date, especially `glibc`. +Move the `compiler.tar.gz` file from Step 1 to this system. + +```bash +tar -xzvf compiler.tar.gz +sudo dpkg -i ./compiler/compiler.deps/*.deb +sudo dpkg -i ./compiler/*.deb +``` + +{{%/tab%}} + +{{%tab name="RHEL8, RHEL9, Oracle-9 "%}} + +### Install on RHEL 8, RHEL 9, or Oracle Linux 9 + +#### Step 1: On a system with internet access + +> For RHEL 8, you can skip the `yum-config-manager` line. + +Place your `nginx-repo.crt` and `nginx-repo.key` files on this system. 
+```bash +sudo yum update -y +sudo yum install yum-utils -y +sudo mkdir -p /etc/ssl/nginx/ +sudo mv nginx-repo.crt /etc/ssl/nginx/ +sudo mv nginx-repo.key /etc/ssl/nginx/ +sudo wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nms.repo +sudo yum-config-manager --disable rhel-9-appstream-rhui-rpms +sudo yum update -y +sudo mkdir -p nms-nap-compiler +sudo yumdownloader --resolve --destdir=nms-nap-compiler nms-nap-compiler-v5.342.0 +tar -czvf compiler.tar.gz nms-nap-compiler/ +``` + +#### Step 2: On the target (offline) system + +Before running the steps, make sure the OS libraries are up to date, especially `glibc`. +Move the `compiler.tar.gz` file from Step 1 to this system. + +```bash +tar -xzvf compiler.tar.gz +cd nms-nap-compiler +sudo dnf install *.rpm --disablerepo=* +``` + +{{%/tab%}} + +{{%tab name="Oracle-8"%}} + +### Install on Oracle Linux 8 + +#### Step 1: On a system with internet access + +Place your `nginx-repo.crt` and `nginx-repo.key` files on this system. +```bash +sudo yum update -y +sudo yum install yum-utils tar -y +sudo mkdir -p /etc/ssl/nginx/ +sudo mv nginx-repo.crt /etc/ssl/nginx/ +sudo mv nginx-repo.key /etc/ssl/nginx/ +sudo wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nms.repo + +sudo tee /etc/yum.repos.d/centos-vault-powertools.repo << 'EOF' +[centos-vault-powertools] +name=CentOS Vault - PowerTools +baseurl=https://vault.centos.org/centos/8/PowerTools/x86_64/os/ +enabled=1 +gpgcheck=0 +EOF + +sudo yum update -y +sudo mkdir -p nms-nap-compiler +sudo yumdownloader --resolve --destdir=nms-nap-compiler nms-nap-compiler-v5.342.0 +tar -czvf compiler.tar.gz nms-nap-compiler/ +``` + +#### Step 2: On the target (offline) system + +Before running the steps, make sure the OS libraries are up to date, especially `glibc`. +Move the `compiler.tar.gz` file from Step 1 to this system. 
+ +```bash +sudo yum install tar -y +tar -xzvf compiler.tar.gz +sudo dnf install --disablerepo=* nms-nap-compiler/*.rpm +``` + + +{{%/tab%}} + + +{{}} + --- ## Set up attack signatures and threat campaigns diff --git a/content/nms/acm/_index.md b/content/nms/acm/_index.md index c6dce5b75..2acb5eb31 100644 --- a/content/nms/acm/_index.md +++ b/content/nms/acm/_index.md @@ -3,6 +3,10 @@ title: API Connectivity Manager weight: 500 url: /nginx-management-suite/acm/ cascade: - type: acm-eos + noindex: true + nd-banner: + enabled: true + type: deprecation + md: _banners/eos-acm.md --- diff --git a/content/solutions/about-subscription-licenses.md b/content/solutions/about-subscription-licenses.md index a1c8177c1..3c33a2857 100644 --- a/content/solutions/about-subscription-licenses.md +++ b/content/solutions/about-subscription-licenses.md @@ -14,14 +14,18 @@ We’re updating NGINX Plus to align with F5’s entitlement and visibility poli Starting with NGINX Plus R33, all **NGINX Plus instances require a valid JSON Web Token (JWT) license**. This license is tied to your subscription (not individual instances) and is used to validate your subscription and automatically send usage reports to F5's licensing endpoint (`product.connect.nginx.com`), as required by your subscription agreement. In offline environments, usage reporting is [routed through NGINX Instance Manager]({{< ref "nim/disconnected/report-usage-disconnected-deployment.md" >}}). -### Important changes +## Important changes -##### NGINX Plus won't start if: +If you have multiple subscriptions, you’ll also have multiple JWT licenses. You can assign each NGINX Plus instance to the license you prefer. NGINX combines usage reporting across all licensed instances. + +This feature is available in NGINX Instance Manager 2.20 and later. + +### NGINX Plus won't start if: - The JWT license is missing or invalid. - The JWT license expired over 90 days ago. 
-##### NGINX Plus will **stop processing traffic** if: +### NGINX Plus will **stop processing traffic** if: - It can't submit an initial usage report to F5's licensing endpoint or NGINX Instance Manager. @@ -41,17 +45,48 @@ When installing or upgrading to NGINX Plus R33 or later, take the following step --- -## Add the JWT license {#add-jwt} +## Download the license from MyF5 {#download-jwt} -Before you install or upgrade to NGINX Plus R33 or later, make sure to: +{{< include "licensing-and-reporting/download-jwt-from-myf5.md" >}} -### Download the license from MyF5 {#download-jwt} +--- -{{< include "licensing-and-reporting/download-jwt-from-myf5.md" >}} +## Deploy the JWT license + +After you download the JWT license, you can deploy it to your NGINX Plus instances using either of the following methods: + +- Use a **Config Sync Group** if you're managing instances with the NGINX One Console (recommended) +- Copy the license manually to each instance + +Each method ensures your NGINX Plus instances have access to the required license file. + +### Deploy with a Config Sync Group (Recommended) + +If you're using the [NGINX One Console]({{< ref "/nginx-one/getting-started.md" >}}), the easiest way to manage your JWT license is with a [Config Sync Group]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}). This method lets you: + +- Avoid manual file copying +- Keep your fleet consistent +- Automatically apply updates to new NGINX Plus instances + +To deploy the JWT license with a Config Sync Group: + +{{< include "/licensing-and-reporting/deploy-jwt-with-csgs.md" >}} + +Your JWT license now syncs to all NGINX Plus instances in the group. + +When your subscription renews and a new JWT license is issued, update the file in the Config Sync Group to apply the change across your fleet. + +New instances added to the group automatically inherit the license. 
+ +{{< call-out "note" "If you’re using NGINX Instance Manager" "" >}} +If you're using NGINX Instance Manager instead of the NGINX One Console, the equivalent feature is called an *instance group*. You can manage your JWT license in the same way by adding or updating the file in the instance group. For details, see [Manage instance groups]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}). +{{< /call-out >}} + +### Copy the license manually -### Copy the license to each NGINX Plus instance +If you're not using the NGINX One Console, copy the JWT license file to each NGINX Plus instance manually. -{{< include "licensing-and-reporting/apply-jwt.md" >}} +{{< include "/licensing-and-reporting/apply-jwt.md" >}} ### Custom paths {#custom-paths} diff --git a/content/unit/_index.md b/content/unit/_index.md index a809a67b0..e4548bb09 100644 --- a/content/unit/_index.md +++ b/content/unit/_index.md @@ -1,7 +1,45 @@ --- title: NGINX Unit -description: A lightweight web app server that combines several layers of the typical application stack into a single component. +nd-subtitle: A lightweight web app server that combines several layers of the typical application stack into a single component url: /nginx-unit/ +nd-landing-page: true cascade: logo: "NGINX-Unit-product-icon-RGB.png" ---- \ No newline at end of file +nd-content-type: landing-page +nd-product: NGINX Unit +--- + +## About + +NGINX Unit is a lightweight and versatile application runtime that provides the essential components for your web application as a single open-source server: running application code (including WebAssembly), serving static assets, handling TLS and request routing. + +## Featured content + +{{}} + {{}} + {{}} + Learn about the key features of NGINX Unit, including its support for multiple languages, security, performance, and more + {{}} + {{}} + Get started with NGINX Unit by installing it on your system. 
Find instructions for various platforms and package managers + {{}} + {{}} + Learn how to configure NGINX Unit for your applications + {{}} + {{}} +{{}} + +## Other resources + +{{}} + {{}} + + {{}} + Learn how to resolve various real-life situations and issues that you may experience with Unit + {{}} + {{}} + See the latest changes and updates in NGINX Unit, including new features, bug fixes, and improvements + {{}} + + {{}} +{{}} \ No newline at end of file diff --git a/documentation/README.md b/documentation/README.md index 6cae42e2e..169a891f5 100644 --- a/documentation/README.md +++ b/documentation/README.md @@ -19,4 +19,5 @@ If you're interested in contributing to the [NGINX documentation website](https: - [Managing content with Hugo](/documentation/writing-hugo.md) - [Proposals](/documentation/proposals/README.md) - [Set up pre-commit](/documentation/pre-commit.md) +- [Using include files](/documentation/include-files.md) - [Writing style guide](/documentation/style-guide.md) diff --git a/documentation/closed-contributions.md b/documentation/closed-contributions.md index cc0970362..207bb5439 100644 --- a/documentation/closed-contributions.md +++ b/documentation/closed-contributions.md @@ -11,6 +11,19 @@ We work in public by default, so this process should only be used on a case by c For standard content releases, review the [Contributing guidelines](/CONTRIBUTING.md). +## Referencing internal information + +During the last stage of this process, or while making any public pull request, you may need to reference something internal. + +As mentioned in our pull request template checklist, you should not link to internal resources: they are considered "sensitive content", defined previously. 
+
+Instead, include the key outputs of the internal resource as part of describing context:
+
+> "Following an internal discussion, the instructions for foo have been clarified"
+> "The phrasing of bar has been changed based on an internal requirement"
+
+This allows you to retain the necessary context for a change without exposing a potentially important resource to the public.
+
 ## Overview
 
 This repository (https://github.com/nginx/documentation) is where we work by default. It has a one-way sync to an internal repository, used for closed content.
@@ -18,11 +31,13 @@ This repository (https://github.com/nginx/documentation) is where we work by def
 The process is as follows:
 
 - Add the closed repository as a remote
+  - The closed repository is also known as `internal`
 - Create a remote branch with the prefix `internal/` in the closed repository
 - Open a pull request in the closed repository to get previews and request feedback
 - Once all stakeholders are happy with changes, close the pull request in the closed repository
 - Merge the changes from the remote (Closed) repository branch with a new branch in the open repository
 - Open a new pull request in the open repository, where it can be merged
+  - You do not have to get approvals a second time
 
 You can get the URL through our internal communication channels: it will be represented in the following steps as ``.
 
@@ -36,6 +51,12 @@ git remote add internal git@github.com:.git
 git fetch --all
 ```
 
+You can verify access to the closed repository in your `.git/config` file. Look for the code block that corresponds to:
+
+```
+[remote "internal"]
+```
+
 Check out the remote `main` branch, and use it to create a feature branch. **Ensure that you prefix all branch names with `internal/`**
 
 ```shell
@@ -51,7 +72,7 @@ git commit
 git push internal
 ```
 
-Open a pull request when you are ready to receive feedback from stakeholders.
+Open a pull request when you are ready to receive feedback from stakeholders.
You'll see the pull request in the internal repository. After any iterative work, close the pull request.
 
 Since the closed repository is a mirror of the open one, we do not merge changes to it.
@@ -64,4 +85,9 @@ git merge internal/internal/feature
 git push origin
 ```
 
+This allows you to open a _new_ pull request in the open repository. To verify that the pull request reflects the changes made in the closed
+repository, check the commit messages.
+
+If you are a maintainer of https://github.com/nginx/documentation, you can merge the changes without additional approvals.
+
 Once the content changes have been merged in the open repository, they will synchronize back to the closed repository.
diff --git a/documentation/include-files.md b/documentation/include-files.md
new file mode 100644
index 000000000..096ba4661
--- /dev/null
+++ b/documentation/include-files.md
@@ -0,0 +1,43 @@
+# Using include files
+
+_Include files_, often referred to as _includes_, are Markdown files with self-contained text fragments used by Hugo for content re-use.
+
+They enable contributors to maintain a single source of truth for information that is often repeated, such as how to download credential files.
+
+We use them to [avoid repeating ourselves](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) and to create consistency in similar instructional steps.
+
+Include files are designed to be context-agnostic and should not rely on or assume any prior content.
+
+The files are located in the [content/includes](https://github.com/nginxinc/docs/tree/main/content/includes) folder, and are implemented using the Hugo `include` shortcode:
+
+```text
+{{< include "use-cases/docker-registry-instructions.md" >}}
+```
+
+Putting the previous example in any Markdown file would embed the contents of `content/includes/use-cases/docker-registry-instructions.md` wherever the shortcode was used.
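As a mental model only — this toy function is ours and is not how Hugo works internally — include expansion amounts to swapping each marker (on its own line) for the contents of the file it names:

```shell
# expand_includes PAGE INCLUDES_DIR — naive stand-in for the Hugo include
# shortcode, for illustration only. Markers must sit on their own line.
expand_includes() {
  while IFS= read -r line; do
    case $line in
      '{{< include "'*'" >}}')
        path=${line#'{{< include "'}   # strip the marker prefix
        path=${path%'" >}}'}           # strip the marker suffix
        cat "$2/$path" ;;              # emit the include file's contents
      *) printf '%s\n' "$line" ;;      # pass every other line through
    esac
  done < "$1"
}
```

Running it over a page containing `{{< include "use-cases/docker-registry-instructions.md" >}}` prints the page with that line replaced by the include file's text.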
+
+For guidance on other Hugo shortcodes, read the [Managing content with Hugo](/documentation/writing-hugo.md) document.
+
+## Guidelines for include files
+
+To make sure includes are effective and easy to maintain, follow these guidelines:
+
+- **Only use includes for repeated content**: Create an include only if the content appears in at least **two locations**. Using an include for single-use content adds unnecessary complexity and makes maintenance harder.
+- **Keep includes small and modular**: Write narrowly scoped snippets to maximize flexibility and reuse.
+- **Avoid nesting includes**: If there’s another way to achieve the same outcome, avoid nesting includes. While possible, nesting complicates reviews and maintenance: prefer a flat structure.
+- **Don't include headings**: Headings in include files won't appear in a document's table of contents and may break the linear flow of the surrounding content. Add headings directly to the document instead.
+- **Don't start documents with includes**: The opening of most documents is an introduction, which explains their purpose. Includes are reused text, so starting multiple documents with identical content could look odd, especially in search results.
+- **Do not add the F5 prefix to product names in includes**: The brand name is required only on [the first mention in a document](/documentation/style-guide.md#f5-brand-trademarks-and-product-names).
+
+## Include file index
+
+This index is maintained to help contributors discover existing include files.
+
+When viewing an include file, you may also see the `files:` parameter in the frontmatter, which shows where the file is currently in use.
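As a sketch, the frontmatter of an include file carrying that parameter might look like this — the paths listed are illustrative, and only the `files` key is described by this document:

```yaml
---
# files records the documents that currently embed this include
files:
  - content/solutions/about-subscription-licenses.md
  - content/nim/disconnected/report-usage-disconnected-deployment.md
---
```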
+ +| **_File name_** | **_Description_** | +| ----------------| ------------------ | +| [_licensing-and-reporting/download-jwt-from-myf5.md_](/content/includes/licensing-and-reporting/download-jwt-from-myf5.md) | Instructions for downloading a JSON Web Token from MyF5 | +| [_licensing-and-reporting/download-certificates-from-myf5.md_](/content/includes/licensing-and-reporting/download-certificates-from-myf5.md) | Instructions for downloading certificate files from MyF5 | +| [_use-cases/credential-download-instructions.md_](/content/includes/use-cases/credential-download-instructions.md) | Parallel tabbed instructions for downloading credential files from MyF5 | +| [_use-cases/docker-registry-instructions.md_](/content/includes/use-cases/docker-registry-instructions.md) | Parallel tabbed instructions for listing Docker images from the F5 Registry | \ No newline at end of file diff --git a/documentation/style-guide.md b/documentation/style-guide.md index 560ec05cf..4b4a7e616 100644 --- a/documentation/style-guide.md +++ b/documentation/style-guide.md @@ -101,7 +101,6 @@ The table provides guidelines about the terms you should and should not use for | cookie/cookies (noun) | | | | covers | As in, "this section/topic/chapter covers the following...". Instead, use a phrase such as, "This topic deals with..." or "This topic provides the following information...". More options: communicates, presents, offers, introduces, explains, describes. | | | curly brackets `{}` | The name for the curved `{}` parenthetical markings is "braces". They are not called curly brackets. | | -| CVSS v3.0 | Do not spell out. (Articles with CVSS metrics should include the CVSS link.) | | | daemon | Avoid using this term in generic documentation because it is UNIX-oriented. Instead, we use "agent", "utility", or "application". However, we do refer to specific UNIX daemons, like `named` and `sod`, when daemon is part of the name. | | | data center | Write this as two words. 
| | | domain name | example.com, example.net, example.org, or localhost per [RFC 2606](https://www.rfc-editor.org/rfc/rfc2606.html). | | @@ -147,7 +146,6 @@ The table provides guidelines about the terms you should and should not use for | Forwarding (IP) | See virtual server types. | | | Forwarding (Layer 2) | See virtual server types. | | | forwards | Use backward and forward, not backwards and forwards. | | -| FTP | Do not spell out. | | | fu, fubar | Do not use; always replace with specific text. Watch for these in code samples. | | | future releases and TBD | Do not use TBD in any content, including release notes. Do not reference future releases, such as This OID will be disabled in future releases. | | | G | Abbreviation for "giga", but in computer terminology represents 230, or 1,073,741,824. Correct: 4G | | @@ -190,7 +188,7 @@ The table provides guidelines about the terms you should and should not use for | IPv4-in-IPv6 vs. IPv4 in IPv6 | You can hyphenate IPv4-in-IPv6 when used as an adjective, such as IPv4-in-IPv6 tunnels. Note that the internal v in IPv4 and IPv6 should remain in lowercase format. | | | ISO 9001:2015 certification | For example: ISO 9001:2015 certified" or ISO 9001:2015 certification Don't use: ISO certified or ISO certification (Per: ISO - Certification, for questions about the use of ISO Certificate terms and logo, please contact the GS quality team at *qmt) | | | it | Avoid ambiguous pronouns. Be explicit: "Check the status of the server. Restart ~~it~~ the server" | | -| jargon | Jargon is the technical terminology or characteristic idiom of a special activity or group. Try hard to avoid it. Think about explaining something to a member of your family or a friend who doesn't know what you know. F5 products are highly technical, but strive to be as plainspoken as possible when describing or instructing. Spell out abbreviations on first use, use the clearest and easiest word to understand that will still accomplish the job, and so on. 
| | +| jargon | Jargon is technical language used by people in a specific group or field. Avoid it whenever you can. Imagine you’re explaining the topic to a friend or family member who doesn’t know the technology. F5 products are technical, but your writing should still be clear and simple. Spell out acronyms the first time you use them. Use words that are easy to understand and still get the job done. For exceptions, see the [Acronyms](#acronyms) section. | | | JWT license file | Include the word "license" when referring to the JSON Web Token that users download as part of their F5 NGINX subscription. | | | kill | Avoid this term except in command line syntax, where it is a UNIX command for stopping processes. (It's actually an IEEE POSIX standard command.) Alternatives for describing the action are: § End the process § Interrupt the process § Quit the process § Shut down the process § Stop the process | | | known issue | Abbreviate as "KI" when using in public-facing documentation. | | @@ -296,8 +294,6 @@ The table provides guidelines about the terms you should and should not use for | space | Do not use when referring to an input field or checkbox where the user needs to enter info. Recast to identify as a box. | | | SPDY | Correct: a SPDY profile (pronounced speedy). | | | spin up/spin down, spinning up/spinning down | Jargon, but becoming more widely used because of AWS. Do not use in our documentation without adding context on first reference. For example You can spin up (create additional virtual instances) or spin down (remove virtual instances) . . . . | | -| SSH | Do not spell out. | | -| SSL | Do not spell out. | | | SSLi/SSL Intercept | For the SSL Intercept iRule. Spell out. Do not abbreviate except to match UI label. | | | Sync-Failover (and Sync-Only) | Title capitalize and hyphenate to Sync-Failover unless referencing the option in tmsh; then lowercase and hyphenate as sync-failover. These guidelines apply to Sync-Only as well. 
| | | tap | Describes action of touching the hardware touchscreens in hardware documentation. Do not use in software documentation; use "select" instead. | | @@ -314,7 +310,6 @@ The table provides guidelines about the terms you should and should not use for | Traffic Management Microkernel (TMM) | First mention: the Traffic Management Microkernel (TMM)Subsequent references: the TMM | | | Traffic Management Operating System (TMOS) | Do not use. See TMOS. | | | typically vs. normally | When describing a predictable and expected action in technical content, write in terms of what is typical rather than normal for clarity. Normal implies judgment. Avoid particularly when applying to user actions, practices, or behaviors. Use typical instead. | | -| UDP | User Datagram Protocol. Do not spell out. | | | UI/GUI/WebGUI | Don't use these terms in documentation. For UI, call it the browser interface or user interface if necessary. Don't use GUI, or WebGUI. | | | unsecure and non-secure | Use unsecure and not insecure when describing a lack of security regarding something technical or technology-related in our documentation. If preferred and internally consistent with what you are documenting, non-secure may be OK, but defer to your editor. | | | update | Use when moving from one minor version of a product to another. For example, from NGINX Instance Manager 2.1 to 2.2. For example: Before updating your system, you should read the release notes to understand any new issues. For example: OIDC-authenticated users can't view the Users list after updating to NGINX Instance Manager 2.9.1. | | @@ -328,8 +323,6 @@ The table provides guidelines about the terms you should and should not use for | virtual address | Use instead of virtual IP address or VIP. | | | virtual address vs. virtual IP address | Although this refers to the IP address part of a virtual server destination address, only use virtual address (which also reflects the GUI). 
| | | virtual edition/Virtual Edition | Use virtual edition as a generic term. For Virtual Edition, see VE. | | -| Virtual Local Area Network (VLAN) | Do not spell out unless necessary to context. | | -| Virtual Private Network (VPN) | Do not spell out unless necessary to context. | | | walk | Don't use. Anthropomorphism. Instead, try guides, leads, conducts, directs, shows… Example: The Setup utility guides you through a series of pages. | | | warning/caution | Caution is less severe than Warning. Use Caution when alerting that damage may occur, such as data loss. Use Warning as the severest form of advisory, reserved for when there's a hazard to personnel (such as you're being directed to install a server rack and there's a chance it may fall on you). | | | wget | command.| | @@ -345,6 +338,42 @@ The table provides guidelines about the terms you should and should not use for | Wizard and wizard | When documenting the GUI, you can capitalize Wizard if appropriate, such as for the Network Access Setup Wizard. When writing about wizards in general, or when a page title of a dialog box or GUI does not show Wizard in uppercase format, you can leave wizard in lowercase format. | | | WWW or www | Do not include www. in web addresses In text, do not use WWW, but use Internet instead. Of course, you can use www as part of a URL. Although we're moving away from that, too. 
| | +## Acronyms + +The following acronyms do not need to be spelled out: + +| Acronym | Definition | +|---------|------------| +| API | Application Programming Interface | +| CIDR | Classless Inter-Domain Routing | +| CPU | Central Processing Unit | +| CVE | Common Vulnerabilities and Exposures | +| CVSS | Common Vulnerability Scoring System | +| DHCP | Dynamic Host Configuration Protocol | +| DNS | Domain Name System | +| FQDN | Fully Qualified Domain Name | +| FTP | File Transfer Protocol | +| GPU | Graphics Processing Unit | +| gRPC | gRPC Remote Procedure Call | +| HTTP | Hypertext Transfer Protocol | +| HTTPS | Hypertext Transfer Protocol Secure | +| IP | Internet Protocol | +| JSON | JavaScript Object Notation | +| JWT | JSON Web Token | +| NAT | Network Address Translation | +| PEM | Privacy Enhanced Mail (file format) | +| RAM | Random Access Memory | +| SMTP | Simple Mail Transfer Protocol | +| SSH | Secure Shell | +| SSL | Secure Sockets Layer | +| TCP | Transmission Control Protocol | +| TLS | Transport Layer Security | +| UDP | User Datagram Protocol | +| VLAN | Virtual Local Area Network | +| VPN | Virtual Private Network | +| WAF | Web Application Firewall | +| YAML | YAML Ain't Markup Language | + ## Topic types and templates @@ -434,27 +463,6 @@ Ensure content and screenshots are anonymized and don't contain sensitive inform - Limit the use of links to external (non-F5) sources. When necessary, only link to reputable sources and foundational sites, such as GitHub.com, Google.com, and Microsoft.com. - This helps minimize the risk of prompt injection. -## Guidelines for `includes` - -In an ideal world, we'd "write once, publish everywhere." To support this goal, we follow the principle of [Don't repeat yourself](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) in our documentation. 
This principle shapes how we create and use `includes`, which pull reusable content from files in the [content/includes](https://github.com/nginxinc/docs/tree/main/content/includes) directory. - -For example: - -```text -{{< include "controller/helper-script-prereqs.md" >}} -``` - -This entry automatically incorporates content from the `helper-script-prereqs.md` file in the `content/includes/controller` subdirectory. - -To make sure includes are effective and easy to maintain, follow these practices: - -- **Use includes only for reusable content**: Create an include only if the content appears in at least **two locations**. Using an include for single-use content adds unnecessary complexity and makes maintenance harder. -- **Keep includes small and modular**: Write narrowly scoped snippets to maximize flexibility and reuse. -- **Avoid branded product names in includes**: Use the full product name (e.g., "NGINX Instance Manager"), but avoid including the branded version (e.g., "F5 NGINX Instance Manager"). The branded name is required only on the first mention in a document; this is a context-specific rule. Includes, however, are designed to be context-agnostic—they should not rely on or assume any prior content—so including the branded name could repeat information unnecessarily in locations where it has already been introduced. -- **Don't include headers**: Avoid adding H2 or other headers inside includes. These headers won't appear in the document's table of contents (TOC) and may not fit well with the surrounding content hierarchy. Add headers directly in the document instead. -- **Avoid nesting includes**: If there’s another way to achieve the same outcome, avoid nesting includes. While technically possible, it complicates reviews and maintenance. Use a flat structure for simplicity. -- **Don't start documents with includes**: The opening of a document is usually the introduction, which explains its purpose. 
Includes are reused text, so starting multiple documents with identical content could look odd, especially in search results.
-
 ## Guidelines for command-line operations
 
 ### Restarting vs. reloading NGINX
diff --git a/documentation/writing-hugo.md b/documentation/writing-hugo.md
index 00f7d6448..aab79712a 100644
--- a/documentation/writing-hugo.md
+++ b/documentation/writing-hugo.md
@@ -87,53 +87,58 @@ To install , refer to the [integration instructions]({{< ref "/integ
 
 ### How to use Hugo shortcodes
 
-[Hugo shortcodes](https://github.com/nginxinc/nginx-hugo-theme/tree/main/layouts/shortcodes) are used to format callouts, add images, and reuse content across different pages.
+[Hugo shortcodes](https://github.com/nginxinc/nginx-hugo-theme/tree/main/layouts/shortcodes) are used to provide extra functionality and special formatting to Markdown content.
 
-For example, to use the `note` callout:
+This is an example of a call-out shortcode:
 
 ```md
-{{< note >}} Provide the text of the note here .{{< /note >}}
+{{< call-out "note" >}} Provide the text of the note here.{{< /call-out >}}
 ```
 
-The callout shortcodes support multi-line blocks:
+Here are some other shortcodes:
+
+- `include`: Include the content of a file in another file; read the [Re-use content with includes](#re-use-content-with-includes) instructions
+- `tabs`: Create mutually exclusive tabbed window panes, useful for parallel instructions
+- `table`: Add scrollbars to wide tables for browsers with smaller viewports
+- `icon`: Add a [Lucide icon](https://lucide.dev/icons/) by using its name as a parameter
+- `link`: Link to a file, prepending its path with the Hugo baseUrl
+- `ghcode`: Embed the contents of a code file; read the [Add code to documentation pages](#add-code-to-documentation-pages) instructions
+- `openapi`: Load an OpenAPI specification and render it as HTML using ReDoc
+
+#### Add call-outs to documentation pages
+
+The call-out shortcode supports multi-line blocks:
+
 ```md
-{{< caution >}}
+{{< call-out "caution" >}}
 You should probably never do this specific thing in a production environment.
 
 If you do, and things break, don't say we didn't warn you.
-{{< /caution >}}
+{{< /call-out >}}
 ```
 
-Supported callouts:
+The first parameter determines the type of call-out, which determines its color.
+
+Supported types:
 
 - `note`
 - `tip`
 - `important`
 - `caution`
 - `warning`
 
-You can also create custom callouts using the `call-out` shortcode `{{< call-out "type position" "header" "font-awesome icon >}}`. For example:
+An optional second parameter adds a title to the call-out: without one, the title falls back to the type.
 
 ```md
-{{}}
-```
+{{< call-out "important" "This instruction only applies to v#.#.#" >}}
+These instructions are only intended for versions #.#.# onwards.
-By default, all custom callouts are displayed inline, unless you add `side-callout` which places the callout to the right of the content.
-
-Here are some other shortcodes:
+Follow if you're using an older version.
+{{< /call-out >}}
+```
-- `fa`: Inserts a Font Awesome icon
-- `collapse`: Make a section collapsible
-- `tab`: Create mutually exclusive tabbed window panes, useful for parallel instructions
-- `table`: Add scrollbars to wide tables for browsers with smaller viewports
-- `link`: Link to a file, prepending its path with the Hugo baseUrl
-- `openapi`: Loads an OpenAPI specification and render it as HTML using ReDoc
-- `include`: Include the content of a file in another file; the included file must be present in the '/content/includes/' directory
-- `raw-html`: Include a block of raw HTML
-- `readfile`: Include the content of another file in the current file, which can be in an arbitrary location.
-- `bootstrap-table`: formats a table using Bootstrap classes; accepts any bootstrap table classes as additional arguments, e.g.
`{{< bootstrap-table "table-bordered table-hover" }}` +Finally, you can use an optional third parameter to add a [Lucide icon](https://lucide.dev/icons/) using its name. -### Add code to documentation pages +#### Add code to documentation pages For command, binary, and process names, we sparingly use pairs of backticks (\`\\`): ``. @@ -145,22 +150,9 @@ You can also use the `ghcode` shortcode to embed a single file directly from Git An example of this can be seen in [/content/ngf/get-started.md](https://github.com/nginx/documentation/blob/af8a62b15f86a7b7be7944b7a79f44fd5e526c15/content/ngf/get-started.md?plain=1#L233C1-L233C128), which embeds a YAML file. +#### Re-use content with includes -### Add images to documentation pages - -Use the `img` shortcode to add images to documentation pages. It has the same parameters as the Hugo [figure shortcode](https://gohugo.io/content-management/shortcodes/#figure). - -1. Add the image to the `/static/img` directory. -2. Add the `img` shortcode: - - `{{< img src="" alt="">}}` - - Do not include a forward slash at the beginning of the file path or it will [break the image](https://gohugo.io/functions/relurl/#input-begins-with-a-slash). - -> **Important**: We have strict guidelines for using images. Review them in our [style guide](/documentation/style-guide.md#guidelines-for-screenshots). - - -### How to use Hugo includes - -Hugo includes are a custom shortcode that allows you to embed content stored in the [`/content/includes` directory](https://github.com/nginx/documentation/tree/main/content/includes). +The includes are a custom shortcode that allows you to embed content stored in the [`/content/includes` directory](https://github.com/nginx/documentation/tree/main/content/includes). It allows for content to be defined once and display in multiple places without duplication, creating consistency and simplifying the maintenance of items such as reference tables. 
@@ -180,12 +172,13 @@ This particular include file is used in the following pages: View the [Guidelines for includes](/templates/style-guide.md#guidelines-for-includes) for instructions on how to write effective include files. -## Linting +#### Add images to documentation pages -To use markdownlint to check content, run the following command: +Use the `img` shortcode to add images to documentation pages. It has the same parameters as the Hugo [figure shortcode](https://gohugo.io/content-management/shortcodes/#figure). -```shell -markdownlint -c .markdownlint.yaml -``` +1. Add the image to the `/static/img` directory. +2. Add the `img` shortcode: + - `{{< img src="" alt="">}}` + - Do not include a forward slash at the beginning of the file path or it will [break the image](https://gohugo.io/functions/relurl/#input-begins-with-a-slash). -The content path can be an individual file or a folder. +> **Important**: We have strict guidelines for using images. Review them in our [style guide](/documentation/style-guide.md#guidelines-for-screenshots). 
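A complete usage example might look like the following — the image name and alt text here are invented for illustration:

```md
{{< img src="nim/dashboard-overview.png" alt="Dashboard overview in NGINX Instance Manager" >}}
```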
diff --git a/go.mod b/go.mod index a026366c2..331a976d8 100644 --- a/go.mod +++ b/go.mod @@ -2,4 +2,4 @@ module github.com/nginxinc/docs go 1.19 -require github.com/nginxinc/nginx-hugo-theme v0.43.5 // indirect +require github.com/nginxinc/nginx-hugo-theme v0.43.6 // indirect diff --git a/go.sum b/go.sum index 22a5ad0ff..c697c88d0 100644 --- a/go.sum +++ b/go.sum @@ -1,4 +1,2 @@ -github.com/nginxinc/nginx-hugo-theme v0.43.4 h1:fnUGTcpR/pwwgBIy2dv0fOkeT++cqv1T6G1LKqQRAFg= -github.com/nginxinc/nginx-hugo-theme v0.43.4/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= -github.com/nginxinc/nginx-hugo-theme v0.43.5 h1:OCsB4Y5mWN0fDzev+rie62zVXciR89vgsQ5Ss8mxUn8= -github.com/nginxinc/nginx-hugo-theme v0.43.5/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= +github.com/nginxinc/nginx-hugo-theme v0.43.6 h1:G9flRI3mMsETE5QHNXQ8ozesJvlRM9qBlrkzYHgZrdc= +github.com/nginxinc/nginx-hugo-theme v0.43.6/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= diff --git a/layouts/index.html b/layouts/index.html index c4834aa1c..e09d14f66 100644 --- a/layouts/index.html +++ b/layouts/index.html @@ -404,7 +404,7 @@

More NGINX Products - +