diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md
index 624ddd614..88cd20853 100644
--- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md
+++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md
@@ -30,8 +30,8 @@ To quickly set up an NGINX Plus environment on AWS:
Click the **Continue to Subscribe** button to proceed to the **Launch on EC2** page.
-3. Select the type of launch by clicking the appropriate tab (1‑Click Launch, **Manual Launch**, or **Service Catalog**). Choose the desired options for billing, instance size, and so on, and click the Accept Software Terms… button.
-4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the 1-Click Launch tab).
+3. Select the type of launch by clicking the appropriate tab (**1‑Click Launch**, **Manual Launch**, or **Service Catalog**). Choose the desired options for billing, instance size, and so on, and click the **Accept Software Terms…** button.
+4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the **1‑Click Launch** tab).
5. As soon as the new EC2 instance launches, NGINX Plus starts automatically and serves a default **index.html** page. To view the page, use a web browser to access the public DNS name of the new instance. You can also check the status of the NGINX Plus server by logging into the EC2 instance and running this command:
```nginx
diff --git a/content/nginx/admin-guide/monitoring/new-relic-plugin.md b/content/nginx/admin-guide/monitoring/new-relic-plugin.md
index c7f1fe631..b56831edb 100644
--- a/content/nginx/admin-guide/monitoring/new-relic-plugin.md
+++ b/content/nginx/admin-guide/monitoring/new-relic-plugin.md
@@ -33,7 +33,7 @@ Download the [plug‑in and installation instructions](https://docs.newrelic.com
## Configuring the Plug‑In
-The configuration file for the NGINX plug‑in is /etc/nginx-nr-agent/nginx-nr-agent.ini. The minimal configuration includes:
+The configuration file for the NGINX plug‑in is **/etc/nginx‑nr‑agent/nginx‑nr‑agent.ini**. The minimal configuration includes:
- Your New Relic license key in the `newrelic_license_key` statement in the `global` section.
@@ -44,7 +44,7 @@ The configuration file for the NGINX plug‑in is /var/log/nginx-nr-agent.log.
+The default log file is **/var/log/nginx‑nr‑agent.log**.
## Running the Plug‑In
diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md
index 90d19f00c..a3023a49b 100644
--- a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md
+++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md
@@ -96,8 +96,8 @@ Allocate an Elastic IP address and remember its ID. For detailed instructions, s
The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`:
-- nginx-ha-check – Determines the health of NGINX Plus.
-- nginx-ha-notify – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary.
+- **nginx‑ha‑check** – Determines the health of NGINX Plus.
+- **nginx‑ha‑notify** – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary.
1. Create a directory for the scripts, if it doesn’t already exist.
@@ -121,7 +121,7 @@ The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`:
There are two configuration files for the HA solution:
- **keepalived.conf** – The main configuration file for `keepalived`, slightly different for each NGINX Plus instance.
-- nginx-ha-notify – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables.
+- **nginx‑ha‑notify** – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables.
### Creating keepalived.conf
@@ -158,8 +158,8 @@ You must change values for the following configuration keywords. As you do so, a
- `script` in the `chk_nginx_service` block – The script that sends health checks to NGINX Plus.
- - On Ubuntu systems, /usr/lib/keepalived/nginx-ha-check
- - On CentOS systems, /usr/libexec/keepalived/nginx-ha-check
+ - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑check**
+ - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑check**
- `priority` – The value that controls which instance becomes primary, with a higher value meaning a higher priority. Use `101` for the primary instance and `100` for the backup.
@@ -171,13 +171,13 @@ You must change values for the following configuration keywords. As you do so, a
- `notify` – The script that is invoked during a state transition.
- - On Ubuntu systems, /usr/lib/keepalived/nginx-ha-notify
- - On CentOS systems, /usr/libexec/keepalived/nginx-ha-notify
+ - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑notify**
+ - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑notify**
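Putting the keywords above together, the relevant parts of **keepalived.conf** look roughly like this sketch for the primary instance on an Ubuntu system (the interface name and `virtual_router_id` value are illustrative, not prescribed by this guide):

```none
vrrp_script chk_nginx_service {
    # Health-check script; on CentOS use /usr/libexec/keepalived/nginx-ha-check
    script "/usr/lib/keepalived/nginx-ha-check"
    interval 3
    weight 50
}

vrrp_instance VI_1 {
    interface eth0                # illustrative interface name
    priority 101                  # 101 on the primary, 100 on the backup
    virtual_router_id 51          # illustrative; must match on both instances
    track_script {
        chk_nginx_service
    }
    # State-transition script; on CentOS use /usr/libexec/keepalived/nginx-ha-notify
    notify /usr/lib/keepalived/nginx-ha-notify
}
```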
### Creating nginx-ha-notify
-Modify the user‑defined variables section of the nginx-ha-notify script, replacing each `` placeholder with the value specified in the list below:
+Modify the user‑defined variables section of the **nginx‑ha‑notify** script, replacing each `` placeholder with the value specified in the list below:
```none
export AWS_ACCESS_KEY_ID=
@@ -223,7 +223,7 @@ Check the state on the backup instance, confirming that it has transitioned to `
## Troubleshooting
-If the solution doesn’t work as expected, check the `keepalived` logs, which are written to /var/log/syslog. Also, you can manually run the commands that invoke the `awscli` utility in the nginx-ha-notify script to check that the utility is working properly.
+If the solution doesn’t work as expected, check the `keepalived` logs, which are written to **/var/log/syslog**. Also, you can manually run the commands that invoke the `awscli` utility in the **nginx‑ha‑notify** script to check that the utility is working properly.
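For example, assuming the AWS CLI is configured with working credentials, commands like the following (the allocation ID is a placeholder) confirm that the utility can see the Elastic IP and that `keepalived` is logging state transitions:

```none
# Verify credentials and inspect the Elastic IP association
aws ec2 describe-addresses --allocation-ids eipalloc-0123456789abcdef0

# Review recent keepalived state transitions
grep -i keepalived /var/log/syslog | tail -n 20
```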
## Caveats
diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md
index 079825229..6c7a451a9 100644
--- a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md
+++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md
@@ -254,15 +254,15 @@ Assign the following names to the instances, then install the indicated NGINX so
- Four NGINX Open Source instances:
- App 1:
- - ngx-oss-app1-1
- - ngx-oss-app1-2
+ - **ngx-oss-app1-1**
+ - **ngx-oss-app1-2**
- App 2:
- - ngx-oss-app2-1
- - ngx-oss-app2-2
+ - **ngx-oss-app2-1**
+ - **ngx-oss-app2-2**
- Two NGINX Plus instances:
- - ngx-plus-1
- - ngx-plus-2
+ - **ngx-plus-1**
+ - **ngx-plus-2**
@@ -278,11 +278,11 @@ Use the *Step‑by‑step* instructions in our deployment guide, [Setting Up an
Repeat the instructions on all four web servers:
- Running App 1:
- - ngx-oss-app1-1
- - ngx-oss-app1-2
+ - **ngx-oss-app1-1**
+ - **ngx-oss-app1-2**
- Running App 2:
- - ngx-oss-app2-1
- - ngx-oss-app2-2
+ - **ngx-oss-app2-1**
+ - **ngx-oss-app2-2**
#### Configure NGINX Plus on the load balancers
@@ -291,7 +291,7 @@ Configure NGINX Plus instances as load balancers. These distribute requests to
Use the *Step‑by‑step* instructions in our deployment guide, [Setting Up an NGINX Demo Environment]({{< ref "/nginx/deployment-guides/setting-up-nginx-demo-environment.md" >}}).
-Repeat the instructions on both ngx-plus-1 and ngx-plus-2.
+Repeat the instructions on both **ngx‑plus‑1** and **ngx‑plus‑2**.
### Automate instance setup with Packer and Terraform
@@ -317,7 +317,7 @@ To run the scripts, follow these instructions:
3. Set your AWS credentials in the Packer and Terraform scripts:
- - For Packer, set your credentials in the `variables` block in both packer/ngx-oss/packer.json and packer/ngx-plus/packer.json:
+ - For Packer, set your credentials in the `variables` block in both **packer/ngx‑oss/packer.json** and **packer/ngx‑plus/packer.json**:
```none
"variables": {
diff --git a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md
index 3a66f745d..e1c9811b3 100644
--- a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md
+++ b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md
@@ -43,14 +43,14 @@ This guide covers the `eksctl` command as it is the simplest option.
1. Follow the instructions in the [eksctl.io documentation](https://eksctl.io/installation/) to install or update the `eksctl` command.
-2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the Managed nodes – Linux option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more.
+2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the **Managed nodes – Linux** option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more.
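   As one possible sketch, a cluster with a managed Linux node group can be created in a single `eksctl` command (the cluster name, region, and node count here are illustrative):

   ```none
   eksctl create cluster \
       --name nginx-plus-demo \
       --region us-west-2 \
       --nodegroup-name linux-nodes \
       --nodes 2
   ```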
## Push the NGINX Plus Ingress Controller Image to AWS ECR
This step is only required if you do not plan to use the prebuilt NGINX Open Source image.
-1. Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository nginx-plus-ic as that is what we use in this guide.
+1. Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository **nginx‑plus‑ic** as that is what we use in this guide.
2. Run the following AWS CLI command. It generates an auth token for your AWS ECR registry, then pipes it into the `docker login` command. This lets AWS ECR authenticate and authorize the upcoming Docker requests. For details about the command, see the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html).
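   The command takes the following general form (substitute your own AWS account ID and region; the values below are placeholders):

   ```none
   aws ecr get-login-password --region us-west-2 | \
       docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com
   ```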
diff --git a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md
index 4a449de16..32038fedb 100644
--- a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md
+++ b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md
@@ -40,7 +40,7 @@ The setup for global server load balancing (GSLB) in this guide combines Amazon
-Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: US West (Oregon) and US East (N. Virginia).
+Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: **US West (Oregon)** and **US East (N. Virginia)**.
In each region, two or more NGINX Plus load balancers are deployed in a high‑availability (HA) configuration. In this guide, there are two NGINX Plus load balancer instances per region. You can also use NGINX Open Source for this purpose, but it lacks the [application health checks](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/) that make for more precise error detection. For simplicity, we'll refer to NGINX Plus load balancers throughout this guide, noting when features specific to NGINX Plus are used.
@@ -79,7 +79,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be
1. Log in to the [AWS Management Console](https://console.aws.amazon.com/) (**console.aws.amazon.com/**).
-2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the All AWS Services column and then clicking **Route 53**.
+2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the **All AWS Services** column and then clicking **Route 53**.
@@ -87,7 +87,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be
- If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the Get started now button under Domain registration.
+ If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the Get started now button under **Domain registration**.
@@ -123,21 +123,21 @@ Create records sets for your domain:
4. Fill in the fields in the **Create Record Set** column:
- - **Name** – You can leave this field blank, but for this guide we are setting the name to www.nginxroute53.com.
- - **Type** – A – IPv4 address.
- - **Alias** – No.
- - **TTL (Seconds)** – 60.
+ - **Name** – You can leave this field blank, but for this guide we are setting the name to **www.nginxroute53.com**.
+ - **Type** – **A – IPv4 address**.
+ - **Alias** – **No**.
+ - **TTL (Seconds)** – **60**.
- **Note**: Reducing TTL from the default of 300 in this way can decrease the time that it takes for Route 53 to fail over when both NGINX Plus load balancers in the region are down, but there is always a delay of about two minutes regardless of the TTL setting. This is a built‑in limitation of Route 53.
+ **Note**: Reducing TTL from the default of **300** in this way can decrease the time that it takes for Route 53 to fail over when both NGINX Plus load balancers in the region are down, but there is always a delay of about two minutes regardless of the TTL setting. This is a built‑in limitation of Route 53.
- - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, US West (Oregon)].
- - **Routing Policy** – Latency.
+ - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, **US West (Oregon)**].
+ - **Routing Policy** – **Latency**.
-5. A new area opens when you select Latency. Fill in the fields as indicated (see the figure below):
+5. A new area opens when you select **Latency**. Fill in the fields as indicated (see the figure below):
- - **Region** – Region to which the load balancers belong (in this guide, us-west-2).
- - **Set ID** – Identifier for this group of load balancers (in this guide, US West LBs).
- - **Associate with Health Check** – No.
+ - **Region** – Region to which the load balancers belong (in this guide, **us‑west‑2**).
+ - **Set ID** – Identifier for this group of load balancers (in this guide, **US West LBs**).
+ - **Associate with Health Check** – **No**.
When you complete all fields, the tab looks like this:
@@ -145,7 +145,7 @@ Create records sets for your domain:
6. Click the Create button.
-7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, US East (N. Virginia)].
+7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, **US East (N. Virginia)**].
You can now test your website. Insert your domain name into a browser and see that your request is being load balanced between servers based on your location.
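You can also confirm latency‑based routing from the command line by resolving the record you just created; clients in different regions should receive the Elastic IP addresses of their nearest load balancer pair (the domain name is the one configured above):

```none
dig +short www.nginxroute53.com A
```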
@@ -172,21 +172,21 @@ We create health checks both for each NGINX Plus load balancer individually and
-2. Click the Create health check button. In the Configure health check form that opens, specify the following values, then click the Next button.
+2. Click the Create health check button. In the **Configure health check** form that opens, specify the following values, then click the Next button.
- - **Name** – Identifier for an NGINX Plus load balancer instance, for example US West LB 1.
- - **What to monitor** – Endpoint.
- - **Specify endpoint by** – IP address.
+ - **Name** – Identifier for an NGINX Plus load balancer instance, for example **US West LB 1**.
+ - **What to monitor** – **Endpoint**.
+ - **Specify endpoint by** – **IP address**.
- **IP address** – The [elastic IP address](#elastic-ip) of the NGINX Plus load balancer.
- - **Port** – The port advertised to clients for your domain or web service (the default is 80).
+ - **Port** – The port advertised to clients for your domain or web service (the default is **80**).
-3. On the Get notified when health check fails screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the Create health check button.
+3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the Create health check button.
-4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, US West LB 2, US East LB 1, and US East LB 2).
+4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, **US West LB 2**, **US East LB 1**, and **US East LB 2**).
5. Proceed to the next section to configure health checks for the load balancer pairs.
@@ -195,18 +195,18 @@ We create health checks both for each NGINX Plus load balancer individually and
1. Click the Create health check button.
-2. In the Configure health check form that opens, specify the following values, then click the Next button.
+2. In the **Configure health check** form that opens, specify the following values, then click the Next button.
- - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example US West LBs.
- - **What to monitor** – Status of other health checks .
+ - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example **US West LBs**.
+ - **What to monitor** – **Status of other health checks**.
- **Health checks to monitor** – The health checks of the two US West load balancers (add them one after the other by clicking in the box and choosing them from the drop‑down menu as shown).
- - **Report healthy when** – at least 1 of 2 selected health checks are healthy (the choices in this field are obscured in the screenshot by the drop‑down menu).
+ - **Report healthy when** – **at least 1 of 2 selected health checks are healthy** (the choices in this field are obscured in the screenshot by the drop‑down menu).
-3. On the Get notified when health check fails screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the Create health check button.
+3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the Create health check button.
-4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, US East (N. Virginia)].
+4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, **US East (N. Virginia)**].
When you have finished configuring all six health checks, the **Health checks** tab looks like this:
@@ -223,13 +223,13 @@ When you have finished configuring all six health checks, the **Health checks**
The tab changes to display the record sets for the domain.
-3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, US West (Oregon)]. The Edit Record Set column opens on the right side of the tab.
+3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, **US West (Oregon)**]. The Edit Record Set column opens on the right side of the tab.
-4. Change the **Associate with Health Check** radio button to Yes.
+4. Change the **Associate with Health Check** radio button to **Yes**.
-5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, US West LBs).
+5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, **US West LBs**).
6. Click the Save Record Set button.
@@ -242,7 +242,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
-1. Connect to the US West LB 1 instance. For instructions, see Connecting to an EC2 Instance.
+1. Connect to the **US West LB 1** instance. For instructions, see Connecting to an EC2 Instance.
2. Change directory to **/etc/nginx/conf.d**.
@@ -250,7 +250,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan
cd /etc/nginx/conf.d
```
-3. Edit the west-lb1.conf file and add the **@healthcheck** location to set up health checks.
+3. Edit the **west‑lb1.conf** file and add the **@healthcheck** location to set up health checks.
```nginx
upstream backend-servers {
@@ -282,9 +282,9 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan
nginx -s reload
```
-5. Repeat Steps 1 through 4 for the other three load balancers (US West LB 2, US East LB 1, and US East LB2).
+5. Repeat Steps 1 through 4 for the other three load balancers (**US West LB 2**, **US East LB 1**, and **US East LB 2**).
- In Step 3, change the filename as appropriate (west-lb2.conf, east-lb1.conf, and east-lb2.conf). In the east-lb1.conf and east-lb2.conf files, the `server` directives specify the public DNS names of Backup 3 and Backup 4.
+ In Step 3, change the filename as appropriate (**west‑lb2.conf**, **east‑lb1.conf**, and **east‑lb2.conf**). In the **east‑lb1.conf** and **east‑lb2.conf** files, the `server` directives specify the public DNS names of Backup 3 and Backup 4.
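The **@healthcheck** location added in Step 3 follows the standard NGINX Plus active health‑check pattern; a minimal sketch (the interval and thresholds are illustrative) looks like this:

```nginx
location @healthcheck {
    internal;
    proxy_pass http://backend-servers;
    health_check interval=10 fails=3 passes=2;
}
```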
## Appendix
@@ -307,31 +307,31 @@ Step‑by‑step instructions for creating EC2 instances and installing NGINX so
Assign the following names to the instances, and then install the indicated NGINX software.
-- In the first region, which is US West (Oregon) in this guide:
+- In the first region, which is **US West (Oregon)** in this guide:
- Two load balancer instances running NGINX Plus:
- - US West LB 1
- - US West LB 2
+ - **US West LB 1**
+ - **US West LB 2**
- Two backend instances running NGINX Open Source:
- * Backend 1
- - Backend 2
+ - **Backend 1**
+ - **Backend 2**
-- In the second region, which is US East (N. Virginia) in this guide:
+- In the second region, which is **US East (N. Virginia)** in this guide:
- Two load balancer instances running NGINX Plus:
- - US East LB 1
- - US East LB 2
+ - **US East LB 1**
+ - **US East LB 2**
- Two backend instances running NGINX Open Source:
- * Backend 3
- - Backend 4
+ - **Backend 3**
+ - **Backend 4**
-Here's the **Instances** tab after we create the four instances in the N. Virginia region.
+Here's the **Instances** tab after we create the four instances in the **N. Virginia** region.
@@ -359,14 +359,14 @@ Perform these steps on all eight instances.
-After you complete the instructions on all instances, the list for a region (here, Oregon) looks like this:
+After you complete the instructions on all instances, the list for a region (here, **Oregon**) looks like this:
### Configuring NGINX Open Source on the Backend Servers
-Perform these steps on all four backend servers: Backend 1, Backend 2, Backend 3, and Backend 4. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file.
+Perform these steps on all four backend servers: **Backend 1**, **Backend 2**, **Backend 3**, and **Backend 4**. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file.
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
@@ -421,7 +421,7 @@ Perform these steps on all four backend servers:
### Configuring NGINX Plus on the Load Balancers
-Perform these steps on all four backend servers: US West LB 1, US West LB 2, US East LB 1, and US West LB 2.
+Perform these steps on all four load balancers: **US West LB 1**, **US West LB 2**, **US East LB 1**, and **US East LB 2**.
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
@@ -439,10 +439,10 @@ Perform these steps on all four backend servers: US West LB 1 – west-lb1.conf
- - For US West LB 2 – west-lb2.conf
- - For US East LB 1 – east-lb1.conf
- - For US West LB 2 – east-lb2.conf
+ - For **US West LB 1** – **west‑lb1.conf**
+ - For **US West LB 2** – **west‑lb2.conf**
+ - For **US East LB 1** – **east‑lb1.conf**
+ - For **US East LB 2** – **east‑lb2.conf**
In the `server` directives in the `upstream` block, substitute the public DNS names of the backend instances in the region; to learn them, see the **Instances** tab in the EC2 Dashboard.
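For example, the `upstream` block in **west‑lb1.conf** would reference the two Oregon backends (the DNS names below are placeholders for your instances' actual public DNS names):

```nginx
upstream backend-servers {
    server ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com;   # Backend 1
    server ec2-YY-YY-YY-YY.us-west-2.compute.amazonaws.com;   # Backend 2
}
```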
diff --git a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md
index d1284e106..8e425506e 100644
--- a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md
+++ b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md
@@ -69,9 +69,9 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal
5. The **Add Record** window pops up. Enter the following values:
- - **Record Type** – A (the default).
+ - **Record Type** – **A** (the default).
- name – Leave blank unless you are creating the ``A`` record for a subdomain.
- - **TTL** – 3600 is the default, which we are not changing.
+ - **TTL** – **3600** is the default, which we are not changing.
- **ANSWERS** – The public IP address of the first NGINX Plus instance. To add each of the other instances, click the Add Answer button. (In this guide we're using private IP addresses in the 10.0.0.0/8 range as examples.)
Click the Save All Changes button.
@@ -86,24 +86,24 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal
-8. In the **Answer Metadata** window that pops up, click Up/down in the STATUS section of the SETTING column, if it is not already selected. Click the **Select** box in the AVAILABLE column, and then select either Up or Down from the drop‑down menu. In this guide we're selecting Up to indicate that the NGINX Plus instance is operational.
+8. In the **Answer Metadata** window that pops up, click **Up/down** in the STATUS section of the SETTING column, if it is not already selected. Click the **Select** box in the AVAILABLE column, and then select either **Up** or **Down** from the drop‑down menu. In this guide we're selecting **Up** to indicate that the NGINX Plus instance is operational.
9. Click a value in the GEOGRAPHICAL section of the SETTING column and specify the location of the NGINX Plus instance. Begin by choosing one of the several types of codes that NS1 offers for identifying locations:
- **Canadian province(s)** – Two‑letter codes for Canadian provinces
- **Country/countries** – Two‑letter codes for nations and territories
- - **Geographic region(s)** – Identifiers like US-WEST and ASIAPAC
+ - **Geographic region(s)** – Identifiers like **US‑WEST** and **ASIAPAC**
- **ISO region code** – Identification codes for nations and territories as defined in [ISO 3166](https://www.iso.org/iso-3166-country-codes.html)
- **Latitude** – Degrees, minutes, and seconds of latitude (northern or southern hemisphere)
- **Longitude** – Degrees, minutes, and seconds of longitude (eastern or western hemisphere)
- **US State(s)** – Two‑letter codes for US states
- In this guide we're using **Country/countries** codes. For the first NGINX Plus instance, we select Americas > Northern America > United States (US) and click the Ok button.
+ In this guide we're using **Country/countries** codes. For the first NGINX Plus instance, we select **Americas > Northern America > United States (US)** and click the Ok button.
-10. Repeat Steps 7–9 for both of the other two NGINX Plus instances. For the country in Step 9, we're selecting Europe > Western Europe > Germany (DE) for NGINX Plus instance 2 and Asia > South‑Eastern Asia > Singapore (SG) for NGINX Plus instance 3.
+10. Repeat Steps 7–9 for both of the other two NGINX Plus instances. For the country in Step 9, we're selecting **Europe > Western Europe > Germany (DE)** for NGINX Plus instance 2 and **Asia > South‑Eastern Asia > Singapore (SG)** for NGINX Plus instance 3.
When finished with both instances, on the details page for the ``A`` record click the Save Record button.
@@ -113,9 +113,9 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal
12. In the **Add Filters** window that pops up, click the plus sign (+) on the button for each filter you want to apply. In this guide, we're configuring the filters in this order:
- - Up in the HEALTHCHECKS section
- - Geotarget Country in the GEOGRAPHIC section
- - Select First N in the TRAFFIC MANAGEMENT section
+ - **Up** in the **HEALTHCHECKS** section
+ - **Geotarget Country** in the **GEOGRAPHIC** section
+ - **Select First N** in the **TRAFFIC MANAGEMENT** section
Click the Save Filter Chain button.
@@ -128,17 +128,17 @@ In this section we install and configure the NS1 agent on the same hosts as our
1. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154) to set up and connect a separate data feed for each of the three NGINX Plus instances, which NS1 calls _answers_.
- On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source NGINX-GSLB.
+ On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source **NGINX‑GSLB**.
On the next page (**Create Feed from NSONE Data Feed API v1**), create a data feed for the instance. Because the **Name** field is just for internal use, any value is fine. The value in the **Label** field is used in the YAML configuration file for the instance (see Step 4 below). We're specifying labels that indicate the country (using the ISO 3166 codes) in which the instance is running:
- - us-nginxgslb-datafeed for instance 1 in the US
- - de-nginxgslb-datafeed for instance 2 in Germany
- - sg-nginxgslb-datafeed for instance 3 in Singapore
+ - **us‑nginxgslb‑datafeed** for instance 1 in the US
+ - **de‑nginxgslb‑datafeed** for instance 2 in Germany
+ - **sg‑nginxgslb‑datafeed** for instance 3 in Singapore
- After creating the three feeds, note the value in the **Feeds URL** field on the INTEGRATIONS tab. The final element of the URL is the ```` you will specify in the YAML configuration file in Step 4. In the third screenshot in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154), for example, it is e566332c5d22c6b66aeaa8837eae90ac.
+ After creating the three feeds, note the value in the **Feeds URL** field on the INTEGRATIONS tab. The final element of the URL is the ```` you will specify in the YAML configuration file in Step 4. In the third screenshot in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154), for example, it is **e566332c5d22c6b66aeaa8837eae90ac**.
-2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app NGINX-GSLB. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field.
+2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app **NGINX‑GSLB**. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field.
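   Before wiring the key into the agent, you can optionally confirm it works with a quick authenticated call to the NS1 API. This is an illustrative sketch, not part of the official procedure; `$NS1_API_KEY` is a placeholder for the key value you just noted, and the `X-NSONE-Key` header is how the NS1 v1 API authenticates requests:

   ```shell
   # Hypothetical smoke test: list zones with the new API key.
   # A 200 response with a JSON array confirms the key is valid;
   # a 401 means the key was mistyped or lacks permissions.
   curl -s -H "X-NSONE-Key: $NS1_API_KEY" https://api.nsone.net/v1/zones
   ```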
3. On each NGINX Plus host, clone the [GitHub repo](https://github.com/nginxinc/nginx-ns1-gslb) for the NS1 agent.
@@ -185,14 +185,14 @@ In this section we install and configure the NS1 agent on the same hosts as our
In this section we describe how to verify that NS1 correctly redistributes traffic to an alternate PoP when the PoP nearest to the client is not operational (in the setup in this guide, each of the three NGINX Plus instances corresponds to a PoP). There are three ways to indicate to NS1 that a PoP is down:
-- [Change the status of the NGINX Plus instance](#verify-when-status-down) to Down in the NS1 ``A`` record
+- [Change the status of the NGINX Plus instance](#verify-when-status-down) to **Down** in the NS1 ``A`` record
- [Take down the servers in the proxied upstream group](#verify-when-upstream-down)
- [Cause traffic to exceed a configured threshold](#verify-when-over-threshold)
### Verifying Traffic Redistribution when an NGINX Plus Instance Is Marked Down
-Here we verify that NS1 switches over to the next‑nearest NGINX Plus instance when we change the metadata on the nearest NGINX Plus instance to Down.
+Here we verify that NS1 switches over to the next‑nearest NGINX Plus instance when we change the metadata on the nearest NGINX Plus instance to **Down**.
1. On a host located in the US, run the following command to determine which site NS1 is returning as nearest. Appropriately, it's returning 10.10.10.1, the IP address of the NGINX Plus instance in the US.
@@ -207,7 +207,7 @@ Here we verify that NS1 switches over to the next‑nearest NGINX Plus instance
Address: 10.10.10.1
```
-2. Change the **Up/Down** answer metadata on the US instance to Down (see Step 8 in [Setting Up NS1](#ns1-setup)).
+2. Change the **Up/Down** answer metadata on the US instance to **Down** (see Step 8 in [Setting Up NS1](#ns1-setup)).
3. Wait an hour – because we didn't change the default time-to-live (TTL) of 3600 seconds on the ``A`` record for **nginxgslb.cf** – and issue the ``nslookup`` command again. NS1 returns 10.10.10.2, the IP address of the NGINX Plus instance in Germany, which is now the nearest.
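   Rather than waiting blind, you can watch the cached TTL count down with `dig` (a sketch, assuming `dig` is installed on the client host); when the TTL in the answer reaches zero, the resolver fetches a fresh answer from NS1:

   ```shell
   # The second column of each answer line is the remaining TTL in seconds.
   dig +noall +answer nginxgslb.cf A
   ```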
@@ -349,7 +349,7 @@ First we perform these steps to create the shed filter:
-3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the Select First N box.
+3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the **Select First N** box.
4. Click the **Save Filter Chain** button.
@@ -363,9 +363,9 @@ First we perform these steps to create the shed filter:
7. In the **Answer Metadata** window that opens, set values for the following metadata. In each case, click the icon in the FEED column of the metadata's row, then select or enter the indicated value in the AVAILABLE column. (For testing purposes, we're setting very small values for the watermarks so that the threshold is exceeded very quickly.)
- - **Active connections** – us-nginxgslb-datafeed
- - **High watermark** – 5
- - **Low watermark** – 2
+ - **Active connections** – **us‑nginxgslb‑datafeed**
+ - **High watermark** – **5**
+ - **Low watermark** – **2**
   After setting all three, click the **Ok** button. (The screenshot shows the window just before this action.)
diff --git a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
index 2596ddbf5..cf9e705b0 100644
--- a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
+++ b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md
@@ -15,7 +15,7 @@ This guide explains how to deploy F5 NGINX Plus in a high-availability configura
**Notes:**
- The GCE environment changes constantly, including the names and arrangements of GUI elements. This guide was accurate when published, but some GCE GUI elements might have changed since then. Use this guide as a reference and adapt it to the current GCE working environment.
-- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to app-1 and app-2 instances to external IP address before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network.
+- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to the external IP addresses of the **app‑1** and **app‑2** instances before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network.
@@ -38,7 +38,7 @@ The GCE network LB assigns each new client to a specific NGINX Plus LB. This ass
NGINX Plus LB uses the round-robin algorithm to forward requests to specific app instances. It also adds a session cookie. It keeps future requests from the same client on the same app instance as long as it's running.
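The behavior described above corresponds to an `upstream` block along these lines (a minimal sketch with placeholder server addresses; the actual configuration file is installed in Task 2):

```nginx
upstream upstream_app_pool {
    # Round robin is the default load-balancing method.
    server 10.10.10.11;
    server 10.10.10.12;

    # NGINX Plus adds a session cookie so a client stays on the
    # same app instance for as long as that instance is running.
    sticky cookie srv_id expires=1h;
}
```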
-This deployment guide uses two groups of app instances: – app-1 and app-2. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations.
+This deployment guide uses two groups of app instances, **app‑1** and **app‑2**, to demonstrate [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. Both groups have the same app configurations, however.
You can adapt the deployment to distribute unique connections to different groups of app instances. This can be done by creating discrete upstream blocks and routing content based on the URI.
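As a sketch of that adaptation (hypothetical upstream names, addresses, and paths, not part of this guide's configuration files), URI-based routing looks like this:

```nginx
upstream app_group_1 { server 10.10.10.11; }
upstream app_group_2 { server 10.10.10.12; }

server {
    listen 80;

    # Route each request to a different upstream group based on the URI.
    location /app1/ {
        proxy_pass http://app_group_1;
    }

    location /app2/ {
        proxy_pass http://app_group_2;
    }
}
```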
@@ -67,19 +67,19 @@ All component names, like projects and instances, are examples only. You can cha
Create a new GCE project to host the all‑active NGINX Plus deployment.
-1. Log into the [GCP Console](http://console.cloud.google.com) at console.cloud.google.com.
+1. Log into the [GCP Console](http://console.cloud.google.com) at **console.cloud.google.com**.
-2. The GCP Home > Dashboard tab opens. Its contents depend on whether you have any existing projects.
+2. The GCP **Home > Dashboard** tab opens. Its contents depend on whether you have any existing projects.
   - If there are no existing projects, click the **Create a project** button.
- - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's My Test Project ). Click the project name and select Create project from the menu that opens.
+   - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's **My Test Project**). Click the project name and select **Create project** from the menu that opens.
-3. Type your project name in the New Project window that pops up, then click CREATE. We're naming the project NGINX Plus All-Active-LB.
+3. Type your project name in the **New Project** window that pops up, then click **CREATE**. We're naming the project **NGINX Plus All‑Active‑LB**.
@@ -87,24 +87,24 @@ Create a new GCE project to host the all‑active NGINX Plus deployment.
Create firewall rules that allow access to the HTTP and HTTPS ports on your GCE instances. You'll attach the rules to all the instances you create for the deployment.
-1. Navigate to the Networking > Firewall rules tab and click + CREATE FIREWALL RULE. (The screenshot shows the default rules provided by GCE.)
+1. Navigate to the **Networking > Firewall rules** tab and click **+ CREATE FIREWALL RULE**. (The screenshot shows the default rules provided by GCE.)
-2. Fill in the fields on the Create a firewall rule screen that opens:
+2. Fill in the fields on the **Create a firewall rule** screen that opens:
- - **Name** – nginx-plus-http-fw-rule
- - **Description** – Allow access to ports 80, 8080, and 443 on all NGINX Plus instances
- - Source filter – On the drop-down menu, select either Allow from any source (0.0.0.0/0), or IP range if you want to restrict access to users on your private network. In the second case, fill in the Source IP ranges field that opens. In the screenshot, we are allowing unrestricted access.
- - Allowed protocols and ports – tcp:80; tcp:8080; tcp:443
+ - **Name** – **nginx‑plus‑http‑fw‑rule**
+ - **Description** – **Allow access to ports 80, 8080, and 443 on all NGINX Plus instances**
+ - **Source filter** – On the drop-down menu, select either **Allow from any source (0.0.0.0/0)**, or **IP range** if you want to restrict access to users on your private network. In the second case, fill in the **Source IP ranges** field that opens. In the screenshot, we are allowing unrestricted access.
+ - **Allowed protocols and ports** – **tcp:80; tcp:8080; tcp:443**
   **Note:** As noted in the introduction, allowing access from any public IP address is appropriate only in a test environment. Before deploying the architecture in production, create a firewall rule that blocks access to the external IP addresses of your application instances, or disable external IP addresses for the instances entirely so that they are reachable only on the internal GCE network.
- - Target tags – nginx-plus-http-fw-rule
+ - **Target tags** – **nginx‑plus‑http‑fw‑rule**
-3. Click the Create button. The new rule is added to the table on the Firewall rules tab.
+3. Click the **Create** button. The new rule is added to the table on the **Firewall rules** tab.
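   If you prefer the command line to the GUI, the same rule can be created with the `gcloud` CLI (a sketch assuming the default network; tighten `--source-ranges` to restrict access to your private network):

   ```shell
   gcloud compute firewall-rules create nginx-plus-http-fw-rule \
       --description="Allow access to ports 80, 8080, and 443 on all NGINX Plus instances" \
       --allow=tcp:80,tcp:8080,tcp:443 \
       --source-ranges=0.0.0.0/0 \
       --target-tags=nginx-plus-http-fw-rule
   ```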
## Task 2: Creating Source Instances
@@ -123,96 +123,96 @@ The methods to create a source instance are different. Once you've created the s
Create three source VM instances based on a GCE VM image. We're basing our instances on the Ubuntu 16.04 LTS image.
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Compute Engine > VM instances tab.
+2. Navigate to the **Compute Engine > VM instances** tab.
-3. Click the Create instance button. The Create an instance page opens.
+3. Click the **Create instance** button. The **Create an instance** page opens.
#### Creating the First Application Instance from a VM Image
-1. On the Create an instance page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step):
+1. On the **Create an instance** page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step):
- - **Name** – nginx-plus-app-1
- - **Zone** – The GCP zone that makes sense for your location. We're using us-west1-a.
- - Machine type – The appropriate size for the level of traffic you anticipate. We're selecting micro, which is ideal for testing purposes.
- - Boot disk – Click Change. The Boot disk page opens to the OS images subtab. Perform the following steps:
+ - **Name** – **nginx‑plus‑app‑1**
+ - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**.
+ - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes.
+ - **Boot disk** – Click **Change**. The **Boot disk** page opens to the OS images subtab. Perform the following steps:
- - Click the radio button for the Unix or Linux image of your choice (here, Ubuntu 16.04 LTS).
- - Accept the default values in the Boot disk type and Size (GB) fields (Standard persistent disk and 10 respectively).
+ - Click the radio button for the Unix or Linux image of your choice (here, **Ubuntu 16.04 LTS**).
+ - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively).
     - Click the **Select** button.
- - Identity and API access – Keep the defaults for the Service account field and Access scopes radio button. Unless you need more granular control.
- - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the Management subtab (see Step 3 below) controls this type of access.
+   - **Identity and API access** – Keep the defaults for the **Service account** field and **Access scopes** radio button, unless you need more granular control.
+ - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 3 below) controls this type of access.
2. Click **Management, disk, networking, SSH keys** to open that set of subtabs. (The screenshot shows the values entered in the previous step.)
-3. On the Management subtab, modify or verify the fields as indicated:
+3. On the **Management** subtab, modify or verify the fields as indicated:
- - **Description** – NGINX Plus app-1 Image
- - **Tags** – nginx-plus-http-fw-rule
- - **Preemptibility** – Off (recommended) (the default)
- - Automatic restart – On (recommended) (the default)
- - On host maintenance – Migrate VM instance (recommended) (the default)
+ - **Description** – **NGINX Plus app‑1 Image**
+ - **Tags** – **nginx‑plus‑http‑fw‑rule**
+ - **Preemptibility** – **Off (recommended)** (the default)
+ - **Automatic restart** – **On (recommended)** (the default)
+ - **On host maintenance** – **Migrate VM instance (recommended)** (the default)
-4. On the Disks subtab, uncheck the checkbox labeled Delete boot disk when instance is deleted.
+4. On the **Disks** subtab, uncheck the checkbox labeled **Delete boot disk when instance is deleted**.
-5. On the Networking subtab, verify the default settings, in particular Ephemeral for External IP and Off for IP Forwarding.
+5. On the **Networking** subtab, verify the default settings, in particular **Ephemeral** for **External IP** and **Off** for **IP Forwarding**.
-6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string on the SSH Keys subtab. Right into the box that reads Enter entire key data.
+6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled **Enter entire key data** on the **SSH Keys** subtab.
-7. Click the Create button at the bottom of the Create an instance page.
+7. Click the **Create** button at the bottom of the **Create an instance** page.
- The VM instances summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears.
+ The **VM instances** summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears.
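   The steps above have a rough `gcloud` CLI equivalent (a sketch only; the image family name assumes the public Ubuntu 16.04 LTS images in the `ubuntu-os-cloud` project):

   ```shell
   gcloud compute instances create nginx-plus-app-1 \
       --zone=us-west1-a \
       --machine-type=f1-micro \
       --image-family=ubuntu-1604-lts \
       --image-project=ubuntu-os-cloud \
       --tags=nginx-plus-http-fw-rule \
       --boot-disk-size=10GB \
       --no-boot-disk-auto-delete
   ```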
#### Creating the Second Application Instance from a VM Image
-1. On the VM instances summary page, click CREATE INSTANCE.
+1. On the **VM instances** summary page, click **CREATE INSTANCE**.
2. Repeat the steps in Creating the First Application Instance to create the second application instance. Specify the same values as for the first application instance, except:
- - In Step 1, **Name** – nginx-plus-app-2
- - In Step 3, **Description** – NGINX Plus app-2 Image
+ - In Step 1, **Name** – **nginx‑plus‑app‑2**
+ - In Step 3, **Description** – **NGINX Plus app‑2 Image**
#### Creating the Load-Balancing Instance from a VM Image
-1. On the VM instances summary page, click CREATE INSTANCE.
+1. On the **VM instances** summary page, click **CREATE INSTANCE**.
2. Repeat the steps in Creating the First Application Instance to create the load‑balancing instance. Specify the same values as for the first application instance, except:
- - In Step 1, **Name** – nginx-plus-lb
- - In Step 3, **Description** – NGINX Plus Load Balancing Image
+ - In Step 1, **Name** – **nginx‑plus‑lb**
+ - In Step 3, **Description** – **NGINX Plus Load Balancing Image**
#### Configuring PHP and FastCGI on the VM-Based Instances
Install and configure PHP and FastCGI on the instances.
-Repeat these instructions for all three source instances (nginx-plus-app-1, nginx-plus-app-2, and nginx-plus-lb).
+Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**).
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
1. Connect to the instance over SSH using the method of your choice. GCE provides a built-in mechanism:
- - Navigate to the Compute Engine > VM instances tab.
- - In the instance's row in the table, click the triangle icon in the Connect column at the far right and select a method (for example, Open in browser window).
+ - Navigate to the **Compute Engine > VM instances** tab.
+ - In the instance's row in the table, click the triangle icon in the **Connect** column at the far right and select a method (for example, **Open in browser window**).
@@ -222,7 +222,7 @@ Install and configure PHP and FastCGI on the instances.
apt-get install php7.0-fpm
```
-3. Edit the PHP 7 configuration to bind to a local network port instead of a Unix socket. Using your preferred text editor, remove the following line from /etc/php/7.0/fpm/pool.d:
+3. Edit the PHP 7 configuration to bind to a local network port instead of a Unix socket. Using your preferred text editor, remove the following line from the pool configuration file in **/etc/php/7.0/fpm/pool.d** (on Ubuntu, **www.conf**):
```none
listen = /run/php/php7.0-fpm.sock
@@ -253,7 +253,7 @@ Now install NGINX Plus and download files that are specific to the all‑active
Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files).
-Repeat these instructions for all three source instances (nginx-plus-app-1, nginx-plus-app-2, and nginx-plus-lb).
+Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**).
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
@@ -261,11 +261,11 @@ Both the configuration and content files are available at the [NGINX GitHub repo
2. Clone the GitHub repository for the [all‑active load balancing deployment](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). (Instructions for downloading the files directly from the GitHub repository are provided below, in case you prefer not to clone it.)
-3. Copy the contents of the usr\_share\_nginx subdirectory from the cloned repository to the local /usr/share/nginx directory. Create the local directory if needed. (If you choose not to clone the repository, you need to download each file from the GitHub repository individually.)
+3. Copy the contents of the **usr\_share\_nginx** subdirectory from the cloned repository to the local **/usr/share/nginx** directory. Create the local directory if needed. (If you choose not to clone the repository, you need to download each file from the GitHub repository individually.)
-4. Copy the right configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d:
+4. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**:
- - On both nginx-plus-app-1 and nginx-plus-app-2, copy gce-all-active-app.conf.
+ - On both **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2**, copy **gce‑all‑active‑app.conf**.
You can also run the following commands to download the configuration file directly from the GitHub repository:
@@ -281,7 +281,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf
```
- - On nginx-plus-lb, copy gce-all-active-lb.conf.
+ - On **nginx‑plus‑lb**, copy **gce‑all‑active‑lb.conf**.
You can also run the following commands to download the configuration file directly from the GitHub repository:
@@ -297,9 +297,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo
wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf
```
-5. On the LB instance (nginx-plus-lb), use a text editor to open gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1 and nginx-plus-app-2 instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances.
+5. On the LB instance (**nginx‑plus‑lb**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2** instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances.
- You can look up internal IP addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page.
+ You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page.
```nginx
upstream upstream_app_pool {
@@ -318,7 +318,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
mv default.conf default.conf.bak
```
-7. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. Copy status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d.
+7. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. Copy **status.html** from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**.
You can also run the following commands to download the configuration file directly from the GitHub repository:
@@ -341,9 +341,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo
nginx -s reload
```
-9. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the Compute Engine > VM instances summary page, in the External IP column of the table.
+9. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table.
- - Access the index.html page either in a browser or by running this `curl` command.
+ - Access the **index.html** page either in a browser or by running this `curl` command.
```shell
curl http://
@@ -351,7 +351,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
- Access its NGINX Plus live activity monitoring dashboard in a browser, at:
- https://_external-IP-address_:8080/status.html
+ **https://_external‑IP‑address_:8080/status.html**
10. Proceed to [Task 3: Creating "Gold" Images](#gold).
@@ -363,26 +363,26 @@ Create three source instances based on a prebuilt NGINX Plus image running on <
#### Creating the First Application Instance from a Prebuilt Image
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the GCP Marketplace and search for nginx plus.
+2. Navigate to the GCP Marketplace and search for **nginx plus**.
-3. Click the NGINX Plus box in the results area.
+3. Click the **NGINX Plus** box in the results area.
-4. On the NGINX Plus page that opens, click the Launch on Compute Engine button.
+4. On the **NGINX Plus** page that opens, click the **Launch on Compute Engine** button.
-5. Fill in the fields on the New NGINX Plus deployment page as indicated.
+5. Fill in the fields on the **New NGINX Plus deployment** page as indicated.
- - Deployment name – nginx-plus-app-1
- - **Zone** – The GCP zone that makes sense for your location. We're using us-west1-a.
- - Machine type – The appropriate size for the level of traffic you anticipate. We're selecting micro, which is ideal for testing purposes.
- - Disk type – Standard Persistent Disk (the default)
- - Disk size in GB – 10 (the default and minimum allowed)
- - Network name – default
- - Subnetwork name – default
- - **Firewall** – Verify that the Allow HTTP traffic checkbox is checked.
+ - **Deployment name** – **nginx‑plus‑app‑1**
+ - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**.
+ - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes.
+ - **Disk type** – **Standard Persistent Disk** (the default)
+ - **Disk size in GB** – **10** (the default and minimum allowed)
+ - **Network name** – **default**
+ - **Subnetwork name** – **default**
+ - **Firewall** – Verify that the **Allow HTTP traffic** checkbox is checked.
@@ -392,25 +392,25 @@ Create three source instances based on a prebuilt NGINX Plus image running on <
-7. Navigate to the Compute Engine > VM instances tab and click nginx-plus-app-1-vm in the Name column in the table. (The -vm suffix is added automatically to the name of the newly created instance.)
+7. Navigate to the **Compute Engine > VM instances** tab and click **nginx‑plus‑app‑1‑vm** in the **Name** column in the table. (The **‑vm** suffix is added automatically to the name of the newly created instance.)
-8. On the VM instances page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes.
+8. On the **VM instances** page that opens, click **EDIT** at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes.
9. Modify or verify the indicated editable fields (non‑editable fields are not listed):
- - **Tags** – If a default tag appears (for example, nginx-plus-app-1-tcp-80), click the X after its name to remove it. Then, type in nginx-plus-http-fw-rule.
- - External IP – Ephemeral (the default)
- - Boot disk and local disks – Uncheck the checkbox labeled Delete boot disk when instance is deleted.
- - Additional disks – No changes
- - **Network** – If you must change the defaults, for example, when configuring a production environment, select default Then, select EDIT on the opened Network details page. After making your changes select the Save button.
+ - **Tags** – If a default tag appears (for example, **nginx‑plus‑app‑1‑tcp‑80**), click the **X** after its name to remove it. Then, type in **nginx‑plus‑http‑fw‑rule**.
+ - **External IP** – **Ephemeral** (the default)
+ - **Boot disk and local disks** – Uncheck the checkbox labeled **Delete boot disk when instance is deleted**.
+ - **Additional disks** – No changes
+   - **Network** – If you must change the defaults, for example when configuring a production environment, select **default**, then select **EDIT** on the **Network details** page that opens. After making your changes, click the **Save** button.
   - **Firewall** – Verify that neither check box is checked (the default). The firewall rule named in the **Tags** field above (see the first bullet in this list) controls this type of access.
- - Automatic restart – On (recommended) (the default)
- - On host maintenance – Migrate VM instance (recommended) (the default)
- - Custom metadata – No changes
- - SSH Keys – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled Enter entire key data.
- - Serial port – Verify that the check box labeled Enable connecting to serial ports is not checked (the default).
+ - **Automatic restart** – **On (recommended)** (the default)
+ - **On host maintenance** – **Migrate VM instance (recommended)** (the default)
+ - **Custom metadata** – No changes
+ - **SSH Keys** – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled **Enter entire key data**.
+ - **Serial port** – Verify that the check box labeled **Enable connecting to serial ports** is not checked (the default).
The screenshot shows the results of your changes. It omits some fields that can't be edited or for which we recommend keeping the defaults.
@@ -423,29 +423,29 @@ Create three source instances based on a prebuilt NGINX Plus image running on <
Create the second application instance by cloning the first one.
-1. Navigate back to the summary page on the Compute Engine > VM instances tab (click the arrow that is circled in the following figure).
+1. Navigate back to the summary page on the **Compute Engine > VM instances** tab (click the arrow that is circled in the following figure).
-2. Click nginx-plus-app-1-vm in the Name column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance).
+2. Click **nginx‑plus‑app‑1‑vm** in the **Name** column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance).
-3. On the VM instances page that opens, click CLONE at the top of the page.
+3. On the **VM instances** page that opens, click **CLONE** at the top of the page.
-4. On the Create an instance page that opens, modify or verify the fields and checkboxes as indicated:
+4. On the **Create an instance** page that opens, modify or verify the fields and checkboxes as indicated:
- - **Name** – nginx-plus-app-2-vm. Here we're adding the -vm suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance.
- - **Zone** – The GCP zone that makes sense for your location. We're using us-west1-a.
- - Machine type – The appropriate size for the level of traffic you anticipate. We're selecting f1-micro, which is ideal for testing purposes.
- - Boot disk type – New 10 GB standard persistent disk (the value inherited from nginx-plus-app-1-vm)
- - Identity and API access – Set the Access scopes radio button to Allow default access and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate.
+ - **Name** – **nginx‑plus‑app‑2‑vm**. Here we're adding the **‑vm** suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance.
+ - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**.
+ - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **f1‑micro**, which is ideal for testing purposes.
+ - **Boot disk type** – **New 10 GB standard persistent disk** (the value inherited from **nginx‑plus‑app‑1‑vm**)
+ - **Identity and API access** – Set the **Access scopes** radio button to **Allow default access** and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate.
- **Firewall** – Verify that neither check box is checked (the default).
5. Click Management, disk, networking, SSH keys to open that set of subtabs.
6. Verify the following settings on the subtabs, modifying them as necessary:
- - Management – In the **Tags** field: nginx-plus-http-fw-rule
- - Disks – The Deletion rule checkbox (labeled Delete boot disk when instance is deleted) is not checked
+ - **Management** – In the **Tags** field: **nginx‑plus‑http‑fw‑rule**
+ - **Disks** – The **Deletion rule** checkbox (labeled **Delete boot disk when instance is deleted**) is not checked
7. Select the Create button.
@@ -454,21 +454,21 @@ Create the second application instance by cloning the first one.
Create the source load‑balancing instance by cloning the first instance again.
-Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify nginx-plus-lb-vm as the name.
+Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify **nginx‑plus‑lb‑vm** as the name.
#### Configuring PHP and FastCGI on the Prebuilt-Based Instances
Install and configure PHP and FastCGI on the instances.
-Repeat these instructions for all three source instances (nginx-plus-app-1-vm, nginx-plus-app-2-vm, and nginx-plus-lb-vm).
+Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**).
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
1. Connect to the instance over SSH using the method of your choice. GCE provides a built‑in mechanism:
- - Navigate to the Compute Engine > VM instances tab.
- - In the table, find the row for the instance. Select the triangle icon in the Connect column at the far right. Then, select a method (for example, Open in browser window).
+ - Navigate to the **Compute Engine > VM instances** tab.
+ - In the table, find the row for the instance. Select the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**).
The screenshot shows instances based on the prebuilt NGINX Plus images.
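As an alternative to the console's **Connect** column, you can open the same SSH session from the Cloud SDK. This is a sketch, assuming the `gcloud` CLI is installed and authenticated for this project; the instance name and zone are the ones used in this guide:

```shell
# Open an SSH session to the first application instance.
# Requires the Google Cloud SDK, authenticated for this project.
gcloud compute ssh nginx-plus-app-1-vm --zone us-west1-a
```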
@@ -480,7 +480,7 @@ Install and configure PHP and FastCGI on the instances.
apt-get install php5-fpm
```
-3. Edit the PHP 5 configuration to bind to a local network port instead of a Unix socket. Using your preferred text editor, remove the following line from /etc/php5/fpm/pool.d:
+3. Edit the PHP 5 configuration to bind to a local network port instead of a Unix socket. Using your preferred text editor, remove the following line from the pool configuration file in **/etc/php5/fpm/pool.d** (typically **www.conf**):
```none
Listen = /run/php/php5-fpm.sock
@@ -511,18 +511,18 @@ Now download files that are specific to the all‑active deployment:
Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files).
-Repeat these instructions for all three source instances (nginx-plus-app-1-vm, nginx-plus-app-2-vm, and nginx-plus-lb-vm).
+Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**).
**Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command.
1. Clone the GitHub repository for the [all‑active load balancing deployment](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). (See the instructions below for downloading the files from GitHub if you choose not to clone it.)
-2. Copy the contents of the usr\_share\_nginx subdirectory from the cloned repo to the local /usr/share/nginx directory. Create the local directory if necessary. (If you choose not to clone the repository, you need to download each file from the GitHub repository one at a time.)
+2. Copy the contents of the **usr\_share\_nginx** subdirectory from the cloned repo to the local **/usr/share/nginx** directory. Create the local directory if necessary. (If you choose not to clone the repository, you need to download each file from the GitHub repository one at a time.)
-3. Copy the right configuration file from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d:
+3. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**:
- - On both nginx-plus-app-1-vm and nginx-plus-app-2-vm, copy gce-all-active-app.conf.
+ - On both **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm**, copy **gce‑all‑active‑app.conf**.
You can also run these commands to download the configuration file from GitHub:
@@ -538,7 +538,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf
```
- - On nginx-plus-lb-vm, copy gce-all-active-lb.conf.
+ - On **nginx‑plus‑lb‑vm**, copy **gce‑all‑active‑lb.conf**.
You can also run the following commands to download the configuration file directly from the GitHub repository:
@@ -554,9 +554,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo
wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf
```
-4. On the LB instance (nginx-plus-lb-vm), use a text editor to open gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the nginx-plus-app-1-vm and nginx-plus-app-2-vm instances. (No action is required on the two application instances themselves.)
+4. On the LB instance (**nginx‑plus‑lb‑vm**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm** instances. (No action is required on the two application instances themselves.)
- You can look up internal IP addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page.
+ You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page.
```nginx
upstream upstream_app_pool {
@@ -575,7 +575,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
mv default.conf default.conf.bak
```
-6. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. To do this, copy status.html from the etc\_nginx\_conf.d subdirectory of the cloned repository to /etc/nginx/conf.d.
+6. Enable the NGINX Plus [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) dashboard for the instance. To do this, copy **status.html** from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**.
You can also run the following commands to download the configuration file directly from the GitHub repository:
@@ -598,9 +598,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo
nginx -s reload
```
-8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the Compute Engine > VM instances summary page, in the External IP column of the table.
+8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table.
- - Access the index.html page either in a browser or by running this `curl` command.
+ - Access the **index.html** page either in a browser or by running this `curl` command.
```shell
curl http://
@@ -608,7 +608,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo
- Access the NGINX Plus live activity monitoring dashboard in a browser, at:
- https://_external-IP-address-of-NGINX-Plus-server_:8080/dashboard.html
+ **https://_external‑IP‑address‑of‑NGINX‑Plus‑server_:8080/dashboard.html**
9. Proceed to [Task 3: Creating "Gold" Images](#gold).
@@ -617,14 +617,14 @@ Both the configuration and content files are available at the [NGINX GitHub repo
Create _gold images_, which are base images that GCE clones automatically when it needs to scale up the number of instances. They are derived from the instances you created in [Creating Source Instances](#source). Before creating the images, delete the source instances to break the attachment between instance and disk (you can't create an image from a disk that is attached to a VM instance).
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Compute Engine > VM instances tab.
+2. Navigate to the **Compute Engine > VM instances** tab.
3. In the table, select all three instances:
- - If you created source instances from [VM (Ubuntu) images](#source-vm): nginx-plus-app-1, nginx-plus-app-2, and nginx-plus-lb
- - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): nginx-plus-app-1-vm, nginx-plus-app-2-vm, and nginx-plus-lb-vm
+ - If you created source instances from [VM (Ubuntu) images](#source-vm): **nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**
+ - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): **nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**
4. Click STOP in the top toolbar to stop the instances.
@@ -634,43 +634,43 @@ Create _gold images_, which are base images that GCE clones automatically when i
**Note:** If the pop-up warns that it will delete the boot disk for any instance, cancel the deletion. Then, perform the steps below for each affected instance:
- - Navigate to the Compute Engine > VM instances tab and click the instance in the Name column in the table. (The screenshot shows nginx-plus-app-1-vm.)
+ - Navigate to the **Compute Engine > VM instances** tab and click the instance in the Name column in the table. (The screenshot shows **nginx‑plus‑app‑1‑vm**.)
- - On the VM instances page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes.
- - In the Boot disk and local disks field, uncheck the checkbox labeled Delete boot disk when instance is deleted.
+ - On the **VM instances** page that opens, click **EDIT** at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes.
+ - In the **Boot disk and local disks** field, uncheck the checkbox labeled **Delete boot disk when instance is deleted**.
- Click the Save button.
- - On the VM instances summary page, select the instance in the table and click DELETE in the top toolbar to delete it.
+ - On the **VM instances** summary page, select the instance in the table and click **DELETE** in the top toolbar to delete it.
-6. Navigate to the Compute Engine > Images tab.
+6. Navigate to the **Compute Engine > Images** tab.
7. Click [+] CREATE IMAGE.
-8. On the Create an image page that opens, modify or verify the fields as indicated:
+8. On the **Create an image** page that opens, modify or verify the fields as indicated:
- - **Name** – nginx-plus-app-1-image
+ - **Name** – **nginx‑plus‑app‑1‑image**
- **Family** – Leave the field empty
- - **Description** – NGINX Plus Application 1 Gold Image
- - **Encryption** – Automatic (recommended) (the default)
- - **Source** – Disk (the default)
- - Source disk – nginx-plus-app-1 or nginx-plus-app-1-vm, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
+ - **Description** – **NGINX Plus Application 1 Gold Image**
+ - **Encryption** – **Automatic (recommended)** (the default)
+ - **Source** – **Disk** (the default)
+ - **Source disk** – **nginx‑plus‑app‑1** or **nginx‑plus‑app‑1‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
9. Click the Create button.
10. Repeat Steps 7 through 9 to create a second image with the following values (retain the default values in all other fields):
- - **Name** – nginx-plus-app-2-image
- - **Description** – NGINX Plus Application 2 Gold Image
- - Source disk – nginx-plus-app-2 or nginx-plus-app-2-vm, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
+ - **Name** – **nginx‑plus‑app‑2‑image**
+ - **Description** – **NGINX Plus Application 2 Gold Image**
+ - **Source disk** – **nginx‑plus‑app‑2** or **nginx‑plus‑app‑2‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
11. Repeat Steps 7 through 9 to create a third image with the following values (retain the default values in all other fields):
- - **Name** – nginx-plus-lb-image
- - **Description** – NGINX Plus LB Gold Image
- - Source disk – nginx-plus-lb or nginx-plus-lb-vm, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
+ - **Name** – **nginx‑plus‑lb‑image**
+ - **Description** – **NGINX Plus LB Gold Image**
+ - **Source disk** – **nginx‑plus‑lb** or **nginx‑plus‑lb‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu)
-12. Verify that the three images appear at the top of the table on the Compute Engine > Images tab.
+12. Verify that the three images appear at the top of the table on the **Compute Engine > Images** tab.
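If you prefer the command line, the same gold images can be created with the Cloud SDK. A sketch, assuming the `gcloud` CLI is authenticated for this project and the detached source disks keep the names of the deleted instances (here the prebuilt-image names):

```shell
# Create a gold image from each detached source disk.
gcloud compute images create nginx-plus-app-1-image \
    --source-disk nginx-plus-app-1-vm --source-disk-zone us-west1-a \
    --description "NGINX Plus Application 1 Gold Image"
gcloud compute images create nginx-plus-app-2-image \
    --source-disk nginx-plus-app-2-vm --source-disk-zone us-west1-a \
    --description "NGINX Plus Application 2 Gold Image"
gcloud compute images create nginx-plus-lb-image \
    --source-disk nginx-plus-lb-vm --source-disk-zone us-west1-a \
    --description "NGINX Plus LB Gold Image"
```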
## Task 4: Creating Instance Templates
@@ -681,58 +681,58 @@ Create _instance templates_. They are the compute workloads in instance groups.
### Creating the First Application Instance Template
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Compute Engine > Instance templates tab.
+2. Navigate to the **Compute Engine > Instance templates** tab.
3. Click the Create instance template button.
-4. On the Create an instance template page that opens, modify or verify the fields as indicated:
+4. On the **Create an instance template** page that opens, modify or verify the fields as indicated:
- - **Name** – nginx-plus-app-1-instance-template
- - Machine type – The appropriate size for the level of traffic you anticipate. We're selecting micro, which is ideal for testing purposes.
- - Boot disk – Click Change. The Boot disk page opens. Perform the following steps:
+ - **Name** – **nginx‑plus‑app‑1‑instance‑template**
+ - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes.
+ - **Boot disk** – Click **Change**. The **Boot disk** page opens. Perform the following steps:
- - Open the Custom Images subtab.
+ - Open the **Custom Images** subtab.
- - Select NGINX Plus All-Active-LB from the drop-down menu labeled Show images from.
+ - Select **NGINX Plus All‑Active‑LB** from the drop-down menu labeled **Show images from**.
- - Click the nginx-plus-app-1-image radio button.
+ - Click the **nginx‑plus‑app‑1‑image** radio button.
- - Accept the default values in the Boot disk type and Size (GB) fields (Standard persistent disk and 10 respectively).
+ - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively).
- Click the Select button.
- - Identity and API access – Unless you want more granular control over access, keep the defaults in the Service account field (Compute Engine default service account) and Access scopes field (Allow default access).
- - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the Management subtab (see Step 6 below) controls this type of access.
+ - **Identity and API access** – Unless you want more granular control over access, keep the defaults in the **Service account** field (**Compute Engine default service account**) and **Access scopes** field (**Allow default access**).
+ - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 6 below) controls this type of access.
5. Select Management, disk, networking, SSH keys (indicated with a red arrow in the following screenshot) to open that set of subtabs.
-6. On the Management subtab, modify or verify the fields as indicated:
+6. On the **Management** subtab, modify or verify the fields as indicated:
- - **Description** – NGINX Plus app-1 Instance Template
- - **Tags** – nginx-plus-http-fw-rule
- - **Preemptibility** – Off (recommended) (the default)
- - Automatic restart – On (recommended) (the default)
- - On host maintenance – Migrate VM instance (recommended) (the default)
+ - **Description** – **NGINX Plus app‑1 Instance Template**
+ - **Tags** – **nginx‑plus‑http‑fw‑rule**
+ - **Preemptibility** – **Off (recommended)** (the default)
+ - **Automatic restart** – **On (recommended)** (the default)
+ - **On host maintenance** – **Migrate VM instance (recommended)** (the default)
-7. On the Disks subtab, verify that the checkbox labeled Delete boot disk when instance is deleted is checked.
+7. On the **Disks** subtab, verify that the checkbox labeled **Delete boot disk when instance is deleted** is checked.
Instances from this template are ephemeral instantiations of the gold image. So, we want GCE to reclaim the disk when the instance is terminated. New instances are always based on the gold image. So, there is no reason to keep the instantiations on disk when the instance is deleted.
-8. On the Networking subtab, verify the default settings of Ephemeral for External IP and Off for IP Forwarding.
+8. On the **Networking** subtab, verify the default settings of **Ephemeral** for **External IP** and **Off** for **IP Forwarding**.
-9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the SSH Keys subtab. Right into the box that reads Enter entire key data.
+9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string into the box labeled **Enter entire key data** on the **SSH Keys** subtab.
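The console steps above map onto a single CLI call. A sketch, assuming the `gcloud` CLI is authenticated for this project and the gold image from Task 3 exists:

```shell
# Create the first application instance template from its gold image.
# f1-micro is the machine type this guide uses for testing.
gcloud compute instance-templates create nginx-plus-app-1-instance-template \
    --machine-type f1-micro \
    --image nginx-plus-app-1-image \
    --tags nginx-plus-http-fw-rule \
    --description "NGINX Plus app-1 Instance Template"
```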
@@ -741,55 +741,55 @@ Create _instance templates_. They are the compute workloads in instance groups.
### Creating the Second Application Instance Template
-1. On the Instance templates summary page, click CREATE INSTANCE TEMPLATE.
+1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE.
2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create a second application instance template. Use the same values as for the first instance template, except as noted:
- In Step 4:
- - **Name** – nginx-plus-app-2-instance-template
- - Boot disk – Click the nginx-plus-app-2-image radio button
- - In Step 6, **Description** – NGINX Plus app-2 Instance Template
+ - **Name** – **nginx‑plus‑app‑2‑instance‑template**
+ - **Boot disk** – Click the **nginx‑plus‑app‑2‑image** radio button
+ - In Step 6, **Description** – **NGINX Plus app‑2 Instance Template**
### Creating the Load-Balancing Instance Template
-1. On the Instance templates summary page, click CREATE INSTANCE TEMPLATE.
+1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE.
2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create the load‑balancing instance template. Use the same values as for the first instance template, except as noted:
- In Step 4:
- - **Name** – nginx-plus-lb-instance-template.
- - Boot disk – Click the nginx-plus-lb-image radio button
+ - **Name** – **nginx‑plus‑lb‑instance‑template**.
+ - **Boot disk** – Click the **nginx‑plus‑lb‑image** radio button
- - In Step 6, **Description** – NGINX Plus Load‑Balancing Instance Template
+ - In Step 6, **Description** – **NGINX Plus Load‑Balancing Instance Template**
## Task 5: Creating Image Health Checks
Define the simple HTTP health check that GCE uses to verify that each NGINX Plus LB instance is running, and to re-create any LB instance that isn't.
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Compute Engine > Health checks tab.
+2. Navigate to the **Compute Engine > Health checks** tab.
3. Click the Create a health check button.
-4. On the Create a health check page that opens, modify or verify the fields as indicated:
+4. On the **Create a health check** page that opens, modify or verify the fields as indicated:
- - **Name** – nginx-plus-http-health-check
- - **Description** – Basic HTTP health check to monitor NGINX Plus instances
- - **Protocol** – HTTP (the default)
- - **Port** – 80 (the default)
- - Request path – /status-old.html
+ - **Name** – **nginx‑plus‑http‑health‑check**
+ - **Description** – **Basic HTTP health check to monitor NGINX Plus instances**
+ - **Protocol** – **HTTP** (the default)
+ - **Port** – **80** (the default)
+ - **Request path** – **/status‑old.html**
-5. If the Health criteria section is not already open, click More.
+5. If the **Health criteria** section is not already open, click **More**.
6. Modify or verify the fields as indicated:
- - Check interval – 10 seconds
- - **Timeout** – 10 seconds
- - Healthy threshold – 2 consecutive successes (the default)
- - Unhealthy threshold – 10 consecutive failures
+ - **Check interval** – **10 seconds**
+ - **Timeout** – **10 seconds**
+ - **Healthy threshold** – **2 consecutive successes** (the default)
+ - **Unhealthy threshold** – **10 consecutive failures**
7. Click the Create button.
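The equivalent health check can also be created from the command line. A sketch, assuming the `gcloud` CLI is authenticated for this project; the parameter values match the console steps above:

```shell
# Create the HTTP health check that GCE uses to monitor the instances.
gcloud compute http-health-checks create nginx-plus-http-health-check \
    --port 80 \
    --request-path /status-old.html \
    --check-interval 10s \
    --timeout 10s \
    --healthy-threshold 2 \
    --unhealthy-threshold 10 \
    --description "Basic HTTP health check to monitor NGINX Plus instances"
```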
@@ -800,28 +800,28 @@ Define the simple HTTP health check that GCE uses. This verifies that each NGINX
Create three independent instance groups, one for each type of function-specific instance.
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Compute Engine > Instance groups tab.
+2. Navigate to the **Compute Engine > Instance groups** tab.
3. Click the Create instance group button.
### Creating the First Application Instance Group
-1. On the Create a new instance group page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned:
+1. On the **Create a new instance group** page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned:
- - **Name** – nginx-plus-app-1-instance-group
- - **Description** – Instance group to host NGINX Plus app-1 instances
+ - **Name** – **nginx‑plus‑app‑1‑instance‑group**
+ - **Description** – **Instance group to host NGINX Plus app-1 instances**
- **Location** –
- - Click the Single-zone radio button (the default).
- - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using us-west1-a.
- - Creation method – Use instance template radio button (the default)
- - Instance template – nginx-plus-app-1-instance-template (select from the drop-down menu)
- - **Autoscaling** – Off (the default)
- - Number of instances – 2
- - Health check – nginx-plus-http-health-check (select from the drop-down menu)
- - Initial delay – 300 seconds (the default)
+ - Click the **Single‑zone** radio button (the default).
+ - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1‑a**.
+ - **Creation method** – **Use instance template** radio button (the default)
+ - **Instance template** – **nginx‑plus‑app‑1‑instance‑template** (select from the drop-down menu)
+ - **Autoscaling** – **Off** (the default)
+ - **Number of instances** – **2**
+ - **Health check** – **nginx‑plus‑http‑health‑check** (select from the drop-down menu)
+ - **Initial delay** – **300 seconds** (the default)
3. Click the Create button.
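The same managed instance group can be created and wired to the health check from the command line. A sketch, assuming the `gcloud` CLI is authenticated for this project and the template and health check from the previous tasks exist:

```shell
# Create the managed instance group from the template, then enable
# autohealing against the HTTP health check created in Task 5.
gcloud compute instance-groups managed create nginx-plus-app-1-instance-group \
    --zone us-west1-a \
    --template nginx-plus-app-1-instance-template \
    --size 2
gcloud compute instance-groups managed set-autohealing \
    nginx-plus-app-1-instance-group \
    --zone us-west1-a \
    --http-health-check nginx-plus-http-health-check \
    --initial-delay 300s
```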
@@ -834,25 +834,25 @@ Create three independent instance groups, one for each type of function-specific
2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create a second application instance group. Specify the same values as for the first instance template, except for these fields:
- - **Name** – nginx-plus-app-2-instance-group
- - **Description** – Instance group to host NGINX Plus app-2 instances
- - Instance template – nginx-plus-app-2-instance-template (select from the drop-down menu)
+ - **Name** – **nginx‑plus‑app‑2‑instance‑group**
+ - **Description** – **Instance group to host NGINX Plus app-2 instances**
+ - **Instance template** – **nginx‑plus‑app‑2‑instance‑template** (select from the drop-down menu)
### Creating the Load-Balancing Instance Group
-1. On the Instance groups summary page, click CREATE INSTANCE GROUP.
+1. On the **Instance groups** summary page, click CREATE INSTANCE GROUP.
2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create the load‑balancing instance group. Specify the same values as for the first instance template, except for these fields:
- - **Name** – nginx-plus-lb-instance-group
- - **Description** – Instance group to host NGINX Plus load balancing instances
- - Instance template – nginx-plus-lb-instance-template (select from the drop-down menu)
+ - **Name** – **nginx‑plus‑lb‑instance‑group**
+ - **Description** – **Instance group to host NGINX Plus load balancing instances**
+ - **Instance template** – **nginx‑plus‑lb‑instance‑template** (select from the drop-down menu)
### Updating and Testing the NGINX Plus Configuration
-Update the NGINX Plus configuration on the two LB instances (nginx-plus-lb-instance-group-[a...z]). It should list the internal IP addresses of the four application servers (two instances each of nginx-plus-app-1-instance-group-[a...z] and nginx-plus-app-2-instance-group-[a...z]).
+Update the NGINX Plus configuration on the two LB instances (**nginx‑plus‑lb‑instance‑group‑[a...z]**). It should list the internal IP addresses of the four application servers (two instances each of **nginx‑plus‑app‑1‑instance‑group‑[a...z]** and **nginx‑plus‑app‑2‑instance‑group‑[a...z]**).
Repeat these instructions for both LB instances.
@@ -860,10 +860,10 @@ Update the NGINX Plus configuration on the two LB instances
- - Navigate to the Compute Engine > VM instances tab.
- - In the table, find the row for the instance. Click the triangle icon in the Connect column at the far right. Then, select a method (for example, Open in browser window).
+ - Navigate to the **Compute Engine > VM instances** tab.
+ - In the table, find the row for the instance. Click the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**).
-2. In the SSH terminal, use your preferred text editor to edit gce-all-active-lb.conf. Change the `server` directives in the `upstream` block to reference the internal IPs of the two nginx-plus-app-1-instance-group-[a...z] instances and the two nginx-plus-app-2-instance-group-[a...z] instances. You can check the addresses in the Internal IP column of the table on the Compute Engine > VM instances summary page. For example:
+2. In the SSH terminal, use your preferred text editor to edit **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IPs of the two **nginx‑plus‑app‑1‑instance‑group‑[a...z]** instances and the two **nginx‑plus‑app‑2‑instance‑group‑[a...z]** instances. You can check the addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. For example:
```nginx
upstream upstream_app_pool {
@@ -887,9 +887,9 @@ Update the NGINX Plus configuration on the two LB instances
-4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance (nginx-plus-lb-instance-group-[a...z]). You can see the instance's external IP address on the Compute Engine > VM instances summary page in the External IP column of the table.
+4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance (**nginx‑plus‑lb‑instance‑group‑[a...z]**). You can see the instance's external IP address on the **Compute Engine > VM instances** summary page in the **External IP** column of the table.
- https://_LB-external-IP-address_:8080/status.html
+ **https://_LB‑external‑IP‑address_:8080/status.html**
5. Verify that NGINX Plus is load balancing traffic among the four application instance groups. Do this by running this command on a separate client machine:
@@ -904,54 +904,54 @@ Update the NGINX Plus configuration on the two LB instances
-1. Verify that the NGINX Plus All-Active-LB project is still selected in the Google Cloud Platform header bar.
+1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar.
-2. Navigate to the Networking > External IP addresses tab.
+2. Navigate to the **Networking > External IP addresses** tab.
3. Click the Reserve static address button.
-4. On the Reserve a static address page that opens, modify or verify the fields as indicated:
+4. On the **Reserve a static address** page that opens, modify or verify the fields as indicated:
- - **Name** – nginx-plus-network-lb-static-ip
- - **Description** – Static IP address for Network LB frontend to NGINX Plus LB instances
- - **Type** – Click the Regional radio button (the default)
- - **Region** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using us-west1.
- - Attached to – None (the default)
+ - **Name** – **nginx‑plus‑network‑lb‑static‑ip**
+ - **Description** – **Static IP address for Network LB frontend to NGINX Plus LB instances**
+ - **Type** – Click the **Regional** radio button (the default)
+ - **Region** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1**.
+ - **Attached to** – **None** (the default)
5. Click the Reserve button.
-6. Navigate to the Networking > Load balancing tab.
+6. Navigate to the **Networking > Load balancing** tab.
7. Click the Create load balancer button.
-8. On the Load balancing page that opens, click Start configuration in the TCP Load Balancing box.
+8. On the **Load balancing** page that opens, click **Start configuration** in the **TCP Load Balancing** box.
-9. On the page that opens, click the From Internet to my VMs and No (TCP) radio buttons (the defaults).
+9. On the page that opens, click the **From Internet to my VMs** and **No (TCP)** radio buttons (the defaults).
-10. Click the Continue button. The New TCP load balancer page opens.
+10. Click the **Continue** button. The **New TCP load balancer** page opens.
-11. In the **Name** field, type nginx-plus-network-lb-frontend.
+11. In the **Name** field, type **nginx‑plus‑network‑lb‑frontend**.
-12. Click Backend configuration in the left column to open the Backend configuration interface in the right column. Fill in the fields as indicated:
+12. Click **Backend configuration** in the left column to open the **Backend configuration** interface in the right column. Fill in the fields as indicated:
- - **Region** – The GCP region you specified in Step 4. We're using us-west1.
- - **Backends** – With Select existing instance groups selected, select nginx-plus-lb-instance-group from the drop-down menu
- - Backup pool – None (the default)
- - Failover ratio – 10 (the default)
- - Health check – nginx-plus-http-health-check
- - Session affinity – Client IP
+ - **Region** – The GCP region you specified in Step 4. We're using **us‑west1**.
+ - **Backends** – With **Select existing instance groups** selected, select **nginx‑plus‑lb‑instance‑group** from the drop-down menu
+ - **Backup pool** – **None** (the default)
+ - **Failover ratio** – **10** (the default)
+ - **Health check** – **nginx‑plus‑http‑health‑check**
+ - **Session affinity** – **Client IP**
-13. Select Frontend configuration in the left column. This opens up the Frontend configuration interface on the right column.
+13. Select **Frontend configuration** in the left column. This opens the **Frontend configuration** interface in the right column.
-14. Create three Protocol-IP-Port tuples, each with:
+14. Create three **Protocol‑IP‑Port** tuples, each with:
- - **Protocol** – TCP
+ - **Protocol** – **TCP**
- **IP** – The address you reserved in Step 5, selected from the drop-down menu (if there is more than one address, select the one labeled in parentheses with the name you specified in Step 5)
- - **Port** – 80, 8080, and 443 in the three tuples respectively
+ - **Port** – **80**, **8080**, and **443** in the three tuples respectively
15. Click the Create button.
@@ -978,7 +978,7 @@ If load balancing is working properly, the unique **Server** field from the inde
To verify that high availability is working:
-1. Connect to one of the instances in the nginx-plus-lb-instance-group over SSH and run this command to force it offline:
+1. Connect to one of the instances in the **nginx‑plus‑lb‑instance‑group** over SSH and run this command to force it offline:
```shell
iptables -A INPUT -p tcp --destination-port 80 -j DROP
diff --git a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md
index a244a1475..df0ff8cb6 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md
@@ -171,7 +171,7 @@ http {
}
```
-You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf, this is an appropriate `include` directive:
+You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive:
```nginx
http {
@@ -294,7 +294,7 @@ To configure load balancing, first create a named _upstream group_, which lists
2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks:
- - The first one matches HTTPS requests in which the path starts with /tomcat-app/, and proxies them to the **tomcat** upstream group we created in the previous step.
+ - The first one matches HTTPS requests in which the path starts with **/tomcat‑app/**, and proxies them to the **tomcat** upstream group we created in the previous step.
- The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**.
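A minimal sketch of the two `location` blocks described above, for orientation only (the guide's complete downloadable configuration is the authoritative version):

```nginx
# In the 'server' block for HTTPS traffic
location /tomcat-app/ {
    proxy_pass http://tomcat;  # upstream group created in the previous step
}

location = / {
    return 302 /tomcat-app/;   # temporary redirect for requests to the site root
}
```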
@@ -409,7 +409,7 @@ To enable basic caching in NGINX Open Source<
Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path)
-2. In the `location` block that matches HTTPS requests in which the path starts with /tomcat-app/, include the `proxy_cache` directive to reference the cache created in the previous step.
+2. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step.
```nginx
# In the 'server' block for HTTPS traffic
@@ -440,11 +440,11 @@ HTTP/2 is fully supported in both NGINX Open
- In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default. (Support for SPDY is deprecated as of that release). Specifically:
- In NGINX Plus R11 and later, the nginx-plus package continues to support HTTP/2 by default, but the nginx-plus-extras package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/).
+ In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/).
- For NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+ For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
- If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+ If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
@@ -636,7 +636,7 @@ Health checks are out-of-band HTTP req
Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application.
-1. In the `location` block that matches HTTPS requests in which the path starts with /tomcat-app/ (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive.
+1. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive.
Here we configure NGINX Plus to send an out-of-band request for the top‑level URI **/** (slash) to each of the servers in the **tomcat** upstream group every 2 seconds, which is more aggressive than the default 5‑second interval. If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes five subsequent health checks in a row. We include the `match` parameter to define a nondefault set of health‑check tests.
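The health check described above can be sketched as follows; the `tomcat_check` match name and the single test inside the `match` block are assumptions for illustration:

```nginx
# In the 'location' block for /tomcat-app/
health_check interval=2 passes=5 uri=/ match=tomcat_check;

# In the 'http' context - the nondefault health-check tests
match tomcat_check {
    status 200;
}
```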
@@ -719,7 +719,7 @@ The quickest way to configure the module and the built‑in dashboard is to down
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
- If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to status-http.conf means it is captured by the `include` directive for `*-http.conf`.
+ If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`.
3. Comments in **status.conf** explain which directives you must customize for your deployment. In particular, the default settings in the sample configuration file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods:
diff --git a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md
index 3a9158c19..912302769 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md
@@ -371,7 +371,7 @@ To set up the conventional configuration scheme, perform these steps:
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
- You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf and all TCP configuration files _function_-stream.conf (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are:
+ You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf** and all TCP configuration files **_function_‑stream.conf** (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are:
```nginx
http {
@@ -383,9 +383,9 @@ To set up the conventional configuration scheme, perform these steps:
}
```
-2. In the **/etc/nginx/conf.d** directory, create a new file called exchange-http.conf for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them.
+2. In the **/etc/nginx/conf.d** directory, create a new file called **exchange‑http.conf** for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them.
-3. Also in the **/etc/nginx/conf.d** directory, create a new file called exchange-stream.conf for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the dowloaded configuration file. Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`).
+3. Also in the **/etc/nginx/conf.d** directory, create a new file called **exchange‑stream.conf** for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the downloaded configuration file. Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`).
For reference purposes, the text of the full configuration files is included in this document:
@@ -468,7 +468,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa
}
```
-3. This `server` block defines the virtual server that proxies traffic on port 993 to the exchange-imaps upstream group configured in Step 1.
+3. This `server` block defines the virtual server that proxies traffic on port 993 to the **exchange‑imaps** upstream group configured in Step 1.
```nginx
# In the 'stream' block
@@ -481,7 +481,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa
Directive documentation: [listen](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen), [proxy_pass](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_pass), [server](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#server), [status_zone](https://nginx.org/en/docs/http/ngx_http_status_module.html#status_zone)
-4. This `server` block defines the virtual server that proxies traffic on port 25 to the exchange-smtp upstream group configured in Step 2. If you wish to change the port number from 25 (for example, to 587), change the `listen` directive.
+4. This `server` block defines the virtual server that proxies traffic on port 25 to the **exchange‑smtp** upstream group configured in Step 2. If you wish to change the port number from 25 (for example, to 587), change the `listen` directive.
```nginx
# In the 'stream' block
@@ -615,11 +615,11 @@ HTTP/2 is fully supported in NGINX Plus R7NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY:
-- In NGINX Plus R11 and later, the nginx-plus package continues to support HTTP/2 by default, but the nginx-plus-extras package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/).
+- In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/).
-- For NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+- For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
-If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
@@ -926,7 +926,7 @@ Exchange CASs interact with various applications used by clients on different ty
}
```
- - Mobile clients like iPhone and Android access the ActiveSync location (/Microsoft-Server-ActiveSync).
+ - Mobile clients like iPhone and Android access the ActiveSync location (**/Microsoft‑Server‑ActiveSync**).
```nginx
# In the 'server' block for HTTPS traffic
@@ -1092,7 +1092,7 @@ The quickest way to configure the module and the built‑in dashboard is to down
include conf.d/status.conf;
```
- If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to status-http.conf means it is captured by the `include` directive for `*-http.conf`.
+ If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`.
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
diff --git a/content/nginx/deployment-guides/load-balance-third-party/node-js.md b/content/nginx/deployment-guides/load-balance-third-party/node-js.md
index af2a2c964..c37183f64 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/node-js.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/node-js.md
@@ -175,7 +175,7 @@ http {
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
-You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf, this is an appropriate `include` directive:
+You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive:
```nginx
http {
@@ -433,13 +433,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and
- If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html).
-- If using NGINX Plus, in R11 and later the nginx-plus package supports HTTP/2 by default, and the nginx-plus-extras package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
+- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
- In NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+ In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY.
- If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+ If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the [http2](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2) directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
@@ -459,7 +459,7 @@ To verify that HTTP/2 translation is working, you can use the "HTTP/2 and SPDY i
The full configuration for basic load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-basic.conf) from the NGINX website.
-We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of /etc/nginx/conf.d/nodejs-basic.conf.
+We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑basic.conf**.
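The `include` directive this recommendation refers to is a one-line addition; a sketch, assuming the conventional file location:

```nginx
# In the 'http' context of the main nginx.conf
http {
    include conf.d/nodejs-basic.conf;
}
```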
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
@@ -785,9 +785,9 @@ Parameter documentation: [service](https://nginx.org/en/docs/http/ngx_http_upstr
The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) from the NGINX website.
-We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of /etc/nginx/conf.d/nodejs-enhanced.conf.
+We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑enhanced.conf**.
-**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) nodejs-enhanced.conf file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.)
+**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) **nodejs‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.)
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md
index 88457fdfe..6e456bacd 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md
@@ -322,7 +322,7 @@ http {
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
-You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf, this is an appropriate include directive:
+You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive:
```nginx
http {
@@ -505,13 +505,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and
- If using open source NGINX, note that in version 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, you need to compile NGINX from the sources in the [NGINX 1.8 branch](https://nginx.org/en/download.html).
-- If using NGINX Plus, in R11 and later the nginx-plus package supports HTTP/2 by default, and the nginx-plus-extras package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
+- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
- In NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+ In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY.
- If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+ If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md
index 1c0188761..de3b44837 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md
@@ -173,7 +173,7 @@ http {
}
```
-You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf, this is an appropriate include directive:
+You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive:
```nginx
http {
@@ -299,7 +299,7 @@ By putting NGINX Open Source or NGINX Plus in front of WebLogic Server servers
2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks:
- - The first one matches HTTPS requests in which the path starts with /weblogic-app/, and proxies them to the **weblogic** upstream group we created in the previous step.
+ - The first one matches HTTPS requests in which the path starts with **/weblogic‑app/**, and proxies them to the **weblogic** upstream group we created in the previous step.
- The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**.
@@ -414,7 +414,7 @@ To create a very simple caching configuration:
Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path)
-2. In the `location` block that matches HTTPS requests in which the path starts with /weblogic-app/, include the `proxy_cache` directive to reference the cache created in the previous step.
+2. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step.
```nginx
# In the 'server' block for HTTPS traffic
@@ -443,13 +443,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and
- If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html).
-- If using NGINX Plus, in R11 and later the nginx-plus package supports HTTP/2 by default, and the nginx-plus-extras package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
+- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX.
- In NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+ In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY.
- If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+ If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
@@ -601,7 +601,7 @@ Health checks are out‑of‑band HTTP requests sent to a server at fixed interv
Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application.
-1. In the `location` block that matches HTTPS requests in which the path starts with /weblogic-app/ (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive.
+1. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive.
Here we configure NGINX Plus to send an out‑of‑band request for the URI **/benefits** to each of the servers in the **weblogic** upstream group every 5 seconds (the default frequency). If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes a subsequent health check. We include the `match` parameter to the `health_check` directive to define a nondefault set of health‑check tests.
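Under those assumptions, the health check can be sketched as follows; the `weblogic_health` match-block name and its tests are illustrative, not values from this guide.

```nginx
# Sketch only: the match-block name and its tests are assumptions.
location /weblogic-app/ {
    proxy_pass http://weblogic;
    # Probe /benefits every 5 seconds (the default) with nondefault tests:
    health_check uri=/benefits match=weblogic_health;
}

# In the http context:
match weblogic_health {
    status 200;                       # server must return 200 OK
    header Content-Type ~ text/html;  # and an HTML response
}
```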
@@ -814,7 +814,7 @@ To enable dynamic reconfiguration of your upstream group of WebLogic Server app
The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/weblogic-enhanced.conf) from the NGINX website.
-We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of /etc/nginx/conf.d/weblogic-enhanced.conf.
+We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/weblogic‑enhanced.conf**.
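The recommended `include` approach is a one‑line addition to the `http` context of the main **nginx.conf** file, sketched here:

```nginx
http {
    # Read in the enhanced load-balancing configuration:
    include /etc/nginx/conf.d/weblogic-enhanced.conf;
}
```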
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
diff --git a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md
index 5f319051b..2e92a9243 100644
--- a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md
+++ b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md
@@ -169,7 +169,7 @@ http {
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include)
-You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files _function_-http.conf, this is an appropriate include directive:
+You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive:
```nginx
http {
@@ -429,9 +429,9 @@ HTTP/2 is fully supported in both NGINX Open
In [NGINX Plus R11]({{< ref "/nginx/releases.md#r11" >}}) and later, the **nginx-plus** package continues to support HTTP/2 by default, but the **nginx-plus-extras** package available in previous releases is deprecated and replaced by [dynamic modules]({{< ref "/nginx/admin-guide/dynamic-modules/dynamic-modules.md" >}}).
- For NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.
+ For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default.
- If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
+ If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package.
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this:
@@ -793,7 +793,7 @@ The full configuration for enhanced load balancing appears here for your conveni
We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/jboss-enhanced.conf**.
-**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) jboss-enhanced.conf file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.)
+**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) **jboss‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.)
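As a rough sketch of what such an `api` configuration can look like (the listen port and allowed network are assumptions for illustration, not values from this guide):

```nginx
# Illustrative only; adjust the port and access controls for your network.
server {
    listen 8080;

    location /api {
        api write=on;      # write access enables dynamic reconfiguration
        allow 10.0.0.0/8;  # admin network only
        deny  all;
    }
}
```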
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
diff --git a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md
index 25a8a9ba8..792d14bbb 100644
--- a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md
+++ b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md
@@ -71,7 +71,7 @@ These instructions assume you have the following:
- An Azure [account](https://azure.microsoft.com/en-us/free/).
- An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription).
-- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called NGINX-Plus-HA.
+- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called **NGINX‑Plus‑HA**.
- An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview).
- Six Azure VMs, four running NGINX Open Source and two running NGINX Plus (in each region where you deploy the solution). You need a paid or trial subscription for each NGINX Plus instance.
@@ -100,17 +100,17 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs
4. On the **Create load balancer** page that opens (to the **Basics** tab), enter the following values:
- - **Subscription** – Name of your subscription (NGINX-Plus-HA-subscription in this guide)
- - **Resource group** – Name of your resource group (NGINX-Plus-HA in this guide)
- - **Name** – Name of your Standard Load Balancer (lb in this guide)
- - **Region** – Name selected from the drop‑down menu ((US) West US 2 in this guide)
- - **Type** – Public
- - **SKU** – Standard
- - **Public IP address** – Create new
- - **Public IP address name** – Name for the address (public\_ip\_lb in this guide)
- - **Public IP address SKU** – Standard
- - **Availability zone** – Zone‑redundant
- - **Add a public IPv6 address** – No
+ - **Subscription** – Name of your subscription (**NGINX‑Plus‑HA‑subscription** in this guide)
+ - **Resource group** – Name of your resource group (**NGINX‑Plus‑HA** in this guide)
+ - **Name** – Name of your Standard Load Balancer (**lb** in this guide)
+ - **Region** – Name selected from the drop‑down menu (**(US) West US 2** in this guide)
+ - **Type** – **Public**
+ - **SKU** – **Standard**
+ - **Public IP address** – **Create new**
+ - **Public IP address name** – Name for the address (**public\_ip\_lb** in this guide)
+ - **Public IP address SKU** – **Standard**
+ - **Availability zone** – **Zone‑redundant**
+ - **Add a public IPv6 address** – **No**
@@ -130,7 +130,7 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs
1. If you are not already on the **Load balancers** page, click **Load balancers** in the left navigation column.
-2. Click the name of the load balancer in the **Name** column of the table (lb in this guide).
+2. Click the name of the load balancer in the **Name** column of the table (**lb** in this guide).
@@ -139,61 +139,61 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs
-4. On the lb | Backend Pools page that opens, click **+ Add** in the upper left corner of the main pane.
+4. On the **lb | Backend Pools** page that opens, click **+ Add** in the upper left corner of the main pane.
-5. On the Add backend pool page that opens, enter the following values, then click the Add button:
+5. On the **Add backend pool** page that opens, enter the following values, then click the **Add** button:
- - **Name** – Name of the new backend pool (lb\_backend_pool in this guide)
- - **IP version** – IPv4
- - **Virtual machines** – ngx-plus-1 and ngx-plus-2
+ - **Name** – Name of the new backend pool (**lb\_backend_pool** in this guide)
+ - **IP version** – **IPv4**
+ - **Virtual machines** – **ngx‑plus‑1** and **ngx‑plus‑2**
After a few moments the virtual machines appear in the new backend pool.
-6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the lb | Health probes page that opens.
+6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Health probes** page that opens.
-7. On the Add health probe page that opens, enter the following values, then click the OK button.
+7. On the **Add health probe** page that opens, enter the following values, then click the **OK** button.
- - **Name** – Name of the new backend pool (lb\_probe in this guide)
- - **Protocol** – HTTP or HTTPS
- - **Port** – 80 or 443
- - **Path** – /
- - **Interval** – 5
- - **Unhealthy threshold** – 2
+ - **Name** – Name of the new backend pool (**lb\_probe** in this guide)
+ - **Protocol** – **HTTP** or **HTTPS**
+ - **Port** – **80** or **443**
+ - **Path** – **/**
+ - **Interval** – **5**
+ - **Unhealthy threshold** – **2**
- After a few moments the new probe appears in the table on the lb | Health probes page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running.
+ After a few moments the new probe appears in the table on the **lb | Health probes** page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running.
-8. Click Load balancing rules in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the lb | Load balancing rules page that opens.
+8. Click **Load balancing rules** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Load balancing rules** page that opens.
-9. On the Add load balancing rule page that opens, enter or select the following values, then click the OK button.
+9. On the **Add load balancing rule** page that opens, enter or select the following values, then click the **OK** button.
- - **Name** – Name of the rule (lb\_rule in this guide)
- - **IP version** – IPv4
- - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the Public IP address field on the **Overview** tag of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is 51.143.107.x (LoadBalancerFrontEnd)
- - **Protocol** – TCP
- - **Port** – 80
- - **Backend port** – 80
- - **Backend pool** – lb_backend
- - **Health probe** – lb_probe (HTTP:80)
- - **Session persistence** – None
- - **Idle timeout (minutes)** – 4
- - **TCP reset** – Disabled
- - **Floating IP (direct server return)** – Disabled
- - **Create implicit outbound rules** – Yes
+ - **Name** – Name of the rule (**lb\_rule** in this guide)
+ - **IP version** – **IPv4**
+ - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the **Public IP address** field on the **Overview** tab of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is **51.143.107.x (LoadBalancerFrontEnd)**
+ - **Protocol** – **TCP**
+ - **Port** – **80**
+ - **Backend port** – **80**
+ - **Backend pool** – **lb_backend**
+ - **Health probe** – **lb_probe (HTTP:80)**
+ - **Session persistence** – **None**
+ - **Idle timeout (minutes)** – **4**
+ - **TCP reset** – **Disabled**
+ - **Floating IP (direct server return)** – **Disabled**
+ - **Create implicit outbound rules** – **Yes**
- After a few moments the new rule appears in the table on the lb | Load balancing rules page.
+ After a few moments the new rule appears in the table on the **lb | Load balancing rules** page.
### Verifying Correct Operation
-1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the Public IP address field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_).
+1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the **Public IP address** field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_).
-2. The default Welcome to nginx! page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances.
+2. The default **Welcome to nginx!** page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances.
@@ -210,42 +210,42 @@ Once you’ve tested that the Standard Load Balancer has been correctly deployed
In this case, you need to set up Azure Traffic Manager for DNS‑based global server load balancing (GSLB) among the regions. This involves creating a DNS name for the Standard Load Balancer and registering it as an endpoint in Traffic Manager.
-1. Navigate to the Public IP addresses page. (One way is to enter Public IP addresses in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.)
+1. Navigate to the **Public IP addresses** page. (One way is to enter **Public IP addresses** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.)
-2. Click the name of the Standard Load Balancer's public IP address in the **Name** column of the table (here it is public\_ip_lb).
+2. Click the name of the Standard Load Balancer's public IP address in the **Name** column of the table (here it is **public\_ip_lb**).
3. On the **public\_ip_lb** page that opens, click **Configuration** in the left navigation column.
-4. Enter the DNS name for the Standard Load Balancer in the DNS name label field. In this guide, we're accepting the default, public-ip-dns.
+4. Enter the DNS name for the Standard Load Balancer in the **DNS name label** field. In this guide, we're accepting the default, **public‑ip‑dns**.
-5. Navigate to the Traffic Manager profiles tab. (One way is to enter Traffic Manager profiles in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.)
+5. Navigate to the **Traffic Manager profiles** tab. (One way is to enter **Traffic Manager profiles** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.)
6. Click **+ Add** in the upper left corner of the page.
-7. On the Create Traffic Manager profile page that opens, enter or select the following values and click the Create button.
+7. On the **Create Traffic Manager profile** page that opens, enter or select the following values and click the **Create** button.
- - **Name** – Name of the profile (ngx in this guide)
- - **Routing method** – Performance
- - **Subscription** – NGINX-Plus-HA-subscription in this guide
- - **Resource group** – NGINX-Plus-HA in this guide
+ - **Name** – Name of the profile (**ngx** in this guide)
+ - **Routing method** – **Performance**
+ - **Subscription** – **NGINX‑Plus‑HA‑subscription** in this guide
+ - **Resource group** – **NGINX‑Plus‑HA** in this guide
_Azure-create-lb-create-Traffic-Manager-profile_
-8. It takes a few moments to create the profile. When it appears in the table on the Traffic Manager profiles page, click its name in the **Name** column.
+8. It takes a few moments to create the profile. When it appears in the table on the **Traffic Manager profiles** page, click its name in the **Name** column.
9. On the **ngx** page that opens, click **Endpoints** in the left navigation column, then **+ Add** in the main part of the page.
10. On the **Add endpoint** window that opens, enter or select the following values and click the **Add** button.
- - **Type** – Azure endpoint
- - **Name** – Endpoint name (ep-lb-west-us in this guide)
- - **Target resource type** – Public IP address
- - **Public IP address** – Name of the Standard Load Balancer's public IP address (public\_ip_lb (51.143.107.x) in this guide)
+ - **Type** – **Azure endpoint**
+ - **Name** – Endpoint name (**ep‑lb‑west‑us** in this guide)
+ - **Target resource type** – **Public IP address**
+ - **Public IP address** – Name of the Standard Load Balancer's public IP address (**public\_ip_lb (51.143.107.x)** in this guide)
- **Custom Header settings** – None in this guide
@@ -276,15 +276,15 @@ Assign the following names to the VMs, and then install the indicated NGINX soft
- Four NGINX Open Source VMs:
- **App 1**:
- - ngx-oss-app1-1
- - ngx-oss-app1-2
+ - **ngx-oss-app1-1**
+ - **ngx-oss-app1-2**
- **App 2**:
- - ngx-oss-app2-1
- - ngx-oss-app2-2
+ - **ngx-oss-app2-1**
+ - **ngx-oss-app2-2**
- Two NGINX Plus VMs:
- - ngx-plus-1
- - ngx-plus-2
+ - **ngx-plus-1**
+ - **ngx-plus-2**
**Note:** The two NGINX Plus VMs must have a public IP address with same SKU type as the Standard Load Balancer you are creating (in this guide, **Standard**). Instructions are included in our deployment guide, [Creating Microsoft Azure Virtual Machines for NGINX Open Source and NGINX Plus]({{< ref "virtual-machines-for-nginx.md" >}}).
@@ -300,11 +300,11 @@ For the purposes of this guide, you configure the NGINX Open Source VMs as web s
Complete the instructions on all four web servers:
- Running **App 1**:
- - ngx-oss-app1-1
- - ngx-oss-app1-2
+ - **ngx-oss-app1-1**
+ - **ngx-oss-app1-2**
- Running **App 2**:
- - ngx-oss-app2-1
- - ngx-oss-app2-2
+ - **ngx-oss-app2-1**
+ - **ngx-oss-app2-2**
### Configuring NGINX Plus on the Load Balancers
@@ -313,7 +313,7 @@ For the purposes of this guide, you configure the NGINX Plus VMs as load balanc
Step-by-step instructions are provided in our deployment guide, _Setting Up an NGINX Demo Environment_.
-Complete the instructions on both ngx-plus-1 and ngx-plus-2.
+Complete the instructions on both **ngx-plus-1** and **ngx-plus-2**.
### Revision History
diff --git a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md
index d89c7c69e..487983211 100644
--- a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md
+++ b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md
@@ -23,7 +23,7 @@ These instructions assume you have:
- An Azure [account](https://azure.microsoft.com/en-us/free/).
- An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription).
-- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called NGINX-Plus-HA.
+- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called **NGINX‑Plus‑HA**.
- An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview).
- If using the instructions in [Automating Installation with Ansible](#automate-ansible), basic Linux system administration skills, including installation of Linux software from vendor‑supplied packages, and file creation and editing.
@@ -48,25 +48,25 @@ In addition, to install NGINX software by following the linked instructions, you
4. In the **Create a virtual machine** window that opens, enter the requested information on the **Basics** tab. In this guide, we're using the following values:
- - **Subscription** – NGINX-Plus-HA-subscription
- - **Resource group** – NGINX-Plus-HA
- - **Virtual machine name** – ngx-plus-1
+ - **Subscription** – **NGINX‑Plus‑HA‑subscription**
+ - **Resource group** – **NGINX‑Plus‑HA**
+ - **Virtual machine name** – **ngx‑plus‑1**
- The value ngx-plus-1 is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names.
+ The value **ngx‑plus‑1** is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names.
- - **Region** – (US) West US 2
- - **Availability options** – No infrastructure redundancy required
+ - **Region** – **(US) West US 2**
+ - **Availability options** – **No infrastructure redundancy required**
This option is sufficient for a demo like the one in this guide. For production deployments, you might want to select a more robust option; we recommend deploying a copy of each VM in a different Availability Zone. For more information, see the [Azure documentation](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).
- - **Image** – Ubuntu Server 18.04 LTS
- - **Azure Spot instance** – No
- - **Size** – B1s (click Select size to access the Select a VM size window, click the **B1s** row, and click the Select button to return to the **Basics** tab)
- - **Authentication type** – SSH public key
- - **Username** – nginx_azure
- - **SSH public key source** – Generate new key pair (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key)
- - **Key pair name** – nginx_key
- - **Public inbound ports** – Allow selected ports
- - **Select inbound ports** – Select from the drop-down menu: SSH (22) and HTTP (80), plus HTTPS (443) if you plan to configure NGINX and NGINX Plus for SSL/TLS
+ - **Image** – **Ubuntu Server 18.04 LTS**
+ - **Azure Spot instance** – **No**
+ - **Size** – **B1s** (click **Select size** to access the **Select a VM size** window, click the **B1s** row, and click the **Select** button to return to the **Basics** tab)
+ - **Authentication type** – **SSH public key**
+ - **Username** – **nginx_azure**
+ - **SSH public key source** – **Generate new key pair** (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key)
+ - **Key pair name** – **nginx_key**
+ - **Public inbound ports** – **Allow selected ports**
+ - **Select inbound ports** – Select from the drop-down menu: **SSH (22)** and **HTTP (80)**, plus **HTTPS (443)** if you plan to configure NGINX and NGINX Plus for SSL/TLS
@@ -75,11 +75,11 @@ In addition, to install NGINX software by following the linked instructions, you
For simplicity, we recommend allocating **Standard** public IP addresses for all six VMs used in the deployment. At the time of initial publication of this guide, the hourly cost for six such VMs was only $0.008 more than for six VMs with Basic addresses; for current pricing, see the [Microsoft documentation](https://azure.microsoft.com/en-us/pricing/details/ip-addresses/).
- To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click Create new below the **Public IP** field. In the Create public IP address column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, ngx-plus-1-ip. Click the OK button.
+ To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click **Create new** below the **Public IP** field. In the **Create public IP address** column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, **ngx‑plus‑1‑ip**. Click the **OK** button.
-6. At this point, you have the option of selecting nondefault values on the **Disks**, **Networking**, **Management**, **Advanced**, and **Tags** tabs. For a demo like the one in this guide, for example, selecting Standard HDD for OS disk type on the **Disks** tab saves money compared to the default, Premium SSD. You might also want to create or apply tags to this VM, on the **Tags** tab.
+6. At this point, you have the option of selecting nondefault values on the **Disks**, **Networking**, **Management**, **Advanced**, and **Tags** tabs. For a demo like the one in this guide, for example, selecting **Standard HDD** for OS disk type on the **Disks** tab saves money compared to the default, **Premium SSD**. You might also want to create or apply tags to this VM, on the **Tags** tab.
When you have completed your changes on all tabs, click the Review + create button at the bottom of the **Create a virtual machine** page.
@@ -87,7 +87,7 @@ In addition, to install NGINX software by following the linked instructions, you
To change any settings, open the appropriate tab. If the settings are correct, click the Create button.
- If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a Generate new key pair window pops up. Click the Download key and create private resource button.
+ If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a **Generate new key pair** window pops up. Click the **Download key and create private resource** button.
@@ -98,16 +98,16 @@ In addition, to install NGINX software by following the linked instructions, you
7. If you are following these instructions to create the six VMs used in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}), their names are as follows:
- - ngx-plus-1
- - ngx-plus-2
- - ngx-oss-app1-1
- - ngx-oss-app1-2
- - ngx-oss-app2-1
- - ngx-oss-app2-2
+ - **ngx-plus-1**
+ - **ngx-plus-2**
+ - **ngx-oss-app1-1**
+ - **ngx-oss-app1-2**
+ - **ngx-oss-app2-1**
+ - **ngx-oss-app2-2**
- For ngx-plus-2, it is probably simplest to repeat Steps 2 through 6 above (or purchase a second prebuilt VM in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=NGINX%20Plus)).
+ For **ngx-plus-2**, it is probably simplest to repeat Steps 2 through 6 above (or purchase a second prebuilt VM in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=NGINX%20Plus)).
- For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it nginx-oss), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image).
+ For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it **nginx‑oss**), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image).
## Connecting to a Virtual Machine
@@ -118,7 +118,7 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o
-2. On the page that opens (ngx-plus-1 in this guide), note the VM's public IP address (in the Public IP address field in the right column).
+2. On the page that opens (**ngx‑plus‑1** in this guide), note the VM's public IP address (in the **Public IP address** field in the right column).
@@ -130,8 +130,8 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o
where
- - `` is the name of the file containing the private key paired with the public key you entered in the SSH public key field in Step 4 of _Creating a Microsoft Azure Virtual Machine_.
- - `` is the name you entered in the **Username** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_ (in this guide it is nginx_azure).
+ - `<private-key-file>` is the name of the file containing the private key paired with the public key you entered in the **SSH public key** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_.
+ - `<username>` is the name you entered in the **Username** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_ (in this guide it is **nginx_azure**).
- `<public-IP-address>` is the address you looked up in the previous step.
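Putting those pieces together, the connection command can be sketched as follows; the key‑file name and IP address here are hypothetical placeholders, not values from your deployment.

```shell
# Hypothetical values -- substitute your own before connecting.
KEY_FILE=./nginx_key.pem   # private key file downloaded when creating the VM
USERNAME=nginx_azure       # username entered on the Basics tab
PUBLIC_IP=51.143.107.1     # public IP address looked up in the previous step

# Print the command for review; remove "echo" to actually connect.
echo ssh -i "$KEY_FILE" "$USERNAME@$PUBLIC_IP"
```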
@@ -169,7 +169,7 @@ NGINX publishes a unified Ansible role for NGINX Open Source and NGINX Plus on
ansible-galaxy install nginxinc.nginx
```
-4. (NGINX Plus only) Copy the nginx-repo.key and nginx-repo.crt files provided by NGINX to ~/.ssh/ngx-certs/.
+4. (NGINX Plus only) Copy the **nginx‑repo.key** and **nginx‑repo.crt** files provided by NGINX to **~/.ssh/ngx‑certs/**.
5. Create a file called **playbook.yml** with the following contents:
@@ -196,7 +196,7 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c
2. Navigate to the **Virtual machines** page, if you are not already there.
-2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it ngx-oss). Remember that NGINX Open Source needs to be installed on it already.
+2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it **ngx‑oss**). Remember that NGINX Open Source needs to be installed on it already.
3. On the page that opens, click the **Capture** icon in the top navigation bar.
@@ -207,10 +207,10 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c
Then select the following values:
- **Name** – Keep the current value.
- - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is NGINX-Plus-HA.
+ - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is **NGINX‑Plus‑HA**.
- **Automatically delete this virtual machine after creating the image** – We recommend checking the box, since you can't do anything more with the image anyway.
- - **Zone resiliency** – On.
- - **Type the virtual machine name** – Name of the source VM (ngx-oss in this guide).
+ - **Zone resiliency** – **On**.
+ - **Type the virtual machine name** – Name of the source VM (**ngx‑oss** in this guide).
Click the Create button.
@@ -220,7 +220,7 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c
It takes a few moments for the image to be created. When it's ready, you can create VMs from it with NGINX Open Source already installed.
-1. Navigate to the **Images** page. (One method is to type images in the search box in the Microsoft Azure header bar and select that value in the **Services** section of the resulting drop‑down menu.)
+1. Navigate to the **Images** page. (One method is to type **images** in the search box in the Microsoft Azure header bar and select that value in the **Services** section of the resulting drop‑down menu.)
diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md
index 9d8adb652..404b54b18 100644
--- a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md
+++ b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md
@@ -327,7 +327,7 @@ NGINX Plus and Citrix ADC handle high availability (HA) in similar but slightly
Citrix ADC handles the monitoring and failover of the VIP in a proprietary way.
- For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called **nginx-ha-keepalived** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the nginx-ha-keepalived package.
+ For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called **nginx‑ha‑keepalived** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the **nginx‑ha‑keepalived** package.
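As a sketch of what the **nginx‑ha‑keepalived** package manages on each node, a minimal VRRP instance for an active‑passive pair might look like the following (the interface name and all addresses are illustrative placeholders, not values from this guide):

```nginx
# Sketch of a keepalived VRRP instance for an active-passive NGINX Plus pair.
# Interface, priorities, and IP addresses are placeholders.
vrrp_instance VI_1 {
    interface eth0
    priority 101               # give the passive peer a lower value, e.g. 100
    virtual_router_id 51
    advert_int 1               # VRRP advertisement interval, in seconds
    unicast_src_ip 192.168.10.10
    unicast_peer {
        192.168.10.11          # the other node of the pair
    }
    virtual_ipaddress {
        192.168.10.100         # the VIP that fails over between the nodes
    }
}
```

The node with the higher `priority` claims the VIP; when its VRRP advertisements stop, the peer takes the VIP over.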
Solutions for high availability of NGINX Plus in cloud environments are also available, including these:
diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md
index b5f9621b4..d7b82fc44 100644
--- a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md
+++ b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md
@@ -99,7 +99,7 @@ In addition to these networking concepts, there are two other important technolo
BIG-IP LTM uses a built‑in HA mechanism to handle the failover.
- For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called **nginx-ha-keepalived** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the nginx-ha-keepalived package.
+ For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called **nginx‑ha‑keepalived** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the **nginx‑ha‑keepalived** package.
Solutions for high availability of NGINX Plus in cloud environments are also available, including these:
diff --git a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md
index d72f5f799..5dce8601b 100644
--- a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md
+++ b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md
@@ -27,9 +27,9 @@ To set up the highly available active/passive cluster, we’re using the [HA sol
## Modifying the NGINX Cookbook
-First we set up the Chef files for installing of the NGINX Plus HA package (nginx-ha-keepalived) and creating the `keepalived` configuration file, **keepalive.conf**.
+First we set up the Chef files for installing the NGINX Plus HA package (**nginx‑ha‑keepalived**) and creating the `keepalived` configuration file, **keepalived.conf**.
-1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the ~/chef-zero/playground/cookbooks/nginx/recipes directory).
+1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the **~/chef‑zero/playground/cookbooks/nginx/recipes** directory).
We are using the **eth1** interface on each NGINX host, which makes the code a bit more complicated than if we used **eth0**. In case you are using **eth0**, the relevant code appears near the top of the file, commented out.
@@ -37,7 +37,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (nginx-ha-keepalived)
- - It installs the nginx-ha-keepalived package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables.
+ - It installs the **nginx‑ha‑keepalived** package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables.
```nginx
if node['nginx']['enable_ha_mode'] == 'true'
@@ -102,7 +102,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (nginx-ha-keepalived)
-2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the ~/chef-zero/playground/cookbooks/nginx/templates/default directory.
+2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the **~/chef‑zero/playground/cookbooks/nginx/templates/default** directory.
We’re using a combination of variables and attributes to pass the necessary information to **keepalived.conf**. We’ll set the attributes in the next step. Here we set the two variables in the template file to the host IP addresses that were set with the `variables` directive in the **plus_package.rb** recipe (modified in the previous step):
@@ -186,7 +186,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (nginx-ha-keepalived)
-3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the ~/chef-zero/playground/roles directory.
+3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the **~/chef‑zero/playground/roles** directory.
Four attributes need to be set, and in the role we set the following three:
@@ -290,13 +290,13 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th
`
-2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, chef-test-1:
+2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, **chef‑test‑1**:
```nginx
root@chef-server:~/chef-zero/playground# knife node show chef-test-1 --format json > nodes/chef-test-1.json
```
-3. Edit chef-test-1.json to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment.
+3. Edit **chef‑test‑1.json** to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment.
```json
{
@@ -323,7 +323,7 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th
Updated Node chef-test-1!
```
-5. Log in on the chef-test-1 node and run the `chef-client` command to get everything configured:
+5. Log in on the **chef‑test‑1** node and run the `chef-client` command to get everything configured:
```text
username@chef-test-1:~$ sudo chef-client
@@ -616,7 +616,7 @@ Enter your password:
10.100.10.102 Chef Client finished, 18/50 resources updated in 10 seconds`
-If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that chef-test-2, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). This is because we haven’t yet updated the Chef configuration on chef-test-1 to make its `keepalived` aware of the secondary node.
+If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that **chef‑test‑2**, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). This is because we haven’t yet updated the Chef configuration on **chef‑test‑1** to make its `keepalived` aware of the secondary node.
username@chef-test-2:~$ ip addr show eth1
3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
@@ -630,7 +630,7 @@ If we look at **keepalived.conf** at this point, we see that there is a peer set
### Synchronizing the Nodes
-To make `keepalived` on chef-test-1 aware of chef-test-2 and its IP address, we rerun the `chef-client` command on chef-test-1:
+To make `keepalived` on **chef‑test‑1** aware of **chef‑test‑2** and its IP address, we rerun the `chef-client` command on **chef‑test‑1**:
```text
username@chef-test-1:~$ sudo chef-client
@@ -741,7 +741,7 @@ Chef Client finished, 2/47 resources updated in 05 seconds
```
-We see that chef-test-1 is still assigned the VIP:
+We see that **chef‑test‑1** is still assigned the VIP:
```nginx
username@chef-test-1:~$ ip addr show eth1
@@ -755,7 +755,7 @@ We see that chef-test-1 is still assigned the VIP:
-And chef-test-2, as the secondary node, is now assigned only its physical IP address:
+And **chef‑test‑2**, as the secondary node, is now assigned only its physical IP address:
```nginx
username@chef-test-2:~$ ip addr show eth1
diff --git a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md
index 9470ba00e..dc85877b0 100644
--- a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md
+++ b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md
@@ -21,7 +21,7 @@ Some commands require `root` privilege. If appropriate for your environment, pre
## Configuring NGINX Open Source for web serving
-The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the demo-index.html configuration file you create in Step 4 below.
+The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the **demo‑index.html** configuration file you create in Step 4 below.
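For orientation, a server block along these lines could serve such a page; the root path and server name below are illustrative placeholders, while **demo‑index.html** is the file the guide has you create:

```nginx
# Minimal sketch of a web-serving configuration for the demo page.
# Paths and server_name are placeholders, not the guide's exact values.
server {
    listen 80;
    server_name demo.example.com;
    root /usr/share/nginx/html;

    location / {
        index demo-index.html;   # the page created in Step 4
    }
}
```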
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md
index 5f6ff547c..870198ac6 100644
--- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md
+++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md
@@ -50,11 +50,11 @@ The instructions assume you have the following:
Create an AD FS application for NGINX Plus:
-1. Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select Add Application Group from the drop‑down menu.
+1. Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select **Add Application Group** from the drop‑down menu.
- The Add Application Group Wizard window opens. The left navigation column shows the steps you will complete to add an application group.
+ The **Add Application Group Wizard** window opens. The left navigation column shows the steps you will complete to add an application group.
-2. In the **Welcome** step, type the application group name in the **Name** field. Here we are using ADFSSSO. In the **Template** field, select **Server application** under Standalone applications. Click the Next > button.
+2. In the **Welcome** step, type the application group name in the **Name** field. Here we are using **ADFSSSO**. In the **Template** field, select **Server application** under **Standalone applications**. Click the **Next >** button.
@@ -63,7 +63,7 @@ Create an AD FS application for NGINX Plus:
1. Make a note of the value in the **Client Identifier** field. You will add it to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
- 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using https://my-nginx.example.com:443/\_codexch. Click the Add button.
+ 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx.example.com:443/\_codexch**. Click the **Add** button.
**Notes:**
@@ -75,7 +75,7 @@ Create an AD FS application for NGINX Plus:
-4. In the Configure Application Credentials step, click the Generate a shared secret checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the Copy to clipboard button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the Next > button.
+4. In the **Configure Application Credentials** step, click the **Generate a shared secret** checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the **Copy to clipboard** button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the **Next >** button.
@@ -87,7 +87,7 @@ Create an AD FS application for NGINX Plus:
Configure NGINX Plus as the OpenID Connect relying party:
-1. Create a clone of the [nginx-openid-connect](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
+1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
```shell
git clone https://github.com/nginxinc/nginx-openid-connect
@@ -150,7 +150,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u
## Troubleshooting
-See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the nginx-openid-connect repository on GitHub.
+See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub.
### Revision History
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md
index 21a17b064..6fa141df9 100644
--- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md
+++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md
@@ -59,7 +59,7 @@ Create a new application for NGINX Plus in the Cognito GUI:
-3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's nginx-plus-pool), then click the Review defaults button.
+3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's **nginx‑plus‑pool**), then click the **Review defaults** button.
@@ -70,11 +70,11 @@ Create a new application for NGINX Plus in the Cognito GUI:
5. On the **App clients** tab which opens, click Add an app client.
-6. On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, nginx-plus-app) in the App client name field. Make sure the Generate client secret box is checked, then click the Create app client button.
+6. On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, **nginx‑plus‑app**) in the **App client name** field. Make sure the **Generate client secret** box is checked, then click the **Create app client** button.
-7. On the confirmation page which opens, click Return to pool details to return to the **Review** tab. On that tab click the Create pool button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.)
+7. On the confirmation page which opens, click **Return to pool details** to return to the **Review** tab. On that tab click the **Create pool** button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.)
8. On the details page which opens to confirm the new user pool was successfully created, make note of the value in the **Pool Id** field; you will add it to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables).
@@ -82,36 +82,36 @@ Create a new application for NGINX Plus in the Cognito GUI:
-9. Click Users and groups in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html).
+9. Click **Users and groups** in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html).
-10. Click **App clients** in the left navigation bar. On the tab that opens, click the Show Details button in the box labeled with the app client name (in this guide, nginx-plus-app).
+10. Click **App clients** in the left navigation bar. On the tab that opens, click the **Show Details** button in the box labeled with the app client name (in this guide, **nginx‑plus‑app**).
-11. On the details page that opens, make note of the values in the App client id and App client secret fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables).
+11. On the details page that opens, make note of the values in the **App client id** and **App client secret** fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables).
-12. Click App client settings in the left navigation column. In the tab that opens, perform the following steps:
+12. Click **App client settings** in the left navigation column. In the tab that opens, perform the following steps:
- 1. In the Enabled Identity Providers section, click the Cognito User Pool checkbox (the **Select all** box gets checked automatically).
- 2. In the **Callback URL(s)** field of the Sign in and sign out URLs section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using https://my-nginx-plus.example.com:443/_codexch.
+ 1. In the **Enabled Identity Providers** section, click the **Cognito User Pool** checkbox (the **Select all** box gets checked automatically).
+ 2. In the **Callback URL(s)** field of the **Sign in and sign out URLs** section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx‑plus.example.com:443/_codexch**.
**Notes:**
- For production, we strongly recommend that you use SSL/TLS (port 443).
- The port number is mandatory even when you're using the default port for HTTP (80) or HTTPS (443).
- 3. In the **OAuth 2.0** section, click the Authorization code grant checkbox under Allowed OAuth Flows and the **email**, **openid**, and **profile** checkboxes under Allowed OAuth Scopes.
+ 3. In the **OAuth 2.0** section, click the **Authorization code grant** checkbox under **Allowed OAuth Flows** and the **email**, **openid**, and **profile** checkboxes under **Allowed OAuth Scopes**.
4. Click the Save changes button.
-13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under Amazon Cognito domain (in this guide, my-nginx-plus). Click the Save changes button.
+13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under **Amazon Cognito domain** (in this guide, **my‑nginx‑plus**). Click the **Save changes** button.
@@ -120,7 +120,7 @@ Create a new application for NGINX Plus in the Cognito GUI:
Configure NGINX Plus as the OpenID Connect relying party:
-1. Create a clone of the [nginx-openid-connect](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
+1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
```shell
git clone https://github.com/nginxinc/nginx-openid-connect
@@ -135,12 +135,12 @@ Configure NGINX Plus as the OpenID Connect relying party:
3. In your preferred text editor, open **/etc/nginx/conf.d/frontend.conf**. Change the second parameter of each of the following [set](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set) directives to the specified value.
- The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is https://my-nginx-plus.auth.us-east-2.amazoncognito.com.
+ The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is **https://my‑nginx‑plus.auth.us‑east‑2.amazoncognito.com**.
- `set $oidc_authz_endpoint` – `/oauth2/authorize`
- `set $oidc_token_endpoint` – `/oauth2/token`
- - `set $oidc_client` – Value in the App client id field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`)
- - `set $oidc_client_secret` – Value in the App client secret field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`)
+ - `set $oidc_client` – Value in the **App client id** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`)
+ - `set $oidc_client_secret` – Value in the **App client secret** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`)
- `set $oidc_hmac_key` – A unique, long, and secure phrase
4. Configure the JWK file. The file's URL is
@@ -154,7 +154,7 @@ Configure NGINX Plus as the OpenID Connect relying party:
In this guide, the URL is
- https://cognito-idp.us-east-2.amazonaws.com/us-east-2_mLoGHJpOs/.well-known/jwks.json.
+ **https://cognito‑idp.us‑east‑2.amazonaws.com/us‑east‑2_mLoGHJpOs/.well‑known/jwks.json**.
The method for configuring the JWK file depends on which version of NGINX Plus you are using:
@@ -187,7 +187,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u
## Troubleshooting
-See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the nginx-openid-connect repository on GitHub.
+See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub.
### Revision History
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md
index aca6e1cac..eec74c7f0 100644
--- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md
+++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md
@@ -60,15 +60,15 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI:
3. On the **Add Client** page that opens, enter or select these values, then click the Save button.
- - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using NGINX-Plus.
- - **Client Protocol** – openid-connect.
+ - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using **NGINX‑Plus**.
+ - **Client Protocol** – **openid‑connect**.
4. On the **NGINX Plus** page that opens, enter or select these values on the Settings tab:
- - **Access Type** – confidential
- - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is https://my-nginx.example.com:443/_codexch)
+ - **Access Type** – **confidential**
+ - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**)
**Notes:**
@@ -84,14 +84,14 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI:
6. Click the Roles tab, then click the **Add Role** button in the upper right corner of the page that opens.
-7. On the **Add Role** page that opens, type a value in the **Role Name** field (here it is nginx-keycloak-role) and click the Save button.
+7. On the **Add Role** page that opens, type a value in the **Role Name** field (here it is **nginx‑keycloak‑role**) and click the Save button.
8. In the left navigation column, click **Users**. On the **Users** page that opens, either click the name of an existing user, or click the **Add user** button in the upper right corner to create a new user. For complete instructions, see the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/index.html#user-management).
-9. On the management page for the user (here, user01), click the Role Mappings tab. On the page that opens, select NGINX-Plus on the **Client Roles** drop‑down menu. Click nginx-keycloak-role in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot.
+9. On the management page for the user (here, **user01**), click the **Role Mappings** tab. On the page that opens, select **NGINX‑Plus** on the **Client Roles** drop‑down menu. Click **nginx‑keycloak‑role** in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot.
@@ -101,7 +101,7 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI:
Configure NGINX Plus as the OpenID Connect relying party:
-1. Create a clone of the [nginx-openid-connect](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
+1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
```shell
git clone https://github.com/nginxinc/nginx-openid-connect
@@ -165,7 +165,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u
## Troubleshooting
-See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the nginx-openid-connect repository on GitHub.
+See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub.
### Revision History
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md
index 1048acced..9d2c00fdb 100644
--- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md
+++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md
@@ -48,15 +48,15 @@ Create a new application for NGINX Plus in the OneLogin GUI:
-3. On the **Find Applications** page that opens, type OpenID Connect in the search box. Click on the **OpenID Connect (OIDC)** row that appears.
+3. On the **Find Applications** page that opens, type **OpenID Connect** in the search box. Click on the **OpenID Connect (OIDC)** row that appears.
-4. On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to NGINX Plus and click the Save button.
+4. On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to **NGINX Plus** and click the **Save** button.
-5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is https://my-nginx.example.com:443/_codexch). Then click the Save button.
+5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**). Then click the **Save** button.
**Notes:**
@@ -66,12 +66,12 @@ Create a new application for NGINX Plus in the OneLogin GUI:
-6. When the save completes, click **SSO** in the left navigation bar. Click Show client secret below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
+6. When the save completes, click **SSO** in the left navigation bar. Click **Show client secret** below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
-7. Assign users to the application (in this guide, NGINX Plus) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. You can access the **Roles** page under Users in the title bar.
+7. Assign users to the application (in this guide, **NGINX Plus**) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. You can access the **Roles** page under **Users** in the title bar.
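The **Client ID** and **Client Secret** recorded in step 6 are the values you later plug into the OpenID Connect configuration on the NGINX Plus side. As a rough sketch, assuming the `map`-based layout used by the nginx-openid-connect repo's **openid_connect_configuration.conf** (the placeholder values are illustrative only):

```nginx
# Illustrative excerpt of openid_connect_configuration.conf --
# substitute the Client ID and Client Secret copied from OneLogin.
map $host $oidc_client {
    default "client-id-from-onelogin";
}

map $host $oidc_client_secret {
    default "client-secret-from-onelogin";
}
```

Keying the maps on `$host` lets a single NGINX Plus instance act as the OpenID Connect relying party for several virtual servers, each paired with its own IdP application.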
diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md
index 3999d9a0e..d4901c65a 100644
--- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md
+++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md
@@ -56,30 +56,30 @@ Create a new application for NGINX Plus:
1. Log in to your Ping Identity account. The administrative dashboard opens automatically. In this guide, we show the PingOne for Enterprise dashboard, and for brevity refer simply to “PingOne”.
-2. Click APPLICATIONS in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the + Add Application button.
+2. Click **APPLICATIONS** in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the **+ Add Application** button.
-3. The Add OIDC Application window pops up. Click the ADVANCED CONFIGURATION box, and then the Next button.
+3. The **Add OIDC Application** window pops up. Click the **ADVANCED CONFIGURATION** box, and then the **Next** button.
-4. In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using nginx-plus-application and NGINX Plus. Choose a value from the **CATEGORY** drop‑down menu; here we’re using Information Technology. You can also add an icon if you wish. Click the Next button.
+4. In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using **nginx-plus-application** and **NGINX Plus**. Choose a value from the **CATEGORY** drop‑down menu; here we’re using **Information Technology**. You can also add an icon if you wish. Click the **Next** button.
5. In section 2 (AUTHORIZATION SETTINGS), perform these steps:
- 1. Under **GRANTS**, click both Authorization Code and Implicit.
- 2. Under **CREDENTIALS**, click the + Add Secret button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
+ 1. Under **GRANTS**, click both **Authorization Code** and **Implicit**.
+ 2. Under **CREDENTIALS**, click the **+ Add Secret** button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
   3. Click the **Next** button.
6. In section 3 (SSO FLOW AND AUTHENTICATION SETTINGS):
- 1. In the START SSO URL field, type the URL where users access your application. Here we’re using https://example.com.
- 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using https://my-nginx-plus.example.com:443/\_codexch (the full value is not visible in the screenshot).
+ 1. In the **START SSO URL** field, type the URL where users access your application. Here we’re using **https://example.com**.
+ 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my-nginx-plus.example.com:443/\_codexch** (the full value is not visible in the screenshot).
**Notes:**
@@ -88,34 +88,34 @@ Create a new application for NGINX Plus:
-7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes to the required sub and idpid attributes, by clicking the + Add Attribute button. We’re not adding any in this example. When finished, click the Next button.
+7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes alongside the required **sub** and **idpid** attributes by clicking the **+ Add Attribute** button. We’re not adding any in this example. When finished, click the **Next** button.
-8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the OpenID Profile (profile) and OpenID Profile Email (email) scopes in the LIST OF SCOPES column. They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the Next button.
+8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the **OpenID Profile (profile)** and **OpenID Profile Email (email)** scopes in the **LIST OF SCOPES** column. They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the **Next** button.
-9. In section 6 (ATTRIBUTE MAPPING), map attributes from your identity repository to the claims available to the application. The one attribute you must map is **sub**, and here we have selected the value Email from the drop‑down menu (the screenshot is abridged for brevity).
+9. In section 6 (ATTRIBUTE MAPPING), map attributes from your identity repository to the claims available to the application. The one attribute you must map is **sub**, and here we have selected the value **Email** from the drop‑down menu (the screenshot is abridged for brevity).
-10. In section 7 (GROUP ACCESS), select the groups that will have access to the application, by clicking the circled plus-sign on the corresponding boxes in the **AVAILABLE GROUPS** column. The boxes move to the **ADDED GROUPS** column. As shown in the screenshot we have selected the two default groups, Domain Administrators@directory and Users@directory.
+10. In section 7 (GROUP ACCESS), select the groups that will have access to the application by clicking the circled plus-sign on the corresponding boxes in the **AVAILABLE GROUPS** column. The boxes move to the **ADDED GROUPS** column. As shown in the screenshot, we have selected the two default groups, **Domain Administrators@directory** and **Users@directory**.
Click the **Done** button.
-11. You are returned to the **My Applications** window, which now includes a row for nginx-plus-application. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row, to display the application’s details.
+11. You are returned to the **My Applications** window, which now includes a row for **nginx-plus-application**. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row to display the application’s details.
12. On the page that opens, make note of the values in the following fields on the **Details** tab. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
- - **CLIENT ID** (in the screenshot, 28823604-83c5-4608-88da-c73fff9c607a)
- - **CLIENT SECRETS** (in the screenshot, 7GMKILBofxb...); click on the eye icon to view the actual value
+ - **CLIENT ID** (in the screenshot, **28823604-83c5-4608-88da-c73fff9c607a**)
+ - **CLIENT SECRETS** (in the screenshot, **7GMKILBofxb...**); click on the eye icon to view the actual value
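These two values are what Step 4 of _Configuring NGINX Plus_ asks for. As an illustrative sketch only, assuming the `map`-based layout that the nginx-openid-connect repo uses for its configuration variables (the values below are the placeholders from the screenshots, not real credentials):

```nginx
# Sketch: map blocks holding the PingOne credentials in
# openid_connect_configuration.conf. Replace both default values
# with the ones recorded from your own Details tab.
map $host $oidc_client {
    default "28823604-83c5-4608-88da-c73fff9c607a";  # CLIENT ID
}

map $host $oidc_client_secret {
    default "<client-secret>";  # CLIENT SECRETS value (click the eye icon)
}
```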
@@ -124,7 +124,7 @@ Create a new application for NGINX Plus:
Configure NGINX Plus as the OpenID Connect relying party:
-1. Create a clone of the [nginx-openid-connect](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
+1. Create a clone of the [**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository.
```shell
git clone https://github.com/nginxinc/nginx-openid-connect
@@ -190,7 +190,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u
## Troubleshooting
-See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the nginx-openid-connect repository on GitHub.
+See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section in the **nginx-openid-connect** repository on GitHub.
### Revision History