Commit 5d7dad0

Merge branch 'nginx:main' into Updates_to_product_titles
2 parents 1ce85bd + 169c88e commit 5d7dad0

4 files changed: +18 -18 lines changed


content/ngf/overview/resource-validation.md

Lines changed: 2 additions & 2 deletions
@@ -62,8 +62,8 @@ More information on CEL in Kubernetes can be found [here](https://kubernetes.io/
 This step catches the following cases of invalid values:
 
 - Valid values from the Gateway API perspective but not supported by NGINX Gateway Fabric yet. For example, a feature in an HTTPRoute routing rule. For the list of supported features see [Gateway API Compatibility]({{< relref "./gateway-api-compatibility.md" >}}) doc.
-- Valid values from the Gateway API perspective, but invalid for NGINX, because NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
-- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. Similar to the previous case, these values will cause NGINX to fail to reload or operate erroneously.
+- Valid values from the Gateway API perspective, but invalid for NGINX. NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
+- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. These values will cause NGINX to fail to reload or operate incorrectly.
 - Malicious values that inject unrestricted NGINX config into the NGINX configuration (similar to an SQL injection attack).
 
 Below is an example of how NGINX Gateway Fabric rejects an invalid resource. The validation error is reported via the status:

content/nginx/admin-guide/installing-nginx/installing-nginx-plus-microsoft-azure.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ type:
 
 The VM image contains the latest version of NGINX Plus, optimized for use with Azure.
 
-## Installing the NGINX Plus VM
+## Install the NGINX Plus VM
 
 To quickly set up an NGINX Plus environment on Microsoft Azure:

content/nginx/admin-guide/web-server/serving-static-content.md

Lines changed: 9 additions & 9 deletions
@@ -2,7 +2,7 @@
 description: Configure NGINX and F5 NGINX Plus to serve static content, with type-specific
   root directories, checks for file existence, and performance optimizations.
 docs: DOCS-442
-title: Serving Static Content
+title: Serve Static Content
 toc: true
 weight: 200
 type:
@@ -108,11 +108,11 @@ location @backend {
 For more information, watch the [Content Caching](https://www.nginx.com/resources/webinars/content-caching-nginx-plus/) webinar on‑demand to learn how to dramatically improve the performance of a website, and get a deep‑dive into NGINX’s caching capabilities.
 
 <span id="optimize"></span>
-## Optimizing Performance for Serving Content
+## Optimize Performance for Serving Content
 
 Loading speed is a crucial factor of serving any content. Making minor optimizations to your NGINX configuration may boost the productivity and help reach optimal performance.
 
-### Enabling `sendfile`
+### Enable `sendfile`
 
 By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) directive eliminates the step of copying the data into the buffer and enables direct copying data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the [sendfile_max_chunk](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk) directive to limit the amount of data transferred in a single `sendfile()` call (in this example, to `1` MB):

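The `location` block this paragraph introduces is only partly visible in the next hunk. A minimal sketch of the configuration being described, with illustrative values:

```nginx
location /mp3 {
    sendfile           on;   # let the kernel copy file data directly between descriptors
    sendfile_max_chunk 1m;   # cap a single sendfile() call so one fast connection cannot hog the worker
}
```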
@@ -124,7 +124,7 @@ location /mp3 {
 }
 ```
 
-### Enabling `tcp_nopush`
+### Enable `tcp_nopush`
 
 Use the [tcp_nopush](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nopush) directive together with the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) `on;`directive. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by `sendfile()`.

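A minimal sketch of the pairing described above, reusing the `/mp3` location from the surrounding examples (values illustrative):

```nginx
location /mp3 {
    sendfile   on;
    tcp_nopush on;   # send the response headers and the first chunk of data in one packet
}
```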
@@ -136,7 +136,7 @@ location /mp3 {
 }
 ```
 
-### Enabling `tcp_nodelay`
+### Enable `tcp_nodelay`
 
 The [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive allows override of [Nagle’s algorithm](https://en.wikipedia.org/wiki/Nagle's_algorithm), originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a `200` ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive is set to `on` which means that the Nagle’s algorithm is disabled. Use this directive only for keepalive connections:

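A minimal sketch of what "only for keepalive connections" means in practice; the keepalive timeout value is illustrative:

```nginx
location /mp3 {
    tcp_nodelay       on;   # disable Nagle's algorithm so small writes are not delayed
    keepalive_timeout 65;   # tcp_nodelay is intended for keepalive connections like these
}
```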
@@ -150,11 +150,11 @@ location /mp3 {
 ```
 
 
-### Optimizing the Backlog Queue
+### Optimize the Backlog Queue
 
 One of the important factors is how fast NGINX can handle incoming connections. The general rule is when a connection is established, it is put into the “listen” queue of a listen socket. Under normal load, either the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.
 
-#### Displaying the Listen Queue
+#### Display the Listen Queue
 
 To display the current listen queue, run this command:

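The command itself falls outside this hunk (the output shown in the next hunk is BSD-style `netstat` output). On Linux, an equivalent check is `ss`, where the Send-Q column of a listening socket shows its configured backlog:

```shell
ss -lnt   # -l listening sockets, -n numeric addresses, -t TCP; Send-Q is the backlog limit
```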
@@ -182,7 +182,7 @@ Listen Local Address
 0/0/128 *.8080
 ```
 
-#### Tuning the Operating System
+#### Tune the Operating System
 
 Increase the value of the `net.core.somaxconn` kernel parameter from its default value (`128`) to a value high enough for a large burst of traffic. In this example, it's increased to `4096`.

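A sketch of how that kernel change is typically applied on Linux; the persistent entry matches the `/etc/sysctl.conf` snippet in the next hunk, and the runtime command is included as an assumption for completeness:

```shell
sudo sysctl -w net.core.somaxconn=4096                            # apply immediately to the running kernel
echo "net.core.somaxconn = 4096" | sudo tee -a /etc/sysctl.conf   # persist across reboots
sudo sysctl -p                                                    # reload the settings file
```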
@@ -205,7 +205,7 @@ Increase the value of the `net.core.somaxconn` kernel parameter from its default
 net.core.somaxconn = 4096
 ```
 
-#### Tuning NGINX
+#### Tune NGINX
 
 If you set the `somaxconn` kernel parameter to a value greater than `512`, change the `backlog` parameter to the NGINX [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) directive to match:

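A minimal sketch of the matching NGINX change; the server name and port are illustrative:

```nginx
server {
    listen      80 backlog=4096;   # match the raised net.core.somaxconn value
    server_name example.com;
}
```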
content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md

Lines changed: 6 additions & 6 deletions
@@ -33,7 +33,7 @@ The `PREFIX` argument specifies the repo name in your private container registry
 
 
 <span id="amazon-eks"></span>
-## Creating an Amazon EKS Cluster
+## Create an Amazon EKS Cluster
 You can create an Amazon EKS cluster with:
 - the AWS Management Console
 - the AWS CLI
@@ -46,7 +46,7 @@ This guide covers the `eksctl` command as it is the simplest option.
 2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the <span style="white-space: nowrap; font-weight:bold;">Managed nodes – Linux</span> option for each step. Note that the <span style="white-space: nowrap;">`eksctl create cluster`</span> command in the first step can take ten minutes or more.
 
 <span id="amazon-ecr"></span>
-## Pushing the NGINX Plus Ingress Controller Image to AWS ECR
+## Push the NGINX Plus Ingress Controller Image to AWS ECR
 
 This step is only required if you do not plan to use the prebuilt NGINX Open Source image.

@@ -81,7 +81,7 @@ This step is only required if you do not plan to use the prebuilt NGINX Open Sou
 ```
 
 <span id="ingress-controller"></span>
-## Installing the NGINX Plus Ingress Controller
+## Install the NGINX Plus Ingress Controller
 
 Use [our documentation](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/) to install the NGINX Plus Ingress Controller in your Amazon EKS cluster.

@@ -97,15 +97,15 @@ You need a Kubernetes `LoadBalancer` service to route traffic to the NGINX Ingre
 
 We also recommend enabling the PROXY Protocol for both the NGINX Plus Ingress Controller and your NLB target groups. This is used to forward client connection information. If you choose not to enable the PROXY protocol, see the [Appendix](#appendix).
 
-### Configuring a `LoadBalancer` Service to Use NLB
+### Configure a `LoadBalancer` Service to Use NLB
 
 Apply the manifest `deployments/service/loadbalancer-aws-elb.yaml` to create a `LoadBalancer` of type NLB:
 
 ```shell
 kubectl apply -f deployments/service/loadbalancer-aws-elb.yaml
 ```
 
-### Enabling the PROXY Protocol
+### Enable the PROXY Protocol
 
 1. Add the following keys to the `deployments/common/nginx-config.yaml` config map file:

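The keys themselves fall outside this hunk. As an illustration only, based on the PROXY-protocol keys documented for the NGINX Ingress Controller ConfigMap (the CIDR below is a placeholder; narrow it to your NLB's address range):

```yaml
data:
  proxy-protocol: "True"            # accept the PROXY protocol header from the NLB
  real-ip-header: "proxy_protocol"  # derive the client address from that header
  set-real-ip-from: "0.0.0.0/0"     # placeholder; restrict to trusted load balancer CIDRs
```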
@@ -158,7 +158,7 @@ Apply the manifest `deployments/service/loadbalancer-aws-elb.yaml` to create a `
 
 
 <span id="appendix"></span>
-## Appendix: Disabling the PROXY Protocol
+## Appendix: Disable the PROXY Protocol
 
 If you want to disable the PROXY Protocol, perform these steps.
