Commit f4fabc8

Merge branch 'main' into 363-securing-tcp-traffic
2 parents fb37e4b + 169c88e commit f4fabc8

8 files changed, +34 -34 lines changed


content/ngf/overview/resource-validation.md

Lines changed: 2 additions & 2 deletions
@@ -62,8 +62,8 @@ More information on CEL in Kubernetes can be found [here](https://kubernetes.io/
 This step catches the following cases of invalid values:

 - Valid values from the Gateway API perspective but not supported by NGINX Gateway Fabric yet. For example, a feature in an HTTPRoute routing rule. For the list of supported features see [Gateway API Compatibility]({{< relref "./gateway-api-compatibility.md" >}}) doc.
-- Valid values from the Gateway API perspective, but invalid for NGINX, because NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
-- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. Similar to the previous case, these values will cause NGINX to fail to reload or operate erroneously.
+- Valid values from the Gateway API perspective, but invalid for NGINX. NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
+- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. These values will cause NGINX to fail to reload or operate incorrectly.
 - Malicious values that inject unrestricted NGINX config into the NGINX configuration (similar to an SQL injection attack).

 Below is an example of how NGINX Gateway Fabric rejects an invalid resource. The validation error is reported via the status:

content/nginx/admin-guide/basic-functionality/managing-configuration-files.md

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ stream {

 In general, a _child_ context – a context contained within another context (its _parent_) – inherits the settings of directives included at the parent level. Some directives can appear in multiple contexts, in which case you can override the setting inherited from the parent by including the directive in the child context. For an example, see the [proxy_set_header](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header) directive.

-## Reloading Configuration
+## Reload Configuration File

 For changes to the configuration file to take effect, it must be reloaded. You can either restart the `nginx` process or send the `reload` signal to upgrade the configuration without interrupting the processing of current requests. For details, see [Control NGINX Processes at Runtime]({{< ref "/nginx/admin-guide/basic-functionality/runtime-control.md" >}}).
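
As context for the inheritance behavior described in this hunk (this sketch is not part of the commit), a directive such as `proxy_set_header` set in the `http` context is inherited by child contexts and replaced entirely wherever a child declares its own; the listen port, addresses, and header value below are illustrative:

```nginx
http {
    # Set once in the parent context; inherited by the servers and locations below.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    server {
        listen 80;

        location / {
            # Inherits X-Forwarded-For from the http context.
            proxy_pass http://127.0.0.1:8080;
        }

        location /api {
            # Declaring proxy_set_header here replaces everything inherited from
            # the parent contexts, so restate any headers that are still needed.
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Api-Client "internal";
            proxy_pass http://127.0.0.1:8081;
        }
    }
}
```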

content/nginx/admin-guide/installing-nginx/installing-nginx-plus-google-cloud-platform.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ type:
 [NGINX Plus](https://www.f5.com/products/nginx/nginx-plus), the high‑performance application delivery platform, load balancer, and web server, is available on the Google Cloud Platform as a virtual machine (VM) image. The VM image contains the latest version of NGINX Plus, optimized for use with the Google Cloud Platform Compute Engine.


-## Installing the NGINX Plus VM
+## Install the NGINX Plus VM

 To quickly set up an NGINX Plus environment on the Google Cloud Platform, perform the following steps.

@@ -52,7 +52,7 @@ If you encounter any problems with NGINX Plus configuration, documentation is a
 Customers who purchase an NGINX Plus VM image on the Google Cloud Platform are eligible for the Google Cloud Platform VM support provided by the NGINX, Inc. engineering team. To activate support, submit the [Google Cloud Platform Support Activation](https://www.nginx.com/gcp-support-activation/) form.


-### Accessing the Open Source Licenses for NGINX Plus
+### Access the Open Source Licenses for NGINX Plus

 NGINX Plus includes open source software written by NGINX, Inc. and other contributors. The text of the open source licenses is provided in Appendix B of the _NGINX Plus Reference Guide_. To access the guide included with the NGINX Plus VM instance, run this command:

content/nginx/admin-guide/installing-nginx/installing-nginx-plus-microsoft-azure.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ type:

 The VM image contains the latest version of NGINX Plus, optimized for use with Azure.

-## Installing the NGINX Plus VM
+## Install the NGINX Plus VM

 To quickly set up an NGINX Plus environment on Microsoft Azure:

content/nginx/admin-guide/monitoring/logging.md

Lines changed: 12 additions & 12 deletions
@@ -9,32 +9,33 @@ type:
 - how-to
 ---

-This article describes how to configure logging of errors and processed requests in NGINX Open Source and NGINX Plus.
+This article describes how to log errors and requests in NGINX Open Source and NGINX Plus.

 <span id="error_log"></span>
-## Setting Up the Error Log
+## Set Up the Error Log
+
+NGINX writes an error log that records encountered issues of different severity levels. The [error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log) directive sets up the log location and severity level. The log location can be a particular file, `stderr`, or `syslog`. By default, the error log is located at **logs/error.log**, but the absolute path depends on the operating system and installation. The error severity level follows the `syslog` classification system. The log includes messages from all severity levels above the specified level.

-NGINX writes information about encountered issues of different severity levels to the error log. The [error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log) directive sets up logging to a particular file, `stderr`, or `syslog` and specifies the minimal severity level of messages to log. By default, the error log is located at **logs/error.log** (the absolute path depends on the operating system and installation), and messages from all severity levels above the one specified are logged.

 The configuration below changes the minimal severity level of error messages to log from `error` to `warn`:

 ```nginx
 error_log logs/error.log warn;
 ```

-In this case, messages of `warn`, `error` `crit`, `alert`, and `emerg` levels are logged.
+In this case, messages above the `warn` level are logged. That includes `warn`, `error` `crit`, `alert`, and `emerg` levels.

-The default setting of the error log works globally. To override it, place the [error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log) directive in the `main` (top-level) configuration context. Settings in the `main` context are always inherited by other configuration levels (`http`, `server`, `location`). The `error_log` directive can be also specified at the [http](https://nginx.org/en/docs/http/ngx_http_core_module.html#http), [stream](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream), `server` and [location](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) levels and overrides the setting inherited from the higher levels. In case of an error, the message is written to only one error log, the one closest to the level where the error has occurred. However, if several `error_log` directives are specified on the same level, the message are written to all specified logs.
+The default setting of the error log works globally. To override it, place the [error_log](https://nginx.org/en/docs/ngx_core_module.html#error_log) directive in the `main` (top-level) configuration context. Settings in the `main` context are always inherited by other configuration levels (`http`, `server`, `location`). The `error_log` directive can be also specified at the [http](https://nginx.org/en/docs/http/ngx_http_core_module.html#http), [stream](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream), `server` and [location](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) levels. Settings at lower levels override the settings inherited from the higher levels. Each error message is written only once to the error log closest to the level where the error has occurred. However, if several `error_log` directives are specified on the same level, the message is written to all specified logs.

 > **Note:** The ability to specify multiple `error_log` directives on the same configuration level was added in NGINX Open Source version [1.5.2](https://nginx.org/en/CHANGES).


 <span id="access_log"></span>
-## Setting Up the Access Log
+## Set Up the Access Log

-NGINX writes information about client requests in the access log right after the request is processed. By default, the access log is located at **logs/access.log**, and the information is written to the log in the predefined **combined** format. To override the default setting, use the [log_format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) directive to change the format of logged messages, as well as the [access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) directive to specify the location of the log and its format. The log format is defined using variables.
+NGINX records client requests in the access log right after the request is processed. The [access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) directive specifies the location of the log and its format. By default, the access log is located at **logs/access.log**. The format of logged messages is the predefined **combined** format. To change the format of logged messages, use the [log_format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) directive. The log format is defined using variables.

-The following examples define the log format that extends the predefined **combined** format with the value indicating the ratio of gzip compression of the response. The format is then applied to a virtual server that enables compression.
+The following example extends the predefined **combined** format by specifying the `gzip ratio` keyword. This enables gzip compression of the response in a virtual server.

 ```nginx
 http {
@@ -50,7 +51,7 @@ http {
 }
 ```

-Another example of the log format enables tracking different time values between NGINX and an upstream server that may help to diagnose a problem if your website experience slowdowns. You can use the following variables to log the indicated time values:
+Another example of the log format enables tracking different time values between NGINX and an upstream server. This may help with diagnosing a slowdown of your website. You can use the following variables to log the indicated time values:

 - [`$upstream_connect_time`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_connect_time) – The time spent on establishing a connection with an upstream server
 - [`$upstream_header_time`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_header_time) – The time between establishing a connection and receiving the first byte of the response header from the upstream server
@@ -80,11 +81,10 @@ When reading the resulting time values, keep the following in mind:
 - When a request is unable to reach an upstream server or a full header cannot be received, the variable contains `0` (zero)
 - In case of internal error while connecting to an upstream or when a reply is taken from the cache, the variable contains `-` (hyphen)

-Logging can be optimized by enabling the buffer for log messages and the cache of descriptors of frequently used log files whose names contain variables. To enable buffering use the `buffer` parameter of the [access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) directive to specify the size of the buffer. The buffered messages are then written to the log file when the next log message does not fit into the buffer as well as in some other [cases](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log).
-
+The log can be optimized by enabling the `buffer` and the `cache`. With the [buffer](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) parameter enabled, messages will be stored in the buffer first. When the buffer is full (or in some other [cases](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log)), the messages will be written to the log.
 To enable caching of log file descriptors, use the [open_log_file_cache](https://nginx.org/en/docs/http/ngx_http_log_module.html#open_log_file_cache) directive.

-Similar to the `error_log` directive, the [access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) directive defined on a particular configuration level overrides the settings from the previous levels. When processing of a request is completed, the message is written to the log that is configured on the current level, or inherited from the previous levels. If one level defines multiple access logs, the message is written to all of them.
+Similar to the `error_log` directive, the [access_log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) directive defined on a particular configuration level overrides the settings from the previous levels. When processing of a request is completed, the message is written to the log that is configured on the current level, or inherited from the previous levels. If one level has multiple access log definitions, the message is written to all of them.


 <span id="conditional"></span>
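
To make the log-format and buffering changes in this file concrete (this sketch is not part of the commit), an access log that combines the upstream timing variables, a write buffer, and descriptor caching could look roughly like the following; the format name, buffer size, and cache parameters are illustrative:

```nginx
http {
    # Custom format adding the upstream timing variables to the common fields.
    log_format upstream_time '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             'uct="$upstream_connect_time" uht="$upstream_header_time" '
                             'urt="$upstream_response_time"';

    # Buffer log writes and cache open log file descriptors.
    access_log /var/log/nginx/access.log upstream_time buffer=32k flush=5s;
    open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;
}
```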

content/nginx/admin-guide/security-controls/securing-http-traffic-upstream.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ This article explains how to encrypt HTTP traffic between NGINX and a upstream g

 ## Obtaining SSL Server Certificates

-You can purchase a server certificate from a trusted certificate authority (CA), or your can create own internal CA with an [OpenSSL](https://www.openssl.org/) library and generate your own certificate. The server certificate together with a private key should be placed on each upstream server.
+Purchase a server certificate from a trusted certificate authority (CA), or create your own internal CA with an [OpenSSL](https://www.openssl.org/) library and generate your own certificate. The server certificate together with a private key should be placed on each upstream server.

 <span id="client_certs"></span>
 ## Obtaining an SSL Client Certificate
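
As a side note that is not part of this commit: once the server certificates described above are installed on the upstream servers, the proxying NGINX instance typically references them through the `proxy_ssl_*` directives. A minimal sketch, with hypothetical certificate paths and an illustrative upstream address:

```nginx
location /app/ {
    proxy_pass https://backend.example.com;

    # CA certificate used to verify the upstream server's certificate.
    proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 2;

    # Client certificate and key presented to the upstream server;
    # only needed if the upstream requires client authentication.
    proxy_ssl_certificate     /etc/ssl/client.pem;
    proxy_ssl_certificate_key /etc/ssl/client.key;
}
```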

content/nginx/admin-guide/web-server/serving-static-content.md

Lines changed: 9 additions & 9 deletions
@@ -2,7 +2,7 @@
 description: Configure NGINX and F5 NGINX Plus to serve static content, with type-specific
 root directories, checks for file existence, and performance optimizations.
 docs: DOCS-442
-title: Serving Static Content
+title: Serve Static Content
 toc: true
 weight: 200
 type:
@@ -108,11 +108,11 @@ location @backend {
 For more information, watch the [Content Caching](https://www.nginx.com/resources/webinars/content-caching-nginx-plus/) webinar on‑demand to learn how to dramatically improve the performance of a website, and get a deep‑dive into NGINX’s caching capabilities.

 <span id="optimize"></span>
-## Optimizing Performance for Serving Content
+## Optimize Performance for Serving Content

 Loading speed is a crucial factor of serving any content. Making minor optimizations to your NGINX configuration may boost the productivity and help reach optimal performance.

-### Enabling `sendfile`
+### Enable `sendfile`

 By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) directive eliminates the step of copying the data into the buffer and enables direct copying data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the [sendfile_max_chunk](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk) directive to limit the amount of data transferred in a single `sendfile()` call (in this example, to `1` MB):
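
The configuration example that this paragraph introduces falls outside the hunk. A sketch consistent with the surrounding context lines (the `/mp3` location appears in the file's later hunks; the chunk size follows the `1` MB mentioned above):

```nginx
location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
}
```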

@@ -124,7 +124,7 @@ location /mp3 {
 }
 ```

-### Enabling `tcp_nopush`
+### Enable `tcp_nopush`

 Use the [tcp_nopush](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nopush) directive together with the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) `on;`directive. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by `sendfile()`.
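
The matching example again sits outside the hunk; a minimal sketch of combining `tcp_nopush` with `sendfile` in the same illustrative `/mp3` location:

```nginx
location /mp3 {
    sendfile   on;
    tcp_nopush on;
}
```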

@@ -136,7 +136,7 @@ location /mp3 {
 }
 ```

-### Enabling `tcp_nodelay`
+### Enable `tcp_nodelay`

 The [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive allows override of [Nagle’s algorithm](https://en.wikipedia.org/wiki/Nagle's_algorithm), originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a `200` ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive is set to `on` which means that the Nagle’s algorithm is disabled. Use this directive only for keepalive connections:
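
A brief sketch of the keepalive-only usage described above (the timeout value is illustrative):

```nginx
location /mp3 {
    tcp_nodelay       on;
    keepalive_timeout 65;
}
```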

@@ -150,11 +150,11 @@ location /mp3 {
 ```


-### Optimizing the Backlog Queue
+### Optimize the Backlog Queue

 One of the important factors is how fast NGINX can handle incoming connections. The general rule is when a connection is established, it is put into the “listen” queue of a listen socket. Under normal load, either the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.

-#### Displaying the Listen Queue
+#### Display the Listen Queue

 To display the current listen queue, run this command:

@@ -182,7 +182,7 @@ Listen Local Address
 0/0/128 *.8080
 ```

-#### Tuning the Operating System
+#### Tune the Operating System

 Increase the value of the `net.core.somaxconn` kernel parameter from its default value (`128`) to a value high enough for a large burst of traffic. In this example, it's increased to `4096`.

@@ -205,7 +205,7 @@ Increase the value of the `net.core.somaxconn` kernel parameter from its default
 net.core.somaxconn = 4096
 ```

-#### Tuning NGINX
+#### Tune NGINX

 If you set the `somaxconn` kernel parameter to a value greater than `512`, change the `backlog` parameter to the NGINX [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) directive to match:
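
The example this sentence leads into is cut off by the hunk boundary; a sketch of passing a larger backlog to the listening socket (the port is illustrative, the value matches the `somaxconn` setting shown above):

```nginx
server {
    listen 80 backlog=4096;
}
```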
