pages/load-balancer/concepts.mdx (1 addition & 1 deletion)
@@ -36,7 +36,7 @@ Backend servers must be Scaleway resources (Instances, Elastic Metal or Dedibox
 
 ## Backend protection
 
-Backend protection is a set of configurable values that allow you to control how load is distributed to backend servers. You can use these settings to configure the **maximum number of simultaneous requests** to a given backend server before it is considered to be at maximum capacity. You can also configure a **queue timeout** value, which defines the maximum amount of time (in ms) to queue a request or connection for a particular backend server when [stickiness](#sticky-session) is enabled. Once this value is reached, the request/connection will be directed to a different backend server.
+Backend protection is a set of configurable values that allow you to control how load is distributed to backend servers. You can use these settings to configure the **maximum number of simultaneous requests** to a given backend server before it is considered to be at maximum capacity. You can also configure a **queue timeout** value, which defines the maximum amount of time (in ms) to queue a request or connection for a particular backend server. Read more in our [configuring backends documentation](/load-balancer/reference-content/configuring-backends/#backend-protection).
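To make the two settings described in this paragraph concrete, the sketch below models them as plain values. It is illustrative only: the names `max_simultaneous` and `queue_timeout_ms` are assumptions made for this example, not fields of the Scaleway API or console.

```python
from dataclasses import dataclass


# Illustrative only: these field names are assumptions for this sketch,
# not Scaleway API or console identifiers.
@dataclass
class BackendProtection:
    max_simultaneous: int   # max concurrent requests/connections per backend server
    queue_timeout_ms: int   # max time (ms) a request may wait before being refused


def at_maximum_capacity(in_flight: int, protection: BackendProtection) -> bool:
    """A backend server is at maximum capacity once it already holds
    `max_simultaneous` concurrent requests/connections."""
    return in_flight >= protection.max_simultaneous


protection = BackendProtection(max_simultaneous=20, queue_timeout_ms=5000)
print(at_maximum_capacity(19, protection))  # False: one slot still free
print(at_maximum_capacity(20, protection))  # True: further traffic is held back
```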
pages/load-balancer/reference-content/configuring-backends.mdx (6 additions & 5 deletions)
@@ -121,17 +121,18 @@ Backend protection settings define when the Load Balancer should view a backend
 
 A **Limit backend load** toggle displays in the Backend protection screen.
 
 - **Toggle deactivated**: No settings to limit backend load are used. The Load Balancer can send an unlimited number of simultaneous connections/requests to backend servers.
 
-- **Toggle activated**: Additional settings to limit backend load are activated and appear for you to configure. These additional settings are **Max simultaneous** and (if you previously [activated sticky sessions](#sticky-sessions)) **Queue timeout**.
+- **Toggle activated**: Additional settings to limit backend load are activated and appear for you to configure. These additional settings are **Max simultaneous** and **Queue timeout**.
 
 - **Max simultaneous**: Defines the maximum number of simultaneous requests (for HTTP) or simultaneous connections (for TCP) to any single backend server. A value of 20 means that each backend server will have a limit of 20 connections (even if, for example, there are only three servers in the backend). This setting is particularly relevant when using the [First available](#balancing-method) balancing method.
 
 The minimum value for this field is 1, and the maximum value [depends on the Load Balancer type](/load-balancer/concepts/#maximum-connections). You should choose an appropriate value based on your backend server characteristics and traffic patterns.
 
-When the maximum number of simultaneous connections/requests is reached for a single backend server, the Load Balancer will either:
-- Pass the request/connection to a different backend server that still has slots available, unless no backend server has available slots in which case the Load Balancer indicates to the client that the request cannot be handled (e.g. `503 service unavailable` for HTTP or connection closed for TCP), or
-- If sticky sessions are enabled: put the request into a queue for the backend server in question. Therefore, if and only if you enabled [activated sticky sessions](#sticky-sessions), you will also be prompted to set the following value:
+When the maximum number of simultaneous connections/requests is reached for a single backend server, the Load Balancer queues the incoming connection/request, waiting for an available connection slot.
+- **If sticky sessions are not enabled**: Requests are queued only when **all** backend servers in the pool have reached their maximum limit.
+- **If sticky sessions are enabled**: Since traffic is pinned to specific backend servers, a request is queued as soon as its target backend reaches its maximum limit, even if other backends have available capacity.
+In both cases, if the request remains unprocessed after the **queue timeout** (see below), the Load Balancer indicates to the client that the request cannot be handled.
 
-- **Queue timeout**: Defines the maximum length of time (in ms) to queue a request/connection for a given backend where [stickiness](#sticky-sessions) is enabled. The default value for this setting is 5 000, the minimum value is 1 and the maximum value is 2 147 483 647. Choose an appropriate value based the acceptable wait time for your users, and your application's characteristics and traffic patterns.
+- **Queue timeout**: Defines the maximum length of time (in ms) to queue a request/connection for a given backend. The default value for this setting is 5 000, the minimum value is 1 and the maximum value is 2 147 483 647. Choose an appropriate value based on the acceptable wait time for your users, and your application's characteristics and traffic patterns.
 
 Requests will wait in the queue for an available connection slot. If the queue timeout value is reached, the Load Balancer indicates to the client that the request cannot be handled (e.g. `503 service unavailable` for HTTP or connection closed for TCP).
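As a rough illustration of the queueing rules introduced in this file, the following sketch models the forward-or-queue decision for a single incoming request. It is a simplified model, not Scaleway's implementation: the `Backend` class and `dispatch` function are assumptions made for this example, and the real Load Balancer also applies the configured balancing method and the queue timeout before returning an error to the client.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Backend:
    """Illustrative model of one backend server (not Scaleway's implementation)."""
    name: str
    max_simultaneous: int   # the "Max simultaneous" limit for this server
    in_flight: int = 0      # requests/connections currently being served

    @property
    def at_capacity(self) -> bool:
        return self.in_flight >= self.max_simultaneous


def dispatch(backends: list[Backend], sticky_target: Backend | None) -> str:
    """Decide what happens to one incoming request/connection.

    Sticky sessions disabled (sticky_target is None): queue only when *all*
    backend servers have reached their maximum limit.
    Sticky sessions enabled: queue as soon as the pinned backend is full,
    even if other backends still have capacity.
    A queued request left unprocessed past the queue timeout is then refused
    (e.g. `503 service unavailable` for HTTP, connection closed for TCP).
    """
    if sticky_target is not None:
        if sticky_target.at_capacity:
            return f"queue for {sticky_target.name} (refused after queue timeout)"
        sticky_target.in_flight += 1
        return f"forward to {sticky_target.name}"

    for backend in backends:
        if not backend.at_capacity:
            backend.in_flight += 1
            return f"forward to {backend.name}"
    return "queue (all backends full; refused after queue timeout)"


servers = [Backend("srv-1", max_simultaneous=2), Backend("srv-2", max_simultaneous=2)]
servers[0].in_flight = 2  # srv-1 is already saturated

print(dispatch(servers, sticky_target=None))        # forward to srv-2
print(dispatch(servers, sticky_target=servers[0]))  # queue for srv-1 (refused after queue timeout)
```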