
Commit aeb1e41

jukkar authored and nashif committed
doc: net: Add network configuration guide
Add a simple document describing various network related configuration options and how they affect the available resources in the system. Signed-off-by: Jukka Rissanen <[email protected]>
1 parent b8708ee commit aeb1e41

2 files changed: +274 −0 lines changed

doc/connectivity/networking/index.rst

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ operation of the stacks and how they were implemented.

     overview.rst
     net-stack-architecture.rst
+    net_config_guide.rst
     networking_with_host.rst
     network_monitoring.rst
     api/index.rst
doc/connectivity/networking/net_config_guide.rst

Lines changed: 273 additions & 0 deletions
@@ -0,0 +1,273 @@
.. _network_configuration_guide:

Network Configuration Guide
###########################

.. contents::
   :local:
   :depth: 2

This document describes how various network configuration options can be
set according to the resources available in the system.

Network Buffer Configuration Options
************************************

The network buffer configuration options control how much data we
are able to send or receive at the same time.
:kconfig:option:`CONFIG_NET_PKT_RX_COUNT`
  Maximum number of network packets we can receive at the same time.

:kconfig:option:`CONFIG_NET_PKT_TX_COUNT`
  Maximum number of network packet sends pending at the same time.

:kconfig:option:`CONFIG_NET_BUF_RX_COUNT`
  How many network buffers are allocated for receiving data.
  Each net_buf contains a small header and either a fixed or variable
  length data buffer. The :kconfig:option:`CONFIG_NET_BUF_DATA_SIZE`
  is used when :kconfig:option:`CONFIG_NET_BUF_FIXED_DATA_SIZE` is set.
  This is the default setting. The default size of the buffer is 128 bytes.
  The :kconfig:option:`CONFIG_NET_BUF_VARIABLE_DATA_SIZE` is an experimental
  setting. With it, each net_buf data portion is allocated from a memory pool
  and can be sized to the amount of data we have received from the network.
  When data is received from the network, it is placed into the net_buf data
  portion. Depending on device resources and desired network usage, the user
  can tweak the size of the fixed buffer by setting
  :kconfig:option:`CONFIG_NET_BUF_DATA_SIZE`, and the size of the data pool
  by setting :kconfig:option:`CONFIG_NET_BUF_DATA_POOL_SIZE` if variable size
  buffers are used.
  When using the fixed size data buffers, the memory consumption of network
  buffers can be tweaked by selecting the size of the data part according to
  what kind of network data we are receiving. If one sets the data size to
  256 bytes but only receives packets that are 32 bytes long, then we are
  "wasting" 224 bytes for each packet because we cannot utilize the
  remaining data. One should not set the data size too low either, because
  there is some overhead involved for each net_buf. For these reasons the
  default network buffer size is set to 128 bytes.
  The variable size data buffer feature is marked as experimental as it has
  not received as much testing as the fixed size buffers. Using variable
  size data buffers tries to improve memory utilization by allocating only
  the minimum amount of data we need for the network data. The extra cost is
  the time needed to dynamically allocate the buffer from the memory pool.
  For example, in Ethernet the maximum transmission unit (MTU) size is
  1500 bytes. If one wants to receive two full frames, then the net_pkt
  RX count should be set to 2, and the net_buf RX count to 24, since each
  frame needs 1500 / 128 (rounded up to 12) data buffers and there are two
  frames. If TCP is being used, then these values need to be higher because
  we can queue the packets internally before delivering them to the
  application.
:kconfig:option:`CONFIG_NET_BUF_TX_COUNT`
  How many network buffers are allocated for sending data. This is a
  similar setting as the receive buffer count, but for sending.
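
As a concrete illustration of the Ethernet example above, a minimal
prj.conf sketch (the values follow the two-frame example with the default
128-byte fixed data buffers; they are not the option defaults):

.. code-block:: cfg

   # Room for two full 1500-byte Ethernet frames in each direction:
   # 1500 / 128 rounds up to 12 buffers per frame, so 12 * 2 = 24.
   CONFIG_NET_PKT_RX_COUNT=2
   CONFIG_NET_PKT_TX_COUNT=2
   CONFIG_NET_BUF_RX_COUNT=24
   CONFIG_NET_BUF_TX_COUNT=24
   CONFIG_NET_BUF_DATA_SIZE=128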

Connection Options
******************

:kconfig:option:`CONFIG_NET_MAX_CONN`
  This option tells how many network connection endpoints are supported.
  For example, each TCP connection requires one connection endpoint.
  Similarly, each listening UDP connection requires one connection endpoint.
  Also, various system services like DHCP and DNS need connection endpoints
  to work. The network shell command **net conn** can be used at runtime to
  see the network connection information.

:kconfig:option:`CONFIG_NET_MAX_CONTEXTS`
  Number of network contexts to allocate. Each network context describes a
  network 5-tuple that is used when listening or sending network traffic.
  Each BSD socket in the system uses one network context.
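
As a sketch of how such a budget could be estimated (the numbers are
illustrative assumptions, not recommended values): an application with two
TCP connections, one listening UDP socket, plus DHCP and DNS would need
roughly five connection endpoints, and one network context per BSD socket
it keeps open.

.. code-block:: cfg

   # Hypothetical budget: 2 TCP + 1 UDP listener + DHCP + DNS
   CONFIG_NET_MAX_CONN=5
   # Application sockets plus some headroom for system services
   CONFIG_NET_MAX_CONTEXTS=6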

Socket Options
**************

:kconfig:option:`CONFIG_NET_SOCKETS_POLL_MAX`
  Maximum number of supported poll() entries. One needs to select a proper
  value here depending on how many BSD sockets are polled in the system.

:kconfig:option:`CONFIG_POSIX_MAX_FDS`
  Maximum number of open file descriptors; this includes files, sockets,
  special devices, etc. One needs to select a proper value here depending
  on how many BSD sockets are created in the system.

:kconfig:option:`CONFIG_NET_SOCKETPAIR_BUFFER_SIZE`
  This option is used by the socketpair() function. It sets the size of the
  internal intermediate buffer, in bytes. This limits how large messages
  can be passed between two socketpair endpoints.
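
A short sketch of how these limits relate to each other (the values are
illustrative assumptions):

.. code-block:: cfg

   # Hypothetical application polling up to 4 BSD sockets at once.
   CONFIG_NET_SOCKETS_POLL_MAX=4
   # File descriptors also cover files and special devices, so leave
   # headroom beyond the socket count.
   CONFIG_POSIX_MAX_FDS=8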

TLS Options
***********

:kconfig:option:`CONFIG_NET_SOCKETS_TLS_MAX_CONTEXTS`
  Maximum number of TLS/DTLS contexts. Each TLS/DTLS connection needs one
  context.

:kconfig:option:`CONFIG_NET_SOCKETS_TLS_MAX_CREDENTIALS`
  This variable sets the maximum number of TLS/DTLS credentials that can be
  used with a specific socket.

:kconfig:option:`CONFIG_NET_SOCKETS_TLS_MAX_CIPHERSUITES`
  Maximum number of TLS/DTLS ciphersuites per socket.
  This variable sets the maximum number of TLS/DTLS ciphersuites that can
  be used with a specific socket, if set explicitly by a socket option.
  By default, all ciphersuites that are available in the system are
  available to the socket.

:kconfig:option:`CONFIG_NET_SOCKETS_TLS_MAX_APP_PROTOCOLS`
  Maximum number of supported application layer protocols.
  This variable sets the maximum number of supported application layer
  protocols over TLS/DTLS that can be set explicitly by a socket option.
  By default, no supported application layer protocol is set.

:kconfig:option:`CONFIG_NET_SOCKETS_TLS_MAX_CLIENT_SESSION_COUNT`
  This variable specifies the maximum number of stored TLS/DTLS sessions,
  used for TLS/DTLS session resumption.

:kconfig:option:`CONFIG_TLS_MAX_CREDENTIALS_NUMBER`
  Maximum number of TLS credentials that can be registered.
  Make sure that this value is high enough so that all the
  certificates can be loaded to the store.
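
For example, a hypothetical sizing for a device that keeps two TLS
connections open, each using a CA certificate plus a client certificate
and private key (three credentials in the store):

.. code-block:: cfg

   # Hypothetical TLS sizing, not the option defaults.
   CONFIG_NET_SOCKETS_TLS_MAX_CONTEXTS=2
   CONFIG_NET_SOCKETS_TLS_MAX_CREDENTIALS=3
   CONFIG_TLS_MAX_CREDENTIALS_NUMBER=3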

IPv4/6 Options
**************

:kconfig:option:`CONFIG_NET_IF_MAX_IPV4_COUNT`
  Maximum number of IPv4 network interfaces in the system.
  This tells how many network interfaces there will be in the system
  that have IPv4 enabled.
  For example, if you have two network interfaces but only one of them
  can use IPv4 addresses, then this value can be set to 1.
  If both network interfaces can use IPv4, then the value should be
  set to 2.

:kconfig:option:`CONFIG_NET_IF_MAX_IPV6_COUNT`
  Maximum number of IPv6 network interfaces in the system.
  This is a similar setting as the IPv4 count option, but for IPv6.
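
For instance, the two-interface case described above could be expressed as
(a sketch, assuming both interfaces use IPv6 but only one uses IPv4):

.. code-block:: cfg

   # Hypothetical dual-interface system: IPv6 on both interfaces,
   # IPv4 on only one of them.
   CONFIG_NET_IF_MAX_IPV4_COUNT=1
   CONFIG_NET_IF_MAX_IPV6_COUNT=2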

TCP Options
***********

:kconfig:option:`CONFIG_NET_TCP_TIME_WAIT_DELAY`
  How long to wait in the TCP *TIME_WAIT* state (in milliseconds).
  To avoid a (low-probability) issue when delayed packets from a
  previous connection get delivered to a next connection reusing
  the same local/remote ports,
  `RFC 793 <https://www.rfc-editor.org/rfc/rfc793>`_ (TCP) suggests
  to keep an old, closed connection in a special *TIME_WAIT* state for
  the duration of 2*MSL (Maximum Segment Lifetime). The RFC
  suggests to use an MSL of 2 minutes, but notes

  *This is an engineering choice, and may be changed if experience indicates
  it is desirable to do so.*

  For low-resource systems, having a large MSL may lead to quick
  resource exhaustion (and related DoS attacks). At the same time,
  the issue of packet misdelivery is largely alleviated in modern
  TCP stacks by using random, non-repeating port numbers and initial
  sequence numbers. Due to this, Zephyr uses a much lower value of 1500 ms
  by default. A value of 0 disables the *TIME_WAIT* state completely.

:kconfig:option:`CONFIG_NET_TCP_RETRY_COUNT`
  Maximum number of TCP segment retransmissions.
  The following formula can be used to determine the time (in ms)
  that a segment will be buffered awaiting retransmission:

  .. math::

     \sum_{n=0}^{\mathtt{NET\_TCP\_RETRY\_COUNT}} \big(1 \ll n\big) \times
     \mathtt{NET\_TCP\_INIT\_RETRANSMISSION\_TIMEOUT}

  With the default value of 9, the IP stack will try to
  retransmit for up to 1:42 minutes. This is as close as possible
  to the minimum value recommended by
  `RFC 1122 <https://www.rfc-editor.org/rfc/rfc1122>`_ (1:40 minutes).
  Only 5 bits are dedicated for the retransmission count, so accepted
  values are in the 0-31 range. It's highly recommended to not go
  below 9, though.
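
  As a worked check of that figure (a sketch assuming an initial
  retransmission timeout of 100 ms, which is the value implied by the
  1:42 figure above):

  .. math::

     \sum_{n=0}^{9} \big(1 \ll n\big) \times 100\,\mathrm{ms}
     = (2^{10} - 1) \times 100\,\mathrm{ms}
     = 102.3\,\mathrm{s} \approx 1\!:\!42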

  Should a retransmission timeout occur, the receive callback is called
  with the :code:`-ETIMEDOUT` error code and the context is dereferenced.

:kconfig:option:`CONFIG_NET_TCP_MAX_SEND_WINDOW_SIZE`
  Maximum sending window size to use.
  This value affects how the TCP stack selects the maximum sending
  window size. The default value 0 lets the TCP stack select the value
  according to the number of network buffers configured in the system.
  Note that if there are multiple active TCP connections in the system,
  this value might require fine-tuning (lowering); otherwise multiple
  TCP connections could easily exhaust the net_buf pool with queued
  TX data.

:kconfig:option:`CONFIG_NET_TCP_MAX_RECV_WINDOW_SIZE`
  Maximum receive window size to use.
  This value defines the maximum TCP receive window size. Increasing
  this value can improve connection throughput, but requires more
  receive buffers available in the system for efficient operation.
  The default value 0 lets the TCP stack select the value
  according to the number of network buffers configured in the system.

:kconfig:option:`CONFIG_NET_TCP_RECV_QUEUE_TIMEOUT`
  How long to queue received data (in ms).
  If we receive out-of-order TCP data, we queue it. This value tells
  how long the data is kept before it is discarded if we have not been
  able to pass the data to the application. If set to 0, then receive
  queueing is not enabled. The value is in milliseconds.

  Note that we only queue data sequentially in the current version,
  i.e., there should be no holes in the queue. For example, if we receive
  SEQs 5,4,3,6 and are waiting for SEQ 2, the data in segments 3,4,5,6 is
  queued (in this order), and then given to the application when we
  receive SEQ 2. But if we receive SEQs 5,4,3,7 then SEQ 7 is discarded
  because the list would not be sequential, as number 6 is missing.

Traffic Class Options
*********************

It is possible to configure multiple traffic classes (queues) when receiving
or sending network data. Each traffic class queue is implemented as a thread
with a different priority. This means that a higher priority network packet
can be placed into a higher priority network queue so that it is sent or
received sooner than lower priority packets. Because of thread scheduling
latencies, in practice the fastest way to send a packet out is to send it
directly without using a dedicated traffic class thread. This is why by
default the :kconfig:option:`CONFIG_NET_TC_TX_COUNT` option is set to 0 if
userspace is not enabled. If userspace is enabled, then the minimum TX
traffic class count is 1. The reason for this is that a userspace
application does not have enough permissions to deliver the message
directly.

On the receiving side, it is recommended to have at least one receiving
traffic class queue. The reason is that the network device driver is
typically running in IRQ context when it receives the packet, in which case
it should not try to deliver the network packet directly to the upper
layers, but instead place the packet into the traffic class queue. If the
network device driver is not running in IRQ context when it gets the
packet, then the RX traffic class option
:kconfig:option:`CONFIG_NET_TC_RX_COUNT` could be set to 0.
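
For instance, a minimal sketch for a kernel-only image whose network driver
hands packets over from IRQ context (the values mirror the reasoning above):

.. code-block:: cfg

   # Hypothetical settings: send directly from the caller's context
   # (no userspace), receive through one traffic class queue because
   # the driver delivers packets from IRQ context.
   CONFIG_NET_TC_TX_COUNT=0
   CONFIG_NET_TC_RX_COUNT=1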

Stack Size Options
******************

There are several network-specific threads in a network enabled system.
Some of the threads might depend on a configuration option which can be
used to enable or disable a feature. Each thread stack size is optimized
to allow normal network operations.

The network management API uses a dedicated thread by default. The thread
is responsible for delivering network management events to the event
listeners that are set up in the system if the
:kconfig:option:`CONFIG_NET_MGMT` and :kconfig:option:`CONFIG_NET_MGMT_EVENT`
options are enabled.
If the options are enabled, the user is able to register a callback function
that the net_mgmt thread calls for each network management event.
By default the net_mgmt event thread stack size is rather small.
The idea is that the callback function does minimal work so that new
events can be delivered to listeners as fast as possible and are not lost.
The net_mgmt event thread stack size is controlled by the
:kconfig:option:`CONFIG_NET_MGMT_EVENT_STACK_SIZE` option. It is recommended
to not do any blocking operations in the callback function.
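
If a callback needs to do slightly more work, the stack can be enlarged;
a hypothetical prj.conf sketch (the stack size value is illustrative):

.. code-block:: cfg

   # Hypothetical example: enable network management events and give
   # the net_mgmt event thread a larger stack for heavier callbacks.
   CONFIG_NET_MGMT=y
   CONFIG_NET_MGMT_EVENT=y
   CONFIG_NET_MGMT_EVENT_STACK_SIZE=1024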

The network thread stack utilization can be monitored from the kernel shell
with the **kernel threads** command.
