@zhhyu7 commented Jan 2, 2026

Summary

Refactor the code to follow the pattern of the earlier net_xxx_wait implementation, reducing repeated sequences of similar calls.

Impact

The wait paths in the protocol stack that block while holding locks.

Testing

sim:matter with ping and iperf
NuttX test log:

NuttShell (NSH) NuttX-12.12.0
MOTD: username=admin password=Administrator
nsh> ifconfig eth0 10.0.1.2/24
nsh> ping -c 3 10.0.1.1
PING 10.0.1.1 56 bytes of data
56 bytes from 10.0.1.1: icmp_seq=0 time=0.0 ms
56 bytes from 10.0.1.1: icmp_seq=1 time=0.0 ms
56 bytes from 10.0.1.1: icmp_seq=2 time=0.0 ms
3 packets transmitted, 3 received, 0% packet loss, time 3030 ms
rtt min/avg/max/mdev = 0.000/0.000/0.000/0.000 ms
nsh> ping -c 3 10.0.1.3
PING 10.0.1.3 56 bytes of data
56 bytes from 10.0.1.3: icmp_seq=0 time=20.0 ms
56 bytes from 10.0.1.3: icmp_seq=1 time=10.0 ms
56 bytes from 10.0.1.3: icmp_seq=2 time=10.0 ms
3 packets transmitted, 3 received, 0% packet loss, time 3030 ms
rtt min/avg/max/mdev = 10.000/13.333/20.000/4.714 ms
nsh> iperf -c 10.0.1.1
     IP: 10.0.1.2

 mode=tcp-client sip=10.0.1.2:5001,dip=10.0.1.1:5001, interval=3, time=30

           Interval         Transfer         Bandwidth

   0.00-   3.01 sec  198672384 Bytes  528.03 Mbits/sec
   3.01-   6.02 sec  196411392 Bytes  522.02 Mbits/sec
   6.02-   9.03 sec  197918720 Bytes  526.03 Mbits/sec
   9.03-  12.04 sec  197607424 Bytes  525.20 Mbits/sec
  12.04-  15.05 sec  198508544 Bytes  527.60 Mbits/sec
  15.05-  18.06 sec  195723264 Bytes  520.19 Mbits/sec
  18.06-  21.07 sec  197869568 Bytes  525.90 Mbits/sec
  21.07-  24.08 sec  195362816 Bytes  519.24 Mbits/sec
  24.08-  27.09 sec  197869568 Bytes  525.90 Mbits/sec
  27.09-  30.10 sec  198639616 Bytes  527.95 Mbits/sec
   0.00-  30.10 sec 1974583296 Bytes  524.81 Mbits/sec
iperf exit
nsh> iperf -s
     IP: 10.0.1.2

 mode=tcp-server sip=10.0.1.2:5001,dip=0.0.0.0:5001, interval=3, time=0
accept: 10.0.1.1:40150

           Interval         Transfer         Bandwidth

   0.00-   3.01 sec  239956900 Bytes  637.76 Mbits/sec
   3.01-   6.02 sec  197019700 Bytes  523.64 Mbits/sec
   6.02-   9.03 sec  253567284 Bytes  673.93 Mbits/sec
closed by the peer: 10.0.1.1:40150
iperf exit
nsh> iperf -c 10.0.1.1 -u
     IP: 10.0.1.2

 mode=udp-client sip=10.0.1.2:5001,dip=10.0.1.1:5001, interval=3, time=30

           Interval         Transfer         Bandwidth

   0.00-   3.01 sec  246847040 Bytes  656.07 Mbits/sec
   3.01-   6.02 sec  252470080 Bytes  671.02 Mbits/sec
   6.02-   9.03 sec  246097792 Bytes  654.08 Mbits/sec
   9.03-  12.04 sec  249895552 Bytes  664.17 Mbits/sec
  12.04-  15.05 sec  252598144 Bytes  671.36 Mbits/sec
  15.05-  18.06 sec  252327296 Bytes  670.64 Mbits/sec
  18.06-  21.07 sec  253010304 Bytes  672.45 Mbits/sec
  21.07-  24.08 sec  250141376 Bytes  664.83 Mbits/sec
  24.08-  27.09 sec  253397440 Bytes  673.48 Mbits/sec
  27.09-  30.10 sec  253257600 Bytes  673.11 Mbits/sec
   0.00-  30.10 sec 2510042624 Bytes  667.12 Mbits/sec
iperf exit
nsh> iperf -s -u
     IP: 10.0.1.2

 mode=udp-server sip=10.0.1.2:5001,dip=0.0.0.0:5001, interval=3, time=0
want recv=16384
accept: 10.0.1.1:44632

           Interval         Transfer         Bandwidth

   0.00-   3.01 sec    5306700 Bytes   14.10 Mbits/sec
   3.01-   6.02 sec    5309640 Bytes   14.11 Mbits/sec
   6.02-   9.03 sec    5292000 Bytes   14.07 Mbits/sec
   9.03-  12.04 sec    5309640 Bytes   14.11 Mbits/sec
  12.04-  15.05 sec    5309640 Bytes   14.11 Mbits/sec
  15.05-  18.06 sec    5292000 Bytes   14.07 Mbits/sec
  18.06-  21.07 sec    5309640 Bytes   14.11 Mbits/sec
  21.07-  24.08 sec    5308170 Bytes   14.11 Mbits/sec
  24.08-  27.09 sec    5309640 Bytes   14.11 Mbits/sec
  27.09-  30.10 sec    5309640 Bytes   14.11 Mbits/sec

The low UDP RX rate comes from the sim network card's implementation in the simulator: by default it reads data only once every 10 ms, and the buffer size of the host's tap device caps how much each read can drain, which together bound the achievable rate.
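
A back-of-the-envelope check of that explanation (a sketch; the per-poll byte count is inferred from the measured ~14.1 Mbit/s, not taken from the sim driver source):

  #include <stdio.h>

  int main(void)
  {
    const double poll_interval_s = 0.010;   /* sim NIC reads once per 10 ms */
    const double bytes_per_poll  = 17700.0; /* assumed drain per read,      */
                                            /* derived from the measured rate */

    double rate_mbits = bytes_per_poll / poll_interval_s * 8.0 / 1e6;
    printf("max RX rate ~ %.2f Mbit/s\n", rate_mbits); /* prints ~14.16 */
    return 0;
  }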

zhhyu7 added 5 commits January 2, 2026 10:59

Convert functions that are simple but frequently called into inline
functions, reducing CPU load without significantly changing the code
size.

Signed-off-by: zhanghongyu <[email protected]>
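
A minimal sketch of the kind of change this commit describes: a small, hot helper moved into a header as static inline. The body shown is hypothetical (netdev_lock as a per-device lock call is an assumption); only the pattern matters.

  /* Before, in a .c file: every call site pays full call overhead.
   *
   *   void conn_dev_lock(FAR struct socket_conn_s *sconn,
   *                      FAR struct net_driver_s *dev);
   *
   * After, in a header: static inline lets the compiler expand the
   * call in place, so a two-line body adds little code size.
   */

  static inline void conn_dev_lock(FAR struct socket_conn_s *sconn,
                                   FAR struct net_driver_s *dev)
  {
    net_lock();        /* take the stack lock ...                    */
    netdev_lock(dev);  /* ... then the per-device lock (assumed API) */
  }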
Code like the following:

  conn_dev_unlock(&conn->sconn, conn->dev);
  ret = net_sem_timedwait_uninterruptible(&conn->snd_sem,
    tcp_send_gettimeout(start, timeout));
  conn_dev_lock(&conn->sconn, conn->dev);

can now be written as:

  ret = conn_dev_sem_timedwait(&conn->snd_sem, false,
    tcp_send_gettimeout(start, timeout), &conn->sconn, conn->dev);

Signed-off-by: zhanghongyu <[email protected]>
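
A possible shape for the combined helper, inferred from the before/after pair above; the lock internals and the meaning of the bool flag are assumptions (false standing in for the old uninterruptible wait):

  static inline int conn_dev_sem_timedwait(FAR sem_t *sem,
                                           bool interruptible,
                                           unsigned int timeout,
                                           FAR struct socket_conn_s *sconn,
                                           FAR struct net_driver_s *dev)
  {
    int ret;

    conn_dev_unlock(sconn, dev);  /* release locks before blocking */

    if (interruptible)
      {
        ret = net_sem_timedwait(sem, timeout);
      }
    else
      {
        ret = net_sem_timedwait_uninterruptible(sem, timeout);
      }

    conn_dev_lock(sconn, dev);    /* re-acquire before returning */
    return ret;
  }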
decouple lock dependencies from other modules.

Signed-off-by: zhanghongyu <[email protected]>
decouple lock dependencies from other modules.

Signed-off-by: zhanghongyu <[email protected]>
…being NULL

thus supporting scenarios that only require breaking netdev_lock.

Signed-off-by: zhanghongyu <[email protected]>
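
Reading the truncated title as allowing one of the lock arguments to be NULL, a hedged sketch of the unlock side (the specific lock calls are assumptions, not the PR's actual code):

  static inline void conn_dev_unlock(FAR struct socket_conn_s *sconn,
                                     FAR struct net_driver_s *dev)
  {
    if (dev != NULL)
      {
        netdev_unlock(dev);  /* per-device unlock (assumed API)    */
      }

    if (sconn != NULL)
      {
        net_unlock();        /* skipped when the caller only holds */
      }                      /* the netdev lock                    */
  }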
Refactor the code to follow the pattern of the earlier net_xxx_wait
implementation, reducing repeated sequences of similar calls.

Signed-off-by: zhanghongyu <[email protected]>
@acassis acassis merged commit 8f41613 into apache:master Jan 2, 2026
40 checks passed
