
NXP Ethernet eth_mcux poor performance #60144

@DerekSnell

Description


Creating a new issue, originally reported by @jameswalmsley in #51107. He reports that zperf shows only about 1 Mbps:

I have the mimxrt1064_evk board. So far I can build the following (from zephyr/main):

west build -b mimxrt1064_evk zephyr/samples/net/zperf/    

Zephyr shell:

zperf udp download

Linux:

iperf -V -u -c fe80::4:9fff:fe39:3ca4%enp0s20f0u1u1
------------------------------------------------------------
Client connecting to fe80::4:9fff:fe39:3ca4%enp0s20f0u1u1, UDP port 5001
Sending 1450 byte datagrams, IPG target: 11062.62 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local fe80::935:d1d6:7fda:bd83 port 49230 connected with fe80::4:9fff:fe39:3ca4 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0119 sec  1.25 MBytes  1.05 Mbits/sec
[  1] Sent 906 datagrams
read failed: Connection refused
read failed: Connection refused
read failed: Connection refused

The speed seems very slow. I have tried changing many settings (DTCM placement, hardware acceleration, etc.), and I always get a very similar result of about 1.05 Mbits/sec. I get the same result on our own board too.
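For anyone reproducing this, UDP throughput in the zperf sample is often capped by the network packet and buffer pool sizes before the driver itself becomes the bottleneck. Below is a minimal prj.conf sketch of the Kconfig knobs commonly enlarged when chasing Ethernet throughput on Zephyr; the values are illustrative starting points only, not a verified fix for this issue:

```
# Packet descriptors available to the RX/TX paths
CONFIG_NET_PKT_RX_COUNT=36
CONFIG_NET_PKT_TX_COUNT=36

# Data buffers backing those packets; too few causes silent drops
CONFIG_NET_BUF_RX_COUNT=80
CONFIG_NET_BUF_TX_COUNT=80
```

If throughput does not change at all when these pools are resized (as the near-constant 1.05 Mbits/sec here suggests), the limit is likely elsewhere, e.g. in the driver's RX path or clocking.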

I have created PR #60073 to fix some issues with the driver.

The previous ENET_GetRxFrameSize() errors came from the eth_mcux driver only supporting a REFCLK generated by the RT1064 (as on the reference board).

I have added changes in #60073 to allow the REFCLK to be used as an input, and to support configuration of both 25 MHz and 50 MHz crystals on the PHYs.

I've also added changes to disable cache maintenance in the HAL driver when DTCM is used for all buffers.

Unfortunately, I was not able to find the source of the performance issue.
