Boot time numbers [avg, min, max] are measured from "Starting kernel" to Linux prompt.

|

Ethernet
--------
Ethernet performance benchmarks were measured using Netperf 2.7.1
(https://hewlettpackard.github.io/netperf/doc/netperf.html). Test
procedures were modeled after those defined in RFC-2544
(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
and the "tester" is a Linux PC. To produce consistent results, it is
recommended to carry out performance tests on a private network and to
avoid running NFS on the interface under test. In these results, CPU
utilization was captured as the total percentage used across all cores
on the device while running the performance test over one external
interface.

UDP Throughput (0% loss) was measured following the procedure defined in
RFC-2544 Section 26.1 (Throughput). In this scenario, the netperf options
burst_size (-b) and wait_time (-w) are used to limit bandwidth during
different trials of the test, with the goal of finding the highest rate
at which no loss is seen. For example, to limit bandwidth to
500 Mbits/sec with a 1472-byte datagram:

::

    burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (per second -> per 10 ms burst)
    burst_size = 500000000 / 8 / 1472 / 100 = 425

    wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)

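Plugging these values into the client gives the invocation below for a
single trial (a sketch; <tester ip> is a placeholder, and the -b/-w
values would be recomputed per the formula above for other rates or
datagram sizes):

::

    # one 60-second trial at roughly 500 Mbits/sec offered load, 1472-byte datagrams
    netperf -H <tester ip> -l 60 -t UDP_STREAM -b 425 -w 10 -- -m 1472
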
UDP Throughput (possible loss) was measured by capturing throughput and
packet loss statistics when running the netperf test with no bandwidth
limit (i.e., with the -b/-w options removed).

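For example (a sketch; placeholders as above):

::

    # unthrottled UDP trial; loss can be derived from the sent/received
    # message counts that netperf reports
    netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -- -m 1472
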
To start a netperf client on one device, the other device must have
netserver running. To start netserver:

::

    netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

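For example, to start netserver on the tester on the default control
port with IPv4 addressing (a sketch; the listening check assumes ss from
iproute2 is available on the system):

::

    netserver -4 -p 12865
    # confirm the daemon is listening on the control port
    ss -ltn | grep 12865
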
Running the following shell script from the DUT will trigger netperf
clients to measure bidirectional TCP performance for 60 seconds and
report CPU utilization. The -k option is passed to the client commands
to summarize selected statistics on their own lines, and -j is used to
gain additional timing measurements during the test.

::

    #!/bin/bash
    for i in 1
    do
        netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
            -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

        netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
            -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
    done
    # wait for both background clients to finish so their reports are printed
    wait

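TCP_STREAM sends data from the DUT to the tester, while TCP_MAERTS
(STREAM reversed) receives data in the opposite direction; backgrounding
both clients with & runs the two flows concurrently so the link is
loaded in both directions at once.
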
Running the following commands will trigger netperf clients to measure
UDP burst performance for 60 seconds at various burst/datagram sizes and
report CPU utilization; an illustrative sweep script follows the list.

- For UDP egress tests, run the netperf client from the DUT and start
  netserver on the tester.

::

    netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
        -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

- For UDP ingress tests, run the netperf client from the tester and
  start netserver on the DUT.

::

    netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
        -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

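As an illustrative egress sweep over the datagram sizes used in the
tables below (a sketch; <tester ip> and <burst_size> are placeholders,
the 10 ms wait time is assumed, and burst_size would be re-derived per
the formula above for each datagram size and target rate):

::

    #!/bin/bash
    for m in 64 128 256 1024 1472
    do
        # one 60-second trial per datagram size
        netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w 10 -- -m $m \
            -k DIRECTION,THROUGHPUT,LOCAL_CPU_UTIL,LOCAL_SEND_SIZE
    done
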
CPSW/CPSW2g/CPSW3g Ethernet Driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- CPSW2g: AM65x, J7200, J721e
- CPSW3g: AM64x

.. rubric:: TCP Bidirectional Throughput
   :name: CPSW3g-tcp-bidirectional-throughput

.. csv-table:: CPSW3g TCP Bidirectional Throughput
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1057.82 (min 989.78, max 1130.34)","97.00 (min 78.90, max 99.90)"

ICSSG Ethernet Driver
^^^^^^^^^^^^^^^^^^^^^

.. rubric:: TCP Bidirectional Throughput
   :name: ICSSG-tcp-bidirectional-throughput

.. csv-table:: ICSSG TCP Bidirectional Throughput
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","790.93 (min 354.76, max 1107.29)","91.56 (min 77.94, max 99.77)"

.. rubric:: TCP Bidirectional Throughput with Interrupt Pacing
   :name: ICSSG-tcp-bidirectional-throughput-interrupt-pacing

.. csv-table:: ICSSG TCP Bidirectional Throughput with Interrupt Pacing
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","579.53 (min 185.68, max 1028.30)","50.57 (min 20.62, max 78.90)"

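Interrupt pacing refers to coalescing RX interrupts so that fewer
interrupts are serviced per unit of data, trading some throughput for
lower CPU load (visible in the table above). On Linux this is typically
configured through ethtool's coalescing interface (a sketch; the
interface name and interval are placeholders, and supported ranges
depend on the driver):

::

    # batch RX interrupts on the ICSSG interface (values are illustrative)
    ethtool -C eth1 rx-usecs 500
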
.. rubric:: UDP Egress Throughput
   :name: udp-egress-throughput-0-loss

.. csv-table:: ICSSG UDP Egress Throughput (0% loss)
   :header: "Frame Size (bytes)","am64xx-hsevm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "64","","30.77 (min 21.63, max 40.56)","60.13 (min 42.00, max 79.00)","73.74 (min 57.66, max 89.55)"
   "128","","66.04 (min 45.59, max 78.52)","64.67 (min 45.00, max 77.00)","79.53 (min 60.70, max 89.42)"
   "256","","126.02 (min 75.65, max 155.88)","61.50 (min 37.00, max 76.00)","77.46 (min 54.35, max 89.57)"
   "1024","","495.53 (min 21.30, max 583.10)","60.63 (min 3.00, max 71.00)","77.25 (min 6.51, max 91.47)"
   "1472","","461.76 (min 5.89, max 798.58)","39.38 (min 1.00, max 68.00)","52.42 (min 0.17, max 90.19)"

.. rubric:: UDP Ingress Throughput
   :name: udp-ingress-throughput-0-loss

.. csv-table:: ICSSG UDP Ingress Throughput (0% loss)
   :header: "Frame Size (bytes)","am64xx-hsevm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load %"

   "64","","2.15 (min 2.00, max 2.51)","4.14 (min 4.00, max 5.00)","1.49 (min 0.24, max 6.15)"
   "128","","4.96 (min 3.89, max 6.35)","4.78 (min 4.00, max 6.00)","3.72 (min 0.12, max 7.48)"
   "256","","10.21 (min 9.42, max 11.06)","5.00","2.97 (min 0.29, max 7.62)"
   "1024","","44.54 (min 42.60, max 48.33)","5.25 (min 5.00, max 6.00)","6.07 (min 0.25, max 7.97)"
   "1472","","96.85 (min 62.41, max 174.28)","8.00 (min 5.00, max 15.00)","10.12 (min 4.00, max 20.47)"

|

OSPI Flash Driver
-----------------
