Boot time numbers [avg, min, max] are measured from "Starting kernel" to Linux prompt.
|

Ethernet
-----------------
Ethernet performance benchmarks were measured using Netperf 2.7.1
(https://hewlettpackard.github.io/netperf/doc/netperf.html).
Test procedures were modeled after those defined in RFC-2544
(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
and the "tester" used was a Linux PC. To produce consistent results,
it is recommended to carry out performance tests on a private network and to avoid
running NFS on the same interface used in the test. In these results,
CPU utilization was captured as the total percentage used across all cores on the device
while running the performance test over one external interface.

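On Linux, this all-core utilization figure can be sampled from the aggregate ``cpu`` line of ``/proc/stat``. A minimal sketch, assuming a Linux target; the helper name ``cpu_total_and_idle`` is ours, not part of any benchmark tool:

```shell
#!/bin/sh
# Read the aggregate "cpu" line of /proc/stat (jiffy totals summed
# across all cores) and print total and idle jiffies.
cpu_total_and_idle() {
    awk '/^cpu / { idle = $5 + $6;                 # idle + iowait
                   total = 0;
                   for (i = 2; i <= NF; i++) total += $i;
                   print total, idle }' /proc/stat
}

# Busy percentage across all cores over a 1-second window
set -- $(cpu_total_and_idle); t1=$1; i1=$2
sleep 1
set -- $(cpu_total_and_idle); t2=$1; i2=$2
echo "CPU load %: $(( 100 * ( (t2 - t1) - (i2 - i1) ) / (t2 - t1) ))"
```

A longer sampling window than one second smooths out scheduler noise; the idea is the same either way.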
UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
during different trials of the test, with the goal of finding the highest rate at which
no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472-byte datagram:

::

    burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
    burst_size = 500000000 / 8 / 1472 / 100 = 425

    wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)

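This arithmetic is easy to script when sweeping rates or datagram sizes. A minimal sketch; the helper name ``calc_burst_size`` is ours, and the 10 ms wait_time from the text is assumed:

```shell
#!/bin/sh
# burst_size = bandwidth (bits/sec) / 8 (bits -> bytes)
#                                   / datagram size (bytes)
#                                   / 100 (10 ms send slots per second)
calc_burst_size() {
    bandwidth_bps=$1    # target bandwidth in bits/sec
    datagram_size=$2    # UDP datagram size in bytes
    echo $(( bandwidth_bps / 8 / datagram_size / 100 ))
}

# Example from the text: 500 Mbits/sec with 1472-byte datagrams.
# Integer division gives 424; the text rounds 424.59 up to 425.
calc_burst_size 500000000 1472
```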
UDP Throughput (possible loss) was measured by capturing throughput and packet-loss statistics when
running the netperf test with no bandwidth limit (i.e., with the -b/-w options removed).

In order to start a netperf client on one device, the other device must have netserver running.
To start netserver:

::

    netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure
bidirectional TCP performance for 60 seconds and report CPU utilization. The -k option is used in
the client commands to print each selected statistic on its own line, and -j is used to gather
additional timing measurements during the test.

::

    #!/bin/bash
    for i in 1
    do
       netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
          -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

       netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
          -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
    done

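Because -k emits each statistic as a ``KEY=VALUE`` line, the results are easy to post-process. A minimal sketch; the sample output below is illustrative, not a real measurement, and ``get_stat`` is our helper name:

```shell
#!/bin/sh
# Extract one statistic from netperf -k (KEY=VALUE) output.
# Illustrative sample of what the -k selector above prints;
# the numbers here are made up, not measured.
sample='DIRECTION=Send
THROUGHPUT=941.25
MEAN_LATENCY=102.31
LOCAL_CPU_UTIL=45.10'

get_stat() {
    key=$1
    printf '%s\n' "$sample" | awk -F= -v k="$key" '$1 == k { print $2 }'
}

echo "Throughput (Mbits/sec): $(get_stat THROUGHPUT)"
echo "Local CPU %:            $(get_stat LOCAL_CPU_UTIL)"
```

In a real run, pipe the netperf output itself into the awk filter instead of the canned sample.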
Running the following commands will trigger netperf clients to measure UDP burst performance for
60 seconds at various burst/datagram sizes and report CPU utilization.

- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.

::

    netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
       -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.

::

    netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
       -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

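The egress command above is typically repeated for each datagram size used in the tables that follow. A dry-run sketch that only prints the invocations it would make; the tester address and burst size are placeholders, and ``sweep_udp_egress`` is our name:

```shell
#!/bin/sh
# Print (rather than run) one UDP egress netperf invocation per
# datagram size used in the tables below. Replace echo with the real
# command, and set TESTER_IP/BURST for your setup.
TESTER_IP="192.168.0.1"   # placeholder tester address
BURST=425                 # burst_size found via the RFC-2544 0%-loss search
STATS=DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

sweep_udp_egress() {
    for size in 64 128 256 1024 1472; do
        echo "netperf -H $TESTER_IP -j -c -l 60 -t UDP_STREAM" \
             "-b $BURST -w 10 -- -m $size -k $STATS"
    done
}

sweep_udp_egress
```

In practice the burst size is re-derived per datagram size, since the 0%-loss rate differs at each size.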

CPSW/CPSW2g/CPSW3g Ethernet Driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- CPSW2g: AM65x, J7200, J721e
- CPSW3g: AM64x

.. rubric:: TCP Bidirectional Throughput
   :name: CPSW2g-tcp-bidirectional-throughput

.. csv-table:: CPSW2g TCP Bidirectional Throughput
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1057.82 (min 989.78, max 1130.34)","97.00 (min 78.90, max 99.90)"

.. csv-table:: CPSW2g UDP Egress Throughput (0% loss)
   :header: "UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "64","39.82","77.00","89.91"
   "128","75.5","74.00","89.57"
   "256","148.51","73.00","88.11"
   "1024","575.58","70.00","91.17"
   "1472","583.88","48.00","84.69"

.. csv-table:: CPSW2g UDP Ingress Throughput (0% loss)
   :header: "UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "64","2.41","5.00","0.14"
   "128","4.81","5.00","0.55"
   "256","10.85","5.00","0.3"
   "1024","43.42","5.00","0.15"
   "1472","62.41","5.00","3.28"

ICSSG Ethernet Driver
^^^^^^^^^^^^^^^^^^^^^

.. rubric:: TCP Bidirectional Throughput
   :name: tcp-bidirectional-throughput

.. csv-table:: ICSSG TCP Bidirectional Throughput
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","790.93 (min 354.76, max 1107.29)","91.56 (min 77.94, max 99.77)"

.. rubric:: TCP Bidirectional Throughput with Interrupt Pacing
   :name: ICSSG-tcp-bidirectional-throughput-interrupt-pacing

.. csv-table:: ICSSG TCP Bidirectional Throughput with Interrupt Pacing
   :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","579.53 (min 185.68, max 1028.30)","50.57 (min 20.62, max 78.90)"

.. rubric:: UDP Egress Throughput
   :name: udp-egress-throughput-0-loss

.. csv-table:: ICSSG UDP Egress Throughput (0% loss)
   :header: "Frame Size (bytes)","am64xx-hsevm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"

   "64","18.00","30.77 (min 21.63, max 40.56)","60.13 (min 42.00, max 79.00)","73.74 (min 57.66, max 89.55)"
   "128","82.00","66.04 (min 45.59, max 78.52)","64.67 (min 45.00, max 77.00)","79.53 (min 60.70, max 89.42)"
   "256","210.00","126.02 (min 75.65, max 155.88)","61.50 (min 37.00, max 76.00)","77.46 (min 54.35, max 89.57)"
   "1024","978.00","495.53 (min 21.30, max 583.10)","60.63 (min 3.00, max 71.00)","77.25 (min 6.51, max 91.47)"
   "1472","1472.00","461.76 (min 5.89, max 798.58)","39.38 (min 1.00, max 68.00)","52.42 (min 0.17, max 90.19)"

.. rubric:: UDP Ingress Throughput
   :name: udp-ingress-throughput-0-loss

.. csv-table:: ICSSG UDP Ingress Throughput (0% loss)
   :header: "Frame Size (bytes)","am64xx-hsevm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load %"

   "64","18.00","2.15 (min 2.00, max 2.51)","4.14 (min 4.00, max 5.00)","1.49 (min 0.24, max 6.15)"
   "128","82.00","4.96 (min 3.89, max 6.35)","4.78 (min 4.00, max 6.00)","3.72 (min 0.12, max 7.48)"
   "256","210.00","10.21 (min 9.42, max 11.06)","5.00","2.97 (min 0.29, max 7.62)"
   "1024","978.00","44.54 (min 42.60, max 48.33)","5.25 (min 5.00, max 6.00)","6.07 (min 0.25, max 7.97)"
   "1472","1472.00","96.85 (min 62.41, max 174.28)","8.00 (min 5.00, max 15.00)","10.12 (min 4.00, max 20.47)"

.. rubric:: HSR Mode
   :name: icssg-hsr-mode

.. csv-table:: ICSSG HSR Mode Forwarding
   :header: "Mode","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (SENDER)","am64xx-hsevm: CPU Load % (FORWARDING)","am64xx-hsevm: CPU Load % (RECEIVER)"

   "HSR with HW Offload","420","62.01","0","69.11"
   "HSR with SW Offload","388","65","27.68","70"

|

OSPI Flash Driver
-----------------
