diff --git a/source/devices/AM62DX/linux/Linux_Performance_Guide.rst b/source/devices/AM62DX/linux/Linux_Performance_Guide.rst
index 7d132d1ce..3f97a5a7b 100644
--- a/source/devices/AM62DX/linux/Linux_Performance_Guide.rst
+++ b/source/devices/AM62DX/linux/Linux_Performance_Guide.rst
@@ -352,6 +352,129 @@ Boot time numbers [avg, min, max] are measured from "Starting kernel" to Linux p
 |
+Ethernet
+-----------------
+Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html).
+Test procedures were modeled after those defined in RFC-2544
+(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
+and the "tester" used was a Linux PC. To produce consistent results,
+it is recommended to carry out performance tests in a private network and to avoid
+running NFS on the same interface used in the test. In these results,
+CPU utilization was captured as the total percentage used across all cores on the device,
+while running the performance test over one external interface.
+
+UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
+In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
+during different trials of the test, with the goal of finding the highest rate at which
+no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472B datagram:
+
+::
+
+   burst_size = <bandwidth> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
+   burst_size = 500000000 / 8 / 1472 / 100 = 425
+
+   wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)
+
+UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when
+running the netperf test with no bandwidth limit (remove the -b/-w options).
+
+In order to start a netperf client on one device, the other device must have netserver running. 
+To start netserver:
+
+::
+
+   netserver [-p <port>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]
+
+Running the following shell script from the DUT will trigger netperf clients to measure
+bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in
+client commands to summarize selected statistics on their own line, and -j is used to gain
+additional timing measurements during the test.
+
+::
+
+   #!/bin/bash
+   for i in 1
+   do
+      netperf -H <tester IP> -j -c -l 60 -t TCP_STREAM -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+
+      netperf -H <tester IP> -j -c -l 60 -t TCP_MAERTS -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+   done
+
+Running the following commands will trigger netperf clients to measure UDP burst performance for
+60 seconds at various burst/datagram sizes and report CPU utilization.
+
+- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.
+
+::
+
+   netperf -H <tester IP> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
+
+::
+
+   netperf -H <DUT IP> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+
+CPSW/CPSW2g/CPSW3g Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- CPSW2g: AM65x, J7200, J721e, J721S2, J784S4, J742S2
+- CPSW3g: AM64x, AM62x, AM62ax, AM62px, AM62dx
+
+
+.. rubric:: TCP Bidirectional Throughput
+   :name: CPSW2g-tcp-bidirectional-throughput
+
+.. 
csv-table:: CPSW2g TCP Bidirectional Throughput + :header: "Command Used","am62dxx_evm-fs: THROUGHPUT (Mbits/sec)","am62dxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","304.93 (min 301.56, max 307.77)","11.34 (min 11.00, max 11.84)" + +.. rubric:: TCP Bidirectional Throughput Interrupt Pacing + :name: CPSW2g-tcp-bidirectional-throughput-interrupt-pacing + +.. csv-table:: CPSW2g TCP Bidirectional Throughput Interrupt Pacing + :header: "Command Used","am62dxx_evm-fs: THROUGHPUT (Mbits/sec)","am62dxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","305.08 (min 301.65, max 308.15)","11.48 (min 11.33, max 11.63)" + +.. rubric:: UDP Throughput + :name: CPSW2g-udp-throughput-0-loss + +.. csv-table:: CPSW2g UDP Egress Throughput 0 loss + :header: "Frame Size(bytes)","am62dxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62dxx_evm-fs: THROUGHPUT (Mbits/sec)","am62dxx_evm-fs: Packets Per Second (kPPS)","am62dxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","57.87 (min 55.77, max 59.88)","113.00 (min 109.00, max 117.00)","25.67 (min 25.08, max 26.03)" + "128","82.00","113.83 (min 109.02, max 116.77)","111.13 (min 106.00, max 114.00)","25.77 (min 25.07, max 26.02)" + "256","210.00","148.42 (min 63.90, max 160.50)","72.13 (min 31.00, max 78.00)","19.34 (min 6.07, max 21.69)" + "1024","978.00","181.49 (min 181.48, max 181.50)","22.00","7.98 (min 7.88, max 8.14)" + "1518","1472.00","179.37 (min 179.36, max 179.37)","15.00","7.84 (min 7.65, max 7.99)" + +.. 
csv-table:: CPSW2g UDP Ingress Throughput 0 loss + :header: "Frame Size(bytes)","am62dxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62dxx_evm-fs: THROUGHPUT (Mbits/sec)","am62dxx_evm-fs: Packets Per Second (kPPS)","am62dxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","1.86 (min 1.48, max 2.36)","3.63 (min 3.00, max 5.00)","1.53 (min 0.53, max 3.81)" + "128","82.00","4.47 (min 4.40, max 4.71)","4.13 (min 4.00, max 5.00)","1.02 (min 0.76, max 1.65)" + "256","210.00","10.41 (min 10.03, max 10.85)","5.00","2.42 (min 0.94, max 5.27)" + "1024","978.00","43.19 (min 42.60, max 44.24)","5.00","2.08 (min 1.31, max 4.18)" + "1518","1472.00","62.15 (min 61.23, max 64.77)","5.11 (min 5.00, max 6.00)","2.90 (min 1.80, max 4.28)" + +.. csv-table:: CPSW2g UDP Ingress Throughput possible loss + :header: "Frame Size(bytes)","am62dxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62dxx_evm-fs: THROUGHPUT (Mbits/sec)","am62dxx_evm-fs: Packets Per Second (kPPS)","am62dxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)","am62dxx_evm-fs: Packet Loss %" + + "64","18.00","82.68 (min 69.85, max 95.35)","161.50 (min 136.00, max 186.00)","40.23 (min 38.26, max 42.06)","31.04 (min 0.18, max 62.30)" + "128","82.00","174.59 (min 130.93, max 188.67)","170.50 (min 128.00, max 184.00)","41.26 (min 39.46, max 42.34)","44.65 (min 0.39, max 61.31)" + "256","210.00","320.34 (min 259.61, max 369.04)","156.50 (min 127.00, max 180.00)","40.94 (min 38.60, max 42.78)","24.24 (min 0.37, max 49.24)" + "1024","978.00","876.98 (min 837.06, max 913.30)","106.86 (min 102.00, max 111.00)","40.28 (min 39.04, max 41.61)","0.61 (min 0.18, max 1.17)" + "1518","1472.00","892.63 (min 786.50, max 934.06)","75.78 (min 67.00, max 79.00)","38.92 (min 34.39, max 40.76)","1.04 (min 0.25, max 2.13)" + +| + USB Driver ---------- diff --git a/source/devices/AM62LX/linux/Linux_Performance_Guide.rst b/source/devices/AM62LX/linux/Linux_Performance_Guide.rst index 8f68dec10..40b659eec 100644 --- 
a/source/devices/AM62LX/linux/Linux_Performance_Guide.rst
+++ b/source/devices/AM62LX/linux/Linux_Performance_Guide.rst
@@ -411,6 +411,126 @@ ALSA SoC Audio Driver
 |
+Ethernet
+-----------------
+Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html).
+Test procedures were modeled after those defined in RFC-2544
+(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
+and the "tester" used was a Linux PC. To produce consistent results,
+it is recommended to carry out performance tests in a private network and to avoid
+running NFS on the same interface used in the test. In these results,
+CPU utilization was captured as the total percentage used across all cores on the device,
+while running the performance test over one external interface.
+
+UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
+In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
+during different trials of the test, with the goal of finding the highest rate at which
+no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472B datagram:
+
+::
+
+   burst_size = <bandwidth> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
+   burst_size = 500000000 / 8 / 1472 / 100 = 425
+
+   wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)
+
+UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when
+running the netperf test with no bandwidth limit (remove the -b/-w options).
+
+In order to start a netperf client on one device, the other device must have netserver running.
+To start netserver:
+
+::
+
+   netserver [-p <port>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]
+
+Running the following shell script from the DUT will trigger netperf clients to measure
+bidirectional TCP performance for 60 seconds and report CPU utilization. 
Parameter -k is used in
+client commands to summarize selected statistics on their own line, and -j is used to gain
+additional timing measurements during the test.
+
+::
+
+   #!/bin/bash
+   for i in 1
+   do
+      netperf -H <tester IP> -j -c -l 60 -t TCP_STREAM -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+
+      netperf -H <tester IP> -j -c -l 60 -t TCP_MAERTS -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+   done
+
+Running the following commands will trigger netperf clients to measure UDP burst performance for
+60 seconds at various burst/datagram sizes and report CPU utilization.
+
+- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.
+
+::
+
+   netperf -H <tester IP> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
+
+::
+
+   netperf -H <DUT IP> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+
+CPSW/CPSW2g/CPSW3g Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- CPSW2g: AM65x, J7200, J721e, J721S2, J784S4, J742S2
+- CPSW3g: AM64x, AM62x, AM62ax, AM62px
+
+.. rubric:: TCP Bidirectional Throughput
+   :name: CPSW2g-tcp-bidirectional-throughput
+
+.. csv-table:: CPSW2g TCP Bidirectional Throughput
+   :header: "Command Used","am62lxx_evm-fs: THROUGHPUT (Mbits/sec)","am62lxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)"
+
+   "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1250.65 (min 1162.63, max 1320.05)","99.45 (min 98.50, max 99.91)"
+
+.. rubric:: TCP Bidirectional Throughput Interrupt Pacing
+   :name: CPSW2g-tcp-bidirectional-throughput-interrupt-pacing
+
+.. 
csv-table:: CPSW2g TCP Bidirectional Throughput Interrupt Pacing + :header: "Command Used","am62lxx_evm-fs: THROUGHPUT (Mbits/sec)","am62lxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1276.53 (min 1096.32, max 1414.15)","98.53 (min 97.46, max 99.97)" + +.. rubric:: UDP Throughput + :name: CPSW2g-udp-throughput-0-loss + +.. csv-table:: CPSW2g UDP Egress Throughput 0 loss + :header: "Frame Size(bytes)","am62lxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62lxx_evm-fs: THROUGHPUT (Mbits/sec)","am62lxx_evm-fs: Packets Per Second (kPPS)","am62lxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","48.02 (min 45.73, max 49.33)","93.57 (min 89.00, max 96.00)","81.68 (min 79.75, max 83.43)" + "128","82.00","91.45 (min 88.25, max 95.00)","89.29 (min 86.00, max 93.00)","76.38 (min 50.57, max 81.56)" + "256","210.00","173.95 (min 160.08, max 184.15)","85.00 (min 78.00, max 90.00)","76.66 (min 59.01, max 80.28)" + "1024","978.00","470.51 (min 73.73, max 704.77)","57.50 (min 9.00, max 86.00)","56.11 (min 7.01, max 79.70)" + "1518","1472.00","666.11 (min 646.32, max 702.83)","54.71 (min 53.00, max 58.00)","72.76 (min 71.76, max 74.10)" + +.. 
csv-table:: CPSW2g UDP Ingress Throughput 0 loss + :header: "Frame Size(bytes)","am62lxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62lxx_evm-fs: THROUGHPUT (Mbits/sec)","am62lxx_evm-fs: Packets Per Second (kPPS)","am62lxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","2.28 (min 2.25, max 2.36)","4.14 (min 4.00, max 5.00)","2.97 (min 1.94, max 6.96)" + "128","82.00","4.96 (min 4.40, max 5.43)","4.86 (min 4.00, max 5.00)","3.14 (min 2.03, max 4.94)" + "256","210.00","9.04 (min 1.02, max 10.85)","4.17 (min 0.00, max 5.00)","3.31 (min 0.98, max 6.33)" + "1024","978.00","38.40 (min 6.55, max 43.42)","4.50 (min 1.00, max 5.00)","4.55 (min 1.23, max 8.23)" + "1518","1472.00","52.60 (min 4.71, max 62.41)","4.17 (min 0.00, max 5.00)","5.17 (min 0.70, max 9.26)" + +.. csv-table:: CPSW2g UDP Ingress Throughput possible loss + :header: "Frame Size(bytes)","am62lxx_evm-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62lxx_evm-fs: THROUGHPUT (Mbits/sec)","am62lxx_evm-fs: Packets Per Second (kPPS)","am62lxx_evm-fs: CPU Load % (LOCAL_CPU_UTIL)","am62lxx_evm-fs: Packet Loss %" + + "64","18.00","72.96 (min 69.39, max 76.56)","142.57 (min 136.00, max 150.00)","82.64 (min 81.65, max 85.19)","72.14 (min 61.99, max 82.53)" + "128","82.00","144.83 (min 137.49, max 154.44)","141.43 (min 134.00, max 151.00)","85.13 (min 83.80, max 86.33)","68.86 (min 55.17, max 78.22)" + "256","210.00","280.59 (min 262.68, max 303.33)","137.00 (min 128.00, max 148.00)","85.01 (min 82.70, max 87.15)","49.21 (min 31.44, max 66.24)" + "1024","978.00","809.96 (min 576.67, max 890.96)","99.00 (min 70.00, max 109.00)","88.52 (min 84.69, max 92.80)","7.02 (min 4.65, max 10.64)" + "1518","1472.00","787.28 (min 704.86, max 853.41)","66.83 (min 60.00, max 72.00)","82.25 (min 73.30, max 87.11)","6.83 (min 2.70, max 10.83)" + Linux OSPI Flash Driver ----------------------- diff --git a/source/devices/AM62PX/linux/Linux_Performance_Guide.rst 
b/source/devices/AM62PX/linux/Linux_Performance_Guide.rst
index d890bb499..4824405aa 100644
--- a/source/devices/AM62PX/linux/Linux_Performance_Guide.rst
+++ b/source/devices/AM62PX/linux/Linux_Performance_Guide.rst
@@ -436,6 +436,110 @@ Run Glmark2 and capture performance reported (Score). All display outputs (HDMI,
 |
+Ethernet
+-----------------
+Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html).
+Test procedures were modeled after those defined in RFC-2544
+(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
+and the "tester" used was a Linux PC. To produce consistent results,
+it is recommended to carry out performance tests in a private network and to avoid
+running NFS on the same interface used in the test. In these results,
+CPU utilization was captured as the total percentage used across all cores on the device,
+while running the performance test over one external interface.
+
+UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
+In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
+during different trials of the test, with the goal of finding the highest rate at which
+no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472B datagram:
+
+::
+
+   burst_size = <bandwidth> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
+   burst_size = 500000000 / 8 / 1472 / 100 = 425
+
+   wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)
+
+UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when
+running the netperf test with no bandwidth limit (remove the -b/-w options).
+
+In order to start a netperf client on one device, the other device must have netserver running. 
+To start netserver:
+
+::
+
+   netserver [-p <port>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]
+
+Running the following shell script from the DUT will trigger netperf clients to measure
+bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in
+client commands to summarize selected statistics on their own line, and -j is used to gain
+additional timing measurements during the test.
+
+::
+
+   #!/bin/bash
+   for i in 1
+   do
+      netperf -H <tester IP> -j -c -l 60 -t TCP_STREAM -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+
+      netperf -H <tester IP> -j -c -l 60 -t TCP_MAERTS -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+   done
+
+Running the following commands will trigger netperf clients to measure UDP burst performance for
+60 seconds at various burst/datagram sizes and report CPU utilization.
+
+- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.
+
+::
+
+   netperf -H <tester IP> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
+
+::
+
+   netperf -H <DUT IP> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+
+CPSW/CPSW2g/CPSW3g Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- CPSW2g: AM65x, J7200, J721e, J721S2, J784S4, J742S2
+- CPSW3g: AM64x, AM62x, AM62ax, AM62px
+
+.. rubric:: TCP Bidirectional Throughput
+   :name: CPSW2g-tcp-bidirectional-throughput
+
+.. 
csv-table:: CPSW2g TCP Bidirectional Throughput + :header: "Command Used","am62pxx_sk-fs: THROUGHPUT (Mbits/sec)","am62pxx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1778.39 (min 1503.06, max 1857.97)","63.69 (min 60.80, max 65.40)" + +.. rubric:: TCP Bidirectional Throughput Interrupt Pacing + :name: CPSW2g-tcp-bidirectional-throughput-interrupt-pacing + +.. csv-table:: CPSW2g TCP Bidirectional Throughput Interrupt Pacing + :header: "Command Used","am62pxx_sk-fs: THROUGHPUT (Mbits/sec)","am62pxx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1789.08 (min 1612.03, max 1873.08)","35.80 (min 27.35, max 38.76)" + +.. rubric:: UDP Throughput + :name: CPSW2g-udp-throughput-0-loss + +.. csv-table:: CPSW2g UDP Egress Throughput 0 loss + :header: "Frame Size(bytes)","am62pxx_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62pxx_sk-fs: THROUGHPUT (Mbits/sec)","am62pxx_sk-fs: Packets Per Second (kPPS)","am62pxx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","54.23 (min 53.64, max 55.42)","105.88 (min 105.00, max 108.00)","39.59 (min 39.25, max 40.20)" + "128","82.00","107.70 (min 106.76, max 110.02)","105.00 (min 104.00, max 107.00)","39.41 (min 39.06, max 39.88)" + "256","210.00","214.09 (min 211.33, max 217.17)","104.50 (min 103.00, max 106.00)","39.31 (min 39.08, max 39.69)" + "1024","978.00","836.06 (min 813.57, max 847.48)","101.88 (min 99.00, max 103.00)","39.65 (min 38.57, max 40.42)" + "1518","1472.00","838.69 (min 826.64, max 852.20)","69.13 (min 68.00, max 70.00)","37.17 (min 36.51, max 38.15)" + +| + Linux OSPI Flash Driver ----------------------- diff --git a/source/devices/AM62X/linux/Linux_Performance_Guide.rst b/source/devices/AM62X/linux/Linux_Performance_Guide.rst index 174b1ff90..70afad86a 100644 --- 
a/source/devices/AM62X/linux/Linux_Performance_Guide.rst
+++ b/source/devices/AM62X/linux/Linux_Performance_Guide.rst
@@ -440,6 +440,129 @@ Run Glmark2 and capture performance reported (Score). All display outputs (HDMI,
 |
+
+Ethernet
+-----------------
+Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html).
+Test procedures were modeled after those defined in RFC-2544
+(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
+and the "tester" used was a Linux PC. To produce consistent results,
+it is recommended to carry out performance tests in a private network and to avoid
+running NFS on the same interface used in the test. In these results,
+CPU utilization was captured as the total percentage used across all cores on the device,
+while running the performance test over one external interface.
+
+UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
+In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
+during different trials of the test, with the goal of finding the highest rate at which
+no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472B datagram:
+
+::
+
+   burst_size = <bandwidth> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
+   burst_size = 500000000 / 8 / 1472 / 100 = 425
+
+   wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)
+
+UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when
+running the netperf test with no bandwidth limit (remove the -b/-w options).
+
+In order to start a netperf client on one device, the other device must have netserver running. 
+To start netserver:
+
+::
+
+   netserver [-p <port>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]
+
+Running the following shell script from the DUT will trigger netperf clients to measure
+bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in
+client commands to summarize selected statistics on their own line, and -j is used to gain
+additional timing measurements during the test.
+
+::
+
+   #!/bin/bash
+   for i in 1
+   do
+      netperf -H <tester IP> -j -c -l 60 -t TCP_STREAM -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+
+      netperf -H <tester IP> -j -c -l 60 -t TCP_MAERTS -- \
+      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+   done
+
+Running the following commands will trigger netperf clients to measure UDP burst performance for
+60 seconds at various burst/datagram sizes and report CPU utilization.
+
+- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.
+
+::
+
+   netperf -H <tester IP> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
+
+::
+
+   netperf -H <DUT IP> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+
+CPSW/CPSW2g/CPSW3g Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- CPSW2g: AM65x, J7200, J721e, J721S2, J784S4, J742S2
+- CPSW3g: AM64x, AM62x, AM62ax, AM62px
+
+.. rubric:: TCP Bidirectional Throughput
+   :name: CPSW2g-tcp-bidirectional-throughput
+
+.. 
csv-table:: CPSW2g TCP Bidirectional Throughput + :header: "Command Used","am62xx_lp_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_lp_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xxsip_sk-fs: THROUGHPUT (Mbits/sec)","am62xxsip_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1698.44 (min 1628.06, max 1800.35)","67.30 (min 64.15, max 72.01)","1783.17 (min 1762.43, max 1798.15)","67.94 (min 65.84, max 69.35)","1658.66 (min 1180.00, max 1849.79)","66.19 (min 55.35, max 71.09)" + +.. rubric:: TCP Bidirectional Throughput Interrupt Pacing + :name: CPSW2g-tcp-bidirectional-throughput-interrupt-pacing + +.. csv-table:: CPSW2g TCP Bidirectional Throughput Interrupt Pacing + :header: "Command Used","am62xx_lp_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_lp_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xxsip_sk-fs: THROUGHPUT (Mbits/sec)","am62xxsip_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1630.25 (min 1484.82, max 1709.62)","59.65 (min 50.43, max 84.96)","1604.35 (min 301.65, max 1837.36)","47.99 (min 14.99, max 55.09)","1840.19 (min 1813.97, max 1870.78)","54.44 (min 50.41, max 57.84)" + +.. rubric:: UDP Throughput + :name: CPSW2g-udp-throughput-0-loss + +.. 
csv-table:: CPSW2g UDP Egress Throughput 0 loss + :header: "Frame Size(bytes)","am62xx_lp_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_lp_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_lp_sk-fs: Packets Per Second (kPPS)","am62xx_lp_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_sk-fs: Packets Per Second (kPPS)","am62xx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xxsip_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xxsip_sk-fs: THROUGHPUT (Mbits/sec)","am62xxsip_sk-fs: Packets Per Second (kPPS)","am62xxsip_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","45.17 (min 42.87, max 45.99)","88.22 (min 84.00, max 90.00)","39.84 (min 25.15, max 63.81)","","42.93 (min 26.73, max 54.34)","83.86 (min 52.00, max 106.00)","30.70 (min 12.22, max 39.15)","","47.46 (min 45.64, max 48.58)","92.75 (min 89.00, max 95.00)","39.02 (min 38.09, max 39.68)" + "128","82.00","87.45 (min 81.17, max 89.93)","85.25 (min 79.00, max 88.00)","44.32 (min 37.82, max 62.92)","","90.95 (min 17.31, max 102.75)","88.70 (min 17.00, max 100.00)","34.13 (min 9.70, max 38.73)","","95.20 (min 89.43, max 97.29)","93.00 (min 87.00, max 95.00)","39.06 (min 38.06, max 39.81)" + "256","210.00","172.10 (min 156.28, max 178.99)","83.89 (min 76.00, max 87.00)","43.40 (min 37.59, max 62.50)","","200.58 (min 195.49, max 213.41)","97.71 (min 95.00, max 104.00)","38.57 (min 38.11, max 39.49)","","185.76 (min 178.48, max 190.50)","90.57 (min 87.00, max 93.00)","38.56 (min 38.29, max 38.82)" + "1024","978.00","610.03 (min 286.71, max 692.59)","74.57 (min 35.00, max 85.00)","37.81 (min 15.50, max 61.34)","","627.48 (min 46.69, max 807.16)","76.78 (min 6.00, max 99.00)","30.94 (min 0.92, max 38.45)","","700.39 (min 683.61, max 736.41)","85.25 (min 83.00, max 90.00)","38.01 (min 37.61, max 38.45)" + "1518","1472.00","530.95 (min 183.19, max 697.98)","43.63 (min 15.00, max 57.00)","35.51 (min 11.05, max 
60.04)","","757.07 (min 611.07, max 823.49)","62.33 (min 50.00, max 68.00)","35.49 (min 32.74, max 36.36)","","692.69 (min 529.45, max 741.46)","57.13 (min 44.00, max 61.00)","35.56 (min 30.07, max 40.87)" + +.. csv-table:: CPSW2g UDP Ingress Throughput 0 loss + :header: "Frame Size(bytes)","am62xx_lp_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_lp_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_lp_sk-fs: Packets Per Second (kPPS)","am62xx_lp_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_sk-fs: Packets Per Second (kPPS)","am62xx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xxsip_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xxsip_sk-fs: THROUGHPUT (Mbits/sec)","am62xxsip_sk-fs: Packets Per Second (kPPS)","am62xxsip_sk-fs: CPU Load % (LOCAL_CPU_UTIL)" + + "64","18.00","2.27 (min 1.38, max 2.66)","4.33 (min 3.00, max 5.00)","1.77 (min 0.74, max 3.49)","","1.52 (min 1.38, max 1.54)","3.00","0.88 (min 0.62, max 1.82)","","1.92 (min 1.38, max 2.71)","3.89 (min 3.00, max 5.00)","1.18 (min 0.58, max 2.47)" + "128","82.00","5.28 (min 5.12, max 5.43)","5.00","2.32 (min 1.08, max 4.15)","","4.51","4.00","1.19 (min 0.85, max 2.88)","","4.73 (min 4.10, max 5.32)","4.57 (min 4.00, max 5.00)","0.99 (min 0.76, max 1.18)" + "256","210.00","10.38 (min 9.83, max 10.85)","5.00","2.01 (min 1.04, max 3.87)","","11.06 (min 9.83, max 16.18)","5.50 (min 5.00, max 8.00)","1.76 (min 1.05, max 3.74)","","10.32 (min 9.20, max 10.85)","4.80 (min 4.00, max 5.00)","1.06 (min 0.98, max 1.15)" + "1024","978.00","43.15 (min 42.60, max 43.42)","5.00","3.05 (min 1.38, max 5.13)","","43.07 (min 42.60, max 44.23)","5.00","2.44 (min 1.48, max 3.70)","","189.01 (min 27.85, max 935.80)","22.83 (min 3.00, max 114.00)","8.78 (min 1.39, max 42.77)" + "1518","1472.00","61.97 (min 61.23, max 62.41)","5.00","3.53 (min 1.95, max 4.71)","","62.18 (min 61.23, max 63.59)","5.00","3.07 (min 1.84, max 
4.74)","","225.77 (min 28.26, max 941.32)","19.00 (min 2.00, max 80.00)","11.77 (min 1.97, max 42.15)"
+
+.. csv-table:: CPSW2g UDP Ingress Throughput possible loss
+    :header: "Frame Size(bytes)","am62xx_lp_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_lp_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_lp_sk-fs: Packets Per Second (kPPS)","am62xx_lp_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_lp_sk-fs: Packet Loss %","am62xx_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xx_sk-fs: THROUGHPUT (Mbits/sec)","am62xx_sk-fs: Packets Per Second (kPPS)","am62xx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xx_sk-fs: Packet Loss %","am62xxsip_sk-fs: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am62xxsip_sk-fs: THROUGHPUT (Mbits/sec)","am62xxsip_sk-fs: Packets Per Second (kPPS)","am62xxsip_sk-fs: CPU Load % (LOCAL_CPU_UTIL)","am62xxsip_sk-fs: Packet Loss %"
+
+    "64","18.00","78.62 (min 69.58, max 88.72)","153.56 (min 136.00, max 173.00)","48.04 (min 39.65, max 69.17)","52.00 (min 0.03, max 80.01)","","96.29 (min 89.66, max 97.42)","188.13 (min 175.00, max 190.00)","42.55 (min 39.69, max 44.43)","47.53 (min 31.90, max 64.49)","","83.77 (min 61.91, max 91.66)","163.56 (min 121.00, max 179.00)","42.08 (min 40.20, max 43.92)","71.27 (min 65.75, max 77.58)"
+    "128","82.00","166.41 (min 141.23, max 171.82)","162.57 (min 138.00, max 168.00)","43.50 (min 40.85, max 45.21)","45.31 (min 0.18, max 62.80)","","192.35 (min 182.52, max 194.35)","187.89 (min 178.00, max 190.00)","40.78 (min 33.34, max 46.88)","48.04 (min 28.69, max 70.54)","","170.66 (min 161.89, max 179.10)","166.71 (min 158.00, max 175.00)","43.01 (min 41.89, max 44.60)","66.48 (min 46.79, max 74.38)"
+    "256","210.00","302.93 (min 274.45, max 334.43)","147.89 (min 134.00, max 163.00)","48.96 (min 40.52, max 69.72)","28.02 (min 0.35, max 61.23)","","365.04 (min 343.99, max 374.14)","178.33 (min 168.00, max 183.00)","43.26 (min 41.74, max 44.53)","47.95 (min 40.06, max 53.04)","","285.42 (min 148.67, max 331.95)","139.40 (min 73.00, max 162.00)","42.81 (min 39.40, max 44.21)","46.03 (min 16.49, max 58.26)"
+    "1024","978.00","815.64 (min 461.33, max 935.09)","99.56 (min 56.00, max 114.00)","44.65 (min 38.28, max 68.37)","0.31 (min 0.07, max 0.81)","","889.38 (min 838.56, max 935.45)","108.43 (min 102.00, max 114.00)","42.69 (min 39.75, max 45.31)","0.43 (min 0.06, max 0.90)","","817.58 (min 732.19, max 908.97)","99.83 (min 89.00, max 111.00)","41.74 (min 40.22, max 43.02)","0.48 (min 0.09, max 0.89)"
+    "1518","1472.00","847.13 (min 748.02, max 945.15)","72.00 (min 64.00, max 80.00)","43.54 (min 35.25, max 68.38)","0.43 (min 0.03, max 1.40)","","881.34 (min 797.89, max 922.82)","74.80 (min 68.00, max 78.00)","40.02 (min 35.44, max 42.18)","0.34 (min 0.16, max 0.49)","","922.03 (min 879.70, max 955.84)","78.29 (min 75.00, max 81.00)","41.81 (min 39.60, max 43.08)","0.41 (min 0.00, max 0.77)"
+
+|
+
 Linux OSPI Flash Driver
 -----------------------
diff --git a/source/devices/AM64X/linux/RT_Linux_Performance_Guide.rst b/source/devices/AM64X/linux/RT_Linux_Performance_Guide.rst
index 8d9e279cf..f8371949a 100644
--- a/source/devices/AM64X/linux/RT_Linux_Performance_Guide.rst
+++ b/source/devices/AM64X/linux/RT_Linux_Performance_Guide.rst
@@ -311,6 +311,160 @@ Boot time numbers [avg, min, max] are measured from "Starting kernel" to Linux p
 |
 
+Ethernet
+-----------------
+Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html).
+Test procedures were modeled after those defined in RFC-2544
+(https://tools.ietf.org/html/rfc2544), where the DUT is the TI device
+and the "tester" used was a Linux PC. To produce consistent results,
+it is recommended to carry out performance tests in a private network and to avoid
+running NFS on the same interface used in the test. In these results,
+CPU utilization was captured as the total percentage used across all cores on the device
+while running the performance test over one external interface.
+
+UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput.
+In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth
+during different trials of the test, with the goal of finding the highest rate at which
+no loss is seen. For example, to limit bandwidth to 500Mbits/sec with a 1472B datagram:
+
+::
+
+    burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size (bytes)> / 100 (seconds -> 10 ms)
+    burst_size = 500000000 / 8 / 1472 / 100 = 425
+
+    wait_time = 10 milliseconds (minimum supported by the Linux PC used for testing)
+
+UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when
+running the netperf test with no bandwidth limit (remove the -b/-w options).
+
+In order to start a netperf client on one device, the other device must have netserver running.
+To start netserver:
+
+::
+
+    netserver [-p <port>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]
+
+Running the following shell script from the DUT will trigger netperf clients to measure
+bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in
+client commands to summarize selected statistics on their own line and -j is used to gain
+additional timing measurements during the test.
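As a cross-check of the burst_size arithmetic above, the calculation can be sketched in a few lines of Python. This is an illustrative helper, not part of netperf; the 500 Mbit/s bandwidth and 1472-byte datagram are the values from the example, and `bursts_per_sec = 100` corresponds to the 10 ms wait_time:

```python
import math

def burst_size(bandwidth_bps: int, datagram_bytes: int, bursts_per_sec: int = 100) -> int:
    """Datagrams per burst needed to reach the target bandwidth.

    bandwidth_bps / 8 gives bytes/sec; dividing by the datagram size gives
    datagrams/sec; dividing by the burst rate (100 bursts/sec for a 10 ms
    wait_time) gives datagrams per burst, rounded up so the target is met.
    """
    return math.ceil(bandwidth_bps / 8 / datagram_bytes / bursts_per_sec)

print(burst_size(500_000_000, 1472))  # -> 425, matching the worked example
```

The resulting value is what would be passed to netperf's -b option for that trial.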
+
+::
+
+    #!/bin/bash
+    for i in 1
+    do
+        netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
+            -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+
+        netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
+            -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
+    done
+
+Running the following commands will trigger netperf clients to measure UDP burst performance for
+60 seconds at various burst/datagram sizes and report CPU utilization.
+
+- For UDP egress tests, run the netperf client from the DUT and start netserver on the tester.
+
+::
+
+    netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+        -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+- For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
+
+::
+
+    netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <datagram size> \
+        -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
+
+
+CPSW/CPSW2g/CPSW3g Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- CPSW2g: AM65x, J7200, J721e
+- CPSW3g: AM64x
+
+.. rubric:: TCP Bidirectional Throughput
+    :name: CPSW2g-tcp-bidirectional-throughput
+
+.. csv-table:: CPSW2g TCP Bidirectional Throughput
+    :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS","1057.82 (min 989.78, max 1130.34)","97.00 (min 78.90, max 99.90)"
+
+.. csv-table:: CPSW2g UDP Egress Throughput 0 loss
+    :header: "UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "64","39.82","77.00","89.91"
+    "128","75.5","74.00","89.57"
+    "256","148.51","73.00","88.11"
+    "1024","575.58","70.00","91.17"
+    "1472","583.88","48.00","84.69"
+
+.. csv-table:: CPSW2g UDP Ingress Throughput 0 loss
+    :header: "UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "64","2.41","5.00","0.14"
+    "128","4.81","5.00","0.55"
+    "256","10.85","5.00","0.3"
+    "1024","43.42","5.00","0.15"
+    "1472","62.41","5.00","3.28"
+
+ICSSG Ethernet Driver
+^^^^^^^^^^^^^^^^^^^^^
+
+.. rubric:: TCP Bidirectional Throughput
+    :name: tcp-bidirectional-throughput
+
+.. csv-table:: ICSSG TCP Bidirectional Throughput
+    :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","790.93 (min 354.76, max 1107.29)","91.56 (min 77.94, max 99.77)"
+
+.. rubric:: TCP Bidirectional Throughput Interrupt Pacing
+    :name: ICSSG-tcp-bidirectional-throughput-interrupt-pacing
+
+.. csv-table:: ICSSG TCP Bidirectional Throughput Interrupt Pacing
+    :header: "Command Used","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.2.1 -j -c -C -l 60 -t TCP_MAERTS","579.53 (min 185.68, max 1028.30)","50.57 (min 20.62, max 78.90)"
+
+.. rubric:: UDP Egress Throughput
+    :name: udp-egress-throughput-0-loss
+
+.. csv-table:: ICSSG UDP Egress Throughput 0 loss
+    :header: "Frame Size(bytes)","am64xx-hsevm: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load % (LOCAL_CPU_UTIL)"
+
+    "64","18.00","30.77 (min 21.63, max 40.56)","60.13 (min 42.00, max 79.00)","73.74 (min 57.66, max 89.55)"
+    "128","82.00","66.04 (min 45.59, max 78.52)","64.67 (min 45.00, max 77.00)","79.53 (min 60.70, max 89.42)"
+    "256","210.00","126.02 (min 75.65, max 155.88)","61.50 (min 37.00, max 76.00)","77.46 (min 54.35, max 89.57)"
+    "1024","978.00","495.53 (min 21.30, max 583.10)","60.63 (min 3.00, max 71.00)","77.25 (min 6.51, max 91.47)"
+    "1472","1472.00","461.76 (min 5.89, max 798.58)","39.38 (min 1.00, max 68.00)","52.42 (min 0.17, max 90.19)"
+
+.. rubric:: UDP Ingress Throughput
+    :name: udp-ingress-throughput-0-loss
+
+.. csv-table:: ICSSG UDP Ingress Throughput 0 loss
+    :header: "Frame Size(bytes)","am64xx-hsevm: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE)","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: Packets Per Second (kPPS)","am64xx-hsevm: CPU Load %"
+
+    "64","18.00","2.15 (min 2.00, max 2.51)","4.14 (min 4.00, max 5.00)","1.49 (min 0.24, max 6.15)"
+    "128","82.00","4.96 (min 3.89, max 6.35)","4.78 (min 4.00, max 6.00)","3.72 (min 0.12, max 7.48)"
+    "256","210.00","10.21 (min 9.42, max 11.06)","5.00","2.97 (min 0.29, max 7.62)"
+    "1024","978.00","44.54 (min 42.60, max 48.33)","5.25 (min 5.00, max 6.00)","6.07 (min 0.25, max 7.97)"
+    "1472","1472.00","96.85 (min 62.41, max 174.28)","8.00 (min 5.00, max 15.00)","10.12 (min 4.00, max 20.47)"
+
+.. rubric:: HSR Mode
+    :name: icssg-hsr-mode
+
+.. csv-table:: ICSSG HSR Mode Forwarding
+    :header: "Mode","am64xx-hsevm: THROUGHPUT (Mbits/sec)","am64xx-hsevm: CPU Load % (SENDER)","am64xx-hsevm: CPU Load % (FORWARDING)","am64xx-hsevm: CPU Load % (RECEIVER)"
+
+    "HSR with HW Offload","420","62.01","0","69.11"
+    "HSR with SW Offload","388","65","27.68","70"
+
+|
+
 OSPI Flash Driver
 -----------------