High error rate using LiteON with srsRAN #1391

@arman-maghsoudnia

The issue

We recently purchased LiteON Radio Units and followed the integration instructions to set them up. Despite investing significant time in the process, we continue to experience consistently high error rates, particularly on the uplink (UL) side.

The setup

We are using a Dell Precision 5860 equipped with an Intel(R) Xeon(R) w3-2425 CPU (6 cores in total). An Intel® Ethernet Network Adapter E810-XXVDA2 NIC connects the DU machine to the RU. For synchronization, we do not use the PTP-capable fronthaul switch recommended in the integration documentation (the Falcon-RX switch). Instead, we use the aforementioned NIC as the grandmaster clock, sending PTP packets directly to the RU.

Although we are not entirely certain, we believe this deviation from the recommended setup is not causing issues, as the RU logs show very few “lates” and the packet timing appears correct.
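
As a sanity check on this choice, one can verify that the NIC port actually exposes hardware timestamping and a PTP hardware clock (PHC). A minimal sketch, assuming the interface name ens1f0np0 from our ptp4l logs below:

# Confirm the E810 port supports hardware timestamping and reports a PHC index
ethtool -T ens1f0np0
# List the PTP hardware clocks available on the system
ls /dev/ptp*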

We are running srsRAN Project at commit cdc93a6 and use DPDK for the communication between the DU and RU, following this guide: srsRAN gNB with DPDK
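
For reference, before starting the gNB the fronthaul port has to be bound to a DPDK-compatible driver. A sketch of the binding step, where the hugepage count is an example value rather than our exact setting:

# Reserve 1 GiB hugepages for DPDK (example count)
echo 4 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# Bind the fronthaul VF (PCI address from our setup) to vfio-pci
sudo modprobe vfio-pci
sudo dpdk-devbind.py --bind=vfio-pci 0000:55:01.0
sudo dpdk-devbind.py --status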

The two experiments

To provide context regarding the issue we are facing, I present below the configurations and results from two experiments we conducted.

Experiment 1: 100MHz with 2x1 MIMO

We used iperf to stress-test the network. We first ran the downlink test, followed immediately by the uplink test.
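
The commands were along these lines (a sketch in iperf3 syntax; the UE address, target bitrates, and durations are placeholders, not our exact values):

# Downlink: client on the core-network side pushes UDP traffic to an iperf3 server on the UE
iperf3 -c <UE_IP> -u -b 800M -t 60
# Uplink: same endpoints, with -R reversing the direction so the UE transmits
iperf3 -c <UE_IP> -u -b 100M -t 60 -R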

The high-level results (gNB console output):

sudo ~/srsRAN_Project/build/apps/gnb/gnb -c ~/gnb_liteonDPDK.yaml

--== srsRAN gNB (commit cdc93a6092) ==--

EAL: Detected CPU lcores: 12
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:55:01.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
iavf_configure_queues(): request RXDID[22] in Queue[0]
Initializing the Open Fronthaul Interface for sector#0: ul_compr=[BFP,9], dl_compr=[BFP,9], prach_compr=[BFP,9], prach_cp_enabled=true
Cell pci=1, bw=100 MHz, 2T1R, dl_arfcn=630000 (n78), dl_freq=3450 MHz, dl_ssb_arfcn=627264, ul_freq=3450 MHz

N2: Connection to AMF on 10.43.99.137:38412 completed
==== gNB started ===
Type <h> to view help
srsLog error - The backend queue size is about to reach its maximum capacity of 8192 elements, new log entries will get discarded.
Consider increasing the queue capacity.

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  15 2.0   27   350M  705    0   0%  2.71M |  19.9 -15.0   1   23   165k   31   30  49%      0   156n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.69M |  20.6 -16.2   1   20   220k   50    7  12%      0   251n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.71M |  20.6 -16.7   1   19   225k   50    3   5%      0   251n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.66M |  20.2 -17.0   1   18   215k   50    1   1%      0   271n   38
   1 4601 |  15 2.0   27   824M 1598    2   0%   2.9M |  20.1 -16.9   1   18   215k   50    2   3%      0   272n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.64M |  19.8 -17.3   1   17   212k   50    1   1%      0   267n   38
   1 4601 |  15 2.0   27   825M 1598    2   0%  2.77M |  19.2 -18.0   1   15   210k   50    0   0%      0   269n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.84M |  19.8 -17.5   1   16   210k   50    1   1%      0   278n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.73M |  19.8 -17.5   1   16   209k   50    0   0%      0   276n   38
   1 4601 |  15 2.0   27   826M 1600    0   0%  2.69M |  19.8 -17.5   1   17   210k   50    1   1%      0   277n   38
   1 4601 |  15 2.0   27   769M 1493    0   0%      0 |  19.8 -17.4   1   16   206k   50    0   0%      0   278n   38

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  15 2.0   27    46k   12    0   0%      0 |  19.2 -16.1   1   16  16.4k    4    0   0%      0   282n   38
   1 4601 |  15 2.0   27    28k    8    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   241n   38
   1 4601 |  15 2.0   26    30k    8    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   257n   38
   1 4601 |  15 2.0   27   273k   69    0   0%      0 |  18.5 -15.4   1   14  29.0M  241    0   0%   700k   270n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.3 -15.3   1   14  50.5M  400    0   0%   700k   273n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.2 -15.4   1   15  52.8M  400    0   0%   700k   275n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.3 -15.3   1   16  57.7M  399    1   0%   700k   281n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.2 -15.3   1   16  58.8M  396    4   1%   700k   282n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.6 -15.1   1   16  55.6M  389   11   2%   700k   283n   25
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  19.6 -14.3   1   17  62.9M  396    4   1%   700k   285n   24
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  18.9 -14.7   1   16  58.4M  397    3   0%   700k   288n   24

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  15 2.0   27   410k  100    0   0%      0 |  19.6 -14.5   1   17  61.3M  390   10   2%   700k   291n   21
   1 4601 |  15 2.0   27   221k  100    0   0%      0 |  19.2 -14.7   1   15  52.8M  394    6   1%   700k   293n   24
   1 4601 |  15 2.0   27   158k   65    0   0%      0 |  18.6 -14.9   1   14  25.6M  207    1   0%      0   278n   38
   1 4601 |  15 2.0   26    36k    9    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   243n   38
   1 4601 |  15 2.0   26    40k   11    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   256n   38
^CStopping...
Logfile stored in xxx/gnb.log
DPDK - Closing port '0000:55:01.0', id = '0' ...  Done

As shown in the results, the uplink exhibits consistently low MCS values. When we manually increase the MCS, we observe a significant rise in the error rate, indicating an issue with uplink performance. In our previous experiments using srsRAN with the USRP B210 and USRP X410, we did not encounter this problem.
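
For the manual-MCS test mentioned above, we constrained the scheduler through the cell configuration, roughly as below (a sketch; we believe min_ue_mcs/max_ue_mcs are the relevant srsRAN parameters, please correct us if a different knob is intended):

cell_cfg:
  pusch:
    # Pin the UL MCS by setting both bounds to the value under test
    min_ue_mcs: 20
    max_ue_mcs: 20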

gNB Configuration:

We used the following gNB configuration for this test; the parameters shown correspond specifically to this experiment.
We experimented extensively with the iq_scaling parameter as well as the RU's internal gain-rx and gain-tx settings in an attempt to improve performance. The configuration presented here is the best combination of iq_scaling and gain values we were able to find.
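
(The sweep itself amounts to something like the loop below; a hypothetical automation of what we did, assuming iq_scaling appears exactly once in the YAML and using an arbitrary run duration:)

# Hypothetical sweep over iq_scaling values
for s in 1 2 3 5 8; do
  sed -i "s/^  iq_scaling: .*/  iq_scaling: ${s}/" ~/gnb_liteonDPDK.yaml
  sudo timeout 60 ~/srsRAN_Project/build/apps/gnb/gnb -c ~/gnb_liteonDPDK.yaml
done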

cu_cp:
  amf:
    addr: xxx                    
    bind_addr: xxx                              
    supported_tracking_areas:                       
      - tac: 1
        plmn_list:
          - plmn: "99999"
            tai_slice_support_list:
              - sst: 1
                sd: 0

ru_ofh:
  t1a_max_cp_dl: 350
  t1a_min_cp_dl: 200
  t1a_max_cp_ul: 350
  t1a_min_cp_ul: 200
  t1a_max_up: 300
  t1a_min_up: 0
  ta4_max: 500
  ta4_min: 0
  is_prach_cp_enabled: true
  ignore_ecpri_payload_size: false
  ignore_ecpri_seq_id: true
  compr_method_ul: bfp
  compr_bitwidth_ul: 9
  compr_method_dl: bfp
  compr_bitwidth_dl: 9
  compr_method_prach: bfp
  compr_bitwidth_prach: 9
  enable_ul_static_compr_hdr: true
  enable_dl_static_compr_hdr: true
  iq_scaling: 3
  cells:
  - network_interface: 0000:55:01.0 #0000:04:00.1 #enp4s0f1.564
    vlan_tag_cp: 564
    vlan_tag_up: 564
    ru_mac_addr: xxx
    du_mac_addr: xxx
    prach_port_id: [4, 5]
    dl_port_id: [0, 1]
    ul_port_id: [0]

cell_cfg:
  dl_arfcn: 630000
  band: 78
  channel_bandwidth_MHz: 100
  common_scs: 30
  plmn: "99999"
  tac: 1
  pci: 1
  nof_antennas_dl: 2
  nof_antennas_ul: 1
  prach:
    # prach_config_index: 159
    prach_root_sequence_index: 1
    zero_correlation_zone: 0
    prach_frequency_start: 20
  pdsch:
    mcs_table: qam256
    olla_target_bler: 0.1
  pusch:
    olla_max_snr_offset: 20
  ssb:
    ssb_period: 20
  tdd_ul_dl_cfg:
    dl_ul_tx_period: 10
    nof_dl_slots: 7
    nof_ul_slots: 2
    nof_dl_symbols: 8
    nof_ul_symbols: 0

expert_phy:
  allow_request_on_empty_uplink_slot: true

hal:
  eal_args: "--lcores (0-1)@(0-11) -a 0000:55:01.0"

log:
  filename: /home/xxx/gnb.log
  all_level: debug
  ofh_level: debug
  phy_rx_symbols_filename: /home/xxx/iq.bin

du:
  warn_on_drop: true

pcap:
  mac_enable: false
  mac_filename: /tmp/gnb_mac.pcap
  ngap_enable: false
  ngap_filename: /tmp/gnb_ngap.pcap

metrics:
  autostart_stdout_metrics: true

RU running config and logs:

The LiteON RU provides status information regarding the active configuration as well as the number of late and early frames. The results are shown below:

# show running-config 
Band Width = 100000000
Center Frequency = 3450000000
Compression Bit = 9
Control and User Plane vlan = 564
M Plane vlan = 0
default gateway = 10.101.0.101
dpd mode : Enable
DU MAC Address = 003322330011
phase compensation mode : Enable
RX gain = 14
TX power = 24
subcarrier spacing = 1
rj45_vlan_ip = 10.101.131.61
SFP_vlan_ip = 10.101.131.62
SFP_non_vlan_static_ip = 192.168.0.100
prach eAxC-id port 0, 1, 2, 3 = 0x0004, 0x0005, 0x0006, 0x0007
slotid = 0x00000001
jumboframe = 0x00000001
sync source : PTP
# show pm-data 
1,POWER,2025-10-13T14:02:27Z,2025-10-13T14:02:45Z,o-ran-hardware:O-RU-FPGA,8.7717,9.6000,8.9613,iana-hardware:cpu,8.7717,9.6000,8.9613
13,VOLTAGE,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,0,0.0000,2025-10-13T14:02:32Z,1.1902,2025-10-13T14:02:31Z,1.1902,2025-10-13T14:02:31Z,0.6409,2025-10-13T14:03:00Z,3450000000
1,POWER,2025-10-13T14:02:45Z,2025-10-13T14:03:00Z,o-ran-hardware:O-RU-FPGA,8.7748,9.8125,9.1282,iana-hardware:cpu,8.7748,9.8125,9.1282
1,RX_ON_TIME,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,143453736
2,RX_EARLY,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,2458
3,RX_LATE,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,750
6,RX_TOTAL,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,159351336
7,RX_ON_TIME_C,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,15894298
8,RX_EARLY_C,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,0
9,RX_LATE_C,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,94
1,TX_TOTAL,2025-10-13T14:02:27Z,2025-10-13T14:03:00Z,ru1,74771211

As shown in the results, the number of late and early frames is minimal, suggesting that our PTP synchronization is functioning correctly. However, we are not entirely certain about this conclusion. It is worth noting again that, unlike the recommended setup, we are not using a PTP-capable switch; instead, the NIC itself serves as the grandmaster clock.
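
A further cross-check we are aware of is querying the running ptp4l instance directly with linuxptp's pmc tool. A minimal sketch, using the UDS socket from our configuration:

# Ask the local ptp4l over /var/run/ptp4l for its port state;
# portState should report MASTER for the grandmaster role
sudo pmc -u -b 0 'GET PORT_DATA_SET'
sudo pmc -u -b 0 'GET TIME_STATUS_NP'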

gNB log with debug level for all:

Since the file is quite large, I’ve uploaded it to an external drive:
gnb_log

The IQ samples:

As you know, enabling the phy_rx_symbols_filename option allows saving the IQ samples. Since the resulting file is quite large, I have uploaded it to an external drive:
iq.bin
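
A quick way to eyeball the capture is sketched below; it assumes the dump is raw interleaved 32-bit float I/Q, which should be adjusted if srsRAN uses a different layout for phy_rx_symbols_filename:

python3 - <<'EOF'
import numpy as np
# Assumption: raw interleaved float32 I/Q samples
iq = np.fromfile('/home/xxx/iq.bin', dtype=np.complex64, count=1 << 20)
print('samples read:', iq.size)
print('max |iq|    :', float(np.abs(iq).max()))
print('mean power  :', float((np.abs(iq) ** 2).mean()))
EOF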

The iperf test logs:

iperf_client_side.txt

iperf_server_side.txt

The PTP config & the ptp4l and phc2sys logs:

The logs indicate a minimal timing offset, suggesting that the synchronization is functioning correctly.

The config used:

#
# Copyright 2021-2025 Software Radio Systems Limited
#
# By using this file, you agree to the terms and conditions set
# forth in the LICENSE file which can be found at the top level of
# the distribution.
#

# SRS E2E O-RAN testbed example PTP configuration. It is based on the 
# Telecom G.8275.1 example configuration provided with ptp4l, with some
# modifications to make the LLS-C1 setup work with a wide range of RUs.

[global]
dataset_comparison             G.8275.x
G.8275.defaultDS.localPriority 128
maxStepsRemoved                255
logAnnounceInterval            -3
logSyncInterval                -4
logMinDelayReqInterval         -4
clientOnly                     0
serverOnly                     1
G.8275.portDS.localPriority    128
domainNumber                   24

#
# Transport options.
#
ptp_dst_mac             01:80:C2:00:00:0E
p2p_dst_mac             01:80:C2:00:00:0E
uds_address             /var/run/ptp4l
network_transport       L2
delay_mechanism         E2E
time_stamping           hardware
tsproc_mode             filter
delay_filter            moving_median
delay_filter_length     10
egressLatency           0
ingressLatency          0

#
# Clock description.
#
clockClass              6
clock_type              OC
clockAccuracy           0x20

ptp4l logs:

xxx@xxx:~$ sudo ptp4l -2 -i ens1f0np0 -f ~/srs-linuxptp.cfg -m
ptp4l[387.198]: selected /dev/ptp1 as PTP clock
ptp4l[387.216]: port 1 (ens1f0np0): INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[387.216]: port 0 (/var/run/ptp4l): INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[387.216]: port 0 (/var/run/ptp4lro): INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[387.615]: port 1 (ens1f0np0): LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
ptp4l[387.616]: selected local clock 507c6f.fffe.86f9aa as best master
ptp4l[387.616]: port 1 (ens1f0np0): assuming the grand master role
ptp4l[436.361]: timed out while polling for tx timestamp
ptp4l[436.361]: increasing tx_timestamp_timeout or increasing kworker priority may correct this issue, but a driver bug likely causes it
ptp4l[436.361]: port 1 (ens1f0np0): send sync failed
ptp4l[436.361]: port 1 (ens1f0np0): MASTER to FAULTY on FAULT_DETECTED (FT_UNSPECIFIED)
ptp4l[436.436]: port 1 (ens1f0np0): link down
ptp4l[436.436]: port 1 (ens1f0np0): assuming the grand master role
ptp4l[479.439]: port 1 (ens1f0np0): link up
ptp4l[479.460]: port 1 (ens1f0np0): FAULTY to LISTENING on INIT_COMPLETE
ptp4l[479.890]: port 1 (ens1f0np0): LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
ptp4l[479.890]: port 1 (ens1f0np0): assuming the grand master role
ptp4l[1229.281]: timed out while polling for tx timestamp
ptp4l[1229.281]: increasing tx_timestamp_timeout or increasing kworker priority may correct this issue, but a driver bug likely causes it
ptp4l[1229.281]: port 1 (ens1f0np0): send sync failed
ptp4l[1229.281]: port 1 (ens1f0np0): MASTER to FAULTY on FAULT_DETECTED (FT_UNSPECIFIED)
ptp4l[1229.334]: port 1 (ens1f0np0): link down
ptp4l[1229.335]: port 1 (ens1f0np0): assuming the grand master role
ptp4l[1272.240]: port 1 (ens1f0np0): link up
ptp4l[1272.261]: port 1 (ens1f0np0): FAULTY to LISTENING on INIT_COMPLETE
ptp4l[1272.704]: port 1 (ens1f0np0): LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
ptp4l[1272.705]: port 1 (ens1f0np0): assuming the grand master role

phc2sys logs:

phc2sys[20011.698]: CLOCK_REALTIME phc offset         2 s2 freq   +6971 delay    690
phc2sys[20011.823]: CLOCK_REALTIME phc offset        -2 s2 freq   +6968 delay    686
phc2sys[20011.948]: CLOCK_REALTIME phc offset         8 s2 freq   +6977 delay    678
phc2sys[20012.073]: CLOCK_REALTIME phc offset         1 s2 freq   +6973 delay    680
phc2sys[20012.199]: CLOCK_REALTIME phc offset        -9 s2 freq   +6963 delay    681
phc2sys[20012.324]: CLOCK_REALTIME phc offset        -4 s2 freq   +6965 delay    695
phc2sys[20012.449]: CLOCK_REALTIME phc offset        -9 s2 freq   +6959 delay    690
phc2sys[20012.574]: CLOCK_REALTIME phc offset        -2 s2 freq   +6963 delay    691
phc2sys[20012.700]: CLOCK_REALTIME phc offset         7 s2 freq   +6972 delay    588
phc2sys[20012.825]: CLOCK_REALTIME phc offset        -7 s2 freq   +6960 delay    698
phc2sys[20012.950]: CLOCK_REALTIME phc offset        -4 s2 freq   +6961 delay    693
phc2sys[20013.076]: CLOCK_REALTIME phc offset        -5 s2 freq   +6959 delay    700
phc2sys[20013.201]: CLOCK_REALTIME phc offset         1 s2 freq   +6963 delay    702
phc2sys[20013.326]: CLOCK_REALTIME phc offset        -9 s2 freq   +6953 delay    693
phc2sys[20013.451]: CLOCK_REALTIME phc offset        -4 s2 freq   +6956 delay    701
phc2sys[20013.577]: CLOCK_REALTIME phc offset         3 s2 freq   +6961 delay    701
phc2sys[20013.702]: CLOCK_REALTIME phc offset        -1 s2 freq   +6958 delay    701
phc2sys[20013.827]: CLOCK_REALTIME phc offset         1 s2 freq   +6960 delay    707
phc2sys[20013.953]: CLOCK_REALTIME phc offset        -2 s2 freq   +6957 delay    698
phc2sys[20014.078]: CLOCK_REALTIME phc offset         6 s2 freq   +6965 delay    688
phc2sys[20014.203]: CLOCK_REALTIME phc offset         6 s2 freq   +6967 delay    693
phc2sys[20014.328]: CLOCK_REALTIME phc offset        18 s2 freq   +6980 delay    599
phc2sys[20014.454]: CLOCK_REALTIME phc offset         6 s2 freq   +6974 delay    693
phc2sys[20014.579]: CLOCK_REALTIME phc offset         8 s2 freq   +6978 delay    693
phc2sys[20014.704]: CLOCK_REALTIME phc offset         4 s2 freq   +6976 delay    696
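
For completeness, phc2sys was launched along these lines (an assumed invocation, reconstructed to match the log format and the roughly 8 Hz update rate above; the exact flags may differ):

# Discipline CLOCK_REALTIME from the NIC PHC (source clock and rate assumed)
sudo phc2sys -s /dev/ptp1 -c CLOCK_REALTIME -w -R 8 -m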

Experiment 2: 100MHz with 4x4 MIMO

This test is similar to Experiment 1, except that we increased the MIMO configuration to 4×4.

The high-level results (gNB console output):

xxx@xxx:~$ sudo ~/srsRAN_Project/build/apps/gnb/gnb -c ~/gnb_liteonDPDK.yaml

--== srsRAN gNB (commit cdc93a6092) ==--

EAL: Detected CPU lcores: 12
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:55:01.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
iavf_configure_queues(): request RXDID[22] in Queue[0]
Initializing the Open Fronthaul Interface for sector#0: ul_compr=[BFP,9], dl_compr=[BFP,9], prach_compr=[BFP,9], prach_cp_enabled=true
Cell pci=1, bw=100 MHz, 4T1R, dl_arfcn=630000 (n78), dl_freq=3450 MHz, dl_ssb_arfcn=627264, ul_freq=3450 MHz

N2: Connection to AMF on 10.43.99.137:38412 completed
==== gNB started ===
Type <h> to view help

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  14 3.5   15   751k   10    0   0%      0 |  20.0 -15.7   1   25  53.8k    6    4  40%      0   160n   27
   1 4601 |  15 2.0    0      0    0    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   262n   27
srsLog error - The backend queue size is about to reach its maximum capacity of 8192 elements, new log entries will get discarded.
Consider increasing the queue capacity.
   1 4601 |  15 2.2   23   143M  239    1   0%   376k |  19.2 -16.3   1   22  52.6k   12   12  50%      0   274n   38
   1 4601 |  12 4.0   21   946M 1437  113   7%  1.57M |  20.0 -16.6   1   20   218k   50   14  21%      0   274n   38
   1 4601 |  12 4.0   22   954M 1409  141   9%   177k |  20.1 -16.9   1   18   221k   50    2   3%      0   278n   38
   1 4601 |  12 4.0   22   937M 1397  153   9%   1.8M |  20.1 -16.9   1   19   221k   50    0   0%      0   275n   38
   1 4601 |  13 4.0   22   956M 1394  156  10%   394k |  20.2 -16.8   1   19   224k   50    1   1%      0   277n   38
   1 4601 |  13 4.0   23   953M 1399  151   9%  1.03M |  20.3 -16.8   1   19   220k   50    1   1%      0   277n   38
   1 4601 |  13 4.0   23   960M 1404  146   9%   778k |  20.3 -16.8   1   19   221k   50    3   5%      0   278n   38
   1 4601 |  12 4.0   22   967M 1399  151   9%   344k |  20.3 -16.8   1   18   220k   50    1   1%      0   268n   38
   1 4601 |  13 4.0   23   984M 1405  145   9%   174k |  20.3 -16.9   1   18   217k   50    0   0%      0   280n   38

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  13 4.0   22   983M 1402  148   9%   175k |  20.2 -16.9   1   18   218k   50    0   0%      0   281n   38
   1 4601 |  13 4.0   23   973M 1420  130   8%   173k |  20.1 -17.0   1   18   218k   50    0   0%      0   283n   38
   1 4601 |  12 4.0   22   952M 1402  148   9%   172k |  20.0 -17.1   1   18   217k   50    1   1%      0   284n   38
   1 4601 |  13 4.0   23   955M 1370  180  11%   175k |  20.0 -17.1   1   18   215k   50    0   0%      0   284n   38
   1 4601 |  13 4.0   23   966M 1398  152   9%   173k |  20.1 -16.9   1   18   217k   50    1   1%      0   286n   38
   1 4601 |  13 4.0   23   880M 1312  137   9%      0 |  20.0 -17.0   1   18   212k   50    0   0%      0   277n   38
   1 4601 |  15 2.4   25    53k   12    0   0%      0 |  18.6 -16.2   1   15  12.4k    3    0   0%      0   264n   38
   1 4601 |  15 2.0   26    38k   10    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   240n   38
   1 4601 |  15 2.0   27    12k    3    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   296n   38
   1 4601 |  15 2.0   27   322k   81    0   0%      0 |  16.1 -15.7   1   11  25.6M  279    0   0%   700k   277n   25
   1 4601 |  15 2.0   27   410k  100    4   3%      0 |  16.5 -15.9   1   12  43.7M  400    0   0%   700k   275n   25

          |--------------------DL---------------------|-------------------------UL----------------------------------
 pci rnti | cqi  ri  mcs  brate   ok  nok  (%)  dl_bs | pusch  rsrp  ri  mcs  brate   ok  nok  (%)    bsr     ta  phr
   1 4601 |  15 2.0   27   410k  100    4   3%      0 |  16.7 -15.6   1   13  46.0M  400    0   0%   700k   279n   25
   1 4601 |  15 2.0   27   410k  100    4   3%      0 |  16.3 -15.8   1   13  46.1M  400    0   0%   700k   281n   25
   1 4601 |  15 2.0   27   410k  100    3   2%      0 |  16.3 -15.8   1   14  48.2M  400    0   0%   700k   285n   25
   1 4601 |  15 2.0   27   410k  100    2   1%      0 |  16.3 -15.8   1   14  49.2M  398    2   0%   700k   287n   25
   1 4601 |  15 2.0   27   410k  100    2   1%      0 |  16.3 -15.8   1   14  50.6M  400    0   0%   700k   290n   25
   1 4601 |  15 2.0   27   410k  100    5   4%      0 |  17.5 -15.2   1   16  56.4M  387   13   3%   700k   279n   24
   1 4601 |  15 2.0   27   410k  100    5   4%      0 |  17.6 -14.8   1   16  57.5M  399    1   0%   700k   264n   24
   1 4601 |  15 2.0   27   410k  100    6   5%      0 |  17.0 -15.4   1   15  53.2M  397    3   0%   700k   266n   24
   1 4601 |  15 2.0   27   192k   48    2   4%      0 |  17.4 -14.9   1   16  21.9M  159    7   4%      0   268n   38
   1 4601 |  15 2.0   27   8.2k    2    0   0%      0 |   n/a   n/a   1    0      0    0    0   0%      0   265n   38
^CStopping...
Logfile stored in /home/xxx/gnb.log
DPDK - Closing port '0000:55:01.0', id = '0' ...  Done

As shown in the results, even the downlink now exhibits issues: the MCS values are lower than expected, and the retransmission rate is noticeably high. The uplink performance remains poor despite the larger MIMO configuration.

gNB Configuration:

gnb_liteonDPDK.yaml

RU running config and logs:

ru_runnings.txt

gNB log with debug level for all:

https://ftpsens.epfl.ch/LiteOnIssueDataShare/Experiment_2/gnb.log

The IQ samples:

https://ftpsens.epfl.ch/LiteOnIssueDataShare/Experiment_2/iq.bin

The iperf test logs:

iperf_client_side.txt

iperf_server_side.txt

The PTP config & the ptp4l and phc2sys logs:

srs-linuxptp.txt

ptp4l_log.txt

phc2sys_log.txt
