
Debugging Linux Software


This article describes how to debug user-space applications on Linux on ARC.

Building tool chain

Refer to the README.md file in the toolchain repository to learn how to build the toolchain for the Linux uClibc target. In most cases you would want to run the build-all.sh script from the toolchain repository directory. For example, to build the Linux toolchain for ARC 700 targets:

$ ./build-all.sh --no-elf32 --cpu arc700

To build Linux toolchain for ARC HS targets:

$ ./build-all.sh --no-elf32 --cpu archs

The toolchain will be installed to the ../INSTALL directory (relative to the build-all.sh script).
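
For example, to use the freshly built toolchain from the current shell you might add it to PATH. This is only a sketch, assuming you ran build-all.sh from the toolchain repository root and are still in that directory; depending on the build the compiler driver may be named arc-snps-linux-uclibc-gcc instead of arc-linux-gcc:

$ export PATH=$PWD/../INSTALL/bin:$PATH
$ arc-linux-gcc --version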

Running Linux

Synopsys supports two ways to run Linux on ARC cores: the AXS10x Software Development Platform and the ARC HS VDK:

  • Linux for ARC 770 can be run on the single-core ASIC AXS101
  • Linux for ARC HS can be run on the dual-core FPGA AXS103
  • Linux for ARC HS can be run on single- and dual-core models in the ARC HS Virtual Development Kit.

ARC HS VDK

ARC HS VDK is part of the nSIM PRO distribution. It is a complete development system prototype that includes models for ARC HS (either single or dual core), Ethernet, UART and a touchscreen LCD. ARC HS VDK comes with a prebuilt Linux kernel image and root file system that include several demo applications. This guide provides the essential steps to configure networking in the VDK and to replace its default kernel and root file system with custom ones. For more details refer to the ARC HS VDK User Guide.

AXS101 SDP

Follow this article to build the Linux kernel. You need to specify axs101 as the defconfig. If you are using Buildroot, you can do this in the Kernel configuration section, parameter "Defconfig name" aka BR2_LINUX_KERNEL_DEFCONFIG.
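
In the resulting Buildroot .config this shows up roughly as the following line (a sketch of the expected value, not a complete configuration):

BR2_LINUX_KERNEL_DEFCONFIG="axs101"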

After this, follow this article if you are using OpenOCD or this article if you are using the Ashling GDB Server. Load the <BUILDROOT_OUTPUT>/images/vmlinux file into the target memory as with any baremetal application. Note that if you only want to load vmlinux into the target memory, you can use arc-linux-gdb for this; however, if you are going to debug the Linux kernel itself, then you need a baremetal debugger: the Metaware Debugger or the ARC GNU ELF32 GDB.

After loading the image into memory, start the core.
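
For example, with OpenOCD already running and listening on its default GDB port 3333 (the port is an assumption here; the Ashling GDB Server uses its own port), the load-and-start sequence might look like this:

$ arc-linux-gdb <BUILDROOT_OUTPUT>/images/vmlinux
(gdb) target remote :3333
(gdb) load
(gdb) continue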

To connect to the UART terminal of the Linux system, use the minicom application. Run:

$ sudo minicom -s

That will start minicom in configuration mode. Configure the serial port to use the serial device /dev/ttyUSB0, set bps/par/bits to 115200 8N1, and disable both hardware and software flow control. You can save the setup as the default so you will not need to enter it each time. Choose the "Exit" option in the minicom menu and it will connect to the specified serial device. If you want to use another UART device of the target system, change the serial device path to the desired value.
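
Alternatively, once the settings are known, minicom can be pointed at the device directly from the command line (a sketch, assuming the board's UART shows up as /dev/ttyUSB0):

$ sudo minicom -D /dev/ttyUSB0 -b 115200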

Configuring target system

Configuring networking

By default the target system will not bring up the network device. To bring it up:

# ifconfig eth0 up

If the network to which the board or virtual platform is attached has a working DHCP server, run the DHCP client:

# udhcpc

If there is no DHCP server, then configure networking manually:

# ifconfig eth0 <IP_ADDRESS> netmask <IP_NETMASK>
# route add default gw <NETWORK_GATEWAY> eth0

Here <IP_ADDRESS> is the IP address to assign to ARC Linux, <IP_NETMASK> is the netmask of this network, and <NETWORK_GATEWAY> is the default gateway of your network.
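
For example, on a hypothetical 192.168.1.0/24 network with a gateway at 192.168.1.1:

# ifconfig eth0 192.168.1.100 netmask 255.255.255.0
# route add default gw 192.168.1.1 eth0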

To gain access to the Internet, configure DNS servers. Create /etc/resolv.conf listing the DNS servers. For example:

nameserver 8.8.8.8
nameserver 8.8.4.4

This is not required if you are using DHCP.

Those actions will connect your ARC Linux to the network.

Configuring NFS

To ease the process of delivering the target application to ARC Linux, it is recommended to configure an NFS share and mount it on the ARC Linux system. Refer to this article for details on how to bring up an NFS share on an Ubuntu machine. If you already have a working NFS share, connect to it with the following command:

# mount -t nfs -o nolock,rw <NFS_SERVER_IP>:<NFS_SHARE_PATH> /mnt

The network share will be mounted at /mnt.
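
For example, assuming a hypothetical NFS server at 192.168.1.1 exporting /srv/nfs/arc:

# mount -t nfs -o nolock,rw 192.168.1.1:/srv/nfs/arc /mnt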

Additional services

Another thing that might be useful is network services such as telnet, ftp, etc. First ensure that the desired service is available in the Busybox configuration. Run make menuconfig from the Busybox directory, or make busybox-menuconfig if you are using Buildroot. Make sure that the "inetd" server is enabled. Select the required applets (telnet, ftpd, etc.) and save the configuration. Rebuild Busybox (run make busybox-rebuild if you are using Buildroot).

Then configure the inetd daemon. Refer to the inetd documentation to learn how to do this. In the simplest case, create an /etc/inetd.conf file on the target system with the following contents:

ftp     stream  tcp nowait  root    /usr/sbin/ftpd      ftpd -w /
telnet  stream  tcp nowait  root    /usr/sbin/telnetd   telnetd -i -l /bin/sh

With this configuration inetd will accept connections to the ftpd and telnetd servers on the target system. Other services can be added if required.

Rebuild and update the rootfs and vmlinux. Start the rebuilt system and run inetd to start the inetd daemon:

# inetd
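
To check that the services are reachable, you can then connect from the host, for example (assuming standard telnet and ftp clients are installed on the host):

$ telnet <TARGET_IP>
$ ftp <TARGET_IP>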

Making configuration changes permanent in Buildroot

In most cases it is better to make the changes described in the previous section permanent, so that there is no need to run each command every time the target system starts. Buildroot provides a bare rootfs skeleton that can be altered to automate those configuration commands and other things.

Create a directory for the rootfs overlay that will contain the files you wish to add to the target system. Configure Buildroot to use this overlay: set System configuration -> Root filesystem overlay directories aka BR2_ROOTFS_OVERLAY to the path of the overlay directory.
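
In the Buildroot .config this might look like the following line (the overlay path here is just a hypothetical example):

BR2_ROOTFS_OVERLAY="/home/user/arc-rootfs-overlay"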

Then create any files you want to add to this overlay. For example, if you want to bring up networking when the system boots, add the file etc/init.d/S41eth0 with the commands that bring up eth0, as shown in the sketch below. Add other scripts that start other services (inetd, NFS mount) to etc/init.d as well.
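
A minimal sketch of such a script, assuming DHCP is used (adjust to a static configuration as needed); remember to make the file executable in the overlay:

#!/bin/sh
# etc/init.d/S41eth0 - hypothetical example: bring up eth0 at boot
case "$1" in
  start)
    ifconfig eth0 up
    udhcpc
    ;;
  stop)
    ifconfig eth0 down
    ;;
esac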

Copy inetd.conf to the overlay's etc directory if you are going to use it. Don't forget to start inetd from one of the scripts in etc/init.d.

If you have already built the target system in Buildroot and wish to update its rootfs, remove the <BUILDROOT_OUTPUT>/build/.root file to force Buildroot to copy the skeleton and overlay again. Then run Buildroot:

$ rm -f build/.root
$ make

If you made any changes to the Busybox configuration, then to keep them copy the <BUILDROOT_OUTPUT>/build/busybox-*/.config file to some permanent location, and configure Buildroot to use the new configuration file via Target packages -> BusyBox configuration file to use aka BR2_PACKAGE_BUSYBOX_CONFIG. Reconfigure Busybox with:

$ make busybox-reconfigure

Rebuild rootfs and vmlinux.

Debugging applications with gdbserver

It is assumed that you have copied the target application to the target system one way or another. Run the application on the target with gdbserver:

# gdbserver :49101 <application-to-debug> [application arguments]

The TCP port number can be any port not occupied by other applications. Then run GDB on the host:

$ arc-linux-gdb <application-to-debug>

Then set the sysroot directory path. The sysroot is a "mirror" of the target file system: it contains copies of the applications and shared libraries installed on the target system. The sysroot directory path should be set to allow GDB to step into shared library functions. Note that shared libraries and applications on the target system can be stripped of debug symbols to preserve disk space, while files in the sysroot shouldn't be stripped. In the case of a Buildroot-generated rootfs, the sysroot directory can be found under <BUILDROOT_OUTPUT>/staging.

(gdb) set sysroot <SYSROOT_PATH>

Then connect to the remote gdbserver:

(gdb) target remote <TARGET_IP>:49101

You can find <TARGET_IP> by running ifconfig on the target system. The TCP port must match the one used when starting gdbserver. It is important that the sysroot is set before connecting to the remote target, otherwise GDB might have issues stepping into shared library functions.

Then you can run your debug session as usual. In the simplest case:

(gdb) continue
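
A slightly fuller session might look like this sketch (break at main, step, and inspect; argc is just an example of something to print):

(gdb) break main
(gdb) continue
(gdb) next
(gdb) print argc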

Note that there is a known limitation of gdbserver: it is not safe to debug multiprocess applications with it. The problem is that when a child is forked, it still shares code pages with its parent, therefore software breakpoints set in the parent process might be hit by the child process should it execute the same code path. In this case the child process will crash due to an unexpected breakpoint. This is a generic problem with gdbserver that is not specific to the ARC port of GDB - it can be reproduced with gdb/gdbserver for x86_64.

Debugging applications with native GDB

Starting from GNU Toolchain for ARC release 2014.08 it is possible to build a full GDB to run natively on ARC Linux. Starting from GNU Toolchain for ARC release 2015.06 native GDB is built automatically for the uClibc toolchain (this can be disabled with the --no-native-gdb option). In the GNU Toolchain prebuilt tarballs the native GDB binary can be found in the sysroot directory: arc-snps-linux-uclibc/sysroot/usr/bin/gdb

With native GDB it is possible to debug applications the same way as on the host system, without gdbserver.
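
For example, a session on the target itself might look like this (a sketch; hello is a hypothetical application built with debugging symbols):

# gdb ./hello
(gdb) break main
(gdb) run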

When choosing between gdbserver and native GDB, the following pros and cons should be considered.

Pros of native GDB:

  • Overhead for network communication between GDB and gdbserver is removed, theoretically improving debugging performance.
  • Some features might not be implemented in gdbserver.
  • As described in the gdbserver section, gdbserver cannot be safely used to debug applications that use fork(), therefore native GDB is the debugger of choice for multiprocess applications.
  • There is no need for a second host to perform debugging session, since everything is on the target system.

Cons:

  • Applications on the target system must have debugging symbols (unless you are so hardcore that you don't need them). Debugging symbols, especially in the most verbose case, occupy significant disk space. Whether this matters depends on the type of target hardware: it can usually be ignored for virtual prototypes and is hardly a problem on development systems, but disk space is probably very limited on production systems. A larger rootfs also means more time to load the rootfs into target memory.
  • Debugging symbols not only take noticeable disk space, GDB also reads them intensively, so if the target file system has low performance this might be noticeable.
  • A full GDB on the target requires more computational power than gdbserver. This might offset all of the gains from removing the networking layer.

In general, whether gdbserver or native GDB is better depends highly on the target system properties and developer needs, and it is up to the software developer to decide what works better in their particular case.
