Debugging Linux Software

This article describes how to debug user-space applications on Linux on ARC.

Building the Toolchain

Refer to the README.md file in the toolchain repository to learn how to build the toolchain for the Linux uClibc target. In most cases you would want to run the following from the toolchain repository directory:

$ ./build-all.sh --no-elf32

The toolchain will be installed to the ../INSTALL directory.
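
To make the freshly built tools available in your shell, you may add the installation directory to PATH. For example (the exact path depends on where the toolchain repository was cloned):

$ export PATH=$(readlink -f ../INSTALL/bin):$PATH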

Running Linux

There are two ways supported by Synopsys to run Linux on ARC cores: on a SystemC model based on the nSIM simulator, and on the AXS101 SDP.

SystemC Model

The source code of the SystemC model is supplied as an example with the nSIM distribution.

First you need to build the model. Refer to our wiki article for details.

By default the model already contains a prebuilt Linux image that can be used to run and debug target applications. If you wish to build your own Linux image, refer to this article for details. If you are using Buildroot, make sure that gdbserver is selected in Target packages / Debugging, profiling and benchmarking.
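
In the Buildroot .config file this selection corresponds to lines like the following (the symbol names are an assumption based on common Buildroot naming and may differ between versions):

BR2_PACKAGE_GDB=y
BR2_PACKAGE_GDB_SERVER=y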

AXS101 SDP

Follow this article to build the Linux kernel. You need to specify axs101 as the defconfig. If you are using Buildroot, you can do this in the Kernel configuration section, parameter Defconfig name aka BR2_LINUX_KERNEL_DEFCONFIG.
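
In the Buildroot .config file the result would look roughly like this (a sketch; it assumes the kernel build is enabled in your configuration):

BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_DEFCONFIG="axs101"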

After this, follow this article if you are using OpenOCD, or this article if you are using the Ashling GDB Server. Load the <BUILDROOT_OUTPUT>/images/vmlinux file into the target memory as with any baremetal application. Note that if you only want to load vmlinux into the target memory, you can use arc-linux-gdb for this; however, if you are going to debug the Linux kernel itself, then you need a baremetal debugger: the MetaWare Debugger or ARC GNU ELF32 GDB.

After loading the image into memory, start the core.
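
A minimal session for loading and starting the kernel with arc-linux-gdb might look like this (a sketch that assumes OpenOCD is already running and listening on its default GDB port 3333):

$ arc-linux-gdb <BUILDROOT_OUTPUT>/images/vmlinux
(gdb) target remote :3333
(gdb) load
(gdb) continue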

To connect to the UART terminal of the Linux system, use the minicom application. Run:

$ sudo minicom -s

This will start minicom in configuration mode. Configure the serial port to use the serial device /dev/ttyUSB0, set bps/par/bits to 115200 8N1, and disable both hardware and software flow control. Save the setup as default and exit minicom. Restart minicom without -s and it will connect to the specified serial device. If you want to use another UART device of the target system, change the serial device path to the desired value.
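
Alternatively, minicom can be pointed at the serial device directly from the command line, skipping the interactive configuration (a sketch; flow control settings may still need to be adjusted in the saved configuration):

$ sudo minicom -D /dev/ttyUSB0 -b 115200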

Configuring Target System

Configuring Networking

By default the target system will not bring up the networking device. To do this:

# ifconfig eth0 up

If the network to which the board or virtual platform is attached has a working DHCP server, then you should run the DHCP client:

# udhcpc

If there is no DHCP server, then configure networking manually:

# ifconfig eth0 <IP_ADDRESS> netmask <IP_NETMASK>
# route add default gw <NETWORK_GATEWAY> eth0

Where <IP_ADDRESS> is the IP address to assign to ARC Linux, <IP_NETMASK> is the mask of this network, and <NETWORK_GATEWAY> is the default gateway of your network.
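
For example, with hypothetical values for a typical private network:

# ifconfig eth0 192.168.1.100 netmask 255.255.255.0
# route add default gw 192.168.1.1 eth0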

To gain access to the Internet, configure DNS servers. Create /etc/resolv.conf listing the DNS servers. For example:

nameserver 8.8.8.8
nameserver 8.8.4.4

This is not required if you are using DHCP.

These actions will connect your ARC Linux to the network.

Configuring NFS

To ease the process of delivering the target application to ARC Linux, it is recommended to configure an NFS share and mount it on ARC Linux. Refer to this article for details on how to bring up an NFS share on an Ubuntu machine. If you already have a working NFS share, connect to it with the following command:

# mount -t nfs -o nolock,rw <NFS_SERVER_IP>:<NFS_SHARE_PATH> /mnt

The network share will be mounted at /mnt.
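
For example, assuming an NFS server at 192.168.1.1 that exports /srv/nfs/arc (both values are hypothetical):

# mount -t nfs -o nolock,rw 192.168.1.1:/srv/nfs/arc /mnt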

Additional Services

Another thing that might be useful is network services like telnet, ftp, etc. First you need to ensure that the desired service is available in the Busybox configuration. Run make menuconfig from the Busybox directory, or make busybox-menuconfig if you are using Buildroot. Make sure that the inetd server is enabled. Select the required packages and save the configuration. Rebuild Busybox (run make busybox-rebuild in the case of Buildroot).

Then you need to configure the inetd daemon. Refer to the inetd documentation to learn how to do this. In the simple case you would create an /etc/inetd.conf file on the target system with the following contents:

ftp     stream  tcp nowait  root    /usr/sbin/ftpd      ftpd -w /
telnet  stream  tcp nowait  root    /usr/sbin/telnetd   telnetd -i -l /bin/sh

Thus inetd will allow connections to the ftpd and telnetd servers on the target system. Add other services if required.

Rebuild and update rootfs and vmlinux. Start the rebuilt system and run inetd to start the inetd daemon:

# inetd
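
You can then verify the services from the host, for example:

$ telnet <TARGET_IP>
$ ftp <TARGET_IP>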

Making configuration changes permanent in Buildroot

In most cases it is better to make the changes described in the previous section permanent, so that there is no need to run each command every time the target system starts. Buildroot provides a basic rootfs skeleton that can be altered to automate those configuration commands and other things.

Create a directory for the skeleton overlay that will contain the files you wish to add to the target system. Configure Buildroot to use this overlay: set System configuration -> Root filesystem overlay directories aka BR2_ROOTFS_OVERLAY to the path of the overlay directory.
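
In the Buildroot .config file the result would look roughly like this (the overlay path is an arbitrary example):

BR2_ROOTFS_OVERLAY="/home/user/arc-overlay"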

Then create any files you want to add to this skeleton. For example, if you want to bring up networking when booting the system, add a file etc/init.d/S41eth0 with the command that brings up eth0, as in the sketch below. Also add scripts that start other services (inetd, NFS mounts) to etc/init.d.
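
A minimal etc/init.d/S41eth0 script, assuming a DHCP server is available on the network (remember to make the script executable):

#!/bin/sh
# Bring up the Ethernet interface and obtain an address over DHCP.
ifconfig eth0 up
udhcpc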

Copy inetd.conf to the skeleton's etc directory if you are going to use it. Don't forget to start inetd from one of the scripts in etc/init.d.
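
The overlay directory might then contain the following files (S42inetd being a hypothetical script that starts inetd):

etc/inetd.conf
etc/init.d/S41eth0
etc/init.d/S42inetd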

If you have already built the target system in Buildroot and wish to update its rootfs, then remove the <BUILDROOT_OUTPUT>/build/.root file to force Buildroot to copy the skeleton and overlay. Then run Buildroot:

$ rm -f build/.root
$ make

If you made any changes to the Busybox configuration, then to keep them copy the <BUILDROOT_OUTPUT>/build/busybox-*/.config file to some permanent location, then configure Buildroot to use the new configuration file via Target packages -> BusyBox configuration file to use aka BR2_PACKAGE_BUSYBOX_CONFIG. Reconfigure Busybox with:

$ make busybox-reconfigure

Rebuild rootfs and vmlinux.

Debugging Applications

It is assumed that one way or another you have copied the target application to the target system. Run the application on the target with gdbserver:

# gdbserver :49101 <application-to-debug> [application arguments]

The TCP port number can be any port not occupied by other applications. Then run GDB on the host:

$ arc-linux-gdb <application-to-debug>

Then set the sysroot directory path. The sysroot is a "mirror" of the target file system: it contains copies of the applications and shared libraries installed on the target system. The path to the sysroot directory should be set to allow GDB to step into shared library functions. Note that shared libraries and applications on the target system can be stripped of debug symbols to save disk space, while files in the sysroot should not be stripped. In the case of a Buildroot-generated rootfs the sysroot directory can be found under <BUILDROOT_OUTPUT>/staging.

(gdb) set sysroot <SYSROOT_PATH>

Then connect to the remote gdbserver:

(gdb) target remote <TARGET_IP>:49101

You can find <TARGET_IP> by running ifconfig on the target system. The TCP port must match the one used when starting gdbserver. It is important that sysroot is set before connecting to the remote target, otherwise GDB might have issues stepping into shared library functions.

Then you can run your debug session as usual. In the simplest case:

(gdb) continue
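
A slightly fuller session might set a breakpoint before resuming, using standard GDB commands:

(gdb) break main
(gdb) continue
(gdb) step
(gdb) info locals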

Debugging Applications with Native GDB

Starting with ARC GNU Toolchain release 2014.08 it is possible to build full GDB to run natively on ARC Linux. You can either select GDB in the Target packages section of the Buildroot configuration, or build it as a normal cross-compiled application with ./configure --host=arc-linux-uclibc.

Then you can debug applications the same way you do on the host system, without gdbserver.
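
On the target such a session might look like this (a sketch that assumes GDB is installed on the target and the application was built with debug symbols):

# gdb <application-to-debug>
(gdb) break main
(gdb) run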

When choosing between gdbserver and native GDB please consider pros and cons of native GDB.

Pros:

  • The overhead of network communication between GDB and gdbserver is removed, theoretically improving debugging performance.
  • Some features are not implemented in gdbserver. One example is the "follow-child" mode of the "follow-fork" behaviour.
  • There is no need for a second host to perform a debugging session, since everything is on the target system.

Cons:

  • Applications on the target system must contain debug symbols (unless you are so hardcore that you don't need them). Debug symbols, especially at the most verbose level, occupy significant disk space. Depending on the type of target hardware this may or may not be a concern: it can usually be ignored on virtual prototypes and is hardly a problem on development systems, but disk space is probably very limited on production systems. A larger rootfs also means increased time to load the rootfs into target memory.
  • Not only do debug symbols take noticeable disk space, but GDB also reads them intensively, so if the target file system has low performance this might be noticeable.
  • Support for native GDB was added to the ARC toolchain much later, hence it is theoretically less stable.
  • Full GDB requires more computational power than gdbserver, which might offset all of the gains from removing the networking layer.

In general, whether gdbserver or native GDB is better depends highly on the target system, and it is up to the software developer to decide what works better in their particular case.
