
Developer's Guide

Firas Zaidan edited this page Aug 29, 2017 · 34 revisions

This is the Developer's Guide. It contains the technical details of the implementation. The guide assumes that you have read the User's Guide and have advanced programming skills. If you are not familiar with ROS (Robot Operating System), please read the ROS Documentation.

Building the Robot

Hardware

The robot used for this project is a Kobuki Turtlebot 2. It can receive commands from other devices using the Robot Operating System (ROS). The robot itself is shaped like a disk (see the two figures below), and a platform can be attached on top to mount the computing unit and sensors.

Kobuki Turtlebot 2 front view

Kobuki Turtlebot 2 back view

Computing Unit

The computing unit used in the project is an Intel NUC with a Celeron processor. To avoid tilting when the robot accelerates, it was built as small as possible, with only one platform attached. To make this work, some 3D-printed parts had to be added. One is a base plate to attach the NUC to the robot:

Front view

Bottom view

Back view (attached)

Adapter for the Astra

Another 3D-printed part is an adapter to attach a Kinect-like depth camera called Astra. The adapter fits into the slot normally used for cable management; by reversing the orientation of the platform, this slot can be used to attach the Astra:

NUC back view (attached)

Power and Wiring

The NUC is powered by the battery of the Turtlebot and is connected to the 12 V output. For the data connection between the robot and the NUC, a USB-B cable is used. The Astra and the RPLIDAR are connected to the NUC via USB. The RPLIDAR is attached to a so-called "charging-capable" USB port, which can deliver the necessary current.

Initial Setup

The initial setup was already done by Kobuki. The robot only has to be connected to the same Wi-Fi network. To facilitate the startup, a .launch file has been written, which contains the following:

TODO

It starts up the robot, then the RPLIDAR, and then creates a tf (transform) for the laser sensor, which is mounted about 3 cm above the ground.
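
The actual launch file is not reproduced above; as a rough, illustrative sketch only, such a file could look like the following. The included package and file names (`turtlebot_bringup`, `rplidar_ros`) and the frame names are assumptions, not taken from the project:

```xml
<launch>
  <!-- Hypothetical sketch: bring up the Turtlebot base -->
  <include file="$(find turtlebot_bringup)/launch/minimal.launch" />

  <!-- Start the RPLIDAR driver -->
  <include file="$(find rplidar_ros)/launch/rplidar.launch" />

  <!-- Static transform for the laser, about 3 cm above the base
       args: x y z yaw pitch roll parent_frame child_frame period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="laser_tf"
        args="0 0 0.03 0 0 0 base_link laser 100" />
</node>
```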

Simulator

For the simulator, the stage_ros package has been used. For the robot model, a predefined Turtlebot model was used; only the laser sensor parameters were changed to match the RPLIDAR. This is done in turtlebot_model.inc, at the top in the ranger section: the field of view (fov) is changed to 360, as are the samples. The map in the simulator shows the 3rd floor of the SA building at the University of Duisburg-Essen, which is the place where the real robot was tested. Localization in the simulator is done by the fake amcl package, which provides a pseudo Monte Carlo localization.
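
The ranger section in turtlebot_model.inc could then look roughly like this sketch (only fov and samples come from the change described above; the range values are illustrative assumptions):

```
define rplidar ranger
(
  sensor
  (
    range [ 0.15 6.0 ]   # min/max range in meters (assumed values)
    fov 360.0            # full 360-degree field of view
    samples 360          # one sample per degree
  )
)
```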

Simulator

Localisation

For the localization of the real robot, adaptive Monte Carlo localization (amcl) is used. The corresponding ROS package can be used out of the box with the following parameters:

  • use_map_topic = false so amcl requests the map via the map service instead of subscribing to a map topic
  • scan_topic = scan to set the correct topic for the laser scanner
  • initial_pose_(x,y,a) = 0.0 to start at the map origin with zero yaw
  • odom_frame_id = odom to set the odometry frame identifier
  • base_frame_id = base_link to set the base frame identifier
  • global_frame_id = map to set the global frame identifier
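
In a launch file, these parameters could be set roughly as follows (a sketch; the node name and the exact way the scan topic is wired up are assumptions):

```xml
<node pkg="amcl" type="amcl" name="amcl">
  <param name="use_map_topic"   value="false" />
  <param name="odom_frame_id"   value="odom" />
  <param name="base_frame_id"   value="base_link" />
  <param name="global_frame_id" value="map" />
  <param name="initial_pose_x"  value="0.0" />
  <param name="initial_pose_y"  value="0.0" />
  <param name="initial_pose_a"  value="0.0" />
  <!-- amcl listens on "scan" by default; remap here if your
       laser driver publishes on a different topic -->
</node>
```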

SLAM

Coverage

Navigation

Zone Generation

Planner

The planner node in scripts/planner.py performs high-level task planning with a SMACH state machine.

With the smach_viewer you can view the current state of the planning node:

WilsonROS SMACH State Machine

WaitForZone State

The state machine starts in the WaitForZone state. When there is no zone data (the deque is empty), the planner fetches the current result of the zone generator and stays in the WaitForZone state. Because there is no deep object deserialization, the nested data has to be deserialized explicitly. This data is not updated until the zone deque is completely empty. When there is zone data (the deque is not empty and contains deques of waypoints), the state machine changes into the GotZone state.

GotZone State

This state is similar to the WaitForZone state. It also processes a deque, but this deque contains Waypoints. These waypoints are our actual measurement points. If there is a waypoint in the deque, the state machine changes into the GotWaypoint state. If there is no waypoint (the deque is empty), the planner removes the current zone and changes into the WaitForZone state.

GotWaypoint State

This state is responsible for the actual measurement. The machine stays in this state until the waypoint is reached (the move_base action server returns GoalStatus SUCCEEDED), an error occurs, or the time limit is exceeded (the default is 60 s; increase this if you get a lot of false positives). Each time, the planner sends a MoveBaseAction message to the move_base action server, which is responsible for the actual movement to the goal point. If the waypoint is not reachable, the robot stays at its current position (probably the last successfully reached waypoint) and does not start moving towards the goal point. Currently, the only way to detect this is to use a timeout. When a waypoint is estimated as reachable but in reality is not, the robot will crash. To prevent this, you have to set all parameters appropriately (e.g. the radius of the robot).

When the waypoint is reached or the wait condition was not satisfied (an error or timeout occurred), the planner removes the waypoint from the current zone. Then the state machine changes into the GotZone state.
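The transitions described above can be sketched in plain Python, without the ROS and SMACH dependencies. This is an illustrative model of the state logic only, not the real node: all names are hypothetical, and `drive_to` stands in for the move_base action call with its timeout.

```python
from collections import deque

# Illustrative state names, mirroring the SMACH states described above.
WAIT_FOR_ZONE, GOT_ZONE, GOT_WAYPOINT = "WaitForZone", "GotZone", "GotWaypoint"

def step(state, zones, fetch_zones, drive_to):
    """Run one transition of the planner state machine.

    zones:       deque of zones; each zone is itself a deque of waypoints
    fetch_zones: callable returning fresh zone data from the zone generator
    drive_to:    callable(waypoint) -> True on success, False on error/timeout
    """
    if state == WAIT_FOR_ZONE:
        if not zones:                 # no zone data: fetch and stay
            zones.extend(fetch_zones())
            return WAIT_FOR_ZONE
        return GOT_ZONE               # zone data available
    if state == GOT_ZONE:
        if not zones[0]:              # current zone has no waypoints left
            zones.popleft()           # remove the finished zone
            return WAIT_FOR_ZONE
        return GOT_WAYPOINT
    if state == GOT_WAYPOINT:
        waypoint = zones[0][0]
        drive_to(waypoint)            # success, error, or timeout:
        zones[0].popleft()            # the waypoint is removed either way
        return GOT_ZONE
    raise ValueError(state)
```

In the real node, the `drive_to` step wraps sending a MoveBaseAction goal and waiting on it with the 60 s timeout, and removing a waypoint happens regardless of whether the goal succeeded, as described above.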

For further help see this tutorial.
