GTRI based Sonar Notes and Description

The GTRI-based Sonar model at https://github.com/Field-Robotics-Lab/gtri_based_sonar is a fork of the SyllogismRXS Sonar model at https://github.com/SyllogismRXS/syllo-gazebo, described in "A Computationally-Efficient 2D Imaging Sonar Model for Underwater Robotics Simulations in Gazebo" (Georgia Tech). The fork has been modified as follows:

  • It has been ported to work with ROS Melodic and Gazebo 9.
  • The algorithm that converts the point cloud from the Gazebo Ray module into a Gazebo camera image has been merged into the Gazebo plugin module; the image is no longer calculated and advertised by a separate ROS node.
  • Modules not related to modeling Sonar have been removed in order to simplify the repository.

Example Usage

See https://github.com/Field-Robotics-Lab/DAVE/wiki/Sonar-Models#gtri for usage and example screenshot.

.xacro Interface

Here is a proposed xacro macro for instantiating a named Sonar instance onto a parent link:

  • Macro name: gtri_based_fls
  • Parameters: namespace, suffix, parent_link, topic, *origin (a block parameter)
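
A minimal usage sketch of this proposed interface is shown below. Only the macro name and parameter list come from this page; the include path, package layout, and argument values are assumptions for illustration.

```xml
<?xml version="1.0"?>
<!-- Usage sketch only: the macro name and parameter list come from this page;
     the include path, package layout, and argument values are assumptions. -->
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="example_vehicle">

  <!-- Include the file that defines the gtri_based_fls macro (path assumed). -->
  <xacro:include filename="$(find gtri_based_fls_gazebo)/urdf/gtri_based_fls.xacro"/>

  <link name="base_link"/>

  <!-- Instantiate one named Sonar on an existing parent link. -->
  <xacro:gtri_based_fls namespace="vehicle" suffix="0"
                        parent_link="base_link" topic="sonar">
    <!-- *origin block parameter: pose of the Sonar relative to parent_link. -->
    <origin xyz="1.0 0 -0.2" rpy="0 0 0"/>
  </xacro:gtri_based_fls>
</robot>
```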

ROS Interface

Here are the topics published by this Sonar model:

| Message type | Suggested topic name | Description |
| --- | --- | --- |
| sensor_msgs::PointCloud | sonar_cloud | The point cloud calculated from Gazebo's RaySensor interface. |
| sensor_msgs::Image | sonar_image | The Sonar image calculated from the point cloud. |
| sensor_msgs::CameraInfo | sonar_image_camera_info | The height, width, and timestamp of the currently available Sonar image. |
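
For reference, a minimal ROS node that listens to two of these topics might look like the sketch below. It assumes the suggested topic names above are used unchanged; in practice they may be namespaced or remapped.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud.h>

// Print basic information about each incoming Sonar message.
void cloudCallback(const sensor_msgs::PointCloud::ConstPtr &msg)
{
  ROS_INFO("sonar_cloud: %zu points", msg->points.size());
}

void imageCallback(const sensor_msgs::Image::ConstPtr &msg)
{
  ROS_INFO("sonar_image: %ux%u", msg->width, msg->height);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "gtri_sonar_listener");
  ros::NodeHandle nh;
  ros::Subscriber cloud_sub = nh.subscribe("sonar_cloud", 1, cloudCallback);
  ros::Subscriber image_sub = nh.subscribe("sonar_image", 1, imageCallback);
  ros::spin();
  return 0;
}
```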

Implementation

The implementation is divided into two parts: (1) the point cloud and (2) the camera image.

Point Cloud

The point cloud is calculated from information available in Gazebo's RaySensor interface (https://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1sensors_1_1RaySensor.html). See file ImagingSonar.cc, currently on the basement_world branch at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/basement_world/gtri_based_fls_gazebo/src/ImagingSonar.cc.

Here is the basic flow:

Initialization

Function load reads the requested topic names and sets up the Sonar so that it does not run unless there are topic listeners. It also connects to Gazebo's RaySensor so that function OnNewLaserScans receives callbacks from Gazebo's MultiRayShape class, for processing scans at runtime.
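
This initialization pattern can be sketched as a Gazebo 9 sensor plugin skeleton. The sketch below is not the actual ImagingSonar.cc; the class name, SDF element names, and member names are illustrative assumptions.

```cpp
#include <functional>
#include <memory>
#include <string>

#include <gazebo/common/Events.hh>
#include <gazebo/common/Plugin.hh>
#include <gazebo/physics/MultiRayShape.hh>
#include <gazebo/sensors/RaySensor.hh>

namespace gazebo
{
// Illustrative skeleton only; see ImagingSonar.cc for the real plugin.
class ImagingSonarSketch : public SensorPlugin
{
public:
  void Load(sensors::SensorPtr _sensor, sdf::ElementPtr _sdf) override
  {
    this->raySensor = std::dynamic_pointer_cast<sensors::RaySensor>(_sensor);

    // Read the requested topic names from the plugin's SDF block
    // (element names here are assumptions).
    this->cloudTopic =
        _sdf->Get<std::string>("cloudTopicName", "sonar_cloud").first;
    this->imageTopic =
        _sdf->Get<std::string>("imageTopicName", "sonar_image").first;

    // Ask the sensor's MultiRayShape to call OnNewLaserScans for each new scan.
    this->scanConnection = this->raySensor->LaserShape()->ConnectNewLaserScans(
        std::bind(&ImagingSonarSketch::OnNewLaserScans, this));
  }

private:
  void OnNewLaserScans()
  {
    // Real plugin: skip work when there are no topic listeners or when
    // processing is running behind; otherwise process the scan (see Runtime).
  }

  sensors::RaySensorPtr raySensor;
  event::ConnectionPtr scanConnection;
  std::string cloudTopic;
  std::string imageTopic;
};

GZ_REGISTER_SENSOR_PLUGIN(ImagingSonarSketch)
}  // namespace gazebo
```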

Runtime

  • Function OnNewLaserScans takes a timestamp and, if processing is not running behind, calls PutLaserData.
  • Using the angle, ray, and range information in RaySensor, function PutLaserData performs nested iterations over the 2-dimensional array of VerticalRayCount by VerticalRangeCount entries. For each entry, it computes a cloud point from the range (length of the ray) and retro (intensity of the ray) values; a sketch of this conversion follows the list.
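
A rough sketch of the range/retro-to-point conversion is shown below. It assumes the ray grid is indexed row-major as VerticalRangeCount rows by RangeCount columns and uses simple spherical-to-Cartesian math; the actual PutLaserData indexing and intensity handling may differ.

```cpp
#include <algorithm>
#include <cmath>

#include <gazebo/sensors/RaySensor.hh>
#include <sensor_msgs/PointCloud.h>

// Convert the current RaySensor scan into a point cloud (illustrative only).
sensor_msgs::PointCloud RaysToCloud(const gazebo::sensors::RaySensor &sensor)
{
  sensor_msgs::PointCloud cloud;

  const int rangeCount = sensor.RangeCount();                  // horizontal samples
  const int verticalRangeCount = sensor.VerticalRangeCount();  // vertical samples

  const double yawMin = sensor.AngleMin().Radian();
  const double yawStep = (sensor.AngleMax().Radian() - yawMin) /
                         std::max(1, rangeCount - 1);
  const double pitchMin = sensor.VerticalAngleMin().Radian();
  const double pitchStep = (sensor.VerticalAngleMax().Radian() - pitchMin) /
                           std::max(1, verticalRangeCount - 1);

  for (int j = 0; j < verticalRangeCount; ++j)
  {
    const double pitch = pitchMin + j * pitchStep;
    for (int i = 0; i < rangeCount; ++i)
    {
      const int index = j * rangeCount + i;
      const double range = sensor.Range(index);  // length of the ray
      const double retro = sensor.Retro(index);  // intensity of the ray
      const double yaw = yawMin + i * yawStep;

      // Spherical-to-Cartesian conversion for one cloud point.
      geometry_msgs::Point32 p;
      p.x = range * std::cos(pitch) * std::cos(yaw);
      p.y = range * std::cos(pitch) * std::sin(yaw);
      p.z = range * std::sin(pitch);
      cloud.points.push_back(p);

      // The retro value could be carried along as an intensity channel.
      (void)retro;
    }
  }
  return cloud;
}
```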

Sonar Image

The Sonar image is calculated from the point cloud. See file cloud_to_image.cpp, currently on the basement_world branch at https://github.com/Field-Robotics-Lab/gtri_based_sonar/blob/basement_world/gtri_based_fls_gazebo/src/cloud_to_image.cpp. File ColorMaps.cpp maps the Sonar image to its rendered colors.

Initialization

Function init_cloud_to_image records the angle across which the image is calculated and the Sonar's topic name, for use at runtime.

Runtime

Function publish_cloud_to_image projects the point cloud onto a camera image and then publishes that image. Here are some of the transforms along the way (a condensed sketch follows the list):

  • A Gaussian sample is applied at each point.
  • A blank cv::Mat image is created.
  • For each point: translate to an x,y coordinate and draw a circle of radius 1.
  • Apply a median blur (medianBlur) with a kernel size of 3.
  • Apply salt-and-pepper noise.
  • Apply coloring using function Gray2Jet_matlab provided in file ColorMaps.cpp.
  • Manipulate contour points of type cv::Point.
  • Rotate the image.
  • Using cv_bridge::CvImage, convert the image to ROS type sensor_msgs::ImagePtr.
  • Timestamp the image.
  • Publish the image and its associated camera info.
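
A condensed sketch of this pipeline appears below. It is not the actual cloud_to_image.cpp: the Gaussian sampling, salt-and-pepper noise, contour manipulation, and rotation steps are omitted, and OpenCV's cv::applyColorMap with COLORMAP_JET stands in for Gray2Jet_matlab. The image size and projection scale are hypothetical parameters.

```cpp
#include <cv_bridge/cv_bridge.h>
#include <opencv2/imgproc.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud.h>
#include <std_msgs/Header.h>

// Project a Sonar point cloud into a colored image message (illustrative only;
// width, height, and metersToPixels are hypothetical parameters).
sensor_msgs::ImagePtr CloudToSonarImage(const sensor_msgs::PointCloud &cloud,
                                        int width, int height,
                                        double metersToPixels)
{
  // 1. Start from a blank single-channel cv::Mat image.
  cv::Mat gray = cv::Mat::zeros(height, width, CV_8UC1);

  // 2. For each point: translate to an x,y pixel and draw a circle of radius 1.
  for (const auto &p : cloud.points)
  {
    const int u = static_cast<int>(width / 2.0 + p.y * metersToPixels);
    const int v = static_cast<int>(height - p.x * metersToPixels);
    if (u >= 0 && u < width && v >= 0 && v < height)
      cv::circle(gray, cv::Point(u, v), 1, cv::Scalar(255), -1);
  }

  // 3. Median blur with a 3x3 kernel.
  cv::medianBlur(gray, gray, 3);

  // 4. Colorize (stand-in for Gray2Jet_matlab from ColorMaps.cpp).
  cv::Mat color;
  cv::applyColorMap(gray, color, cv::COLORMAP_JET);

  // 5. Convert to a ROS image via cv_bridge and timestamp it.
  std_msgs::Header header;
  header.stamp = ros::Time::now();
  return cv_bridge::CvImage(header, "bgr8", color).toImageMsg();
}
```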