# image_sonar_description
The uuv_simulator includes the gazebo_ros_image_sonar.cpp sensor plugin to generate synthetic forward-looking sonar (FLS) observations.
These notes are an attempt to describe both the plugin interface as well as the methods used in the implementation.
See fls_model_standalone for a working example.
Origins of this implementation are Cerqueira's "A novel GPU-based sonar simulator for real-time applications", although it is yet to be determined how consistent the current implementation is with that description. An abbreviated history of the plugin is taken from UUV Simulator Issue #48:
- Original implementation from the paper, using Gazebo+Rock https://github.com/romulogcerqueira/sonar_simulation
- KTH, SMARC @nilsbore implementation (https://github.com/smarc-project/smarc_simulations/blob/master/smarc_gazebo_plugins/smarc_gazebo_ros_plugins/src/gazebo_ros_image_sonar.cpp) claimed to be an implementation of "A novel GPU-based sonar simulator for real-time applications".
- Cherry picking from @nilsbore and @NickSadjoli was committed to uuv_simulator in PR#348, 12 March 2019. This is what is currently used in uuv_simulator for FLS
- The SMARC implementation was last updated 20 Nov., 2018. Note that there are three versions of the sensor in smarc_gazebo_ros_plugins
- The @NickSadjoli fork of gazebo_ros_image_sonar.cpp was last changed 22 Feb., 2019
- Given the above, we can assume that the version in uuv_simulator is the most recent.
The topic names can be described in the plugin SDF. For this example, we have the following SDF configuration:

```xml
<!-- This name is prepended to ROS topics -->
<cameraName>blueview_p900</cameraName>
<!-- ROS publication topics -->
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>image_raw/camera_info</cameraInfoTopicName>
<pointCloudTopicName>point_cloud</pointCloudTopicName>
<depthImageTopicName>image_depth</depthImageTopicName>
<depthImageCameraInfoTopicName>image_depth/camera_info</depthImageCameraInfoTopicName>
```
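As a small sketch, the plugin-relative topic names above resolve to full ROS topic names by prepending the camera name (an assumption consistent with the topic names listed in the table that follows):

```python
# Sketch: resolve plugin-relative SDF topic names under <cameraName>.
# Assumption: the plugin prepends <cameraName> to every topic, which
# matches the topic names listed in the table that follows.
camera_name = "blueview_p900"
sdf_topics = {
    "imageTopicName": "image_raw",
    "cameraInfoTopicName": "image_raw/camera_info",
    "pointCloudTopicName": "point_cloud",
    "depthImageTopicName": "image_depth",
    "depthImageCameraInfoTopicName": "image_depth/camera_info",
}
# Build the fully qualified topic names, e.g. /blueview_p900/point_cloud
resolved = {key: "/%s/%s" % (camera_name, name) for key, name in sdf_topics.items()}
```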
| Message Type | Default or Typical Topic | Description |
|---|---|---|
| sensor_msgs::PointCloud2 | /blueview_p900/point_cloud | The DepthCamera class provides a new point cloud for each frame, which is published by the plugin. See the DepthCamera API. |
| sensor_msgs::Image | /blueview_p900/image_depth | |
| sensor_msgs::Image | /blueview_p900/image_depth_normals | |
| sensor_msgs::Image | /blueview_p900/image_depth_multibeam | |
| sensor_msgs::Image | /blueview_p900/image_depth_raw_sonar | |
| sensor_msgs::Image | /blueview_p900/image_depth_sonar | The 2D sonar view that mimics what is to be expected from sonar output. |
| sensor_msgs::CameraInfo | /blueview_p900/image_depth/camera_info | |
| sensor_msgs::Image | /blueview_p900/image_raw | |
| sensor_msgs::CameraInfo | /blueview_p900/image_raw/camera_info | |
| sensor_msgs::CompressedImage | /blueview_p900/image_raw/compressed | |
| sensor_msgs::CompressedImage | /blueview_p900/image_raw/compressedDepth | |
| theora_image_transport::Packet | /blueview_p900/image_raw/theora | |
The base of the implementation is the DepthCamera implementation in Gazebo, and the model for the plugin is the DepthCameraPlugin.
For the DepthCameraPlugin we see this implementation pattern:
- DepthCamera (Inherits from Camera): API
- DepthCameraSensor (Inherits from CameraSensor): Source (DepthCameraSensor.hh, DepthCameraSensorPrivate.hh and DepthCameraSensor.cc), API
- DepthCameraPlugin (Inherits from SensorPlugin): Source (DepthCameraPlugin.cc), API
For the Image Sonar, the DepthCameraPlugin is replaced with the image sonar plugin in the tree above. GazeboRosImageSonar inherits from the SensorPlugin and GazeboRosCameraUtils. It borrows heavily from the GazeboRosDepthCamera class.
The GazeboRosImageSonar receives the depth image from the DepthCameraSensor. A bit about how the DepthCameraSensor works:
* The basic idea is to use a render-to-texture workflow to create a 2D texture that represents depth as observed from the camera location. See [Z-buffering](https://docs.unity3d.com/Manual/SL-CameraDepthTexture.html) and [Unity - Using Depth Textures](https://docs.unity3d.com/Manual/SL-DepthTextures.html).
* For each render cycle, the DepthCameraSensor copies the [DepthCamera](https://bitbucket.org/osrf/gazebo/src/9_to_10_2020-01-23/gazebo/rendering/DepthCamera.cc) *depth buffer*, which is a 1-D, 32-bit float array that is clipped, setting values outside of the near and far bounds to +/-infinity. See [DepthCameraSensor.cc::UpdateImpl](https://bitbucket.org/osrf/gazebo/src/ad0c7c71a3aeb6d2e184041c6fde460da7d325c4/gazebo/sensors/DepthCameraSensor.cc?at=default#lines-130). It appears that, unlike with Unity, the depth buffer is represented in engineering units of meters.
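The clipping step can be sketched with NumPy; the near/far bounds below are example values, not the plugin's defaults:

```python
import numpy as np

# Sketch of the depth-buffer clipping described above: values outside
# the near/far clip bounds are set to -/+ infinity. The bounds here
# are example values, not the plugin's defaults.
near, far = 0.1, 60.0
depth = np.array([0.05, 1.5, 25.0, 80.0], dtype=np.float32)  # meters
clipped = depth.copy()
clipped[depth < near] = -np.inf  # closer than the near plane
clipped[depth > far] = np.inf    # beyond the far plane
```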
Here is how the 2D sonar image is created by the GazeboRosImageSonar plugin:
- The plugin sets up a connection so that the DepthCamera ConnectNewDepthFrame event calls...
- OnNewDepthFrame, which calls...
- ComputeSonarImage, which:
  - Converts the float pointer from the DepthCamera image to an OpenCV Mat, depth_image. The Mat is CV_32FC1 (32-bit, one-channel).
  - Calls ComputeNormalImage to generate normal_image from the depth_image:
    - Applies 2D filtering
    - Dilates the image
  - Calls ConstructSonarImage to generate multibeam_image from the depth and normal images:
    - Implements the sonar equations. Target strength, source level, noise level and directivity index are all hard-coded! These should be parameterized and/or made part of the models and environments.
    - Publishes the sonar equation image - the multibeam_image.
  - Calls ConstructScanImage to generate raw_scan from the depth and multibeam images. The raw_scan is a 400xN single-channel, 32-bit OpenCV image (CV_32FC1). N is set by the horizontal FOV: N = 2*(400*sin(fov/2)) + 20, where fov is the horizontal FOV; in the example this is 88.6 degrees.
    - A couple of TODO comments here on things that could be done. Currently the size of the scan and the range(!) are hard-coded.
    - The values for the camera intrinsic parameters focal_length_, cx_ and cy_ are from GazeboRosCameraUtils. Have to look at the source to find the defaults:
      - focal_length_ = width/(2*tan(hfov/2))
      - principal point:
        - cx_ = (width+1)/2
        - cy_ = (height+1)/2
    - Applies median filter: applies the OpenCV dilate function. Parameterizing this would be an improvement.
    - Applies speckle noise: applies multiplicative Gaussian noise. Could/should be parameterized.
    - Publishes the raw sonar image
  - Calls ConstructVisualScanImage to generate visual_scan (the 2D sonar view) from the raw_scan:
    - Draws the pie-shaped scan image, including the background, grid lines, etc. Uses OpenCV to draw the image.
    - Color mapping from image intensity is fixed / hard-coded.
    - Publishes the scan
  - Publishes the depth_image (from the DepthCamera)
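The connection/callback chain above can be sketched with a minimal observer pattern. All class and method names here are illustrative stand-ins, not Gazebo's actual API:

```python
# Minimal sketch of the event-connection pattern described above.
# Names are illustrative stand-ins; they do not match Gazebo's API.
class DepthCameraSketch:
    """Stands in for the Gazebo DepthCamera event source."""

    def __init__(self):
        self._callbacks = []

    def connect_new_depth_frame(self, callback):
        # Analogous in spirit to DepthCamera::ConnectNewDepthFrame
        self._callbacks.append(callback)

    def render(self, depth_buffer, width, height):
        # Fired once per render cycle with the raw float depth buffer
        for cb in self._callbacks:
            cb(depth_buffer, width, height)


class ImageSonarPluginSketch:
    """Stands in for the sonar plugin subscribing to depth frames."""

    def __init__(self, camera):
        self.frames = 0
        self.last_shape = None
        camera.connect_new_depth_frame(self.on_new_depth_frame)

    def on_new_depth_frame(self, depth_buffer, width, height):
        self.frames += 1
        self.compute_sonar_image(depth_buffer, width, height)

    def compute_sonar_image(self, depth_buffer, width, height):
        # Placeholder for the normal/multibeam/scan processing steps
        self.last_shape = (height, width)


camera = DepthCameraSketch()
plugin = ImageSonarPluginSketch(camera)
camera.render([0.0] * 6, 3, 2)  # one render cycle: 3x2 depth buffer
```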
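Several of the quantities above can be sketched numerically. The sonar-equation constants, range, and image size below are illustrative assumptions, not the plugin's hard-coded values; only the formulas for N, focal_length_, cx_ and cy_ come from the notes above, and the sonar equation is written in its standard active form:

```python
import math

# Numeric sketch of quantities referenced above. The constants
# (SL, TS, NL, DI), range r, and image width/height are illustrative
# assumptions, NOT the hard-coded values in the plugin source.

# Standard active sonar equation: echo level at the receiver.
SL, TS, NL, DI = 200.0, 10.0, 30.0, 20.0   # dB, illustrative values
r = 10.0                                   # target range, meters
TL = 20.0 * math.log10(r)                  # spherical-spreading transmission loss
EL = SL - 2.0 * TL + TS - (NL - DI)

# Width of the raw_scan image, from the formula quoted above:
fov = math.radians(88.6)                   # horizontal FOV from the example
N = int(2 * (400 * math.sin(fov / 2)) + 20)

# GazeboRosCameraUtils intrinsic defaults, from the formulas quoted above:
width, height = 512, 512                   # assumed image size
focal_length = width / (2 * math.tan(fov / 2))
cx = (width + 1) / 2
cy = (height + 1) / 2
```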
Some questions and thoughts for improvement.
- The horizontal FOV is set in the FLS example. In the Camera::UpdateFOV() function the vertical FOV is set by the camera aspect ratio. Would we save some cycles by changing the camera image height in https://github.com/Field-Robotics-Lab/DAVE/blob/master/fls_gazebo/models/blueview_p900/model.sdf ?
- Improve speckle noise application.
  - Currently hard-coded - should be parameterized.
  - Do some research into how else it can be done?
- Add a flag to be able to turn off publishing all the intermediate images.
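For reference, the multiplicative Gaussian (speckle) noise step can be sketched as below; sigma is an assumed example value rather than the plugin's hard-coded one, which is exactly why parameterizing it would help:

```python
import numpy as np

# Sketch of multiplicative Gaussian ("speckle") noise: each pixel is
# scaled by a sample drawn around 1.0. The sigma here is an assumed
# example value, not the plugin's hard-coded constant.
rng = np.random.default_rng(0)
sigma = 0.1
image = np.ones((4, 4), dtype=np.float32)          # stand-in sonar image
noise = rng.normal(loc=1.0, scale=sigma, size=image.shape).astype(np.float32)
speckled = image * noise                           # multiplicative noise
```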