image_sonar_description
The uuv_simulator includes the gazebo_ros_image_sonar.cpp sensor plugin to generate synthetic forward-looking sonar observations.
These notes describe both the plugin interface and the methods used in the implementation.
See fls_model_standalone for a working example.
The topic names can be described in the plugin SDF. For this example, we have the following SDF configuration:

```xml
<!-- This name is prepended to ROS topics -->
<cameraName>blueview_p900</cameraName>
<!-- ROS publication topics -->
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>image_raw/camera_info</cameraInfoTopicName>
<pointCloudTopicName>point_cloud</pointCloudTopicName>
<depthImageTopicName>image_depth</depthImageTopicName>
<depthImageCameraInfoTopicName>image_depth/camera_info</depthImageCameraInfoTopicName>
```
Message Type | Default or Typical Topic | Description
---|---|---
sensor_msgs::PointCloud2 | /blueview_p900/point_cloud | The DepthCamera class provides each new RGB point cloud, which is published by the plugin. See the DepthCamera API.
sensor_msgs::Image | /blueview_p900/image_depth | |
sensor_msgs::Image | /blueview_p900/image_depth_normals | |
sensor_msgs::Image | /blueview_p900/image_depth_multibeam | |
sensor_msgs::Image | /blueview_p900/image_depth_raw_sonar | |
sensor_msgs::Image | /blueview_p900/image_depth_sonar | The 2D sonar view that mimics what is expected from sonar output.
sensor_msgs::CameraInfo | /blueview_p900/image_depth/camera_info | |
sensor_msgs::Image | /blueview_p900/image_raw | |
sensor_msgs::CameraInfo | /blueview_p900/image_raw/camera_info | |
sensor_msgs::CompressedImage | /blueview_p900/image_raw/compressed | |
sensor_msgs::CompressedImage | /blueview_p900/image_raw/compressedDepth | |
theora_image_transport::Packet | /blueview_p900/image_raw/theora | |
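For reference, here is a minimal sketch of a ROS 1 node that subscribes to the 2D sonar view from the table above. The topic name assumes the SDF configuration shown earlier (cameraName set to blueview_p900); the node and callback names are made up for illustration.

```cpp
// Minimal sketch of a subscriber to the plugin's 2D sonar view topic.
// Topic name assumes <cameraName>blueview_p900</cameraName> as configured above.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void sonarCallback(const sensor_msgs::ImageConstPtr &msg)
{
  // The visual scan arrives as a standard sensor_msgs/Image.
  ROS_INFO("sonar frame: %ux%u, encoding %s",
           msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "image_sonar_listener");  // illustrative node name
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/blueview_p900/image_depth_sonar", 1, sonarCallback);
  ros::spin();
  return 0;
}
```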
The base of the implementation is the DepthCamera implementation in Gazebo, and the model for the plugin is the DepthCameraPlugin. For the DepthCameraPlugin we see this implementation pattern:
- DepthCamera (inherits from Camera): API
  - DepthCameraSensor (inherits from CameraSensor): Source (DepthCameraSensor.hh, DepthCameraSensorPrivate.hh and DepthCameraSensor.cc), API
    - DepthCameraPlugin (inherits from SensorPlugin): Source (DepthCameraPlugin.cc), API
For the Image Sonar, the DepthCameraPlugin is replaced with the image sonar plugin in the tree above. GazeboRosImageSonar inherits from the SensorPlugin and GazeboRosCameraUtils. It borrows heavily from the GazeboRosDepthCamera class.
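Before walking through the steps, the sketch below shows the general pattern such a sensor plugin follows: cast the sensor to a DepthCameraSensor, grab its rendering::DepthCamera, and connect a callback to the new-depth-frame event. This is a simplified illustration in the style of Gazebo's DepthCameraPlugin, assuming a recent Gazebo API (DepthCameraSensor::DepthCamera(), ConnectNewDepthFrame); the class and member names are illustrative, not copied from gazebo_ros_image_sonar.cpp.

```cpp
// Illustrative skeleton of a depth-camera sensor plugin (not the real plugin).
#include <functional>
#include <memory>
#include <string>

#include <gazebo/common/Plugin.hh>
#include <gazebo/rendering/DepthCamera.hh>
#include <gazebo/sensors/DepthCameraSensor.hh>

namespace gazebo
{
class ImageSonarSketch : public SensorPlugin
{
public:
  void Load(sensors::SensorPtr _sensor, sdf::ElementPtr /*_sdf*/) override
  {
    // The plugin is attached to a depth camera sensor in the model SDF.
    auto depthSensor =
        std::dynamic_pointer_cast<sensors::DepthCameraSensor>(_sensor);
    this->depthCamera = depthSensor->DepthCamera();

    // Every rendered depth frame triggers OnNewDepthFrame().
    this->newDepthFrameConn = this->depthCamera->ConnectNewDepthFrame(
        std::bind(&ImageSonarSketch::OnNewDepthFrame, this,
                  std::placeholders::_1, std::placeholders::_2,
                  std::placeholders::_3, std::placeholders::_4,
                  std::placeholders::_5));
  }

private:
  void OnNewDepthFrame(const float * /*_image*/, unsigned int /*_width*/,
                       unsigned int /*_height*/, unsigned int /*_depth*/,
                       const std::string & /*_format*/)
  {
    // In the real plugin this is where ComputeSonarImage() is called.
  }

  rendering::DepthCameraPtr depthCamera;
  event::ConnectionPtr newDepthFrameConn;
};

GZ_REGISTER_SENSOR_PLUGIN(ImageSonarSketch)
}  // namespace gazebo
```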
Here is how the 2D sonar image is created:
- The plugin sets up a connection so that the DepthCamera ConnectNewDepthFrame event calls...
  - OnNewDepthFrame, which calls...
    - ComputeSonarImage, which...
      - Converts the float pointer from the DepthCamera image to an OpenCV Mat, depth_image. The Mat is CV_32FC1 (32-bit, single channel).
      - Calls ComputeNormalImage to generate normal_image from the depth_image:
        - Applies 2D filtering.
        - Dilates the image.
      - Calls ConstructSonarImage to generate multibeam_image from the depth and normal images:
        - Implements the sonar equations (see the sketch after this list). Target strength, source level, noise level and directivity index are all hardcoded! These should be parameterized and/or made part of the models and environments.
        - Publishes the sonar equation image, the multibeam_image.
      - Calls ConstructScanImage to generate raw_scan from the depth and multibeam images. The raw_scan is a 400xN single-channel, 32-bit OpenCV image (CV_32FC1), where N is set by the horizontal FOV: N = 2*(400*sin(fov/2)) + 20. In the example the horizontal FOV is 88.6 degrees (a worked example of this geometry follows the list).
        - A couple of TODO comments here on things that could be done. Currently the size of the scan and the range are hard-coded.
        - The values for the camera intrinsic parameters focal_length_, cx_ and cy_ come from GazeboRosCameraUtils. You have to look at the source to find the defaults:
          - focal_length_ = width/(2*tan(hfov/2))
          - principal point: cx_ = (width+1)/2, cy_ = (height+1)/2
        - Applies median filtering: uses the OpenCV dilate function. Parameterizing this would be an improvement.
        - Applies speckle noise: multiplicative Gaussian noise. Could/should be parameterized.
        - Publishes the raw sonar image.
      - Calls ConstructVisualScanImage to generate visual_scan (the 2D sonar view) from the raw_scan:
        - Draws the pie-shaped scan image, including the background, grid lines, etc. Uses OpenCV to draw the image.
        - The color mapping from image intensity is fixed / hard-coded.
        - Publishes the scan.
      - Publishes the depth_image (from the DepthCamera).
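For context on the sonar-equation step, the sketch below evaluates the standard active sonar equation with placeholder values. The constants and the spherical-spreading transmission-loss model are assumptions for illustration, not the values hardcoded in ConstructSonarImage.

```cpp
// Illustrative evaluation of the active sonar equation; all values are assumed.
#include <cmath>
#include <cstdio>

int main()
{
  const double SL = 200.0;    // source level (dB), assumed
  const double TS = 10.0;     // target strength (dB), assumed
  const double NL = 30.0;     // noise level (dB), assumed
  const double DI = 3.0;      // directivity index (dB), assumed
  const double range = 15.0;  // one-way range (m), assumed
  const double alpha = 0.1;   // absorption (dB/m), assumed

  // Spherical spreading plus absorption: TL = 20*log10(r) + alpha*r.
  const double TL = 20.0 * std::log10(range) + alpha * range;

  // Active sonar equation: signal excess per return.
  const double SE = SL - 2.0 * TL + TS - (NL - DI);
  std::printf("TL = %.1f dB, signal excess = %.1f dB\n", TL, SE);
  return 0;
}
```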
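As a worked example of the scan geometry and the GazeboRosCameraUtils defaults, the sketch below computes N, focal_length_, cx_ and cy_ for the example's 88.6 degree horizontal FOV. The image width and height here are assumed values for illustration; the real ones come from the sensor SDF.

```cpp
// Worked example of the hard-coded scan geometry and the default intrinsics.
// Image width/height are assumptions; the actual values are set in the SDF.
#include <cmath>
#include <cstdio>

int main()
{
  const double hfov_deg = 88.6;  // horizontal FOV from the example SDF
  const double hfov = hfov_deg * M_PI / 180.0;
  const int width = 512;         // assumed depth-image width
  const int height = 114;        // assumed depth-image height

  // Scan width used by ConstructScanImage(): N = 2*(400*sin(hfov/2)) + 20.
  const int N = static_cast<int>(2.0 * (400.0 * std::sin(hfov / 2.0)) + 20.0);

  // Defaults from GazeboRosCameraUtils.
  const double focal_length = width / (2.0 * std::tan(hfov / 2.0));
  const double cx = (width + 1.0) / 2.0;
  const double cy = (height + 1.0) / 2.0;

  // With hfov = 88.6 deg, sin(hfov/2) is roughly 0.698, so N is roughly 578.
  std::printf("N = %d columns, focal_length = %.1f px, cx = %.1f, cy = %.1f\n",
              N, focal_length, cx, cy);
  return 0;
}
```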
Some questions and thoughts for improvement.
- The horizontal FOV is set in the FLS example. In the Camera::UpdateFOV() function the vertical FOV is set by the camera aspect ratio. Would we save some cycles by changing the camera image height in https://github.com/Field-Robotics-Lab/DAVE/blob/master/fls_gazebo/models/blueview_p900/model.sdf ?
- Improve the speckle noise application (see the sketch below).
  - Currently hardcoded; should be parameterized.
  - Do some research into how else it can be done.
- Add a flag to be able to turn off publishing all the intermediate images.
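As a starting point for parameterizing the speckle noise, here is a minimal OpenCV sketch of multiplicative Gaussian noise on a CV_32FC1 image. The function name and the sigma parameter are illustrative, not part of the current plugin.

```cpp
// Sketch: multiplicative Gaussian (speckle-like) noise with a tunable sigma.
// This illustrates the idea, not the plugin's exact implementation.
#include <opencv2/core.hpp>

void applySpeckleNoise(cv::Mat &img, double sigma)
{
  // n ~ N(0, sigma), same size as the image.
  cv::Mat noise(img.size(), CV_32FC1);
  cv::randn(noise, cv::Scalar(0.0), cv::Scalar(sigma));

  // out = img .* (1 + n), element-wise.
  cv::Mat gain;
  cv::add(noise, cv::Scalar(1.0), gain);
  cv::multiply(img, gain, img);
}
```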