Expose Multiscan Matching Coarse Fine Resolution Multiplier #780
louietouie wants to merge 7 commits into SteveMacenski:jazzy
Conversation
|
@louietouie can you summarize your testing on the various parameters, and provide the graph figure and the output maps for each of the settings? This should be information that other people are aware of and can search for, rather than it living only in our email chain. With that, what's a good setting we should change the configuration files to use, based on your testing? |
|
As you increase the multiplier, the runtimes get faster (up to a certain point), but theoretically the scan matching should get less accurate. Before this PR, the multiplier was fixed at 2. From my second image below, even changing the multiplier to 4 gave a substantial performance increase. To minimize the number of scans required, I believe the best multiplier (for …).

The graph below combines scans used for both scan matching odometry (at 0.5 dimension and 0.01 resolution) and loop closure (at 4.0 dimension and 0.01 resolution). Here are some tests I did at multipliers 2, 4, and 10, using a ros2 bag file so the path the robot takes was somewhat consistent. This is partially where I struggled; the tests were not repeatable (as seen in the differences between the two equivalent-parameter tests at each multiplier level), so it would be good to get feedback from others on the effects on accuracy.

If the performance increase is substantial but the accuracy loss is noticeable, it may be worth implementing the algorithm in D3 of this paper, which shouldn't be too hard. I believe this would completely stop any accuracy losses, with close to the same performance increase this PR should provide (if any). |
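To make the runtime effect concrete, here is a rough back-of-the-envelope sketch (an illustration only, not slam_toolbox code; the 4.0 m dimension and 0.01 m resolution are the loop-closure settings mentioned above) of how the number of coarse x/y candidates shrinks as the multiplier grows:

```cpp
// Rough illustration: the coarse pass of a multi-resolution correlative search
// evaluates on the order of (dimension / coarse_resolution)^2 translational
// candidates, so doubling the multiplier cuts that count by roughly 4x.
// (Angular candidates and the fine refinement pass are ignored here.)
#include <iostream>

int main()
{
  const double search_dimension = 4.0;  // e.g. loop_search_space_dimension (m)
  const double fine_resolution = 0.01;  // e.g. loop_search_space_resolution (m)

  for (int multiplier : {2, 4, 10}) {
    const double coarse_resolution = fine_resolution * multiplier;
    const double cells_per_axis = search_dimension / coarse_resolution;
    std::cout << "multiplier " << multiplier << ": ~"
              << static_cast<long>(cells_per_axis * cells_per_axis)
              << " coarse x/y candidates" << std::endl;
  }
  return 0;
}
```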
|
I think 4 then seems reasonable. That seems to get the bulk of the computational benefits without being such a large change that I would expect massive differences. Can you do some testing on a few maps/worlds and let me know if you see any major differences?
I think that's worth an experiment! |
|
Definitely, I can do that. I'm crunching to learn Rust and finish a robot this week, but I will try to run some tests on other Gazebo maps early next week. Are there any other metrics you'd prefer for measuring differences (other than timing laserCallback)? |
|
I think you showed me sufficiently that this is an improvement in run-time performance. At this point I'm more interested in making sure map quality is preserved in exchange for it. |
|
@louietouie Hi! I wanted to follow up, were you able to run some experiments so we can merge this in? |
|
@SteveMacenski Sorry for the big delay. It's been a busy month but I have not forgotten. I will try to work on it this and next weekend. My plan is just to make a larger Gazebo map from scratch. If you have other ideas or access to any laserscan bag files of sims or real-world examples, I'd love to test those too. |
|
I really appreciate the update, thanks! I found a few datasets googling but I didn't find any in particular I can recommend |
Force-pushed from bd197d2 to 172fe1b
|
What about just using a rosbag dataset? That would give us the best 1:1 comparison (just make sure to replay in real time; don't try to speed it up). You'd need your odom, TF (with map->odom stripped out), TF static, and laserscan topics. I think that's it (?). This isn't too old; I would still like this in the project! Please :-) |
|
Ah, good idea. I had been trying to make a rosbag dataset that would replay the robot trajectory, which would in turn result in the same sensor data/transformations. Makes much more sense to just record the sensor data directly. I will post results soon 👍 |
|
Let me know if there are any other tests you would like to see... I can do them pretty quickly now. |
|
What's the total integrated callback time for one versus the other when running over the dataset? The 0.003 vs 0.004 is pretty coarse information. If we see that 25% increase, then absolutely this seems awesome, and I see virtually no degradation in the map quality. |
|
Sure. I'm struggling to find something higher-level than

```cpp
scan_filter_->registerCallback(
  std::bind(&SlamToolbox::laserCallback, this, std::placeholders::_1));
```

The way I measured this was:

```cpp
void AsynchronousSlamToolbox::laserCallback(
  sensor_msgs::msg::LaserScan::ConstSharedPtr scan)
/*****************************************************************************/
{
  const auto start = std::chrono::high_resolution_clock::now();

  // store scan header
  scan_header = scan->header;

  // no odom info
  Pose2 pose;
  if (!pose_helper_->getOdomPose(pose, scan->header.stamp)) {
    RCLCPP_WARN(get_logger(), "Failed to compute odom pose");
    return;
  }

  // ensure the laser can be used
  LaserRangeFinder * laser = getLaser(scan);
  if (!laser) {
    RCLCPP_WARN(get_logger(), "Failed to create laser device for"
      " %s; discarding scan", scan->header.frame_id.c_str());
    return;
  }

  // if not paused, process scan
  if (shouldProcessScan(scan, pose)) {
    addScan(laser, scan, pose);
  }

  // timing instrumentation: append this callback's duration (seconds) to a
  // file, one value per line (requires <chrono> and <fstream>)
  const auto stop = std::chrono::high_resolution_clock::now();
  const std::chrono::duration<double> diff = stop - start;
  std::ofstream outfile("timing2_3", std::ofstream::app);
  outfile << diff.count() << std::endl;
  outfile.close();
}
```
|
Sure - then just add them up from the file for the full runtime of 2 vs 4. PS: we'll also want this change in Rolling so it will be available for all future distributions. |
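For reference, "adding them up" can be as simple as the following standalone helper (a sketch, not part of the PR; it assumes the one-duration-per-line format and the timing2_3 filename from the instrumentation snippet above):

```cpp
// Sums the per-callback durations (seconds, one per line) written by the
// timing instrumentation in laserCallback and prints the total integrated
// callback time over the whole bag replay.
#include <cstddef>
#include <fstream>
#include <iostream>

int main()
{
  std::ifstream infile("timing2_3");
  double duration = 0.0;
  double total = 0.0;
  std::size_t callbacks = 0;
  while (infile >> duration) {
    total += duration;
    ++callbacks;
  }
  std::cout << "callbacks: " << callbacks
            << ", total laserCallback time: " << total << " s" << std::endl;
  return 0;
}
```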
|
Total time for Multiplier 2: 1.357905 s |
|
Makes sense. I can work on adding these same edits to the Rolling branch as well. |
|
Oh wow, nice! Do you have a larger dataset to test against, perchance? That's a pretty short processing time to get some metrics from. But that's a pretty great improvement indeed if that holds up! |








Basic Info
This PR has no corresponding tickets
Tested only on Jazzy, Ubuntu 24.04, small Gazebo simulations
Summary
This exposes the multiplier between the fine search and coarse search for the Multi-resolution Correlative Scan Matcher. This multiplier is only for the translational search (x and y), not the angle. So it scales up the fine resolutions `loop_search_space_resolution` and `correlation_search_space_resolution` with the new parameters `loop_search_space_coarse_resolution_multiplier` and `correlation_search_space_coarse_resolution_multiplier`. It keeps the default scale at 2.
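As a hedged illustration of how the new parameters relate to the existing fine resolutions (the node setup and the fine-resolution defaults shown here are assumptions for the example; the real scaling happens inside the Karto correlative scan matcher, not in user code):

```cpp
// Sketch only: declares the two new multiplier parameters alongside the fine
// resolutions they scale. The defaults of 2 mirror the previously hard-coded
// value; the coarse (first-pass) grids are the fine grids scaled up in x/y.
#include <memory>
#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("coarse_multiplier_demo");

  const double corr_fine =
    node->declare_parameter("correlation_search_space_resolution", 0.01);
  const int corr_mult = node->declare_parameter(
    "correlation_search_space_coarse_resolution_multiplier", 2);

  const double loop_fine =
    node->declare_parameter("loop_search_space_resolution", 0.05);
  const int loop_mult = node->declare_parameter(
    "loop_search_space_coarse_resolution_multiplier", 2);

  RCLCPP_INFO(
    node->get_logger(), "correlation coarse: %.3f m, loop coarse: %.3f m",
    corr_fine * corr_mult, loop_fine * loop_mult);

  rclcpp::shutdown();
  return 0;
}
```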
Documentation updates
I updated the `README.md` file. I commented out the new BOOST_SERIALIZATION_NVP parameters, as I noticed the PR for `m_pMinPassThrough` did.
Future work
There are two similar variables (`m_xPoses` and `xPoses`) that could be consolidated.
`fine_search_angle_offset` should be renamed `fine_search_angle_resolution`.