</div>
<h3>
Section I-C: DUFOMap Ablation Study on an RGB-D dataset
</h3>
<p>
The influence of the sensor noise model is illustrated in another experiment.
We run this experiment in a smaller-scale, indoor scenario, using RGB-D data to show that our method also works in this setting.
Here we use a voxel size of 0.01 m. RGB-D sensors based on structured light and/or stereo are notoriously noisy at longer distances.
Fig. 4(a) shows raw data from the TUM RGB-D SLAM dataset, featuring people moving around in an environment captured with a noisy RGB-D sensor (Kinect).
The noise is especially noticeable in the heavy wall distortion, with errors above 0.5 m.
In Fig. 4(b) to Fig. 4(d), we show the result of detecting dynamic points (yellow) with different values of the sensor noise parameter \(d_s\), keeping \(d_p=1\).
As can be seen, accounting for sufficiently large sensor noise (Fig. 4(c) and Fig. 4(d)) substantially reduces the number of false positive points.
A too large \(d_s\) makes the method more conservative, but as long as there is enough varied data, it can still work well, as demonstrated in Fig. 4(d).
</p>
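To make the role of \(d_s\) concrete, here is a minimal toy sketch of the underlying idea (our own illustration, not the DUFOMap implementation; all names are hypothetical): a voxel only counts as "seen free" if a ray passes through it and terminates more than \(d_s\) beyond it, so endpoint noise near surfaces is not mistaken for free space. Points whose voxel was at some time seen free are flagged as dynamic.

```python
import numpy as np

def classify_dynamic(scans, voxel_size=0.01, d_s=0.05):
    """Toy illustration of a sensor-noise margin d_s.

    scans: list of (origin, points) pairs, one per time step.
    Returns {voxel: True if dynamic} for every voxel that received a hit.
    """
    seen_free = set()   # voxels a ray passed through with margin > d_s
    occupied = set()    # voxels that received a hit in some scan
    for origin, points in scans:
        for p in points:
            direction = p - origin
            dist = np.linalg.norm(direction)
            direction = direction / dist
            # sample voxels along the ray, stopping d_s short of the hit,
            # so endpoint noise cannot carve free space out of surfaces
            for s in np.arange(0.0, max(dist - d_s, 0.0), voxel_size):
                q = origin + s * direction
                seen_free.add(tuple(np.floor(q / voxel_size).astype(int)))
            occupied.add(tuple(np.floor(p / voxel_size).astype(int)))
    # a hit voxel is dynamic if it was ever observed as free
    return {v: v in seen_free for v in occupied}
```

With a larger \(d_s\), fewer near-surface voxels are ever marked free, which mirrors the false-positive reduction visible in Fig. 4(c) and 4(d).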
</div>
<!-- Section II: Quantitative Results -->
<!-- <div class="row"> -->
<div class="col-md-10 col-md-offset-1">
<h2>Section II: Quantitative Results</h2>
<h3>
Section II-A: Quantitative results on all KITTI sequences
</h3>
<p>
Table II shows the dynamic point removal results on the datasets from the paper, which cover different sensor setups. Our proposed method, DUFOMap, achieves high scores on both SA and DA by accurately detecting dynamic points. This enables the generation of complete as well as clean maps for downstream tasks.
</p>
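For reference, the two scores can be computed as below, assuming the usual definitions from dynamic-map evaluation: SA is the fraction of ground-truth static points preserved and DA is the fraction of ground-truth dynamic points detected. This is a hedged sketch with illustrative names; the exact benchmark definitions (e.g. voxel-level weighting) may differ.

```python
import numpy as np

def sa_da(pred_dynamic, gt_dynamic):
    """SA/DA sketch: per-point boolean predictions vs. ground truth.

    pred_dynamic: True where a point was classified as dynamic.
    gt_dynamic:   True where a point is dynamic in the ground truth.
    """
    pred = np.asarray(pred_dynamic, dtype=bool)
    gt = np.asarray(gt_dynamic, dtype=bool)
    sa = np.mean(~pred[~gt])  # static points correctly preserved
    da = np.mean(pred[gt])    # dynamic points correctly detected
    return sa, da
```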
</div>
<div class="col-md-10 col-md-offset-1">
<h3>
Section II-B: Runtime comparison and detailed breakdown
</h3>
<p>
Table III and Fig. 5 present the run times of the different methods on two of the datasets, one with a 64-channel LiDAR (KITTI highway) and one with a 16-channel LiDAR (semi-indoor).
In general, our method outperforms the other methods in both dense and sparse sensor settings.
A detailed breakdown of the execution time of our method is provided in Fig. 5. We observe that the ray casting step, as expected, is the most computationally intensive.
</p>
</div>
<div class="col-md-10 col-md-offset-1">
<hr><br>
</div>
<div class="container" id="BibTex">
<div class="col-md-10 col-md-offset-1">
<h3 class="title">BibTeX</h3>
<p> If you find our work useful in your research, please consider citing:</p>