Low-Keyy edited this page Dec 26, 2025 · 4 revisions

SALT is built on LABELER and adds efficient pre-annotation of point clouds, making the labeling process more convenient and significantly reducing manual effort. It greatly accelerates dataset annotation and benefits the wider robotics community.

This document primarily introduces the core features of SALT. For additional basic functionalities, please refer to the appendix documentation (adapted from LABELER).

Overview

As shown in the figure, this section mainly introduces the three key features of SALT:

  1. Point Cloud Pre-annotation (SALT)
  2. Fast Point Cloud Annotation (Merge)
  3. Automatic Instance-level Annotation (AutoInstance)

SALT

Click the SALT button, and the following configuration interface will pop up:

After clicking Update Config, the config.yaml file will automatically open for hyperparameter configuration.
These hyperparameters are mainly used to set the LiDAR configuration and platform motion parameters for different datasets.
We provide reference configuration files for the open-source datasets KITTI, KITTI-16, nuScenes, and S.MID, which users can modify according to their own datasets.

NOTE: In general, we recommend keeping most parameters at their default values except for the Path parameter.
Path specifies the model path, and users should update it according to the path of the downloaded model.

Typically, you only need to adjust the following five parameters:

  • sn
  • t
  • α
  • voxel size
  • DBSCAN eps and min_samples

First, the meaning of each parameter is as follows:

| Parameter | Description |
| --- | --- |
| conda_sh_path | Path to the conda environment |
| data_dir | Path to the data directory |
| cache_dir | Path to the cache directory |
| resume | Whether to resume from the last inference frame |
| seg_ground | Whether to segment the ground separately |
| ground_num | Number of pre-segments used for the ground |
| indoor | Whether to separate the ceiling |
| camera_model | Whether a camera is available |
| real_cam_num | Number of real cameras |
| sn | Number of frames to merge, determined by LiDAR resolution and platform speed |
| voxel_size | Voxel size, determined by LiDAR resolution |
| eps | DBSCAN clustering epsilon (neighborhood radius) |
| min_samples | Minimum number of points required for a DBSCAN cluster |
| voxel_grow | Growth factor for voxel expansion |
| pseduo_camera_num | Number of pseudo cameras for augmentation |
| K | Pseudo-camera intrinsics |
| rot | Rotation axis: 0 for x-axis, 1 for y-axis, 2 for z-axis |
| tra | Upward axis: 0 for x-axis, 1 for y-axis, 2 for z-axis |
| start_rot | Initial rotation between baselink and lidarlink |
| camera_angle_ud (α) | Initial pitch angle (α) |
| camera_angle_rl | Determined by baselink |
| camera_position | Initial translation vector (t) |
| width | Pseudo-image width in pixels |
| height | Pseudo-image height in pixels |
| sam2_checkpoint | Path to the SAM2 model checkpoint |
| model_cfg | Path to the SAM2 model configuration file |
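
Putting the fields above together, a minimal config.yaml might look like the sketch below. All values (paths included) are illustrative assumptions for a KITTI-like setup, not the shipped defaults:

```yaml
# Illustrative config.yaml sketch -- all values are assumptions, not defaults.
conda_sh_path: /home/user/miniconda3/etc/profile.d/conda.sh  # assumed path
data_dir: /data/kitti/sequences/00/velodyne                  # assumed path
cache_dir: /data/kitti/cache                                 # assumed path
resume: false
seg_ground: true
indoor: false
camera_model: false
sn: 7                         # frames to merge (SemanticKITTI-style value)
voxel_size: [0.2, 0.2, 0.2]   # x, y, z voxel size in meters
eps: 0.6                      # DBSCAN neighborhood radius
min_samples: 30               # DBSCAN minimum cluster size
sam2_checkpoint: /models/sam2_checkpoint.pt                  # assumed path
model_cfg: /models/sam2_config.yaml                          # assumed path
```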
| Dataset | LiDAR Type | sn | t | α | voxel size | DBSCAN parameters |
| --- | --- | --- | --- | --- | --- | --- |
| SemanticKITTI | Velodyne HDL-64E | 7 | 30 | 0.6 | [0.2, 0.2, 0.2] | [[0.6, 30], [1.2, 50]] |
| SemanticKITTI-16 | Velodyne HDL-64E (beams reduced from 64 to 16) | 7 | 30 | 0.6 | [0.2, 0.2, 0.6] | [[1.2, 20], [1.2, 50]] |
| nuScenes | Velodyne HDL-32E | 9 | 15 | 1.9 | [0.2, 0.2, 0.6] | [[1.2, 20], [1.2, 50]] |
| S.MID | Livox Mid-360 | 7 | 35 | 2.5 | [0.2, 0.2, 0.2] | [[0.6, 10], [1.2, 50]] |
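
To build intuition for the eps / min_samples pairs in the table, here is a minimal sketch (not SALT's own pipeline) that clusters a synthetic point cloud with scikit-learn's DBSCAN using the SemanticKITTI-style first-stage values eps=0.6, min_samples=30:

```python
# Minimal DBSCAN sketch (assumes scikit-learn); not SALT's own clustering code.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two synthetic "objects" 5 m apart, 50 points each, ~0.1 m spread
obj_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.1, size=(50, 3))
obj_b = rng.normal(loc=(5.0, 0.0, 0.0), scale=0.1, size=(50, 3))
points = np.vstack([obj_a, obj_b])

# SemanticKITTI-style first-stage parameters from the table above
labels = DBSCAN(eps=0.6, min_samples=30).fit_predict(points)
print(len(set(labels) - {-1}))  # number of clusters found (label -1 is noise)
```

With tighter clusters than min_samples points, or an eps smaller than the typical point spacing, objects fall apart into noise; the table's second-stage pair (larger eps, larger min_samples) is more permissive about point spacing.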

These parameters can be derived from the LiDAR beam configuration and the platform's motion speed.
A reference value for the voxel size along the z-axis can be obtained as follows (we recommend 0.2–0.6):

$$ \frac{v_{\text{relative}}}{40} \times \text{vertical resolution} $$

Examples

  • SemanticKITTI:

$$ \frac{20}{40} \times \frac{26.8}{64} = 0.209 $$

  • nuScenes:

$$ \frac{20}{40} \times \frac{41.33}{32} = 0.645 $$

  • S.MID (treated as a 32-beam mechanical spinning LiDAR at 10 Hz):

$$ \frac{2}{40} \times \frac{59.0}{32} \approx 0.092 $$
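
The rule of thumb above can be written as a small helper. The inputs assumed here are the relative platform speed (m/s), the LiDAR's vertical field of view (degrees), and the beam count:

```python
def voxel_z_size(v_relative, vertical_fov_deg, num_beams):
    """Rule-of-thumb z voxel size: (v_relative / 40) * vertical resolution."""
    vertical_resolution = vertical_fov_deg / num_beams  # degrees per beam
    return (v_relative / 40.0) * vertical_resolution

print(voxel_z_size(20, 26.8, 64))   # SemanticKITTI: ~0.209
print(voxel_z_size(20, 41.33, 32))  # nuScenes: ~0.646
```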

We recommend setting sn to a value between 7 and 15, depending on how static the environment is.
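
For intuition on what sn controls, a merged frame can be sketched as concatenating sn consecutive scans after transforming each into a common frame. This is a simplification of the actual pipeline; the scans and their 4x4 poses are assumed inputs:

```python
import numpy as np

def merge_scans(scans, poses):
    """Concatenate scans (Nx3 arrays) after applying each scan's 4x4 pose."""
    merged = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous
        merged.append((homo @ T.T)[:, :3])               # into common frame
    return np.vstack(merged)

# Two toy scans; the second is shifted 1 m forward by its pose
scan = np.zeros((100, 3))
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
merged = merge_scans([scan, scan], [T0, T1])
print(merged.shape)  # (200, 3)
```

The faster the platform moves (or the more dynamic the scene), the fewer frames can be merged before moving objects smear, which is why sn depends on how static the environment is.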

After configuring the parameters, click Run.
Wait for the progress bar to finish, and the point cloud pre-annotation will be completed.

Merge

In addition to the original labeling capabilities of LABELER, we introduce a new fast point cloud annotation feature.
Users simply need to click the Merge button, select the corresponding label, and double-click on the target object to complete rapid annotation.
The effect is demonstrated as follows:

AutoInstance

Once all point clouds have been labeled, click the AutoInstance button to perform instance-level annotation for all objects based on SAM2 tracking information.
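
As a loose illustration of the idea (not SALT's implementation), a per-point instance label could be formed by pairing each point's semantic class with the SAM2 track ID of the mask it falls in. The class * shift + track encoding below is an assumption for illustration only:

```python
import numpy as np

def combine_instance_labels(semantic, track_id, shift=1000):
    """Pack (semantic class, SAM2 track id) into one instance label per point.

    semantic, track_id: integer arrays of equal length. The encoding
    class * shift + track is an illustrative assumption, not SALT's format.
    """
    return semantic.astype(np.int64) * shift + track_id.astype(np.int64)

sem = np.array([10, 10, 11])  # e.g. two points on cars, one on a truck
trk = np.array([1, 2, 1])     # SAM2 track ids
print(combine_instance_labels(sem, trk))  # [10001 10002 11001]
```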
The resulting effect is shown below:
