We mainly follow the procedure in OneFormer3D.
- Download ScanNet v2 data HERE. Link or move the `scans` folder to `data/scannet`. If you are performing segmentation tasks and want to upload the results to its official benchmark, please also link or move the `scans_test` folder to this directory.
- In the `data/scannet` directory, extract point clouds and annotations by running `python batch_load_scannet_data.py`. Add the `--scannet200` flag if you want to get markup for the ScanNet200 dataset.
- Enter the project root directory and generate the training data by running:

```bash
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```

or, for ScanNet200:

```bash
mkdir data/scannet200
python tools/create_data.py scannet200 --root-path ./data/scannet --out-dir ./data/scannet200 --extra-tag scannet200
```

The overall process for ScanNet can be achieved through the following script:
```bash
python batch_load_scannet_data.py
cd ../..
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```

Or for ScanNet200:
```bash
python batch_load_scannet_data.py --scannet200
cd ../..
mkdir data/scannet200
python tools/create_data.py scannet200 --root-path ./data/scannet --out-dir ./data/scannet200 --extra-tag scannet200
```

The directory structure after pre-processing should be as below:
```
scannet
├── meta_data
├── batch_load_scannet_data.py
├── load_scannet_data.py
├── scannet_utils.py
├── scans
├── scans_test
├── scannet_instance_data
├── points
│   ├── xxxxx.bin
├── instance_mask
│   ├── xxxxx.bin
├── semantic_mask
│   ├── xxxxx.bin
├── super_points
│   ├── xxxxx.bin
├── seg_info
│   ├── train_label_weight.npy
│   ├── train_resampled_scene_idxs.npy
│   ├── val_label_weight.npy
│   ├── val_resampled_scene_idxs.npy
├── scannet_oneformer3d_infos_train.pkl
├── scannet_oneformer3d_infos_val.pkl
├── scannet_oneformer3d_infos_test.pkl
```
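Once pre-processing finishes, it can be worth sanity-checking that the per-scene files are mutually consistent before training. The sketch below is not part of the pipeline; it assumes each `points/<scene>.bin` is a flat `float32` array with 6 values per point (x, y, z, r, g, b) and that the mask files hold one `int64` label per point — verify these dtypes against your own `batch_load_scannet_data.py` output. The demo runs on a synthetic scene in a temporary directory so it does not need the real dataset.

```python
from pathlib import Path
import tempfile

import numpy as np

def check_scene(root: Path, scene: str, point_dim: int = 6) -> int:
    """Verify one pre-processed scene and return its point count.

    Assumes float32 points with `point_dim` values each, and int64
    per-point labels in the mask files (dtypes are an assumption --
    check against your own pre-processing output).
    """
    pts = np.fromfile(root / "points" / f"{scene}.bin", dtype=np.float32)
    assert pts.size % point_dim == 0, "unexpected point layout"
    n_points = pts.size // point_dim
    # Every mask file must carry exactly one label per point.
    for sub in ("instance_mask", "semantic_mask", "super_points"):
        mask = np.fromfile(root / sub / f"{scene}.bin", dtype=np.int64)
        assert mask.size == n_points, f"{sub}: {mask.size} != {n_points}"
    return n_points

# Demo on a synthetic scene standing in for a real scannet/ directory.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for sub in ("points", "instance_mask", "semantic_mask", "super_points"):
        (root / sub).mkdir()
    n = 100
    rng = np.random.default_rng(0)
    rng.random((n, 6)).astype(np.float32).tofile(
        root / "points" / "scene0000_00.bin")
    for sub in ("instance_mask", "semantic_mask", "super_points"):
        np.zeros(n, dtype=np.int64).tofile(root / sub / "scene0000_00.bin")
    print(check_scene(root, "scene0000_00"))  # prints 100
```

Running this over every scene listed in the `.pkl` info files catches truncated or mismatched `.bin` files early, which otherwise only surface as shape errors deep inside the dataloader.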