IamMohitM/PointCloudToPatches


Abstract

Machine learning has achieved promising results for 3D model retrieval with the new input modality of 3D sketches [1]. Training a large-scale 3D model retrieval system, however, requires a large-scale 3D sketch dataset. Ling et al. [1] used FlowRep [3], a graphical shape-abstraction method, to generate human-like 3D sketches, but because of its strict input requirements, FlowRep could only process a limited number of 3D models, which in turn limited the size of the sketch dataset.

In this thesis, we present the first learning-based method to generate human-like 3D sketches. We use 3D point clouds as the input modality and generate sets of control points of Coons patches, where each patch is bounded by four Bézier curves. These Bézier curves form the skeleton of human-like sketches. Our contributions are three-fold: i) we analyse and review FlowRep to understand what arbitrary 3D sketch generation should produce; ii) we propose the first learning-based method for generating 3D sketches, inspired by the 3D representation of Smirnov et al. [2]; iii) we suggest possible directions for future work on this problem. While our sketches fall short of FlowRep's state-of-the-art synthetic sketches in top-k accuracy for 3D model retrieval, our method processes input 3D models with a 100% success rate, unlike FlowRep, and achieves mean average precision on par with FlowRep-based sketches for 3D model retrieval.
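For intuition, below is a minimal, self-contained sketch (not the repository's code) of the patch representation described above: a patch whose boundary is four cubic Bézier curves, each defined by four 3D control points. The control-point layout and the flat example patch are illustrative assumptions.

# Minimal illustration of the patch representation: a Coons patch whose
# boundary is four cubic Bezier curves. Control-point layout is an assumption.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter values t (shape (n,))."""
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def patch_boundary(control_points, samples=32):
    """Sample the four boundary curves of one patch.

    control_points: array of shape (4, 4, 3) -- four cubic Bezier curves,
    each with four 3D control points (shared corners close the loop).
    """
    t = np.linspace(0.0, 1.0, samples)
    return [cubic_bezier(*curve, t) for curve in control_points]

# Example: a flat square patch; each row of `curves` is one boundary curve.
corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
curves = np.stack([
    np.linspace(corners[i], corners[(i + 1) % 4], 4) for i in range(4)
])
boundary = patch_boundary(curves)  # four (32, 3) polylines: the "sketch" strokes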

Example

Pretrained Models

Setup

Run the following commands in your terminal:

git clone https://github.com/IamMohitM/PointCloudToPatches.git
cd PointCloudToPatches
pip install -e .

Train


python src/models/train_model.py --encoder PointNet \
    --batch_size 2 \
    --no_cuda \
    --dataset_path 'dataset/modelnet40_normal_resampled' \
    --checkpoint_dir 'checkpoints' \
    --log_dir_suffix 'test' \
    --template_dir 'dataset/templates/sphere24'

The above command creates a checkpoint directory 'checkpoints', where checkpoints are saved, and a 'checkpoints/summaries' directory, which contains training and validation logs for TensorBoard.
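To monitor training, the logs can be viewed with TensorBoard (assuming TensorBoard is installed in your environment):

tensorboard --logdir checkpoints/summaries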

One can explore the arguments with python src/models/train_model.py --help

Generate Sketches

python src/scripts/reconstruct.py --pc_file "input.pts" \
    --output_file testing.pts \
    --file_type pts \
    --model_dir checkpoints \
    --template_dir dataset/templates/sphere24

This reconstructs a sketch from the input point cloud file.

Change the encoder (--encoder) to "edgeconv" if using EdgeConv-based models.
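To inspect the result, here is a minimal, hypothetical snippet (not part of the repository), assuming the output .pts file stores one whitespace-separated "x y z" point per line, a common .pts convention; the repository's exact format may differ:

# Hypothetical helper for viewing the reconstructed sketch points,
# assuming one whitespace-separated "x y z" point per line.
import numpy as np
import matplotlib.pyplot as plt

points = np.loadtxt("testing.pts")  # shape (n, 3) under this assumption

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
ax.set_title("Reconstructed sketch points")
plt.show()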

References

[1] Ling et al., “Towards 3D VR-Sketch to 3D Shape Retrieval”, in International Conference on 3D Vision (3DV), 2020.

[2] Smirnov et al., “Learning Manifold Patch-Based Representations of Man-Made Shapes”, in International Conference on Learning Representations (ICLR), 2021.

[3] Gori et al., “FlowRep: Descriptive curve networks for free-form design shapes”, in ACM Transactions on Graphics (TOG), 2017.

Acknowledgement

This project's code was written using a combination of code from the following projects:
