## Overview
- Two separate point cloud renderers with support for splatting and Phong lighting
- [Ground Truth Renderer](https://github.com/momower1/PointCloudEngine/wiki/Ground-Truth-Renderer) renders a point cloud with splatting, the pull-push algorithm or a neural rendering pipeline and can compare the results against a mesh
- [Octree Renderer](https://github.com/momower1/PointCloudEngine/wiki/Octree-Renderer) builds an octree in a preprocessing step and renders the point cloud with LOD control and splatting
- [PlyToPointcloud](https://github.com/momower1/PointCloudEngine/wiki/PlyToPointcloud) converts .ply files with _x,y,z,nx,ny,nz,red,green,blue_ format into the required .pointcloud format

## Getting Started
- Drag and drop your .ply files onto _PlyToPointcloud.exe_
- Adjust the _Settings.txt_ file (optional)
- Run _PointCloudEngine.exe_

- Move the camera so close that the individual splats are visible (depending on the scale this might not happen)
- Adjust the _Sampling Rate_ in such a way that the splats overlap just a bit
- Enable _Blending_, look at the point cloud from various distances and adjust the blend factor so that only close surfaces are blended together
- The Pull-Push algorithm can be configured in a similar way
- The Neural Rendering Pipeline requires loading all three exported networks
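The role of the blend factor can be pictured with a small sketch. This is an assumption about how depth-based splat blending generally works, not the engine's actual shader code: overlapping splat fragments are averaged only when they lie close to the front-most surface.

```python
def blend_splats(fragments, blend_factor):
    # fragments: list of (depth, (r, g, b)) splat samples covering one pixel
    d_min = min(depth for depth, _ in fragments)
    # Average only fragments close to the front-most surface; a larger
    # blend_factor also blends surfaces that are further apart
    near = [color for depth, color in fragments if depth - d_min <= blend_factor]
    return tuple(sum(c[i] for c in near) / len(near) for i in range(3))
```

With a small blend factor, two overlapping splats of the same surface are averaged while a splat from a background surface is excluded.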

## Developer Setup (Windows)
- The following libraries are required; make sure that the CUDA toolkit version and the PyTorch version match exactly
  - [CUDA 11.7](https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local)
  - [Anaconda3 2022.10](https://www.anaconda.com/products/distribution#Downloads), installed for all users and added to PATH
- Python environment in the Visual Studio Installer (no need to install Python 3.7 separately)
  - [PyTorch 1.13.1](https://pytorch.org/get-started/locally/): _conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia_
- Update the include directories, library directories and post-build event paths in the Visual Studio PointCloudEngine property pages according to the installation paths
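Because the CUDA toolkit and the PyTorch build must match, it is worth verifying the versions after installation; `torch.version.cuda` reports the CUDA version PyTorch was built against. A small helper for comparing them (hypothetical, not part of the project):

```python
def cuda_versions_match(torch_cuda: str, toolkit: str) -> bool:
    # Compare only major.minor, e.g. "11.7" vs "11.7.0"
    return torch_cuda.split(".")[:2] == toolkit.split(".")[:2]

# Usage with an installed PyTorch (not run here):
# import torch
# print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
# assert cuda_versions_match(torch.version.cuda, "11.7.0")
```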

## Example for supported .ply file
```
ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
end_header
0 0 0 0 0 1 255 0 0
1 0 0 0 0 1 0 255 0
```
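A quick way to verify that a .ply file matches the layout above is to read its header and compare the vertex properties. This is a minimal sketch for ASCII headers, not PlyToPointcloud's actual parser:

```python
REQUIRED = ["x", "y", "z", "nx", "ny", "nz", "red", "green", "blue"]

def vertex_properties(header_lines):
    # Collect the property names of the "vertex" element, in order
    props, in_vertex = [], False
    for line in header_lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "element":
            in_vertex = tokens[1] == "vertex"
        elif tokens[0] == "property" and in_vertex:
            props.append(tokens[-1])  # the last token is the property name
        elif tokens[0] == "end_header":
            break
    return props
```

For example, `vertex_properties(open("cloud.ply").read().splitlines()) == REQUIRED` checks a file before dropping it onto the converter.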
- View the point cloud in different modes
  - Splats: high-quality splat rendering of the whole point cloud
  - Points: high-quality point rendering of the whole point cloud
  - Pull Push: fast screen-space inpainting algorithm applied to the point rendering
  - Mesh: renders a textured .OBJ mesh for comparison
  - Neural Network: renders the neural network output from the sparse point rendering
- Compare to a sparse subset of the point cloud with a different sampling rate
- Blending the colors of overlapping splats
- Phong lighting
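The Pull Push mode above refers to the classic pull-push hole-filling scheme: valid pixels are averaged into coarser pyramid levels (pull), then the coarse colors are propagated back into the hole pixels (push). A minimal NumPy sketch of the idea, assuming square power-of-two resolutions; the engine's actual GPU implementation will differ:

```python
import numpy as np

def pull_push(color, valid):
    # color: (H, W, 3) float image, valid: (H, W) bool mask of filled pixels.
    # Assumes H == W and a power-of-two size (a simplification for this sketch).
    H, W = valid.shape
    if H == 1 and W == 1:
        return color
    # Pull: average the valid pixels of every 2x2 block into a coarser level
    c = color.reshape(H // 2, 2, W // 2, 2, 3)
    v = valid.reshape(H // 2, 2, W // 2, 2)
    count = v.sum(axis=(1, 3))
    coarse = (c * v[..., None]).sum(axis=(1, 3)) / np.maximum(count, 1)[..., None]
    coarse = pull_push(coarse, count > 0)  # recurse until every pixel is filled
    # Push: replace hole pixels with the upsampled coarser colors
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return np.where(valid[..., None], color, up)
```

Each hole pixel ends up with the color of the nearest pyramid level that had valid data, which is why the technique inpaints the gaps of a sparse point rendering so cheaply.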

# Octree Renderer
## Features
- Loads and renders point cloud datasets and generates an octree for level-of-detail
- char[3] - normalized normal
- uchar[3] - RGB color
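The two fields above can be illustrated with a small packing sketch. Only part of the .pointcloud record layout is shown here, so the struct layout and the linear [-1, 1] → [-127, 127] mapping for the normal are assumptions for illustration:

```python
import struct

def pack_normal_color(normal, color):
    # char[3]: each normal component mapped linearly from [-1, 1] to [-127, 127]
    quantized = [max(-127, min(127, round(c * 127))) for c in normal]
    return struct.pack("3b3B", *quantized, *color)  # followed by uchar[3] RGB

def unpack_normal_color(data):
    values = struct.unpack("3b3B", data)
    return [v / 127 for v in values[:3]], list(values[3:])
```

The signed-byte quantization keeps the vertex record small at the cost of roughly two decimal digits of normal precision.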

# Training
- The neural rendering pipeline is trained in PyTorch
- Datasets for training can be created within the engine under the _Dataset_ tab

Copyright © Moritz Schöpf. All rights reserved. The use, distribution and change of this software and the source code without written permission of the author is prohibited.