
Generate LiDAR Training Dataset #16

@rogeherlor

Hi,

I am working with LiDAR DNNs and I want to see the effect of training on LiDAR-corrupted point clouds instead of the original ModelNet40 objects. I plan to use your uploaded dataset for the test stage rather than for validation, so I need to generate the LiDAR training set myself.

When I try to generate the LiDAR training dataset from the original ModelNet40 training split, ModelNet40-C/data/occlusion.py crashes because core_occlusion() returns a pcd with 0 points:

def core_occlusion(mesh, type, camera_extrinsic=None, camera_intrinsic=None, window_width=1080, window_height=720, n_points=None, downsample_ratio=None):
    if camera_extrinsic is None:
        camera_extrinsic = get_default_camera_extrinsic()
    
    if camera_intrinsic is None:
        camera_intrinsic = get_default_camera_intrinsic()

    camera_parameters = o3d.camera.PinholeCameraParameters()
    camera_parameters.extrinsic = camera_extrinsic
    camera_parameters.intrinsic.set_intrinsics(**camera_intrinsic)

    viewer = o3d.visualization.Visualizer()
    viewer.create_window(width=window_width, height=window_height)
    viewer.add_geometry(mesh)

    control = viewer.get_view_control()
    control.convert_from_pinhole_camera_parameters(camera_parameters)
    # viewer.run()

    depth = viewer.capture_depth_float_buffer(do_render=True)

    viewer.destroy_window()
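    # If the mesh was not rendered into the depth buffer, the point cloud
    # created below ends up with 0 points.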
    pcd = o3d.geometry.PointCloud.create_from_depth_image(depth, camera_parameters.intrinsic, extrinsic=camera_parameters.extrinsic)

    if downsample_ratio is not None:
        ratio = int((1 - downsample_ratio) / downsample_ratio)
        pcd = pcd.uniform_down_sample(ratio)
    elif n_points is not None:
        # print(np.asarray(pcd.points).shape[0])
        ratio = int(np.asarray(pcd.points).shape[0] / n_points)
        if ratio > 0:
            # if type == 'occlusion':
            set_points(pcd, shuffle_data(np.asarray(pcd.points)))
            pcd = pcd.uniform_down_sample(ratio)
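        # If ratio == 0 (fewer than n_points points, including the 0-point
        # case), the pcd is returned unchanged.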
    
    return pcd


def occlusion_1(mesh, type, severity, window_width=1080, window_height=720, n_points=None, downsample_ratio=None):
    points = get_points(mesh)
    points = normalize(points)
    set_points(mesh, points)
    if type == 'occlusion':
        camera_extrinsic = random_pose(severity)
    elif type == 'lidar':
        camera_extrinsic, pose = lidar_pose(severity)
    camera_intrinsic = get_default_camera_intrinsic(window_width, window_height)
    pcd = core_occlusion(mesh, type, camera_extrinsic=camera_extrinsic, camera_intrinsic=camera_intrinsic, window_width=window_width, window_height=window_height, n_points=n_points, downsample_ratio=downsample_ratio)

    points = get_points(pcd)
    if points.shape[0] < n_points:
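        # With an empty pcd this is where it fails:
        # np.random.choice(0, n_points) raises a ValueError.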
        index = np.random.choice(points.shape[0], n_points)
        points = points[index]
    # points = normalize(points)
    # points = denomalize(points, scale, offset)
    if type == 'lidar':
        return points[:n_points,:], pose
    else:
        return points[:n_points,:]

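As a temporary workaround on my side, I wrap the pose sampling in a retry loop so an empty depth render just triggers a new camera pose (a sketch only, reusing the helpers quoted above; max_retries and the skip-on-failure behaviour are my own choices, not from the repo):

def occlusion_with_retry(mesh, type, severity, window_width=1080, window_height=720, n_points=None, max_retries=5):
    # Same steps as occlusion_1(), but re-sample the camera pose when the
    # depth render comes back empty instead of indexing into an empty array.
    set_points(mesh, normalize(get_points(mesh)))
    pose = None
    for _ in range(max_retries):
        if type == 'occlusion':
            camera_extrinsic = random_pose(severity)
        elif type == 'lidar':
            camera_extrinsic, pose = lidar_pose(severity)
        camera_intrinsic = get_default_camera_intrinsic(window_width, window_height)
        pcd = core_occlusion(mesh, type, camera_extrinsic=camera_extrinsic, camera_intrinsic=camera_intrinsic, window_width=window_width, window_height=window_height, n_points=n_points)
        points = get_points(pcd)
        if points.shape[0] > 0:
            if points.shape[0] < n_points:
                index = np.random.choice(points.shape[0], n_points)
                points = points[index]
            return (points[:n_points, :], pose) if type == 'lidar' else points[:n_points, :]
    return None  # mesh never rendered; the caller skips it

This keeps the rest of the pipeline unchanged, but it does not explain why meshes like person_0060.off never render in the first place.
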
Most of the objects render correctly (e.g. person_0059.off):
[screenshot: rendered point cloud of person_0059.off]

But the ones that return 0 points are not rendered at all (e.g. person_0060.off).

Could you upload the training dataset, or let me know why some objects (e.g. person_0060.off) return 0 points? Did this also happen when you converted the ModelNet40 test split?
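
For reference, this is roughly how I am finding the meshes that come back empty (a sketch; the glob path matches my local ModelNet40 layout, and it reuses get_points/set_points/normalize/lidar_pose/core_occlusion from occlusion.py):

import glob
import numpy as np
import open3d as o3d

empty_meshes = []
for path in sorted(glob.glob('ModelNet40/*/train/*.off')):
    mesh = o3d.io.read_triangle_mesh(path)
    set_points(mesh, normalize(get_points(mesh)))
    camera_extrinsic, _ = lidar_pose(1)  # severity 1
    pcd = core_occlusion(mesh, 'lidar', camera_extrinsic=camera_extrinsic)
    if np.asarray(pcd.points).shape[0] == 0:
        empty_meshes.append(path)

print(len(empty_meshes), 'meshes rendered to 0 points, e.g.', empty_meshes[:5])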
