What Makes a Good Pre-Training on Point Clouds? #9

@hansen7

Description

General Interpretability:

  • Interpretable ML book, specifically the sections on Learned Features, Shapley Values, and Influential Instances
  • "Network Dissection: Quantifying Interpretability of Deep Visual Representations", CVPR 2017
  • "Feature Visualization", Olah et al., Distill 2017
  • Bolei's Portfolio
  • Chiyuan's Portfolio (also, transfer learning)

General Pre-Training:

  • "Rethinking ImageNet Pre-training", ICCV 2019
  • "Rethinking Pre-training and Self-training", NeurIPS 2020
  • "What is being transferred in transfer learning?", NeurIPS 2020
  • "What Makes Instance Discrimination Good for Transfer Learning?", ICLR 2021 submission

Ideas from Contrastive Learning:

Point Cloud Specific:

  • "Rotation Invariant Convolutions for 3D Point Clouds Deep Learning", 3DV 2019
  • "Quaternion Equivariant Capsule Networks for 3D Point Clouds", ECCV 2020
  • "Label-Efficient Learning on Point Clouds using Approximate Convex Decompositions", ECCV 2020
  • "On the Universality of Rotation Equivariant Point Cloud Networks", ICLR 2021 submission

Extensions:

  • "Neural Similarity Learning", NeurIPS 2019
