Sparse kernels for better performance? #5

@f-dangel

Description

The implementation of unfoldNd relies on a convolution whose kernels are one-hot, and therefore highly sparse. Hence, the code could run faster when using sparse tensors.

Open questions:

  • What is the result of a sparse convolution with a dense input? If it is a dense tensor, that would be good (see the sketch after this list for a matmul-based probe).
  • Does using sparse tensors provide a benefit in terms of run time? (related: Add a benchmark #4)
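A minimal sketch of one way to probe this, assuming the unfold operation is rephrased as a multiplication with a sparse one-hot gather matrix. `torch.sparse.mm` is used here as a stand-in for a true sparse convolution (PyTorch's convolution functions do not accept sparse weights directly), and all shapes below are illustrative, not taken from unfoldNd:

```python
import torch
import torch.nn.functional as F

# Illustrative sizes (hypothetical).
N, C, H, W = 8, 3, 64, 64
kernel_size = 3

x = torch.randn(N, C, H, W)

# Reference: dense unfold (im2col), shape (N, C * k * k, L).
out_dense = F.unfold(x, kernel_size)

# Unfolding copies exactly one input element into each output entry, so the
# map from the flattened input to the unfolded output is a one-hot matrix.
# Recover the gathered indices by unfolding an index grid.
idx = F.unfold(
    torch.arange(C * H * W, dtype=torch.float32).reshape(1, C, H, W),
    kernel_size,
).long().squeeze(0)  # (C * k * k, L)

rows = torch.arange(idx.numel())
cols = idx.reshape(-1)
gather = torch.sparse_coo_tensor(
    torch.stack([rows, cols]),
    torch.ones(idx.numel()),
    size=(idx.numel(), C * H * W),
)

# Sparse @ dense: torch.sparse.mm returns a *dense* result.
out_sparse = torch.sparse.mm(gather, x.reshape(N, -1).T).T
out_sparse = out_sparse.reshape(N, C * kernel_size ** 2, -1)

print(out_sparse.is_sparse)                   # False -> dense output
print(torch.allclose(out_dense, out_sparse))  # True
```

In this matmul formulation, sparse-times-dense already yields a dense tensor, which is the behaviour hoped for in the first question. Whether it is actually faster than the current one-hot convolution would be answered by the benchmark from #4.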


Labels: enhancement (New feature or request), waiting (Waiting for actions of a third party)
