
Conversation


@ymohit ymohit commented Jul 2, 2021

This is in response to #1680 and follows the suggestion of @wjmaddox. As of now, the plan is as follows:

  • Create a sparse lazy tensor that holds values and indices, much like scipy's sparse CSR and/or COO matrices. In this step, we will support batch mode and other generic GPyTorch features while providing scipy's functionality (see the sketch after this list).
  • Extend the above sparse lazy tensor to handle the values and indices of the SKI weight matrix W. This should be a better alternative to using InterpolatedLazyTensor in typical SKI GP models.
  • Visit the methods (the factorized approach and online GPs) that require setting up W'Wv, and set up efficient MVM and update routines for them.
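
For step 1, here is a minimal sketch of what such a wrapper could look like, built on torch COO sparse tensors. The class and method names are illustrative placeholders, not a final API:

```python
# Minimal sketch of a COO-backed sparse lazy tensor (hypothetical names).
import torch


class SparseLazyTensor:
    """Holds COO-style indices/values, mirroring scipy.sparse.coo_matrix."""

    def __init__(self, indices, values, size):
        # indices: (2, nnz) long tensor; values: (nnz,) tensor
        self._tensor = torch.sparse_coo_tensor(indices, values, size).coalesce()

    @property
    def shape(self):
        return self._tensor.shape

    def matmul(self, rhs):
        # Sparse-dense matmul; torch.sparse.mm expects a 2D dense rhs.
        return torch.sparse.mm(self._tensor, rhs)

    def t(self):
        # Transpose by swapping the two rows of the COO index tensor.
        idx = self._tensor.indices()
        return SparseLazyTensor(
            idx.flip(0), self._tensor.values(), self._tensor.shape[::-1]
        )


# Example: a 3x4 sparse matrix with 3 nonzeros, applied to a dense vector.
indices = torch.tensor([[0, 1, 2], [1, 0, 3]])
values = torch.tensor([1.0, 2.0, 3.0])
W = SparseLazyTensor(indices, values, (3, 4))
v = torch.randn(4, 1)
print(W.matmul(v).shape)  # torch.Size([3, 1])
```

Batch mode and the rest of the generic lazy tensor interface would be layered on top of this.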

@ymohit ymohit changed the title from "Planning sparse lazy tensor" to "Sparse lazy tensor and SKI weight matrix W" on Jul 2, 2021
@jacobrgardner
Member

Part of the reason we implemented a lot of how we handle W ourselves is that back in PyTorch 0.2, when we first implemented SKI, the sparse tensor support in native PyTorch was pretty slow.

It's still not guaranteed that simply implementing W as a sparse tensor in native torch will be faster (or even possible -- back then they didn't support batch matmuls), because W has special additional structure, but it could be worth a look.

If it's still better to special-case W, I agree we should have a SparseLazyTensor that wraps PyTorch's native sparse tensors, and perhaps a subclass that deals with W specifically.
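
For concreteness, here's a rough sketch of how such a W-specific subclass could exploit that extra structure: each row of the SKI interpolation matrix has exactly k nonzeros (the interpolation weights), so dense (n, k) index/value arrays plus a gather suffice for the MVM, with no generic sparse kernels involved. Names here are illustrative, not GPyTorch's actual API:

```python
# Sketch: W stored as dense (n, k) index/value arrays (hypothetical names).
import torch


class InterpWeightLazyTensor:
    def __init__(self, interp_indices, interp_values, num_cols):
        # Row i of W has interp_values[i, j] at column interp_indices[i, j].
        self.idx = interp_indices  # long tensor, (n, k)
        self.val = interp_values   # float tensor, (n, k)
        self.num_cols = num_cols   # width of W

    def matmul(self, rhs):
        # (W @ rhs)[i] = sum_j val[i, j] * rhs[idx[i, j]]
        gathered = rhs[self.idx]  # advanced indexing -> (n, k, t)
        return (self.val.unsqueeze(-1) * gathered).sum(dim=1)


# 5 rows, k=2 nonzeros each, over a 10-point inducing grid.
idx = torch.randint(0, 10, (5, 2))
val = torch.rand(5, 2)
W = InterpWeightLazyTensor(idx, val, 10)
print(W.matmul(torch.randn(10, 3)).shape)  # torch.Size([5, 3])
```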

Finally, I wonder if this effort should be done in the linear_operator package we're trying to pull out of GPyTorch, because SparseLazyTensors aren't necessarily positive definite. This could be a good motivation to finally finish up a minimal version of that package and make it a dependency of GPyTorch.

@ymohit
Author

ymohit commented Jul 2, 2021

Thanks for commenting @jacobrgardner.

I'll try to run some simple tests that will let us compare the current implementation with the torch sparse case.

For now, I'll push this work here, as I'm not able to see how gpytorch depends on the linear_operator library. Feel free to drop any pointers if I'm missing information.
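
Something like the following rough timing sketch (assuming CPU, the 2D non-batch case, and arbitrary sizes) could serve as a starting point, comparing a native torch sparse MVM against the dense equivalent before bringing InterpolatedLazyTensor into the comparison:

```python
# Rough MVM timing sketch: native torch sparse vs. dense (illustrative only).
import time

import torch

n, m, nnz = 10_000, 2_000, 40_000
indices = torch.stack([
    torch.randint(0, n, (nnz,)),
    torch.randint(0, m, (nnz,)),
])
values = torch.randn(nnz)
W_sparse = torch.sparse_coo_tensor(indices, values, (n, m)).coalesce()
W_dense = W_sparse.to_dense()
v = torch.randn(m, 1)

for name, W, mm in [
    ("sparse", W_sparse, torch.sparse.mm),
    ("dense", W_dense, torch.mm),
]:
    start = time.perf_counter()
    for _ in range(100):
        mm(W, v)
    print(name, time.perf_counter() - start)
```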

@wjmaddox
Collaborator

wjmaddox commented Jul 9, 2021

linear_operator lives here: https://github.com/cornellius-gp/linear_operator. As you can see, it's currently an inconsistently updated clone of gpytorch.lazy, but the goal is for linear_operator to be a (more) standalone linear algebra package that supports generic linear operators (e.g., lazy tensors) beyond just those useful for GPs.

@Balandat
Collaborator

Ultimately the hope is that the basic infrastructure for this will be implemented in PyTorch proper (as it already is for scipy / TensorFlow): pytorch/pytorch#28341.
