Hi,
I want to use GPyTorch while avoiding building the kernel as a dense tensor.
I just want to provide the result of a matrix-vector product, without ever creating the matrix itself.
I read that the LazyTensor class (or the linear_operator package) can do this, but its usage is not clear to me.
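To make concrete what I mean by "matrix-free", here is the kind of product I have in mind, in plain PyTorch (the RBF kernel here is only an illustration, not my actual operator):

```python
import torch

# Illustration of a matrix-free matvec: y = K @ v, where K is an RBF
# kernel matrix that is never materialized. Each row is formed on the
# fly and discarded, so memory stays O(n) instead of O(n^2).
def rbf_matvec(x: torch.Tensor, v: torch.Tensor, lengthscale: float = 1.0) -> torch.Tensor:
    out = torch.empty_like(v)
    for i in range(x.shape[0]):
        row = torch.exp(-((x[i] - x) ** 2) / (2 * lengthscale ** 2))
        out[i] = row @ v
    return out
```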
Below is a deliberately stupid example that fails, and I was wondering how to introduce the scaling through the LazyTensor so that the model can be optimized.
```python
import math
import torch
import gpytorch
from gpytorch.lazy.lazy_tensor import LazyTensor


class Stupid(LazyTensor):
    """A lazy operator that never builds a matrix: it acts as the identity."""

    def __init__(self, x):
        self.size = x.shape[0]

    def _matmul(self, v):
        # Matrix-vector product without the matrix: identity, so just return v
        return v

    def _size(self):
        return torch.Size([self.size, self.size])

    def _transpose_nonbatch(self):
        # The identity is symmetric
        return self


# Use the simplest form of GP model, exact inference
class StupidModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = Stupid(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


training_iter = 50
# Training data is 100 points in [0, 1] inclusive, regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * math.sqrt(0.04)


# Wrap the training loop from the ExactGP tutorial in a function,
# so that we do not have to repeat the code later on
def train(model, likelihood, training_iter=training_iter):
    # Use the Adam optimizer (includes the GaussianLikelihood parameters)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    # "Loss" for GPs - the marginal log likelihood
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for i in range(training_iter):
        # Zero gradients from previous iteration
        optimizer.zero_grad()
        # Output from model
        output = model(train_x)
        # Calc loss and backprop gradients
        loss = -mll(output, train_y)
        loss.backward()
        optimizer.step()
        msg = "Iter {}: Loss={:2.4f}, Noise={:2.4f}".format(i, loss.item(), model.likelihood.noise.item())
        print(msg)


# Initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
# Number of random projections (not used yet)
nbRandProj = 100
# Rescale (not used yet)
h = 0.05
# Initialize the new model
model = StupidModel(train_x, train_y, likelihood)
# Set to training mode and train
model.train()
likelihood.train()
train(model, likelihood)
```
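For what it is worth, here is roughly what I imagined for introducing a learnable scale, but I have no idea whether this is the intended way (the `.mul` call and the `raw_scale` parameter name are my own guesses, not something from the tutorial):

```python
import torch
import gpytorch

# My guess at a learnable scale: scale the lazy identity operator
# instead of materializing a scaled matrix. Reuses the Stupid class
# defined above; raw_scale is a hypothetical parameter name.
class ScaledStupidModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Raw parameter, mapped through softplus to keep the scale positive
        self.register_parameter("raw_scale", torch.nn.Parameter(torch.zeros(1)))

    def forward(self, x):
        mean_x = self.mean_module(x)
        scale = torch.nn.functional.softplus(self.raw_scale)
        # LazyTensor.mul should give a lazily scaled operator -- is this right?
        covar_x = Stupid(x).mul(scale)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```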
Could you tell me where I went wrong?
A simple example in the documentation would be a great help.
Thank you for your package.
Regards