Replies: 1 comment
-
Yes, this should still use `|`.
-
I am currently generating correlated random variables with NumPy's Cholesky decomposition, and I would like to use GPyTorch to speed up the process. The maximum size of the matrix is not big (200x200), but I compute the correlated variables inside a loop, so the operation is repeated at every temporal iteration of my code.
```python
def generate_correlated_variables(mean, covariance_matrix):
    cholesky_matrix = np.linalg.cholesky(covariance_matrix)
    normal_random_variables = np.random.normal(size=[mean.shape[0], mean.shape[1]])
    correlated_variables = mean + np.dot(normal_random_variables, cholesky_matrix.T)
    return correlated_variables
```
My idea was to use:

```python
eta_dist = MultivariateNormal(torch.zeros_like(mean), covariance_matrix)
samples = eta_dist.sample()
```
but it is actually much slower than using NumPy. Do you have an explanation for this?
(My understanding is that for a matrix of my current size the sampling still goes through a Cholesky decomposition, just via `torch.linalg.cholesky`; am I right?)
Do you have any recommendations for alternatives I could use to speed up the process?
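For context, one thing I have considered (a minimal sketch, assuming the covariance matrix stays fixed across iterations; `make_sampler` is a hypothetical helper, not part of any library) is factoring the matrix once outside the temporal loop and reusing the factor at every step, so each iteration only costs a matrix-vector product:

```python
import numpy as np

def make_sampler(covariance_matrix, rng=None):
    """Factor the covariance once; return a cheap per-iteration sampler.

    Assumption: covariance_matrix does not change between iterations,
    so the O(n^3) Cholesky cost is paid a single time up front.
    """
    rng = np.random.default_rng() if rng is None else rng
    cholesky_matrix = np.linalg.cholesky(covariance_matrix)  # factor once

    def sample(mean):
        # Per-iteration work: draw standard normals and apply the factor.
        z = rng.standard_normal(size=mean.shape)
        return mean + z @ cholesky_matrix.T

    return sample
```

Usage inside the loop would then be `sampler = make_sampler(cov)` once, followed by `samples = sampler(mean)` at each temporal iteration.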