2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -51,7 +51,7 @@ def find_version(*file_paths):
 os.mkdir(examples_dest)

 # Include examples in documentation
-# This adds a lot of time to the doc buiod; to bypass use the environment variable SKIP_EXAMPLES=true
+# This adds a lot of time to the doc build; to bypass use the environment variable SKIP_EXAMPLES=true
 for root, dirs, files in os.walk(examples_source):
     for dr in dirs:
         os.mkdir(os.path.join(root.replace(examples_source, examples_dest), dr))
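The comment fixed above documents an expensive step of the doc build. As a rough sketch (not the actual `conf.py`, and the exact way `SKIP_EXAMPLES` is consumed is an assumption), the directory-mirroring pattern in this hunk looks like:

```python
import os


def mirror_dirs(examples_source, examples_dest):
    """Recreate the directory tree of examples_source under examples_dest.

    Hypothetical helper illustrating the os.walk pattern in the diff; the
    SKIP_EXAMPLES check is an assumption about how conf.py uses the flag.
    """
    if os.environ.get("SKIP_EXAMPLES", "").lower() == "true":
        return  # bypass the slow examples copy entirely
    os.mkdir(examples_dest)
    # os.walk is top-down, so parent directories are created before children
    for root, dirs, files in os.walk(examples_source):
        for dr in dirs:
            os.mkdir(os.path.join(root.replace(examples_source, examples_dest), dr))
```

Because the walk is top-down, each `root.replace(...)` target already exists by the time its subdirectories are created.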
2 changes: 1 addition & 1 deletion docs/source/keops_kernels.rst
@@ -12,7 +12,7 @@ These kernels are compatible with the GPyTorch KeOps integration.
 For more information, see the `KeOps tutorial`_.

 .. note::
-    Only some standard kernels have KeOps impementations.
+    Only some standard kernels have KeOps implementations.
     If there is a kernel you want that's missing, consider submitting a pull request!


2 changes: 1 addition & 1 deletion docs/source/likelihoods.rst
@@ -22,7 +22,7 @@ One-Dimensional Likelihoods
 Likelihoods for GPs that are distributions of scalar functions.
 (I.e. for a specific :math:`\mathbf x` we expect that :math:`f(\mathbf x) \in \mathbb{R}`.)

-One-dimensional likelihoods should extend :obj:`gpytoch.likelihoods._OneDimensionalLikelihood` to
+One-dimensional likelihoods should extend :obj:`gpytorch.likelihoods._OneDimensionalLikelihood` to
 reduce the variance when computing approximate GP objective functions.
 (Variance reduction is accomplished by using 1D Gauss-Hermite quadrature rather than MC-integration).

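The hunk above mentions variance reduction via 1D Gauss-Hermite quadrature instead of MC integration. A minimal self-contained sketch of that idea (a hard-coded 3-point rule, not GPyTorch's actual quadrature utilities): for :math:`f \sim \mathcal N(\mu, \sigma^2)`, approximate :math:`\mathbb E[g(f)]` deterministically instead of sampling.

```python
import math


def gauss_hermite_expectation(g, mu, sigma):
    """E[g(f)] for f ~ N(mu, sigma^2) via Gauss-Hermite quadrature.

    Hard-coded 3-point physicists' rule (exact for polynomials up to
    degree 5); illustrative sketch only, not GPyTorch's implementation.
    """
    # nodes/weights for the weight function exp(-x^2)
    nodes_weights = [
        (-math.sqrt(1.5), math.sqrt(math.pi) / 6.0),
        (0.0, 2.0 * math.sqrt(math.pi) / 3.0),
        (math.sqrt(1.5), math.sqrt(math.pi) / 6.0),
    ]
    # change of variables f = mu + sqrt(2) * sigma * x
    total = sum(w * g(mu + math.sqrt(2.0) * sigma * x) for x, w in nodes_weights)
    return total / math.sqrt(math.pi)
```

Unlike an MC estimate, this is deterministic, so the objective it feeds into has no sampling variance at all.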
2 changes: 1 addition & 1 deletion docs/source/variational.rst
@@ -13,7 +13,7 @@ most GP approximations:

 - :obj:`VariationalDistribution`, which define the form of the approximate inducing value
   posterior :math:`q(\mathbf u)`.
-- :obj:`VarationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
+- :obj:`VariationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
   :math:`q(\mathbf u)`.
 - :obj:`~gpytorch.mlls._ApproximateMarginalLogLikelihood`, which defines the objective function
   to learn the approximate posterior (e.g. variational ELBO).
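The second component in the list above, the variational strategy, turns :math:`q(\mathbf u)` into :math:`q(\mathbf f(\mathbf X))`. As a hedged sketch of the core linear algebra (zero prior mean, covariance terms and whitening omitted, 2 inducing points solved by hand; `q_f_mean` is a hypothetical helper, not a GPyTorch API): the predictive mean is :math:`K_{fu} K_{uu}^{-1} \mathbf m_u`.

```python
def q_f_mean(K_fu, K_uu, m_u):
    """Mean of q(f(X)) = K_fu K_uu^{-1} m_u.

    Illustrative-only sketch of what a variational strategy computes from
    the variational mean m_u of q(u); 2x2 solve via Cramer's rule.
    """
    (k11, k12), (k21, k22) = K_uu
    det = k11 * k22 - k12 * k21
    # solve K_uu a = m_u for a
    a = [(m_u[0] * k22 - k12 * m_u[1]) / det,
         (k11 * m_u[1] - m_u[0] * k21) / det]
    # project onto the test points through the cross-covariance K_fu
    return [row[0] * a[0] + row[1] * a[1] for row in K_fu]
```

The real strategies also propagate the covariance of :math:`q(\mathbf u)` and typically whiten the parameterization for better conditioning.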
2 changes: 1 addition & 1 deletion examples/04_Variational_and_Approximate_GPs/index.rst
@@ -12,7 +12,7 @@ All approximate models consist of the following 3 composible objects:

 - :obj:`VariationalDistribution`, which define the form of the approximate inducing value
   posterior :math:`q(\mathbf u)`.
-- :obj:`VarationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
+- :obj:`VariationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
   :math:`q(\mathbf u)`.
 - :obj:`~gpytorch.mlls._ApproximateMarginalLogLikelihood`, which defines the objective function
   to learn the approximate posterior (e.g. variational ELBO).
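The third component, the objective (e.g. the variational ELBO), is an expected log likelihood minus a KL penalty, :math:`\mathbb E_{q}[\log p(\mathbf y \mid \mathbf f)] - \mathrm{KL}(q(\mathbf u)\,\|\,p(\mathbf u))`. A minimal sketch of the KL term for scalar Gaussians (illustrative closed form, not GPyTorch's multivariate implementation):

```python
import math


def kl_gaussian(m_q, s_q, m_p, s_p):
    """KL( N(m_q, s_q^2) || N(m_p, s_p^2) ) for scalar Gaussians.

    The penalty an approximate marginal log likelihood subtracts from the
    expected log likelihood; sketch only, GPyTorch works with full MVNs.
    """
    return (math.log(s_p / s_q)
            + (s_q ** 2 + (m_q - m_p) ** 2) / (2.0 * s_p ** 2)
            - 0.5)
```

The KL term is zero exactly when the approximate posterior matches the prior, and grows as :math:`q(\mathbf u)` moves away from it.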
2 changes: 1 addition & 1 deletion gpytorch/distributions/multitask_multivariate_normal.py
@@ -86,7 +86,7 @@ def event_shape(self):
     @classmethod
     def from_batch_mvn(cls, batch_mvn, task_dim=-1):
         """
-        Reinterprate a batch of multivariate normal distributions as an (independent) multitask multivariate normal
+        Reinterpret a batch of multivariate normal distributions as an (independent) multitask multivariate normal
         distribution.

         :param ~gpytorch.distributions.MultivariateNormal batch_mvn: The base MVN distribution.
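The docstring fixed above describes reinterpreting one batch dimension of an MVN as the task dimension of a multitask MVN. A sketch of the shape bookkeeping involved (`from_batch_mvn_shapes` is a hypothetical helper for illustration, not the GPyTorch implementation, which operates on actual distribution objects):

```python
def from_batch_mvn_shapes(batch_shape, event_shape, task_dim=-1):
    """Shape arithmetic behind reinterpreting a batch MVN as multitask.

    One batch dimension (task_dim) of the base MVN is absorbed into the
    event shape as the number of tasks. Illustrative helper only.
    """
    batch = list(batch_shape)
    # normalize a negative task_dim the way Python indexing does
    dim = task_dim if task_dim >= 0 else len(batch) + task_dim
    num_tasks = batch.pop(dim)
    # event shape gains the task dimension: (n,) -> (n, num_tasks)
    return tuple(batch), tuple(event_shape) + (num_tasks,)
```

So a batch of 4 independent MVNs over 10 points (batch shape `(3, 4)`, event shape `(10,)`, `task_dim=-1`) becomes a multitask distribution with batch shape `(3,)` and event shape `(10, 4)`.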
2 changes: 1 addition & 1 deletion gpytorch/distributions/multivariate_normal.py
@@ -83,7 +83,7 @@ def _repr_sizes(mean: Tensor, covariance_matrix: Tensor | LinearOperator) -> str
     @property
     def _unbroadcasted_scale_tril(self) -> Tensor:
         if self.islazy and self.__unbroadcasted_scale_tril is None:
-            # cache root decoposition
+            # cache root decomposition
             ust = to_dense(self.lazy_covariance_matrix.cholesky())
             self.__unbroadcasted_scale_tril = ust
         return self.__unbroadcasted_scale_tril
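The comment fixed above sits inside a compute-once, reuse-afterwards property. A minimal sketch of the same caching pattern (hypothetical `CachedChol` class, hand-rolled 2x2 Cholesky; GPyTorch delegates the actual factorization to its lazy/linear-operator machinery):

```python
import math


class CachedChol:
    """Caching pattern of _unbroadcasted_scale_tril: factorize once.

    Illustrative sketch for 2x2 symmetric positive-definite matrices only.
    """

    def __init__(self, cov):
        self.cov = cov
        self._chol = None  # cache for the root decomposition

    @property
    def scale_tril(self):
        if self._chol is None:
            # cache root decomposition on first access
            (a, b), (_, c) = self.cov
            l11 = math.sqrt(a)
            l21 = b / l11
            l22 = math.sqrt(c - l21 * l21)
            self._chol = [[l11, 0.0], [l21, l22]]
        return self._chol
```

Subsequent accesses return the cached factor, which matters because a Cholesky decomposition is :math:`O(n^3)` while a cache lookup is free.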