
Commit 1adec5b

Fix several typos (#2729)

fix typos in various files

Co-authored-by: Kaiwen Wu <37524685+kayween@users.noreply.github.com>

1 parent 82adcaa commit 1adec5b

File tree

7 files changed: +7 −7 lines changed


docs/source/conf.py

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@ def find_version(*file_paths):
 os.mkdir(examples_dest)

 # Include examples in documentation
-# This adds a lot of time to the doc buiod; to bypass use the environment variable SKIP_EXAMPLES=true
+# This adds a lot of time to the doc build; to bypass use the environment variable SKIP_EXAMPLES=true
 for root, dirs, files in os.walk(examples_source):
     for dr in dirs:
         os.mkdir(os.path.join(root.replace(examples_source, examples_dest), dr))
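The hunk above walks `examples_source` and mirrors its directory tree into `examples_dest` before the doc build copies examples in. A minimal stdlib sketch of the same mirroring pattern (the `mirror_tree` helper and its path arguments are illustrative, not part of the repo):

```python
import os

def mirror_tree(src: str, dst: str) -> None:
    """Recreate the directory structure of src under dst (directories only)."""
    os.mkdir(dst)
    for root, dirs, files in os.walk(src):
        for dr in dirs:
            # Map each source subdirectory to the corresponding destination path,
            # using the same root.replace(...) trick as the conf.py snippet.
            os.mkdir(os.path.join(root.replace(src, dst), dr))
```

Note that `root.replace(src, dst)` assumes `src` does not recur as a substring deeper in the tree; the conf.py code accepts the same limitation.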

docs/source/keops_kernels.rst

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ These kernels are compatible with the GPyTorch KeOps integration.
 For more information, see the `KeOps tutorial`_.

 .. note::
-    Only some standard kernels have KeOps impementations.
+    Only some standard kernels have KeOps implementations.
     If there is a kernel you want that's missing, consider submitting a pull request!

docs/source/likelihoods.rst

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ One-Dimensional Likelihoods
 Likelihoods for GPs that are distributions of scalar functions.
 (I.e. for a specific :math:`\mathbf x` we expect that :math:`f(\mathbf x) \in \mathbb{R}`.)

-One-dimensional likelihoods should extend :obj:`gpytoch.likelihoods._OneDimensionalLikelihood` to
+One-dimensional likelihoods should extend :obj:`gpytorch.likelihoods._OneDimensionalLikelihood` to
 reduce the variance when computing approximate GP objective functions.
 (Variance reduction is accomplished by using 1D Gauss-Hermite quadrature rather than MC-integration).
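The docs touched here say 1D Gauss-Hermite quadrature replaces MC integration for expectations under a scalar Gaussian. A self-contained sketch of that idea using the standard three-point rule (textbook nodes and weights, not code from GPyTorch):

```python
import math

def gauss_hermite_expectation(g, mu, sigma):
    """Approximate E[g(f)] for f ~ N(mu, sigma^2) with 3-point Gauss-Hermite quadrature.

    Uses the physicists' rule for the weight exp(-x^2); the change of variables
    f = mu + sqrt(2) * sigma * x turns the Gaussian expectation into that form.
    """
    nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
    weights = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3, math.sqrt(math.pi) / 6]
    return sum(
        w * g(mu + math.sqrt(2) * sigma * x) for x, w in zip(nodes, weights)
    ) / math.sqrt(math.pi)
```

The three-point rule is exact for polynomial integrands up to degree 5 (e.g. it recovers E[f^2] = mu^2 + sigma^2 exactly), which is why a handful of deterministic nodes can beat many Monte Carlo samples in variance.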

docs/source/variational.rst

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -13,7 +13,7 @@ most GP approximations:
1313

1414
- :obj:`VariationalDistribution`, which define the form of the approximate inducing value
1515
posterior :math:`q(\mathbf u)`.
16-
- :obj:`VarationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
16+
- :obj:`VariationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
1717
:math:`q(\mathbf u)`.
1818
- :obj:`~gpytorch.mlls._ApproximateMarginalLogLikelihood`, which defines the objective function
1919
to learn the approximate posterior (e.g. variational ELBO).
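The strategy's job, computing q(f) from q(u), reduces to scalar arithmetic when there is one inducing point and one test input. A toy illustration of the standard SVGP marginal formula under that simplification (not GPyTorch's implementation, which handles full matrices and batching):

```python
def q_f_from_q_u(k_ff, k_fu, k_uu, m_u, s_u):
    """Compute q(f) = N(m_f, s_f) from q(u) = N(m_u, s_u), one inducing point.

    Standard SVGP marginal:
        m_f = k_fu k_uu^{-1} m_u
        s_f = k_ff - k_fu k_uu^{-1} k_uf + k_fu k_uu^{-1} s_u k_uu^{-1} k_uf
    """
    a = k_fu / k_uu                 # the interpolation weight k_fu k_uu^{-1}
    m_f = a * m_u
    s_f = k_ff - a * k_fu + a * s_u * a
    return m_f, s_f
```

A useful sanity check on the formula: setting q(u) to the prior (m_u = 0, s_u = k_uu) recovers the prior marginal m_f = 0, s_f = k_ff.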

examples/04_Variational_and_Approximate_GPs/index.rst

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ All approximate models consist of the following 3 composible objects:

 - :obj:`VariationalDistribution`, which define the form of the approximate inducing value
   posterior :math:`q(\mathbf u)`.
-- :obj:`VarationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
+- :obj:`VariationalStrategies`, which define how to compute :math:`q(\mathbf f(\mathbf X))` from
   :math:`q(\mathbf u)`.
 - :obj:`~gpytorch.mlls._ApproximateMarginalLogLikelihood`, which defines the objective function
   to learn the approximate posterior (e.g. variational ELBO).

gpytorch/distributions/multitask_multivariate_normal.py

Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@ def event_shape(self):
     @classmethod
     def from_batch_mvn(cls, batch_mvn, task_dim=-1):
         """
-        Reinterprate a batch of multivariate normal distributions as an (independent) multitask multivariate normal
+        Reinterpret a batch of multivariate normal distributions as an (independent) multitask multivariate normal
         distribution.

         :param ~gpytorch.distributions.MultivariateNormal batch_mvn: The base MVN distribution.
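The docstring fixed above describes reinterpreting t independent MVNs over n points as a single multitask MVN, whose joint covariance is block diagonal because the tasks are independent. A dependency-free sketch of that assembly step (a hypothetical helper, not the GPyTorch API):

```python
def block_diag_covariance(task_covs):
    """Assemble the joint covariance of independent tasks as a block-diagonal matrix.

    task_covs: list of t square n x n covariances (lists of lists), one per task.
    Returns the (t*n) x (t*n) joint covariance; cross-task blocks stay zero
    because the tasks are independent.
    """
    n = len(task_covs[0])
    t = len(task_covs)
    size = t * n
    joint = [[0.0] * size for _ in range(size)]
    for k, cov in enumerate(task_covs):
        for i in range(n):
            for j in range(n):
                joint[k * n + i][k * n + j] = cov[i][j]
    return joint
```

The real `from_batch_mvn` keeps the covariance in this lazy block-diagonal form rather than materializing the dense matrix; the sketch only shows the shape bookkeeping.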

gpytorch/distributions/multivariate_normal.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -83,7 +83,7 @@ def _repr_sizes(mean: Tensor, covariance_matrix: Tensor | LinearOperator) -> str
8383
@property
8484
def _unbroadcasted_scale_tril(self) -> Tensor:
8585
if self.islazy and self.__unbroadcasted_scale_tril is None:
86-
# cache root decoposition
86+
# cache root decomposition
8787
ust = to_dense(self.lazy_covariance_matrix.cholesky())
8888
self.__unbroadcasted_scale_tril = ust
8989
return self.__unbroadcasted_scale_tril
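The property above computes the Cholesky factor once and caches it for later reuse. A minimal stdlib illustration of the same cache-on-first-access pattern, with a hand-rolled Cholesky for clarity (illustrative class only; GPyTorch delegates the decomposition to its linear-operator machinery):

```python
import math

class Gaussian:
    def __init__(self, covariance):
        self.covariance = covariance  # list-of-lists, symmetric positive definite
        self._scale_tril = None       # cached lower-triangular Cholesky factor

    @property
    def scale_tril(self):
        if self._scale_tril is None:
            # Cache the root decomposition: L such that L @ L.T == covariance.
            n = len(self.covariance)
            L = [[0.0] * n for _ in range(n)]
            for i in range(n):
                for j in range(i + 1):
                    s = sum(L[i][k] * L[j][k] for k in range(j))
                    if i == j:
                        L[i][j] = math.sqrt(self.covariance[i][i] - s)
                    else:
                        L[i][j] = (self.covariance[i][j] - s) / L[j][j]
            self._scale_tril = L
        return self._scale_tril
```

Caching matters here because the Cholesky factorization is O(n^3) and the factor is reused for sampling and log-probability evaluation.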
