Commit 9dbb714

[Doc] fix typos in documentation (dmlc#9458)
1 parent 4359356 commit 9dbb714

18 files changed: +32 -31 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -48,6 +48,7 @@ Debug
 *.Rproj
 ./xgboost.mpi
 ./xgboost.mock
+*.bak
 #.Rbuildignore
 R-package.Rproj
 *.cache*

doc/build.rst

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ An up-to-date version of the CUDA toolkit is required.
 
 .. note:: Checking your compiler version
 
-  CUDA is really picky about supported compilers, a table for the compatible compilers for the latests CUDA version on Linux can be seen `here <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_.
+  CUDA is really picky about supported compilers, a table for the compatible compilers for the latest CUDA version on Linux can be seen `here <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_.
 
 Some distros package a compatible ``gcc`` version with CUDA. If you run into compiler errors with ``nvcc``, try specifying the correct compiler with ``-DCMAKE_CXX_COMPILER=/path/to/correct/g++ -DCMAKE_C_COMPILER=/path/to/correct/gcc``. On Arch Linux, for example, both binaries can be found under ``/opt/cuda/bin/``.
 

doc/contrib/ci.rst

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ GitHub Actions is also used to build Python wheels targeting MacOS Intel and App
 ``python_wheels`` pipeline sets up environment variables prefixed ``CIBW_*`` to indicate the target
 OS and processor. The pipeline then invokes the script ``build_python_wheels.sh``, which in turns
 calls ``cibuildwheel`` to build the wheel. The ``cibuildwheel`` is a library that sets up a
-suitable Python environment for each OS and processor target. Since we don't have Apple Silion
+suitable Python environment for each OS and processor target. Since we don't have Apple Silicon
 machine in GitHub Actions, cross-compilation is needed; ``cibuildwheel`` takes care of the complex
 task of cross-compiling a Python wheel. (Note that ``cibuildwheel`` will call
 ``pip wheel``. Since XGBoost has a native library component, we created a customized build
@@ -131,7 +131,7 @@ set up a credential pair in order to provision resources on AWS. See
 Worker Image Pipeline
 =====================
 Building images for worker machines used to be a chore: you'd provision an EC2 machine, SSH into it, and
-manually install the necessary packages. This process is not only laborous but also error-prone. You may
+manually install the necessary packages. This process is not only laborious but also error-prone. You may
 forget to install a package or change a system configuration.
 
 No more. Now we have an automated pipeline for building images for worker machines.

doc/contrib/coding_guide.rst

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ two automatic checks to enforce coding style conventions. To expedite the code r
 
 Linter
 ======
-We use `pylint <https://github.com/PyCQA/pylint>`_ and `cpplint <https://github.com/cpplint/cpplint>`_ to enforce style convention and find potential errors. Linting is especially useful for Python, as we can catch many errors that would have otherwise occured at run-time.
+We use `pylint <https://github.com/PyCQA/pylint>`_ and `cpplint <https://github.com/cpplint/cpplint>`_ to enforce style convention and find potential errors. Linting is especially useful for Python, as we can catch many errors that would have otherwise occurred at run-time.
 
 To run this check locally, run the following command from the top level source tree:
 

doc/contrib/donate.rst

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ The Project Management Committee (PMC) of the XGBoost project appointed `Open So
 
 All expenses incurred for hosting CI will be submitted to the fiscal host with receipts. Only the expenses in the following categories will be approved for reimbursement:
 
-* Cloud exprenses for the cloud test farm (https://buildkite.com/xgboost)
+* Cloud expenses for the cloud test farm (https://buildkite.com/xgboost)
 * Cost of domain https://xgboost-ci.net
 * Monthly cost of using BuildKite
 * Hosting cost of the User Forum (https://discuss.xgboost.ai)

doc/contrib/unit_tests.rst

Lines changed: 1 addition & 1 deletion
@@ -169,7 +169,7 @@ supply a specified SANITIZER_PATH.
 
 How to use sanitizers with CUDA support
 =======================================
-Runing XGBoost on CUDA with address sanitizer (asan) will raise memory error.
+Running XGBoost on CUDA with address sanitizer (asan) will raise memory error.
 To use asan with CUDA correctly, you need to configure asan via ASAN_OPTIONS
 environment variable:
 

doc/faq.rst

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ XGBoost supports missing values by default.
 In tree algorithms, branch directions for missing values are learned during training.
 Note that the gblinear booster treats missing values as zeros.
 
-When the ``missing`` parameter is specifed, values in the input predictor that is equal to
+When the ``missing`` parameter is specified, values in the input predictor that is equal to
 ``missing`` will be treated as missing and removed. By default it's set to ``NaN``.
 
 **************************************
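For context, a minimal sketch of how the ``missing`` parameter from this hunk is passed in the Python interface (the toy data and the sentinel value ``-999.0`` are placeholders, not part of the diff):

.. code-block:: python

   import numpy as np
   import xgboost as xgb

   # Toy data in which -999.0 is a hypothetical marker for unobserved entries.
   X = np.array([[1.0, -999.0], [2.0, 3.0], [-999.0, 4.0]])
   y = np.array([0, 1, 1])

   # Values equal to `missing` are treated as missing and removed;
   # when `missing` is not given, it defaults to NaN.
   dtrain = xgb.DMatrix(X, label=y, missing=-999.0)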

doc/jvm/java_intro.rst

Lines changed: 1 addition & 1 deletion
@@ -129,7 +129,7 @@ With parameters and data, you are able to train a booster model.
 
   booster.saveModel("model.bin");
 
-* Generaing model dump with feature map
+* Generating model dump with feature map
 
 .. code-block:: java
 

doc/prediction.rst

Lines changed: 2 additions & 2 deletions
@@ -54,7 +54,7 @@ After 1.4 release, we added a new parameter called ``strict_shape``, one can set
 Output is a 4-dim array with ``(n_samples, n_iterations, n_classes, n_trees_in_forest)``
 as shape. ``n_trees_in_forest`` is specified by the ``numb_parallel_tree`` during
 training. When strict shape is set to False, output is a 2-dim array with last 3 dims
-concatenated into 1. Also the last dimension is dropped if it eqauls to 1. When using
+concatenated into 1. Also the last dimension is dropped if it equals to 1. When using
 ``apply`` method in scikit learn interface, this is set to False by default.
 
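To make the shape rules described in this hunk concrete, here is a minimal Python sketch using ``pred_leaf`` (the random data, parameters, and round count are invented for illustration):

.. code-block:: python

   import numpy as np
   import xgboost as xgb

   rng = np.random.default_rng(0)
   X = rng.random((100, 4))
   y = rng.integers(0, 2, size=100)
   dtrain = xgb.DMatrix(X, label=y)
   booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)

   # strict_shape=True: 4-dim (n_samples, n_iterations, n_classes, n_trees_in_forest).
   leaves = booster.predict(dtrain, pred_leaf=True, strict_shape=True)
   print(leaves.shape)  # (100, 5, 1, 1): binary task, no parallel trees

   # strict_shape=False (the default): the last 3 dims are concatenated into 1,
   # and a trailing dimension equal to 1 is dropped.
   print(booster.predict(dtrain, pred_leaf=True).shape)  # (100, 5)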

@@ -68,7 +68,7 @@ n_classes, n_trees_in_forest)``, while R with ``strict_shape=TRUE`` outputs
 Other than these prediction types, there's also a parameter called ``iteration_range``,
 which is similar to model slicing. But instead of actually splitting up the model into
 multiple stacks, it simply returns the prediction formed by the trees within range.
-Number of trees created in each iteration eqauls to :math:`trees_i = num\_class \times
+Number of trees created in each iteration equals to :math:`trees_i = num\_class \times
 num\_parallel\_tree`. So if you are training a boosted random forest with size of 4, on
 the 3-class classification dataset, and want to use the first 2 iterations of trees for
 prediction, you need to provide ``iteration_range=(0, 2)``. Then the first :math:`2
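The arithmetic in this hunk can be mirrored in a short Python sketch (the dataset and boosting parameters are made up to match the example: 3 classes and a forest of size 4):

.. code-block:: python

   import numpy as np
   import xgboost as xgb

   rng = np.random.default_rng(0)
   X = rng.random((120, 6))
   y = rng.integers(0, 3, size=120)  # 3-class labels
   dtrain = xgb.DMatrix(X, label=y)

   # Each iteration grows num_class * num_parallel_tree = 3 * 4 = 12 trees.
   params = {"objective": "multi:softprob", "num_class": 3, "num_parallel_tree": 4}
   booster = xgb.train(params, dtrain, num_boost_round=5)

   # Use only the first 2 iterations, i.e. the first 2 * 12 = 24 trees.
   preds = booster.predict(dtrain, iteration_range=(0, 2))
   print(preds.shape)  # (120, 3): class probabilities from the first 24 trees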

doc/python/sklearn_estimator.rst

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ sklearn estimator interface is still working in progress.
 
 You can find some some quick start examples at
 :ref:`sphx_glr_python_examples_sklearn_examples.py`. The main advantage of using sklearn
-interface is that it works with most of the utilites provided by sklearn like
+interface is that it works with most of the utilities provided by sklearn like
 :py:func:`sklearn.model_selection.cross_validate`. Also, many other libraries recognize
 the sklearn estimator interface thanks to its popularity.
 
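As a minimal illustration of the point about sklearn utilities (the dataset and hyperparameters below are arbitrary):

.. code-block:: python

   from sklearn.datasets import make_classification
   from sklearn.model_selection import cross_validate
   from xgboost import XGBClassifier

   X, y = make_classification(n_samples=200, n_features=8, random_state=0)

   # Because XGBClassifier follows the sklearn estimator protocol, it can be
   # passed to utilities such as cross_validate without any adapter code.
   results = cross_validate(XGBClassifier(n_estimators=10), X, y, cv=3)
   print(results["test_score"])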
