
Commit bd1a21c

MNT add codespell
1 parent bc9fdf4 commit bd1a21c

15 files changed (+335, -352 lines)

.github/workflows/static.yml

Lines changed: 3 additions & 0 deletions
@@ -30,3 +30,6 @@ jobs:
       - name: Type check
         run: |
           pixi run type
+      - name: Spell check
+        run: |
+          pixi run spell

doc/multioutput.rst

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ MIMO (Multi-Input Multi-Output) data. For classification, it can be used for
 multilabel data. Actually, for multiclass classification, which has one output with
 multiple categories, multioutput feature selection can also be useful. The multiclass
 classification can be converted to multilabel classification by one-hot encoding
-target ``y``. The cannonical correaltion coefficient between the features ``X`` and the
+target ``y``. The canonical correaltion coefficient between the features ``X`` and the
 one-hot encoded target ``y`` has equivalent relationship with Fisher's criterion in
 LDA (Linear Discriminant Analysis) [1]_. Applying :class:`FastCan` to the converted
 multioutput data may result in better accuracy in the following classification task
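
A minimal sketch of the conversion described in this passage, using scikit-learn's OneHotEncoder together with :class:`FastCan` (the call pattern mirrors the usage in examples/plot_speed.py below; the dataset and the value of n_features_to_select are only illustrative):

from sklearn.datasets import load_iris
from sklearn.preprocessing import OneHotEncoder

from fastcan import FastCan

X, y = load_iris(return_X_y=True)

# One-hot encode the multiclass target so it becomes a multilabel/multioutput target.
Y = OneHotEncoder(sparse_output=False).fit_transform(y.reshape(-1, 1))

# Apply FastCan to the converted multioutput data.
selector = FastCan(n_features_to_select=2).fit(X, Y)
print("Selected feature indices:", selector.indices_)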

doc/narx.rst

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ It should also be noted the different types of predictions in model training.
 ARX and OE model
 ----------------
 
-To better understant the two types of training, it is helpful to know two linear time series model structures,
+To better understand the two types of training, it is helpful to know two linear time series model structures,
 i.e., `ARX (AutoRegressive eXogenous) model <https://www.mathworks.com/help/ident/ref/arx.html>`_ and
 `OE (output error) model <https://www.mathworks.com/help/ident/ref/oe.html>`_.
 
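
As a hedged first-order illustration of the two structures (generic coefficients :math:`a`, :math:`b`, input :math:`u`, and noise :math:`e`; not taken from the page itself):

    ARX:  y(k) = a\,y(k-1) + b\,u(k-1) + e(k)

    OE:   \hat{y}(k) = a\,\hat{y}(k-1) + b\,u(k-1), \qquad y(k) = \hat{y}(k) + e(k)

In the ARX form the regressors are measured past outputs, so fitting reduces to least squares on one-step-ahead errors; in the OE form the regressors are the model's own past predictions, so fitting minimizes the simulation (free-run) error. Roughly speaking, this is the distinction behind the two types of training noted above.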

doc/ols_and_omp.rst

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ The detailed difference between OLS and OMP can be found in [3]_.
 Here, let's briefly compare the three methods.
 
 
-Assume we have a feature matrix :math:`X_s \in \mathbb{R}^{N\times t}`, which constains
+Assume we have a feature matrix :math:`X_s \in \mathbb{R}^{N\times t}`, which contains
 :math:`t` selected features, and a target vector :math:`y \in \mathbb{R}^{N\times 1}`.
 Then the residual :math:`r \in \mathbb{R}^{N\times 1}` of the least-squares can be
 found by
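
For reference, assuming :math:`X_s` has full column rank, the residual in question takes the standard least-squares form

    r = y - X_s (X_s^\top X_s)^{-1} X_s^\top y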

doc/pruning.rst

Lines changed: 2 additions & 2 deletions
@@ -22,9 +22,9 @@ should be selected, as any additional samples can be represented by linear combi
 Therefore, the number to select has to be set to small.
 
 To solve this problem, we use :func:`minibatch` to loose the redundancy check of :class:`FastCan`.
-The original :class:`FastCan` checks the redunancy within :math:`X_s \in \mathbb{R}^{n\times t}`,
+The original :class:`FastCan` checks the redundancy within :math:`X_s \in \mathbb{R}^{n\times t}`,
 which contains :math:`t` selected samples and n features,
-and the redunancy within :math:`Y \in \mathbb{R}^{n\times m}`, which contains :math:`m` atoms :math:`y_i`.
+and the redundancy within :math:`Y \in \mathbb{R}^{n\times m}`, which contains :math:`m` atoms :math:`y_i`.
 :func:`minibatch` ranks samples with multiple correlation coefficients between :math:`X_b \in \mathbb{R}^{n\times b}` and :math:`y_i`,
 where :math:`b` is batch size and :math:`b <= t`, instead of canonical correlation coefficients between :math:`X_s` and :math:`Y`,
 which is used in :class:`FastCan`.
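
A rough usage sketch of the pruning workflow described here (the exact signature of :func:`minibatch`, the transposes, and the choice of principal components as the atoms :math:`y_i` are assumptions, not taken from this page; check the API reference):

import numpy as np
from sklearn.decomposition import PCA

from fastcan import minibatch

rng = np.random.default_rng(0)
data = rng.standard_normal((150, 5))  # 150 samples, 5 features

# The atoms y_i live in feature space; a few principal components are one choice.
atoms = PCA(n_components=3).fit(data).components_  # shape (3, 5)

# Samples play the role of the columns to be selected, hence the transposes:
# keep 100 of the 150 samples, ranked in batches of size b <= t.
ids = minibatch(data.T, atoms.T, 100, batch_size=10)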

examples/plot_fisher.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 
 .. currentmodule:: fastcan
 
-In this examples, we will demonstrate the cannonical correaltion coefficient
+In this examples, we will demonstrate the canonical correaltion coefficient
 between the features ``X`` and the one-hot encoded target ``y`` has equivalent
 relationship with Fisher's criterion in LDA (Linear Discriminant Analysis).
 """

examples/plot_intuitive.py

Lines changed: 5 additions & 5 deletions
@@ -22,10 +22,10 @@
 # the predicted target by a linear regression model) and the target to describe its
 # usefulness, the results are shown in the following figure. It can be seen that
 # Feature 2 is the most useful and Feature 8 is the second. However, does that mean
-# that the total usefullness of Feature 2 + Feature 8 is the sum of their R-squared
+# that the total usefulness of Feature 2 + Feature 8 is the sum of their R-squared
 # scores? Probably not, because there may be redundancy between Feature 2 and Feature 8.
 # Actually, what we want is a kind of usefulness score which has the **superposition**
-# property, so that the usefullness of each feature can be added together without
+# property, so that the usefulness of each feature can be added together without
 # redundancy.
 
 import matplotlib.pyplot as plt
@@ -125,7 +125,7 @@ def plot_bars(ids, r2_left, r2_selected):
 # Select the third feature
 # ------------------------
 # Again, let's compute the R-squared between Feature 2 + Feature 8 + Feature i and
-# the target, and the additonal R-squared contributed by the rest of the features is
+# the target, and the additional R-squared contributed by the rest of the features is
 # shown in following figure. It can be found that after selecting Features 2 and 8, the
 # rest of the features can provide a very limited contribution.
 
@@ -145,8 +145,8 @@ def plot_bars(ids, r2_left, r2_selected):
 # at the RHS of the dashed lines. The fast computational speed is achieved by
 # orthogonalization, which removes the redundancy between the features. We use the
 # orthogonalization first to makes the rest of features orthogonal to the selected
-# features and then compute their additonal R-squared values. ``eta-cosine`` uses
-# the samilar idea, but has an additonal preprocessing step to compress the features
+# features and then compute their additional R-squared values. ``eta-cosine`` uses
+# the similar idea, but has an additional preprocessing step to compress the features
 # :math:`X \in \mathbb{R}^{N\times n}` and the target
 # :math:`X \in \mathbb{R}^{N\times n}` to :math:`X_c \in \mathbb{R}^{(m+n)\times n}`
 # and :math:`Y_c \in \mathbb{R}^{(m+n)\times m}`.
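
As a side sketch of the orthogonalization/superposition idea described in this example (self-contained synthetic data, not part of the example itself):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = X @ np.array([1.0, 0.5]) + 0.1 * rng.standard_normal(100)

# Center everything so squared correlations equal simple-regression R-squared values.
X = X - X.mean(axis=0)
y = y - y.mean()

def r2(x, y):
    """Squared correlation between one centered feature and the target."""
    return (x @ y) ** 2 / ((x @ x) * (y @ y))

r2_first = r2(X[:, 0], y)

# Orthogonalize the second feature against the first, then score what is left.
x1_orth = X[:, 1] - X[:, 0] * (X[:, 0] @ X[:, 1]) / (X[:, 0] @ X[:, 0])
r2_second_additional = r2(x1_orth, y)

# The two scores superpose to the R-squared of regressing on both features together.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2_both = 1 - np.sum((y - X @ beta) ** 2) / np.sum(y**2)
print(np.isclose(r2_first + r2_second_additional, r2_both))  # True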

examples/plot_pruning.py

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ def _fastcan_pruning(
 # %%
 # Compare pruning methods
 # -----------------------
-# 100 samples are seleced from 150 original data with ``Random`` pruning and
+# 100 samples are selected from 150 original data with ``Random`` pruning and
 # ``FastCan`` pruning. The results show that ``FastCan`` pruning gives a higher
 # mean value of R-squared and a lower standard deviation.
 
examples/plot_redundancy.py

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 datasets, which contain redundant features.
 Here four types of features should be distinguished:
 
-* Unuseful features: the features do not contribute to the target
+* Useless features: the features do not contribute to the target
 * Dependent informative features: the features contribute to the target and form
   the redundant features
 * Redundant features: the features are constructed by linear transformation of
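
For context on these feature types, a small synthetic construction (names and sizes are illustrative only; the example's own data generator may differ):

import numpy as np

rng = np.random.default_rng(0)
n_samples = 200

# Dependent informative features: they drive the target and also generate the
# redundant features below.
informative = rng.standard_normal((n_samples, 3))
y = informative @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(n_samples)

# Redundant features: linear transformations of the informative ones.
redundant = informative @ rng.standard_normal((3, 2))

# Useless features: pure noise, unrelated to the target.
useless = rng.standard_normal((n_samples, 2))

X = np.hstack([informative, redundant, useless])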

examples/plot_speed.py

Lines changed: 1 addition & 1 deletion
@@ -147,7 +147,7 @@ def baseline(X, y, t):
 r_eta = FastCan(n_features_to_select, eta=True, verbose=0).fit(X, y).indices_
 r_base, _ = baseline(X, y, n_features_to_select)
 
-print("The indices of the seleted features:", end="\n")
+print("The indices of the selected features:", end="\n")
 print(f"h-correlation: {r_h}")
 print(f"eta-cosine: {r_eta}")
 print(f"Baseline: {r_base}")
