 # the predicted target by a linear regression model) and the target to describe its
 # usefulness, the results are shown in the following figure. It can be seen that
 # Feature 2 is the most useful and Feature 8 is the second. However, does that mean
-# that the total usefullness of Feature 2 + Feature 8 is the sum of their R-squared
+# that the total usefulness of Feature 2 + Feature 8 is the sum of their R-squared
 # scores? Probably not, because there may be redundancy between Feature 2 and Feature 8.
 # Actually, what we want is a kind of usefulness score which has the **superposition**
-# property, so that the usefullness of each feature can be added together without
+# property, so that the usefulness of each feature can be added together without
 # redundancy.

 import matplotlib.pyplot as plt
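The single-feature scoring described above can be sketched as follows. This is a minimal illustration with synthetic data; the sample size, feature count, coefficients, and the helper name `r2_single` are assumptions for the sketch, not from the original example. For one feature, the R-squared of a least-squares fit equals the squared Pearson correlation, so no explicit regression is needed.

```python
import numpy as np

# Synthetic data: the target depends mainly on Features 2 and 8,
# mirroring the situation described in the text (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 2] + 0.5 * X[:, 8] + 0.1 * rng.standard_normal(100)

def r2_single(x, y):
    # For a one-variable linear fit, R-squared equals the squared
    # Pearson correlation between the feature and the target.
    return np.corrcoef(x, y)[0, 1] ** 2

scores = [r2_single(X[:, i], y) for i in range(X.shape[1])]
best = int(np.argmax(scores))
```

On this synthetic data, Feature 2 gets the highest score, but `scores[2] + scores[8]` need not equal the R-squared of a joint fit on both features, which is exactly the redundancy issue the text raises.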
@@ -125,7 +125,7 @@ def plot_bars(ids, r2_left, r2_selected):
 # Select the third feature
 # ------------------------
 # Again, let's compute the R-squared between Feature 2 + Feature 8 + Feature i and
-# the target, and the additonal R-squared contributed by the rest of the features is
+# the target, and the additional R-squared contributed by the rest of the features is
 # shown in the following figure. It can be found that after selecting Features 2 and 8, the
 # rest of the features can provide a very limited contribution.
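The additional-R-squared computation for a third candidate can be sketched directly, by refitting on the selected features plus each candidate and subtracting the baseline. Everything here (data, the helper `r2_multi`, the selected indices) is a synthetic assumption for illustration, not the original example's code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 2] + 0.5 * X[:, 8] + 0.1 * rng.standard_normal(100)

def r2_multi(Xs, y):
    # R-squared of an ordinary least-squares fit of y on the columns
    # of Xs, with an intercept column appended.
    A = np.column_stack([Xs, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - (resid ** 2).sum() / tss

selected = [2, 8]
base = r2_multi(X[:, selected], y)
# Additional R-squared of each remaining candidate on top of the selection
gains = {i: r2_multi(X[:, selected + [i]], y) - base
         for i in range(X.shape[1]) if i not in selected}
```

With Features 2 and 8 already selected, every remaining candidate's gain is close to zero, matching the "very limited contribution" observation.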
@@ -145,8 +145,8 @@ def plot_bars(ids, r2_left, r2_selected):
 # at the RHS of the dashed lines. The fast computational speed is achieved by
 # orthogonalization, which removes the redundancy between the features. We use
 # orthogonalization first to make the rest of the features orthogonal to the selected
-# features and then compute their additonal R-squared values. ``eta-cosine`` uses
-# the samilar idea, but has an additonal preprocessing step to compress the features
+# features and then compute their additional R-squared values. ``eta-cosine`` uses
+# a similar idea, but has an additional preprocessing step to compress the features
 # :math:`X \in \mathbb{R}^{N\times n}` and the target
 # :math:`Y \in \mathbb{R}^{N\times m}` to :math:`X_c \in \mathbb{R}^{(m+n)\times n}`
 # and :math:`Y_c \in \mathbb{R}^{(m+n)\times m}`.
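The orthogonalization idea can be sketched with NumPy (synthetic data; the helper name and setup are assumptions, not the library's implementation): once a candidate feature is projected onto the orthogonal complement of the already-selected features, its additional R-squared reduces to a single squared correlation with the target, so no full regression refit per candidate is required.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 2] + 0.5 * X[:, 8] + 0.1 * rng.standard_normal(100)

selected = [2, 8]
# Orthonormal basis (via QR) for the intercept plus the selected features.
Q, _ = np.linalg.qr(np.column_stack([np.ones(len(y)), X[:, selected]]))

def additional_r2(x, y):
    # Remove the part of the candidate already explained by the selected
    # features; the additional R-squared is then the squared correlation
    # between this orthogonalized candidate and the target.
    x_perp = x - Q @ (Q.T @ x)
    yc = y - y.mean()
    return (x_perp @ yc) ** 2 / ((x_perp @ x_perp) * (yc @ yc))

gains = [additional_r2(X[:, i], y)
         for i in range(X.shape[1]) if i not in selected]
```

The speed-up comes from reusing one QR factorization of the selected block across all candidates, rather than solving a fresh least-squares problem for each one.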