Commit beed8a1

minor polish to eval on test set

1 parent d472553

2 files changed: +2, −2 lines changed

source/classification2.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -1527,7 +1527,7 @@ glue("n_neighbors_min", "{:0.0f}".format(accuracies_grid["n_neighbors"].min()))
 ```
 
 At first glance, this is a bit surprising: the performance of the classifier
-has not changed much at all despite tuning the number of neighbors! For example, our first model
+has not changed much despite tuning the number of neighbors! For example, our first model
 with $K =$ 3 (before we knew how to tune) had an estimated accuracy of {glue:text}`cancer_acc_1`%,
 while the tuned model with $K =$ {glue:text}`best_k_unique` had an estimated accuracy
 of {glue:text}`cancer_acc_tuned`%.
````
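The tuning step this paragraph refers to can be sketched as a grid search over `n_neighbors` for a KNN classifier. This is a minimal illustration, not the book's code: the breast-cancer data loaded from scikit-learn is a stand-in for the book's dataset, so the exact accuracies (and the best $K$) will differ from the glued values in the text.

```python
# Hedged sketch: tune the number of neighbors of a KNN classifier with
# cross-validation, then check accuracy on a held-out test set.
# The scikit-learn breast-cancer data is a stand-in, not the book's dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Standardize predictors, then classify; tune K over odd values 1..29.
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
grid = GridSearchCV(
    pipe,
    param_grid={"kneighborsclassifier__n_neighbors": range(1, 31, 2)},
    cv=5,
)
grid.fit(X_train, y_train)

print(grid.best_params_)           # the tuned K
print(grid.score(X_test, y_test))  # test-set accuracy of the refit best model
```

As the edited sentence observes, accuracy across nearby values of `n_neighbors` is often quite flat, which is why tuning may not move the estimate much.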

source/regression1.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -818,7 +818,7 @@ chapter.
 To assess how well our model might do at predicting on unseen data, we will
 assess its RMSPE on the test data. To do this, we first need to retrain the
 KNN regression model on the entire training data set using $K =$ {glue:text}`best_k_sacr`
-neighbors. Fortunately we do not have to do this ourselves manually; `scikit-learn`
+neighbors. As we saw in {numref}`Chapter %s <classification2>`, we do not have to do this manually; `scikit-learn`
 does it for us automatically. To make predictions with the best model on the test data,
 we can use the `predict` method of the fit `GridSearchCV` object.
 We then use the `mean_squared_error`
```
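The behavior the edited sentence relies on can be sketched as follows: with its default `refit=True`, `GridSearchCV` automatically retrains the best model on the full training set, so calling `predict` on the fit object uses the tuned $K$. This is a hedged illustration, not the book's code: the diabetes data is a stand-in for the book's dataset, so the RMSPE value will differ.

```python
# Hedged sketch: GridSearchCV (refit=True by default) retrains the best KNN
# regression model on the entire training set; grid.predict then uses it.
# The scikit-learn diabetes data is a stand-in, not the book's dataset.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

pipe = make_pipeline(StandardScaler(), KNeighborsRegressor())
grid = GridSearchCV(
    pipe,
    param_grid={"kneighborsregressor__n_neighbors": range(1, 31)},
    cv=5,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X_train, y_train)

# Predictions on the test data come from the automatically refit best model;
# RMSPE is the square root of the mean squared prediction error.
predictions = grid.predict(X_test)
rmspe = np.sqrt(mean_squared_error(y_test, predictions))
print(rmspe)
```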
