@@ -890,6 +890,17 @@ regardless of what the new observation looks like. In general, if the model
*isn't influenced enough* by the training data, it is said to **underfit** the
data.
+ **Overfitting:** \index{overfitting!classification} In contrast, when we decrease the number of neighbors, each
+ individual data point has a stronger and stronger vote regarding nearby points.
+ Since the data themselves are noisy, this causes a more "jagged" boundary
+ corresponding to a *less simple* model. If you take this case to the extreme,
+ setting $K = 1$, then the classifier is essentially just matching each new
+ observation to its closest neighbor in the training data set. This is just as
+ problematic as the large $K$ case, because the classifier becomes unreliable on
+ new data: if we had a different training set, the predictions would be
+ completely different. In general, if the model *is influenced too much* by the
+ training data, it is said to **overfit** the data.
+
```{r 06-decision-grid-K, echo = FALSE, message = FALSE, fig.height = 10, fig.width = 10, fig.pos = "H", out.extra="", fig.cap = "Effect of K in overfitting and underfitting."}
ks <- c(1, 7, 20, 300)
plots <- list()
@@ -945,17 +956,6 @@ p_grid <- plot_grid(plotlist = p_no_legend, ncol = 2)
plot_grid(p_grid, legend, ncol = 1, rel_heights = c(1, 0.2))
```
- **Overfitting:** \index{overfitting!classification} In contrast, when we decrease the number of neighbors, each
- individual data point has a stronger and stronger vote regarding nearby points.
- Since the data themselves are noisy, this causes a more "jagged" boundary
- corresponding to a *less simple* model. If you take this case to the extreme,
- setting $K = 1$, then the classifier is essentially just matching each new
- observation to its closest neighbor in the training data set. This is just as
- problematic as the large $K$ case, because the classifier becomes unreliable on
- new data: if we had a different training set, the predictions would be
- completely different. In general, if the model *is influenced too much* by the
- training data, it is said to **overfit** the data.
-
Both overfitting and underfitting are problematic and will lead to a model
that does not generalize well to new data. When fitting a model, we need to strike
a balance between the two. You can see these two effects in Figure
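
Aside: the paragraphs above describe how $K$ drives over- and underfitting, while the commit itself only moves them ahead of the figure. Below is a minimal sketch, not part of the commit, of how that effect could be checked numerically with tidymodels. The `cancer` data frame and its `Class`, `Smoothness`, and `Concavity` columns are assumed stand-ins for the book's running example, and the kknn engine is assumed to be installed.

```r
# Minimal sketch (assumptions noted above): compare training vs. test accuracy
# for a very small and a very large K.
library(tidymodels)

set.seed(1)
cancer_split <- initial_split(cancer, prop = 0.75, strata = Class)
cancer_train <- training(cancer_split)
cancer_test <- testing(cancer_split)

# Standardize the two predictors, as K-nearest neighbors is distance-based.
knn_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
  step_scale(all_predictors()) |>
  step_center(all_predictors())

accuracy_for_k <- function(k) {
  knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = k) |>
    set_engine("kknn") |>
    set_mode("classification")

  knn_fit <- workflow() |>
    add_recipe(knn_recipe) |>
    add_model(knn_spec) |>
    fit(data = cancer_train)

  # Helper: accuracy of the fitted classifier on a given data frame.
  acc <- function(d) {
    predict(knn_fit, d) |>
      bind_cols(d) |>
      accuracy(truth = Class, estimate = .pred_class) |>
      pull(.estimate)
  }

  tibble(k = k, train_acc = acc(cancer_train), test_acc = acc(cancer_test))
}

bind_rows(accuracy_for_k(1), accuracy_for_k(300))
```

With a split like this, $K = 1$ typically classifies the training split almost perfectly while doing noticeably worse on the test split (overfitting), whereas a very large $K$ smooths the boundary so much that both accuracies drift toward the majority-class rate (underfitting).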