@@ -284,7 +284,7 @@ lm_test_results
```
Our final model's test error as assessed by RMSPE \index{RMSPE}
- is `r format(round(lm_test_results |> filter(.metric == 'rmse') |> pull(.estimate)), big.mark=",", nsmall=0, scientific=FALSE)`.
+ is \$`r format(round(lm_test_results |> filter(.metric == 'rmse') |> pull(.estimate)), big.mark=",", nsmall=0, scientific=FALSE)`.
Remember that this is in units of the response variable, and here that
is US Dollars (USD). Does this mean our model is "good" at predicting house
sale price based off of the predictor of home size? Again, answering this is
@@ -504,7 +504,7 @@ lm_mult_test_results
```
Our model's test error as assessed by RMSPE
- is `r format(round(lm_mult_test_results |> filter(.metric == 'rmse') |> pull(.estimate)), big.mark=",", nsmall=0, scientific=FALSE)`.
+ is \$`r format(round(lm_mult_test_results |> filter(.metric == 'rmse') |> pull(.estimate)), big.mark=",", nsmall=0, scientific=FALSE)`.
In the case of two predictors, the predictions made by our linear regression model form a *plane* of best fit, as
shown in Figure \@ref(fig:08-3DlinReg).
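For context on the inline chunks this diff edits: `lm_test_results` and `lm_mult_test_results` are tidymodels metric tables, and the RMSPE is pulled out of them with a `filter`/`pull` pipeline. A minimal sketch of that pipeline, assuming a fitted workflow and hypothetical object names (`lm_fit`, `sacramento_test`, `price` are illustrative, not confirmed by this diff):

```r
library(tidyverse)
library(tidymodels)

# Predict on the test set and collect regression metrics (RMSE, R^2, MAE);
# `lm_fit` is an assumed fitted workflow, `sacramento_test` an assumed test set.
lm_test_results <- lm_fit |>
  predict(sacramento_test) |>
  bind_cols(sacramento_test) |>
  metrics(truth = price, estimate = .pred)

# Extract the RMSPE and format it exactly as the inline chunks above do.
rmspe <- lm_test_results |>
  filter(.metric == "rmse") |>
  pull(.estimate)
format(round(rmspe), big.mark = ",", nsmall = 0, scientific = FALSE)
```

The `\$` added by this commit simply prefixes that formatted number with a dollar sign in the rendered text, since the error is in USD.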
@@ -614,12 +614,12 @@ lm_mult_test_results
```
We obtain an RMSPE \index{RMSPE} for the multivariable linear regression model
- of `r format(lm_mult_test_results |> filter(.metric == 'rmse') |> pull(.estimate), big.mark=",", nsmall=0, scientific = FALSE)`. This prediction error
+ of \$`r format(lm_mult_test_results |> filter(.metric == 'rmse') |> pull(.estimate), big.mark=",", nsmall=0, scientific = FALSE)`. This prediction error
is less than the prediction error for the multivariable KNN regression model,
indicating that we should likely choose linear regression for predictions of
house sale price on this data set. Revisiting the simple linear regression model
with only a single predictor from earlier in this chapter, we see that the RMSPE for that model was
- `r format(lm_test_results |> filter(.metric == 'rmse') |> pull(.estimate), big.mark=",", nsmall=0, scientific = FALSE)`,
+ \$`r format(lm_test_results |> filter(.metric == 'rmse') |> pull(.estimate), big.mark=",", nsmall=0, scientific = FALSE)`,
which is slightly higher than that of our more complex model. Our model with two predictors
provided a slightly better fit on test data than our model with just one.
As mentioned earlier, this is not always the case: sometimes including more