The second model includes a nonlinear dependence on horsepower.
For models involving only a very few explanatory variables, a plot of the model can give immediate insight. The `mod_plot` function reduces the work to make such a plot.
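As a minimal sketch of the idea (the model below is a stand-in chosen for illustration, not one of the models built earlier in this document):

```{r}
# Hypothetical one-variable model, used only to illustrate mod_plot()
library(mosaicModel)
car_mod <- lm(mpg ~ hp, data = mtcars)
mod_plot(car_mod)   # plots the fitted mpg against horsepower
```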
Two important additional arguments to `mod_plot` are
The `iris` dataset has four explanatory variables. Here's species shown as a function of two of the variables:
```{r}
theme_update(legend.position = "top")
gf_point(Sepal.Length ~ Petal.Length, color = ~ Species, data = iris)
```
For later comparison to the models that we'll train, note that when the petal length and sepal length are both large, the flowers are almost always *virginica*.
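That claim can be checked directly from the data. (The cutoffs below are arbitrary stand-ins for "large," not values taken from the text.)

```{r}
# Among flowers with large petals and large sepals, tabulate the species.
# The thresholds 5 and 6.5 are illustrative choices.
with(subset(iris, Petal.Length > 5 & Sepal.Length > 6.5), table(Species))
```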
Since this is a classifier, the plot of the model function shows the *probability* of one of the output classes. That's *virginica* here. When the petal length is small, say around 1, the flower is very unlikely to be *virginica*. But for large petal lengths, and especially for large petal lengths and large sepal lengths, the flower is almost certain to be *virginica*.
The second iris model has four explanatory variables. This is as many as `mod_plot` will display:
The plot shows that the flower species does not depend on either of the two variables displayed on the x-axis and with color: the sepal width and the sepal length. This is why the line is flat and the colors overlap. But you can easily see a dependence on petal width and, to a very limited extent, on petal length.
The choice of which role in the plot is played by which explanatory variable is up to you. Here the dependence on petal length and width are emphasized by using them for x-position and color:
The arguments to the function are the same as for all the `mod_eval_fun` methods. The body of the function pulls out the `posterior` component, coerces it to a data frame, and removes the row names. It isn't always this easy. But once the function is available in your session, you can test it out. (Make sure to give the model a data set as input.)
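A sketch of what such a method might look like, assuming the model was fit with `MASS::lda()`; the function name and body follow the description above, but check them against the actual vignette code:

```{r}
# Hedged sketch: an evaluation method for LDA models, as described above
mod_eval_fun.lda <- function(model, data, ...) {
  post <- predict(model, newdata = data)$posterior  # per-class probabilities
  res <- as.data.frame(post)                        # coerce to a data frame
  rownames(res) <- NULL                             # remove the row names
  res
}
```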
```{r error = TRUE}
mod_eval_fun(my_mod, data = iris[c(30, 80, 120),])
```
Now the high-level functions in `mosaicModel` can work on LDA models.