TabNet has an intrinsic explainability feature through the visualization of its attention maps, either **aggregated**:
```{r model-explain}
#| fig.alt: "A heatmap explainability plot showing, for each variable of the test set on the y axis, the importance along each observation on the x axis. The value is a mask aggregate."
explain <- tabnet_explain(fit, test)
autoplot(explain)
```
or at **each layer** through the `type = "steps"` option:
```{r step-explain}
#| fig.alt: "A small-multiple heatmap explainability plot for each step of the TabNet network. Each plot shows, for each variable of the test set on the y axis, the importance along each observation on the x axis."
autoplot(explain, type = "steps")
```
The example here is a toy example, as the `train` dataset actually contains outcomes. The vignette [`vignette("selfsupervised_training")`](https://mlverse.github.io/tabnet/articles/selfsupervised_training.html) will give you the complete, correct workflow step by step.
## {tidymodels} integration
The integration within {tidymodels} workflows offers you unlimited opportunities to compare {tabnet} models with challengers.
Don't miss the [`vignette("tidymodels-interface")`](https://mlverse.github.io/tabnet/articles/tidymodels-interface.html) for that.
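As a minimal sketch of the idea, a {tabnet} model can be used as a regular {parsnip} model specification inside a workflow. The recipe, the `outcome` column, and the hyperparameter values below are illustrative placeholders, not the vignette's exact example:

```r
library(tidymodels)
library(tabnet)

# Illustrative preprocessing recipe on a `train` data frame with an
# `outcome` column (placeholder names).
rec <- recipe(outcome ~ ., data = train) %>%
  step_normalize(all_numeric_predictors())

# {tabnet} provides a parsnip model specification with a "torch" engine.
mod <- tabnet(epochs = 50) %>%
  set_engine("torch") %>%
  set_mode("classification")

# Bundle preprocessing and model, then fit like any tidymodels workflow.
wf <- workflow() %>%
  add_recipe(rec) %>%
  add_model(mod)

fitted_wf <- fit(wf, data = train)
```

From there, the usual tidymodels tooling (`tune`, `rsample` resampling, `yardstick` metrics) applies unchanged, which is what makes head-to-head comparisons with other model specifications straightforward.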
## Missing data in predictors
{tabnet} leverages the masking mechanism to deal with missing data, so you don't have to remove entries with missing values in the predictor variables from your dataset.
See [`vignette("Missing_data_predictors")`](https://mlverse.github.io/tabnet/articles/Missing_data_predictors.html).
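For instance, a toy sketch (the data frame and column names are made up for illustration): predictors containing `NA` can be passed to the fit as-is, with no prior imputation or row filtering.

```r
library(tabnet)

# Toy data with missing values in both a numeric and a factor predictor.
df <- data.frame(
  x1 = c(1.0, NA, 3.0, 4.0, NA, 6.0),
  x2 = factor(c("a", "b", NA, "a", "b", "a")),
  y  = c(1.1, 2.0, 2.9, 4.2, 5.1, 5.8)
)

# No need to drop the rows with NA predictors before fitting.
fit <- tabnet_fit(y ~ ., data = df, epochs = 1)
```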
## Imbalanced binary classification
{tabnet} includes an Area under the $Min(FPR, FNR)$ (AUM) loss function, `nn_aum_loss()`, dedicated to your imbalanced binary classification tasks.
Try it out in [`vignette("aum_loss")`](https://mlverse.github.io/tabnet/articles/aum_loss.html).
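A minimal sketch of how it could be plugged in, assuming the fitting function accepts a custom loss through its `loss` argument (the exact interface, and whether the loss is passed as the function or a constructed instance, should be checked against the vignette; the toy data is illustrative):

```r
library(tabnet)

# Imbalanced toy binary outcome (illustrative only): 190 "neg" vs 10 "pos".
df <- data.frame(
  x = rnorm(200),
  y = factor(c(rep("neg", 190), rep("pos", 10)))
)

# Swap the default classification loss for the AUM loss
# (assumed `loss` argument; see vignette("aum_loss") for the exact usage).
fit <- tabnet_fit(y ~ ., data = df, epochs = 1, loss = nn_aum_loss)
```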
## Self-supervised pretraining
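A minimal sketch of the idea, assuming the package's `tabnet_pretrain()` function and reusing the `train` placeholder from above (the fine-tuning hand-off via a `tabnet_model` argument is also an assumption; see `vignette("selfsupervised_training")` for the complete workflow):

```r
library(tabnet)
library(ggplot2)

# Pretrain on the predictors only; outcomes are not used at this stage.
pretrain <- tabnet_pretrain(y ~ ., data = train, epochs = 50, valid_split = 0.2)

# Inspect the pretraining loss curves.
autoplot(pretrain)

# Then fine-tune a supervised model starting from the pretrained network
# (assumed `tabnet_model` argument).
fit <- tabnet_fit(y ~ ., data = train, tabnet_model = pretrain, epochs = 50)
```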