The ``Tabformer`` family, i.e. Transformers for Tabular data:

4. **TabTransformer**: details on the TabTransformer can be found in
[TabTransformer: Tabular Data Modeling Using Contextual Embeddings](https://arxiv.org/pdf/2012.06678.pdf).

[...] the Perceiver can be found in
[Perceiver: General Perception with Iterative Attention](https://arxiv.org/abs/2103.03206)

And probabilistic DL models for tabular data based on
[Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424):

9. **BayesianWide**: Probabilistic adaptation of the `Wide` model.
10. **BayesianTabMlp**: Probabilistic adaptation of the `TabMlp` model.
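
The idea behind these two probabilistic models comes from the Bayes by
Backprop approach in the paper referenced above: instead of point estimates,
every weight gets a (Gaussian) posterior distribution that is learned during
training. As a rough, self-contained illustration of that idea (a conceptual
sketch in plain PyTorch, not the library's implementation), a Bayesian linear
layer samples its weights at every forward pass:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Toy Bayes-by-Backprop linear layer: each weight has a learned Gaussian posterior."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # variational posterior parameters: a mean and a pre-softplus scale per weight
        self.weight_mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias_mu = nn.Parameter(torch.zeros(out_features))
        self.bias_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # reparameterisation trick: sample the weights from N(mu, sigma^2) on every call
        weight_sigma = F.softplus(self.weight_rho)
        bias_sigma = F.softplus(self.bias_rho)
        weight = self.weight_mu + weight_sigma * torch.randn_like(weight_sigma)
        bias = self.bias_mu + bias_sigma * torch.randn_like(bias_sigma)
        return F.linear(x, weight, bias)
```

In a full Bayes by Backprop setup the loss also includes a KL term between
this posterior and a prior over the weights, and at prediction time the
forward pass is repeated several times to obtain a distribution over outputs
rather than a single point estimate.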

Note that while there are scientific publications for the TabTransformer,
SAINT and FT-Transformer, the TabFastFormer and TabPerceiver are our own
adaptation of those algorithms for tabular data.

For details on these models (and all the other models in the library for the
different data modes) and their corresponding options please see the examples
in the Examples folder and the documentation.

### Installation

[...]

```bash
cd pytorch-widedeep
pip install -e .
```

**Important note for Mac users**: Since Python 3.8, [the `multiprocessing` library start method changed from `'fork'` to `'spawn'`](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods), which affects the data-loaders.
For the time being, `pytorch-widedeep` sets the `num_workers` to 0 when using
a Mac and Python 3.8+.

Note that this issue does not affect Linux users.
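
A minimal way to see this behaviour (a sketch with a dummy dataset, not the
library's internal code) is to check the default start method and choose
`num_workers` accordingly when building a PyTorch `DataLoader`:

```python
import sys
import multiprocessing

import torch
from torch.utils.data import DataLoader, TensorDataset

# 'spawn' on macOS with Python 3.8+, 'fork' on Linux
print(multiprocessing.get_start_method())

# a dummy dataset, just to build a loader
dataset = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))

# mirror the behaviour described above: no worker processes on Mac + Python 3.8+
num_workers = 0 if sys.platform == "darwin" and sys.version_info >= (3, 8) else 4
loader = DataLoader(dataset, batch_size=16, num_workers=num_workers)
```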

### Quick start

Binary classification with the adult dataset, using `Wide` and `DeepDense`
and default settings.

Building a wide (linear) and deep model with ``pytorch-widedeep``:

```python
import pandas as pd
import numpy as np
import torch

from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy
from pytorch_widedeep.datasets import load_adult
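
# What follows is a sketch of how this quick-start example typically
# continues; it is not verbatim library code. Argument names (e.g.
# ``cat_embed_cols`` / ``cat_embed_input``) and the adult column names have
# varied slightly across pytorch-widedeep releases, so check the documentation
# for the version you are using.
df = load_adult(as_frame=True)
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df = df.drop("income", axis=1)

# column "set up": which columns feed the wide (linear) and the deep component
wide_cols = ["education", "relationship", "workclass", "occupation", "gender"]
crossed_cols = [("education", "occupation")]
cat_embed_cols = ["workclass", "education", "occupation", "relationship", "gender"]
continuous_cols = ["age", "hours-per-week"]
target = df["income_label"].values

# preprocess the data for each model component
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
tab_preprocessor = TabPreprocessor(
    cat_embed_cols=cat_embed_cols, continuous_cols=continuous_cols
)
X_tab = tab_preprocessor.fit_transform(df)

# build, then train, the wide and deep model
wide = Wide(input_dim=np.unique(X_wide).shape[0], pred_dim=1)
tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=continuous_cols,
)
model = WideDeep(wide=wide, deeptabular=tab_mlp)
trainer = Trainer(model, objective="binary", metrics=[Accuracy])
trainer.fit(X_wide=X_wide, X_tab=X_tab, target=target, n_epochs=2, batch_size=256)
```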