**Experiments and comparison with `LightGBM`**: [TabularDL vs LightGBM](https://github.com/jrzaurin/tabulardl-benchmark)
**Slack**: if you want to contribute or just want to chat with us, join [slack](https://join.slack.com/t/pytorch-widedeep/shared_invite/zt-soss7stf-iXpVuLeKZz8lGTnxxtHtTw)
### Introduction
``pytorch-widedeep`` is based on Google's [Wide and Deep Algorithm](https://arxiv.org/abs/1606.07792)
It is important to emphasize that **each individual component, `wide`,
`deeptabular`, `deeptext` and `deepimage`, can be used independently** and in
isolation. For example, one could use only `wide`, which is simply a
linear model. In fact, one of the most interesting functionalities
in ``pytorch-widedeep`` is the ``deeptabular`` component. Currently,
``pytorch-widedeep`` offers the following different models for that
component:
1. ``TabMlp``: this is almost identical to the [tabular
model](https://docs.fast.ai/tutorial.tabular.html) in the fantastic
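To give a feel for what the standalone `wide` component does conceptually: in the Wide and Deep paper it is simply a linear model over sparse binary features, typically including cross-product (feature-interaction) terms. Below is a minimal, dependency-free sketch of that idea; note that this is **not** the ``pytorch-widedeep`` API, and every name in it is illustrative only.

```python
# Toy illustration of the "wide" idea: a linear model over sparse binary
# features, including cross-product (feature-interaction) terms.
# NOT the pytorch-widedeep API -- all names here are illustrative only.
from itertools import combinations


def featurize(row, cross=True):
    """Turn a dict of categorical values into binary feature names."""
    feats = [f"{k}={v}" for k, v in sorted(row.items())]
    if cross:
        # cross-product transformations: one feature per pair of base features
        feats += [f"{a} & {b}" for a, b in combinations(list(feats), 2)]
    return feats


def predict(weights, row, bias=0.0):
    """Wide model: a linear score over the row's active binary features."""
    return bias + sum(weights.get(f, 0.0) for f in featurize(row))


weights = {"device=mobile": 0.5, "device=mobile & hour=evening": 1.2}
row = {"device": "mobile", "hour": "evening"}
score = predict(weights, row)  # 0.5 + 1.2 = 1.7
```

The cross-product terms are what let such a linear model memorize specific feature co-occurrences, which is exactly the role the wide part plays alongside the generalizing deep components.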