</p>

[![PyPI version](https://badge.fury.io/py/pytorch-widedeep.svg)](https://pypi.org/project/pytorch-widedeep/)
[![Python 3.9 3.10 3.11 3.12](https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue.svg)](https://pypi.org/project/pytorch-widedeep/)
[![Build Status](https://github.com/jrzaurin/pytorch-widedeep/actions/workflows/build.yml/badge.svg)](https://github.com/jrzaurin/pytorch-widedeep/actions)
[![Documentation Status](https://readthedocs.org/projects/pytorch-widedeep/badge/?version=latest)](https://pytorch-widedeep.readthedocs.io/en/latest/?badge=latest)
[![codecov](https://codecov.io/gh/jrzaurin/pytorch-widedeep/branch/master/graph/badge.svg)](https://codecov.io/gh/jrzaurin/pytorch-widedeep)
The content of this document is organized as follows:

- [pytorch-widedeep](#pytorch-widedeep)
  - [Introduction](#introduction)
  - [Architectures](#architectures)
    - [The `deeptabular` component](#the-deeptabular-component)
    - [The `rec` module](#the-rec-module)
    - [Text and Images](#text-and-images)
  - [Installation](#installation)
    - [Developer Install](#developer-install)
  - [Quick start](#quick-start)
  - [Testing](#testing)
  - [How to Contribute](#how-to-contribute)
  - [Acknowledgments](#acknowledgments)
  - [License](#license)
  - [Cite](#cite)
    - [BibTex](#bibtex)
    - [APA](#apa)

### Introduction

With that in mind, there are a number of architectures that can be implemented
with the library. The main components of those architectures are shown in the
Figure below:

<p align="center">
  <img width="750" src="mkdocs/sources/docs/figures/widedeep_arch_new.png">
</p>

In math terms, and following the notation in the
[paper](https://arxiv.org/abs/1606.07792), the expression for the architecture
without a `deephead` component can be formulated as:
<p align="center">
  <img width="500" src="mkdocs/sources/docs/figures/architecture_1_math.png">
</p>

Where &sigma; is the sigmoid function, *'W'* are the weight matrices applied
to the wide model and to the final activations of the deep models, *'a'* are
these final activations, &phi;(x) are the cross product transformations of
the original features *'x'*, and
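As a toy numerical illustration of the expression above, here is a minimal
numpy sketch: every weight, activation and feature value below is made up
purely for the example, and the real library of course learns these
quantities rather than hard-coding them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy "wide" input: raw features x plus one cross-product transformation phi(x)
x = np.array([1.0, 0.0, 1.0])
phi_x = np.array([x[0] * x[2]])           # cross of features 0 and 2
wide_input = np.concatenate([x, phi_x])   # [x, phi(x)]

w_wide = np.array([0.2, -0.1, 0.4, 0.3])  # W applied to the wide input
a_deep = np.array([0.5, -0.2])            # final activations a of the deep model
w_deep = np.array([0.6, 0.1])             # W applied to those activations
b = 0.05                                  # global bias

# sigma(W_wide . [x, phi(x)] + W_deep . a + b), as in the formula above
pred = sigmoid(w_wide @ wide_input + w_deep @ a_deep + b)
print(pred)
```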
**1. Wide and Tabular component (aka deeptabular)**

<p align="center">
  <img width="400" src="mkdocs/sources/docs/figures/arch_1.png">
</p>

```python
from pytorch_widedeep.preprocessing import TabPreprocessor, WidePreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
```
<p align="center">
  <img width="400" src="mkdocs/sources/docs/figures/arch_2.png">
</p>

```python
from pytorch_widedeep.preprocessing import TabPreprocessor, TextPreprocessor
from pytorch_widedeep.models import TabMlp, BasicRNN, WideDeep
```

activations. In other words, it does not need to inherit from
`BaseWDModelComponent`. This base class simply checks the existence of such a
property and avoids some typing errors internally.
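The contract described above can be sketched in a few lines. This is a
plain-Python stand-in (a real component would be a `torch.nn.Module`; the
class name, attributes and sizes here are invented for illustration): all a
custom component needs is an `output_dim` property exposing the size of its
final activations.

```python
# Hypothetical custom deeptabular-style component. The only requirement
# discussed above is the `output_dim` property; everything else is invented.
class MyTabularComponent:
    def __init__(self, n_features: int, hidden_dim: int):
        self.n_features = n_features
        self.hidden_dim = hidden_dim

    @property
    def output_dim(self) -> int:
        # size of the activation vector this component would produce,
        # which the wrapping model uses to wire up the prediction head
        return self.hidden_dim


comp = MyTabularComponent(n_features=10, hidden_dim=32)
print(comp.output_dim)
```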

<p align="center">
  <img width="600" src="mkdocs/sources/docs/figures/arch_6.png">
</p>

```python
import torch
```

passed through two separate models and then "fused" via a dot product.

<p align="center">
  <img width="350" src="mkdocs/sources/docs/figures/arch_7.png">
</p>
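The dot-product "fusion" of two towers mentioned above reduces to a single
inner product between the two final representations. A toy numpy sketch (the
vectors and their roles are made up for illustration):

```python
import numpy as np

# final activations produced by two separate towers (made-up toy vectors)
tower_a = np.array([0.3, -0.7, 0.5])  # e.g. tabular tower output
tower_b = np.array([0.1, 0.4, 0.2])   # e.g. text tower output

# "fusing" the two representations via a dot product yields a single score
score = float(tower_a @ tower_b)
print(score)
```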

```python
import numpy as np
import pandas as pd
```
<p align="center">
  <img width="200" src="mkdocs/sources/docs/figures/arch_8.png">
</p>

```python
from pytorch_widedeep.preprocessing import TabPreprocessor, TextPreprocessor, ImagePreprocessor
from pytorch_widedeep.models import TabMlp, BasicRNN, WideDeep, ModelFuser, Vision
```
See the examples for details on how to use these models.

### Text and Images

For the text component, `deeptext`, the library offers the following models:

1. **BasicRNN**: a simple RNN
2. **AttentiveRNN**: an RNN with an attention

'mobilenetv2', 'mnasnet', 'efficientnet' and 'squeezenet'. These are
offered via `torchvision` and wrapped up in the `Vision` class.

### Installation

Install using pip:

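The install command itself falls outside this excerpt; assuming the standard
PyPI package name shown in the badges above, it would be the usual:

```bash
pip install pytorch-widedeep
```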