<div align="center">
<img src="https://github.com/alawryaguila/multi-view-AE/blob/master/docs/figures/logo.png" width="500px">

# Multi-modal representation learning using autoencoders

![Build Status](https://github.com/alawryaguila/multi-view-ae/actions/workflows/ci.yml/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/multi-view-ae/badge/?version=latest)](https://multi-view-ae.readthedocs.io/en/latest/?badge=latest)
[![Python Version](https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9%20%7C%203.10%20-blue)](https://github.com/alawryaguila/multi-view-ae)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05093/status.svg)](https://joss.theoj.org/papers/10.21105/joss.05093)
[![version](https://img.shields.io/pypi/v/multiviewae)](https://pypi.org/project/multiviewae/)
[![codecov](https://codecov.io/gh/alawryaguila/multi-view-AE/graph/badge.svg?token=NKO935MXFG)](https://codecov.io/gh/alawryaguila/multi-view-AE)
[![downloads](https://img.shields.io/pypi/dm/multiviewae)](https://pypi.org/project/multiviewae/)

</div>

`multi-view-AE` is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured such that all models have `fit`, `predict_latents` and `predict_reconstruction` methods. All models are built in PyTorch and PyTorch Lightning.

For more information on implemented models and how to use the package, please see the [documentation](https://multi-view-ae.readthedocs.io/en/latest/).

## Library schematic
<p align="center">
<img src="https://github.com/alawryaguila/multi-view-AE/blob/master/docs/figures/schematic_diagram.png" width="800px">
</p>

## Models Implemented

Below is a table with the models contained within this repository and links to the original papers.

| Model class | Model name | Number of views | Reference |
|:------------:|:-------------------------------------------------------------------------------------------:|:----------------:|:-----------:|
| mcVAE | Multi-Channel Variational Autoencoder (mcVAE) | >=1 | [link](http://proceedings.mlr.press/v97/antelmi19a.html) |
| AE | Multi-view Autoencoder | >=1 | |
| mAAE | Multi-view Adversarial Autoencoder | >=1 | |
| DVCCA | Deep Variational CCA | 2 | [link](https://arxiv.org/abs/1610.03454) |
| mWAE | Multi-view Adversarial Autoencoder with a Wasserstein loss | >=1 | |
| mmVAE | Variational mixture-of-experts autoencoder (MMVAE) | >=1 | [link](https://arxiv.org/abs/1911.03393) |
| mVAE | Multimodal Variational Autoencoder (MVAE) | >=1 | [link](https://arxiv.org/abs/1802.05335) |
| me_mVAE | Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view | >=1 | [link](https://arxiv.org/abs/1802.05335) |