This is a PyTorch implementation of the following paper [AAAI] [arXiv]:
```
@inproceedings{takahashi2019variational,
  title={Variational Autoencoder with Implicit Optimal Priors},
  author={Takahashi, Hiroshi and Iwata, Tomoharu and Yamanaka, Yuki and Yamada, Masanori and Yagi, Satoshi},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={33},
  pages={5066--5073},
  year={2019}
}
```
Please read license.txt before reading or using the files.
Please install Python >= 3.6 together with `torch`, `torchvision`, `numpy`, and `scipy`.
```
usage: main.py [-h] [--dataset DATASET] [--prior PRIOR]
               [--learning_rate LEARNING_RATE] [--seed SEED]
```
- You can choose the `dataset` from the following four image datasets: `MNIST`, `OMNIGLOT`, `Histopathology`, and `FreyFaces`.
- You can choose the `prior` of the VAE from `normal` (standard Gaussian prior) or `iop` (implicit optimal prior).
- You can also change the random `seed` of the training and the `learning_rate` of the optimizer (Adam).
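The options above map onto a standard `argparse` interface. The following is a minimal sketch of such a parser; the defaults and the `build_parser` helper are assumptions for illustration, and the actual `main.py` may differ:

```python
import argparse

def build_parser():
    """Sketch of the CLI described above (defaults are assumed, not from the repo)."""
    p = argparse.ArgumentParser()
    p.add_argument("--dataset", default="MNIST",
                   choices=["MNIST", "OMNIGLOT", "Histopathology", "FreyFaces"])
    p.add_argument("--prior", default="normal", choices=["normal", "iop"])
    p.add_argument("--learning_rate", type=float, default=1e-3)
    p.add_argument("--seed", type=int, default=0)
    return p

args = build_parser().parse_args(["--dataset", "MNIST", "--prior", "iop"])
```

Using `choices` makes the parser reject typos in the dataset or prior name before training starts.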
MNIST with standard Gaussian prior:

```
python main.py --dataset MNIST --prior normal
```
MNIST with implicit optimal prior:

```
python main.py --dataset MNIST --prior iop
```
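The `iop` prior rests on the density-ratio trick: a classifier trained to distinguish samples from two distributions yields, through its logit, an estimate of their log density ratio. The toy NumPy sketch below (not the repository's implementation) fits a logistic regression to separate samples of N(0, 1) from N(1, 1); the learned logit x - 0.5 is exactly the true log ratio log N(x; 1, 1) - log N(x; 0, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(4000, 1))   # samples from p = N(0, 1)
b = rng.normal(1.0, 1.0, size=(4000, 1))   # samples from q = N(1, 1)
x = np.vstack([a, b])
y = np.concatenate([np.zeros(4000), np.ones(4000)])  # label 1 = drawn from q

# Logistic regression on features [x, 1]; its logit estimates log q(x) - log p(x).
feats = np.hstack([x, np.ones((8000, 1))])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))  # sigmoid
    w -= 0.5 * (feats.T @ (p - y)) / 8000   # full-batch gradient step

# True log ratio is x - 0.5, so w should be roughly [1.0, -0.5].
print(w)
```

In the paper's setting the same trick lets the VAE use the aggregated posterior as an implicit prior without ever writing down its density.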
- After the training, the mean log-likelihood on the test dataset will be displayed.
- Detailed information about the training and test runs will be saved in the `npy` directory.
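The saved arrays can be inspected with `numpy.load`. A small sketch, assuming only that the directory contains `.npy` files (the exact filenames depend on the run, so the hypothetical `load_results` helper simply globs the directory):

```python
import os
from glob import glob
import numpy as np

def load_results(directory="npy"):
    """Load every .npy array saved by a run, keyed by filename.
    The filenames themselves are run-specific assumptions, not fixed by the repo."""
    return {os.path.basename(p): np.load(p)
            for p in sorted(glob(os.path.join(directory, "*.npy")))}
```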