Commit 99edabc

Update README.md
1 parent 8fe84f9 commit 99edabc

1 file changed: +30 −6 lines

README.md

@@ -27,21 +27,20 @@ The repository has implementations for the following Bayesian layers:
 LinearRadial
 Conv1dRadial, Conv2dRadial, Conv3dRadial, ConvTranspose1dRadial, ConvTranspose2dRadial, ConvTranspose3dRadial
 LSTMRadial
--->
 
 - [ ] **Variational layers with Gaussian mixture model (GMM) posteriors using reparameterized Monte Carlo estimators** (in `pre-alpha`)
 
 LinearMixture
 Conv1dMixture, Conv2dMixture, Conv3dMixture, ConvTranspose1dMixture, ConvTranspose2dMixture, ConvTranspose3dMixture
 LSTMMixture
+-->
 
 Please refer to the [documentation](doc/bayesian_torch.layers.md#layers) of the Bayesian layers for details.
 
 Other features include:
 - [x] [AvUC](https://github.com/IntelLabs/bayesian-torch/blob/main/bayesian_torch/utils/avuc_loss.py): Accuracy versus Uncertainty Calibration loss [[Krishnan and Tickoo 2020](https://proceedings.neurips.cc/paper/2020/file/d3d9446802a44259755d38e6d163e820-Paper.pdf)]
 - [x] [MOPED](https://github.com/IntelLabs/bayesian-torch/blob/main/bayesian_torch/utils/util.py#L72): specifying weight priors and variational posteriors with Empirical Bayes [[Krishnan et al. 2019](https://arxiv.org/abs/1906.05323)]
-- [ ] dnn_to_bnn: An API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying the model definition by replacing neural network layers with corresponding Bayesian layers (`updating soon...`)
-
+- [x] [dnn_to_bnn](https://github.com/IntelLabs/bayesian-torch/blob/main/bayesian_torch/models/dnn_to_bnn.py#L127): An API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying the model definition with drop-in replacements of Convolutional, Linear and LSTM layers by their Bayesian counterparts. This enables seamless conversion of existing large-model topologies to Bayesian models for uncertainty-aware applications.
 
 ## Installation
 
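The variational layers listed in the diff above draw weights with the reparameterized Monte Carlo estimator. A minimal pure-Python sketch of that sampling step, assuming the usual softplus parameterization of the standard deviation (illustrative only; `softplus` and `sample_weight` are hypothetical helpers, not the library's API):

```python
import math
import random

def softplus(rho):
    # sigma = log(1 + exp(rho)) keeps the standard deviation positive
    # for any real-valued parameter rho.
    return math.log1p(math.exp(rho))

def sample_weight(mu, rho, rng=random):
    # Reparameterization trick: w = mu + sigma * eps with eps ~ N(0, 1),
    # so the randomness is external and gradients can flow through mu and rho.
    eps = rng.gauss(0.0, 1.0)
    return mu + softplus(rho) * eps
```

In the actual layers this sampling happens per forward pass over entire weight tensors; the sketch shows only the scalar form of the estimator.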

@@ -119,19 +118,44 @@ sh scripts/test_deterministic_cifar.sh
 If you use this code, please cite as:
 ```sh
 @misc{krishnan2020bayesiantorch,
-author = {Ranganath Krishnan and Piero Esposito},
+author = {Ranganath Krishnan and Piero Esposito and Mahesh Subedar},
 title = {Bayesian-Torch: Bayesian neural network layers for uncertainty estimation},
 year = {2020},
 publisher = {GitHub},
 howpublished = {\url{https://github.com/IntelLabs/bayesian-torch}}
 }
 ```
-
-Cite the weight sampling methods as well: [Blundell et al. 2015](https://arxiv.org/abs/1505.05424); [Wen et al. 2018](https://arxiv.org/abs/1803.04386)
+Accuracy versus Uncertainty Calibration (AvUC) loss:
+```sh
+@inproceedings{NEURIPS2020_d3d94468,
+title = {Improving model calibration with accuracy versus uncertainty optimization},
+author = {Krishnan, Ranganath and Tickoo, Omesh},
+booktitle = {Advances in Neural Information Processing Systems},
+volume = {33},
+pages = {18237--18248},
+year = {2020},
+url = {https://proceedings.neurips.cc/paper/2020/file/d3d9446802a44259755d38e6d163e820-Paper.pdf}
+}
+```
+MOdel Priors with Empirical Bayes using DNN (MOPED):
+```sh
+@inproceedings{krishnan2020specifying,
+title = {Specifying weight priors in bayesian deep neural networks with empirical bayes},
+author = {Krishnan, Ranganath and Subedar, Mahesh and Tickoo, Omesh},
+booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
+volume = {34},
+number = {04},
+pages = {4477--4484},
+year = {2020},
+url = {https://ojs.aaai.org/index.php/AAAI/article/view/5875}
+}
+```
 
 **Contributors**
 - Ranganath Krishnan
 - Piero Esposito
+- Mahesh Subedar
 
 This code is intended for researchers and developers; it enables quantifying principled uncertainty estimates from deep neural network predictions using stochastic variational inference in Bayesian neural networks.
 Feedback, issues and contributions are welcome. Email <[email protected]> with any questions.
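The AvUC loss cited above rewards models that are certain when correct and uncertain when wrong. A minimal hard-count sketch of the underlying AvU measure, assuming a fixed uncertainty threshold (illustrative only; the library's version uses soft, differentiable counts, and `avu`/`avuc_loss` here are hypothetical helpers):

```python
import math

def avu(preds, labels, uncertainties, threshold):
    # Bucket every prediction into one of four counts:
    # accurate-certain, accurate-uncertain, inaccurate-certain, inaccurate-uncertain.
    n_ac = n_au = n_ic = n_iu = 0
    for p, y, u in zip(preds, labels, uncertainties):
        accurate = (p == y)
        uncertain = (u >= threshold)
        if accurate and not uncertain:
            n_ac += 1
        elif accurate and uncertain:
            n_au += 1
        elif not accurate and not uncertain:
            n_ic += 1
        else:
            n_iu += 1
    # AvU is 1 when the model is certain exactly where it is right
    # and uncertain exactly where it is wrong.
    return (n_ac + n_iu) / max(n_ac + n_au + n_ic + n_iu, 1)

def avuc_loss(preds, labels, uncertainties, threshold):
    # The loss shrinks toward 0 as AvU approaches 1.
    return -math.log(max(avu(preds, labels, uncertainties, threshold), 1e-12))
```

For example, with `preds=[1, 0, 1, 1]`, `labels=[1, 0, 1, 0]`, uncertainties `[0.1, 0.2, 0.9, 0.8]` and threshold `0.5`, three of four samples are well calibrated, so AvU is 0.75.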

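The MOPED paper cited above initializes a Bayesian network from a pretrained deterministic one via Empirical Bayes. A scalar sketch of the initialization idea, assuming the softplus parameterization of sigma (illustrative only; `moped_init` is a hypothetical helper, and the library applies this per weight tensor):

```python
import math

def moped_init(w_pretrained, delta=0.5):
    # MOPED: the pretrained weight becomes the mean, and the initial
    # posterior standard deviation is proportional to its magnitude:
    # sigma = delta * |w|.
    sigma = delta * abs(w_pretrained)
    # Invert sigma = log(1 + exp(rho)) to get the rho parameter;
    # fall back to a very negative rho (near-zero sigma) when w == 0.
    rho = math.log(math.expm1(sigma)) if sigma > 0 else -10.0
    return w_pretrained, rho
```

Round-tripping through the softplus recovers the intended sigma, which is a quick way to sanity-check the inverse.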