
Commit e290d85 (1 parent: 7df92f3)

Update README.md with likelihood info

README.md: 1 file changed, 32 additions & 10 deletions
@@ -77,13 +77,13 @@ Next, we create our BayesFlow setup consisting of a summary and an inference network
 ```python
 summary_net = bf.networks.DeepSet()
 inference_net = bf.networks.InvertibleNetwork(num_params=2)
-amortizer = bf.amortizers.AmortizedPosterior(inference_net, summary_net)
+amortized_posterior = bf.amortizers.AmortizedPosterior(inference_net, summary_net)
 ```

 Finally, we connect the networks with the generative model via a `Trainer` instance:

 ```python
-trainer = bf.trainers.Trainer(amortizer=amortizer, generative_model=generative_model)
+trainer = bf.trainers.Trainer(amortizer=amortized_posterior, generative_model=generative_model)
 ```

 We are now ready to train an amortized posterior approximator. For instance,
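
The sentence above is truncated at the hunk boundary. For context, a hedged sketch of the online training step it leads into, reusing the call shown in the model-comparison hunk below; the epoch, iteration, and batch values are illustrative, and `bf.diagnostics.plot_losses` is an assumption about the diagnostics module rather than part of this commit:

```python
# Online training: simulations are generated on the fly by the generative model.
losses = trainer.train_online(epochs=3, iterations_per_epoch=100, batch_size=32)

# Optionally inspect the loss trajectory for convergence (assumed helper).
fig = bf.diagnostics.plot_losses(losses)
```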
@@ -113,7 +113,7 @@ per data set:

 ```python
 new_sims = trainer.configurator(generative_model(200))
-posterior_draws = amortizer.sample(new_sims, n_samples=500)
+posterior_draws = amortized_posterior.sample(new_sims, n_samples=500)
 ```

 We can then quickly inspect how well the model can recover its parameters
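
For the recovery check this sentence leads into, a hedged sketch; the `plot_recovery` helper and the `'parameters'` key of the configured dictionary are assumptions about the BayesFlow API, not shown in this diff:

```python
# Plot posterior draws against the ground-truth prior draws per data set;
# assumes the configurator stores ground truth under the 'parameters' key.
fig = bf.diagnostics.plot_recovery(posterior_draws, new_sims['parameters'])
```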
@@ -167,7 +167,7 @@ In order to use this method, you should only provide the `summary_loss_fun` argument
 to the `AmortizedPosterior` instance:

 ```python
-amortizer = bf.amortizers.AmortizedPosterior(inference_net, summary_net, summary_loss_fun='MMD')
+amortized_posterior = bf.amortizers.AmortizedPosterior(inference_net, summary_net, summary_loss_fun='MMD')
 ```

 The amortizer knows how to combine its losses, and you can inspect the summary space for outliers during inference.
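
To make the outlier inspection concrete, a hedged sketch: the MMD summary loss pulls the summary embeddings towards a fixed reference distribution, so atypical data sets stand out as embeddings far from the bulk. Calling `summary_net` directly on the `'summary_conditions'` entry is an assumption about the API, not something this commit shows:

```python
# Project configured data sets into the learned summary space; embeddings far
# from the bulk flag potential outliers or simulation gaps.
summary_stats = summary_net(new_sims['summary_conditions'])
```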
@@ -207,13 +207,13 @@ Next, we construct our neural network with a `PMPNetwork` for approximating posterior model probabilities:
 ```python
 summary_net = bf.networks.DeepSet()
 probability_net = bf.networks.PMPNetwork(num_models=2)
-amortizer = bf.amortizers.AmortizedModelComparison(probability_net, summary_net)
+amortized_bmc = bf.amortizers.AmortizedModelComparison(probability_net, summary_net)
 ```

 We combine all previous steps with a `Trainer` instance and train the neural approximator:

 ```python
-trainer = bf.trainers.Trainer(amortizer=amortizer, generative_model=meta_model)
+trainer = bf.trainers.Trainer(amortizer=amortized_bmc, generative_model=meta_model)
 losses = trainer.train_online(epochs=3, iterations_per_epoch=100, batch_size=32)
 ```

@@ -226,7 +226,7 @@ sims = trainer.configurator(meta_model(5000))
 When feeding the data to our trained network, we almost immediately obtain posterior model probabilities for each of the 5000 data sets:

 ```python
-model_probs = amortizer.posterior_probs(sims)
+model_probs = amortized_bmc.posterior_probs(sims)
 ```

 How good are these predicted probabilities in the closed world? We can have a look at the calibration:
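
For the calibration check the text points to, a hedged sketch; the `plot_calibration_curves` helper and the `'model_indices'` key are assumptions about the BayesFlow diagnostics API and the model-comparison configurator, not shown in this diff:

```python
# Compare predicted model probabilities against one-hot true model indices;
# assumes the configurator stores the latter under 'model_indices'.
cal_curves = bf.diagnostics.plot_calibration_curves(sims['model_indices'], model_probs)
```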
@@ -257,13 +257,35 @@ C. (2021). Amortized Bayesian Model Comparison with Evidential Deep Learning.
 doi:10.1109/TNNLS.2021.3124052, available for free at: https://arxiv.org/abs/2004.10629

 - Schmitt, M., Radev, S. T., & Bürkner, P. C. (2022). Meta-Uncertainty in
-Bayesian Model Comparison. <em>ArXiv preprint</em>, available for free at:
-https://arxiv.org/abs/2210.07278
+Bayesian Model Comparison. In <em>International Conference on Artificial Intelligence
+and Statistics</em>, 11-29, PMLR, available for free at: https://arxiv.org/abs/2210.07278

 - Elsemüller, L., Schnuerch, M., Bürkner, P. C., & Radev, S. T. (2023). A Deep
 Learning Method for Comparing Bayesian Hierarchical Models. <em>ArXiv preprint</em>,
 available for free at: https://arxiv.org/abs/2301.11873

 ## Likelihood emulation

-Example coming soon...
+In order to learn the exchangeable (i.e., permutation-invariant) likelihood from the minimal example instead of the posterior, you may use the `AmortizedLikelihood` wrapper:
+
+```python
+likelihood_net = bf.networks.InvertibleNetwork(num_params=2)
+amortized_likelihood = bf.amortizers.AmortizedLikelihood(likelihood_net)
+```
+
+This wrapper can interact with a `Trainer` instance in the same way as the `AmortizedPosterior`. Finally, you can also learn the likelihood and the posterior *simultaneously* by using the `AmortizedPosteriorLikelihood` wrapper and choosing your preferred training scheme (see the sketch after this diff):
+
+```python
+joint_amortizer = bf.amortizers.AmortizedPosteriorLikelihood(amortized_posterior, amortized_likelihood)
+```
+
+Learning both densities enables us to approximate marginal likelihoods or to perform approximate leave-one-out cross-validation (LOO-CV) for prior or posterior predictive model comparison, respectively.
+
+### References and Further Reading
+
+- Radev, S. T., Schmitt, M., Pratz, V., Picchini, U., Köthe, U., & Bürkner, P. C. (2023).
+JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models. <em>ArXiv preprint</em>,
+available for free at: https://arxiv.org/abs/2302.09125
+
+## Support
+
+This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2181 (project 390900948, the Heidelberg Cluster of Excellence STRUCTURES) and EXC-2075 (project 390740016, the Stuttgart Cluster of Excellence SimTech); by the Informatics for Life initiative funded by the Klaus Tschira Foundation; and by Google Cloud through the Academic Research Grants program.
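
To round off the new likelihood-emulation section, a hedged sketch of how the joint wrapper might be trained and queried. Only the constructors appear in this commit; the `log_likelihood` and `log_posterior` calls are assumptions about the wrapper's interface:

```python
# The joint amortizer plugs into a Trainer like the other amortizers.
trainer = bf.trainers.Trainer(amortizer=joint_amortizer, generative_model=generative_model)
losses = trainer.train_online(epochs=3, iterations_per_epoch=100, batch_size=32)

# Both densities are then available on newly configured data, e.g. for
# approximate marginal likelihoods or LOO-CV (method names assumed):
new_sims = trainer.configurator(generative_model(200))
log_lik = joint_amortizer.log_likelihood(new_sims)
log_post = joint_amortizer.log_posterior(new_sims)
```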
