
Commit 115148b

Update README.md [skip ci]
1 parent a0dc5de commit 115148b

File tree: 1 file changed (+7 −9 lines)


README.md

Lines changed: 7 additions & 9 deletions
@@ -169,7 +169,7 @@ to the `AmortizedPosterior` instance:
amortizer = bf.amortizers.AmortizedPosterior(inference_net, summary_net, summary_loss_fun='MMD')
```

-The amortizer knows how to combine its losses.
+The amortizer knows how to combine its losses, and you can inspect the summary space for outliers during inference.
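
As a rough illustration of the sentence added above (a sketch, not from the README): one way to inspect the learned summary space for outliers is to embed simulated and observed data with the summary network and flag observations that fall far outside the bulk of the simulated summaries. This assumes the BayesFlow v1.x API, where the amortizer exposes its summary network, and uses hypothetical variable names (`sim_conf`, `obs_conf` for configured simulated and observed data):

```python
import numpy as np

# Hypothetical inputs: dictionaries produced by the trainer's configurator
# for simulated and observed data, respectively.
sim_summaries = np.asarray(amortizer.summary_net(sim_conf["summary_conditions"]))
obs_summaries = np.asarray(amortizer.summary_net(obs_conf["summary_conditions"]))

# Standardize observed summaries against the simulated summary distribution and
# flag data sets whose largest absolute z-score exceeds an (arbitrary) threshold.
mean = sim_summaries.mean(axis=0)
std = sim_summaries.std(axis=0) + 1e-12
z = np.abs((obs_summaries - mean) / std)
outlier_idx = np.where(z.max(axis=1) > 4.0)[0]
print("Potential outliers among observed data sets:", outlier_idx)
```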

### References and Further Reading

@@ -179,8 +179,7 @@ preprint</em>, available for free at: https://arxiv.org/abs/2112.08866

## Model Comparison

-BayesFlow can not only be used for parameter estimation, but also to approximate Bayesian model comparison via posterior model probabilities or Bayes factors.
-
+BayesFlow can be used not only for parameter estimation, but also for approximate Bayesian model comparison via posterior model probabilities or Bayes factors.
Let's extend the minimal example from before with a second model $M_2$ that we want to compare with our original model $M_1$:

```python
@@ -220,33 +219,32 @@ losses = trainer.train_online(epochs=3, iterations_per_epoch=100, batch_size=32)
Let's simulate data sets from our models to check our networks' performance:

```python
-sim_data = trainer.configurator(meta_model(5000))
-sim_indices = sim_data["model_indices"]
+sims = trainer.configurator(meta_model(5000))
```

When feeding the data to our trained network, we almost immediately obtain posterior model probabilities for each of the 5000 data sets:

```python
-sim_preds = amortizer(sim_data)
+model_probs = amortizer.posterior_probs(sims)
```
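
Since the README also mentions Bayes factors: posterior model probabilities convert directly into Bayes factors. A small NumPy sketch (not from the README; assumes `model_probs` has one column per model and that the models have equal prior probabilities):

```python
import numpy as np

probs = np.asarray(model_probs)          # shape: (n_data_sets, n_models)

# Bayes factor of model 1 over model 2 for each data set, assuming equal prior
# model probabilities; otherwise, divide by the prior odds as well. A small
# epsilon guards against division by zero.
bf_12 = probs[:, 0] / (probs[:, 1] + 1e-12)
print("Median BF_12 across data sets:", np.median(bf_12))
```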

How good are these predicted probabilities? We can have a look at the calibration:

```python
-cal_curves = bf.diagnostics.plot_calibration_curves(sim_indices, sim_preds)
+cal_curves = bf.diagnostics.plot_calibration_curves(sims["model_indices"], model_probs)
```

<img src="img/showcase_calibration_curves.png" width=65% height=65%>
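
The expected calibration error summarized in the figure can also be computed by hand from the predicted probabilities. A minimal NumPy sketch (not from the README; the binning scheme and the handling of `model_indices` are illustrative, not the library's internal implementation):

```python
import numpy as np

probs = np.asarray(model_probs)              # (n_data_sets, n_models)
true_idx = np.asarray(sims["model_indices"])
if true_idx.ndim > 1:                        # handle one-hot encoded indices
    true_idx = true_idx.argmax(axis=-1)

conf = probs.max(axis=1)                     # confidence in the predicted model
pred = probs.argmax(axis=1)                  # predicted model index
correct = (pred == true_idx).astype(float)

# Expected calibration error: accuracy-confidence gaps averaged over
# equal-width confidence bins, weighted by bin occupancy.
bins = np.linspace(0.0, 1.0, 11)
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf > lo) & (conf <= hi)
    if mask.any():
        ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
print(f"ECE = {ece:.4f}")
```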

Our approximator shows excellent calibration: the calibration curve is closely aligned with the diagonal, the expected calibration error (ECE) is near 0, and most predicted probabilities are certain of the model underlying a data set. We can further assess patterns of misclassification with a confusion matrix:

```python
-conf_matrix = bf.diagnostics.plot_confusion_matrix(sim_indices, sim_preds)
+conf_matrix = bf.diagnostics.plot_confusion_matrix(sims["model_indices"], model_probs)
```

<img src="img/showcase_confusion_matrix.png" width=44% height=44%>

-For the vast majority of simulated data sets, the generating model is correctly detected. With these diagnostic results backing us up, we can safely apply our trained network to empirical data.
+For the vast majority of simulated data sets, the "true" data-generating model is correctly identified. With these diagnostic results backing us up, we can proceed to apply our trained network to empirical data.
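
To put a single number on "the vast majority", the overall recovery accuracy and the raw confusion counts can be computed directly from the predictions (a sketch, not from the README; variable names as above):

```python
import numpy as np

probs = np.asarray(model_probs)
true_idx = np.asarray(sims["model_indices"])
if true_idx.ndim > 1:                     # handle one-hot encoded indices
    true_idx = true_idx.argmax(axis=-1)
pred = probs.argmax(axis=1)

print(f"Overall recovery accuracy: {(pred == true_idx).mean():.3f}")

# Raw confusion counts: rows index the true model, columns the predicted model.
n_models = probs.shape[1]
counts = np.zeros((n_models, n_models), dtype=int)
for t, p in zip(true_idx, pred):
    counts[int(t), int(p)] += 1
print(counts)
```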

BayesFlow is also able to conduct model comparison for hierarchical models. See this [tutorial notebook](docs/source/tutorial_notebooks/Hierarchical_Model_Comparison_MPT.ipynb) for an introduction to the associated workflow.