Commit 5d01c65

Update README.md

1 parent e290d85 commit 5d01c65

1 file changed: 8 additions, 8 deletions

README.md

@@ -1,4 +1,4 @@
-# BayesFlow <img src="img/bayesflow_hex.png" align="right" width=20% height=20% />
+# BayesFlow <img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/bayesflow_hex.png?raw=true" align="right" width=20% height=20% />
 
 [![Actions Status](https://github.com/stefanradev93/bayesflow/workflows/Tests/badge.svg)](https://github.com/stefanradev93/bayesflow/actions)
 [![Licence](https://img.shields.io/github/license/stefanradev93/BayesFlow)](https://img.shields.io/github/license/stefanradev93/BayesFlow)

@@ -31,7 +31,7 @@ when working with intractable simulators whose behavior as a whole is too
 complex to be described analytically. The figure below presents a higher-level
 overview of neurally bootstrapped Bayesian inference.
 
-<img src="img/high_level_framework.png" width=80% height=80%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/high_level_framework.png?raw=true" width=80% height=80%>
 
 ## Getting Started: Parameter Estimation
 

@@ -101,7 +101,7 @@ the model-amortizer combination:
 fig = trainer.diagnose_sbc_histograms()
 ```
 
-<img src="img/showcase_sbc.png" width=65% height=65%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_sbc.png?raw=true" width=65% height=65%>
 
 The histograms are roughly uniform and lie within the expected range for
 well-calibrated inference algorithms as indicated by the shaded gray areas.

@@ -123,7 +123,7 @@ across the simulated data sets.
 fig = bf.diagnostics.plot_recovery(posterior_draws, new_sims['parameters'])
 ```
 
-<img src="img/showcase_recovery.png" width=65% height=65%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_recovery.png?raw=true" width=65% height=65%>
 
 For any individual data set, we can also compare the parameters' posteriors with
 their corresponding priors:

@@ -132,7 +132,7 @@ their corresponding priors:
 fig = bf.diagnostics.plot_posterior_2d(posterior_draws[0], prior=generative_model.prior)
 ```
 
-<img src="img/showcase_posterior.png" width=45% height=45%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_posterior.png?raw=true" width=45% height=45%>
 
 We see clearly how the posterior shrinks relative to the prior for both
 model parameters as a result of conditioning on the data.

@@ -161,7 +161,7 @@ amortized inference if the generative model is a poor representation of reality?
 A modified loss function optimizes the learned summary statistics towards a unit
 Gaussian and reliably detects model misspecification during inference time.
 
-![](docs/source/images/model_misspecification_amortized_sbi.png?raw=true)
+![](https://github.com/stefanradev93/BayesFlow/blob/master/docs/source/images/model_misspecification_amortized_sbi.png?raw=true)
 
 In order to use this method, you should only provide the `summary_loss_fun` argument
 to the `AmortizedPosterior` instance:

@@ -235,15 +235,15 @@ How good are these predicted probabilities in the closed world? We can have a lo
 cal_curves = bf.diagnostics.plot_calibration_curves(sims["model_indices"], model_probs)
 ```
 
-<img src="img/showcase_calibration_curves.png" width=65% height=65%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_calibration_curves.png?raw=true" width=65% height=65%>
 
 Our approximator shows excellent calibration, with the calibration curve being closely aligned to the diagonal, an expected calibration error (ECE) near 0 and most predicted probabilities being certain of the model underlying a data set. We can further assess patterns of misclassification with a confusion matrix:
 
 ```python
 conf_matrix = bf.diagnostics.plot_confusion_matrix(sims["model_indices"], model_probs)
 ```
 
-<img src="img/showcase_confusion_matrix.png" width=44% height=44%>
+<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_confusion_matrix.png?raw=true" width=44% height=44%>
 
 For the vast majority of simulated data sets, the "true" data-generating model is correctly identified. With these diagnostic results backing us up, we can proceed and apply our trained network to empirical data.
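Every hunk in this commit applies the same mechanical rewrite: a repo-relative image path (`img/...` or `docs/...`) becomes an absolute `github.com/.../blob/master/...` URL with `?raw=true`, so the images also render outside github.com (e.g. on PyPI). The edit could be scripted roughly as follows; `absolutize_images` and its regex are illustrative, not part of the repository:

```python
import re

# Base blob URL of this repository's default branch (from the commit's diff).
BASE = "https://github.com/stefanradev93/BayesFlow/blob/master/"

def absolutize_images(text: str) -> str:
    """Rewrite repo-relative image paths in README markup to absolute
    GitHub blob URLs ending in ?raw=true.

    Handles both forms seen in the diff:
      <img src="img/foo.png" ...>  and  ![](docs/.../bar.png)
    An existing ?raw=true suffix is consumed so it is not duplicated.
    """
    pattern = r'(?P<prefix>src="|!\[\]\()(?P<path>(?:img|docs)/[^"\)?]+)(?:\?raw=true)?'

    def repl(match: re.Match) -> str:
        return match.group("prefix") + BASE + match.group("path") + "?raw=true"

    return re.sub(pattern, repl, text)
```

Applied to a line such as `<img src="img/showcase_sbc.png" width=65% height=65%>`, this produces exactly the `+` side of the corresponding hunk above.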