Commit 190b187

Merge branch 'Future' of https://github.com/stefanradev93/BayesFlow into Future

2 parents: 8be19bd + 3cee5d6
1 file changed: README.md (9 additions, 11 deletions)
````diff
@@ -18,24 +18,22 @@ when working with intractable simulators whose behavior as a whole is too comple
 
 ![Overview](https://github.com/stefanradev93/BayesFlow/blob/Future/img/high_level_framework.png?raw=true)
 
-Currently, the following training approaches are implemented:
-1. Online training
-2. Offline training (external simulations)
-3. Offline training (internal simulations)
-4. Experience replay
-5. Round-based training
-
 ## Parameter Estimation
 
 The BayesFlow approach for amortized parameter estimation is based on our paper:
 
-Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., & Köthe, U. (2020). BayesFlow: Learning complex stochastic models with invertible neural networks. <em>IEEE Transactions on Neural Networks and Learning Systems</em>, available for free at: https://arxiv.org/abs/2003.06281. The general pattern for building amortized posterior approximators is illsutrated below:
-
-![BayesFlow](https://github.com/stefanradev93/BayesFlow/blob/Future/docs/source/tutorial_notebooks/img/trainer_connection.png?raw=true)
+Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., & Köthe, U. (2020). BayesFlow: Learning complex stochastic models with invertible neural networks. <em>IEEE Transactions on Neural Networks and Learning Systems</em>, available for free at: https://arxiv.org/abs/2003.06281.
 
 ### Minimal Example
 
-For instance, in order to tackle a simple memoryless model with 10 free parameters, we first need to set up an optional summary network and an inference network:
+```python
+def prior(D=2, mu=0., sigma=1.0):
+    return np.random.default_rng().normal(loc=mu, scale=sigma, size=D)
+
+
+def simulator(theta, n_obs=50, scale=1.0):
+    return np.random.default_rng().normal(loc=theta, scale=scale, size=(n_obs, theta.shape[0]))
+```
 ```python
 summary_net = InvariantNetwork()
 inference_net = InvertibleNetwork({'n_params': 10})
````
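The `prior`/`simulator` pair added in this commit assumes an ambient `import numpy as np`, as used elsewhere in the BayesFlow README. A self-contained sketch of the added snippet, with the import made explicit and a hypothetical shape check appended (the check is illustrative, not part of the commit):

```python
import numpy as np


def prior(D=2, mu=0., sigma=1.0):
    # Draw one D-dimensional parameter vector from a Gaussian prior.
    return np.random.default_rng().normal(loc=mu, scale=sigma, size=D)


def simulator(theta, n_obs=50, scale=1.0):
    # Simulate n_obs i.i.d. Gaussian observations centered on theta.
    return np.random.default_rng().normal(loc=theta, scale=scale, size=(n_obs, theta.shape[0]))


# Illustrative shape check: one prior draw, then a batch of simulated data.
theta = prior()
x = simulator(theta)
print(theta.shape, x.shape)  # (2,) (50, 2)
```

Together the two functions form the generative model p(theta) p(x | theta) that the summary and inference networks below are trained to invert.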
