
Commit 91d3f8b

Add docs and minimal example

1 parent 79b4fbb commit 91d3f8b

File tree

2 files changed: +81 -16 lines changed

README.md

Lines changed: 31 additions & 16 deletions

@@ -9,7 +9,7 @@ It provides users with:
 - A user-friendly API for rapid Bayesian workflows
 - A rich collection of neural network architectures
-- Multi-Backend Support via [Keras3](https://keras.io/keras_3/): You can use [PyTorch](https://github.com/pytorch/pytorch), [TensorFlow](https://github.com/tensorflow/tensorflow), or [JAX](https://github.com/google/jax)
+- Multi-backend support via [Keras3](https://keras.io/keras_3/): You can use [PyTorch](https://github.com/pytorch/pytorch), [TensorFlow](https://github.com/tensorflow/tensorflow), or [JAX](https://github.com/google/jax)
 
 BayesFlow is designed to be a flexible and efficient tool that enables rapid statistical inference
 fueled by continuous progress in generative AI and Bayesian inference.

@@ -29,11 +29,36 @@ neural networks for parameter estimation, model comparison, and model validation
 when working with intractable simulators whose behavior as a whole is too
 complex to be described analytically.
 
-## Disclaimer
+## Getting Started
 
-This is the current dev version of BayesFlow, which constitutes a complete refactor of the library built on Keras 3. This way, you can now use any of the major deep learning libraries as backend for BayesFlow. The refactor is still work in progress with some of the advanced features not yet implemented. We are actively working on them and promise to catch up soon.
+Using the high-level interface is easy, as demonstrated by the minimal working example below:
 
-If you encounter any issues, please don't hesitate to open an issue here on [Github](https://github.com/bayesflow-org/bayesflow/issues) or ask questions on our [Discourse Forums](https://discuss.bayesflow.org/).
+```python
+import bayesflow as bf
+
+workflow = bf.BasicWorkflow(
+    inference_network=bf.networks.FlowMatching(),
+    summary_network=bf.networks.TimeSeriesTransformer(),
+    inference_variables=["parameters"],
+    summary_variables=["observables"],
+    simulator=bf.simulators.SIR()
+)
+
+history = workflow.fit_online(epochs=50, batch_size=32, num_batches_per_epoch=500)
+
+diagnostics = workflow.plot_default_diagnostics(test_data=300)
+```
+
+For an in-depth exposition, check out our walkthrough notebooks below. More tutorials are always welcome!
+
+1. [Linear regression starter example](examples/Linear_Regression_Starter.ipynb)
+2. [From ABC to BayesFlow](examples/From_ABC_to_BayesFlow.ipynb)
+3. [Two moons starter example](examples/Two_Moons_Starter.ipynb)
+4. [Rapid iteration with point estimators](examples/Lotka_Volterra_point_estimation_and_expert_stats.ipynb)
+5. [SIR model with custom summary network](examples/SIR_Posterior_Estimation.ipynb)
+6. [Hyperparameter optimization](examples/Hyperparameter_Optimization.ipynb)
+7. [Bayesian experimental design](examples/Bayesian_Experimental_Design.ipynb)
+8. [Simple model comparison example](examples/One_Sample_TTest.ipynb)
 
 ## Install

@@ -89,19 +114,9 @@ cd <local-path-to-bayesflow-repository>
 conda env create --file environment.yaml --name bayesflow
 ```
 
-## Getting Started
-
-Check out some of our walk-through notebooks below. We are actively working on porting all notebooks to the new interface so more will be available soon!
+## Reporting Issues
 
-1. [Linear regression starter example](examples/Linear_Regression_Starter.ipynb)
-2. [From ABC to BayesFlow](examples/From_ABC_to_BayesFlow.ipynb)
-3. [Two moons starter example](examples/Two_Moons_Starter.ipynb)
-4. [SIR model with custom summary network](examples/SIR_Posterior_Estimation.ipynb)
-5. [Hyperparameter optimization](examples/Hyperparameter_Optimization.ipynb)
-6. [Bayesian experimental design](examples/Bayesian_Experimental_Design.ipynb)
-7. [Simple model comparison example (One-Sample T-Test)](examples/One_Sample_TTest.ipynb)
-8. [Rapid iteration with point estimation and expert statistics for Lotka-Volterra dynamics](examples/Lotka_Volterra_point_estimation_and_expert_stats.ipynb)
-9. More coming soon...
+If you encounter any issues, please don't hesitate to open an issue here on [Github](https://github.com/bayesflow-org/bayesflow/issues) or ask questions on our [Discourse Forums](https://discuss.bayesflow.org/).
 
 ## Documentation \& Help

bayesflow/approximators/point_approximator.py

Lines changed: 50 additions & 0 deletions

@@ -24,6 +24,29 @@ def estimate(
         split: bool = False,
         **kwargs,
     ) -> dict[str, dict[str, np.ndarray]]:
+        """
+        Provides point estimates based on provided conditions (e.g., observables).
+
+        This method processes input conditions, computes estimates, applies necessary adapter transformations,
+        and optionally splits the resulting arrays along the last axis.
+
+        Parameters
+        ----------
+        conditions : dict[str, np.ndarray]
+            A dictionary mapping variable names to NumPy arrays representing the conditions
+            for the estimation process.
+        split : bool, optional
+            If True, the estimated arrays are split along the last axis, by default False.
+        **kwargs
+            Additional keyword arguments passed to underlying processing functions.
+
+        Returns
+        -------
+        dict[str, dict[str, np.ndarray]]
+            A nested dictionary where the top-level keys correspond to original variable names,
+            and values contain dictionaries mapping estimation results to NumPy arrays.
+        """
+
         conditions = self._prepare_conditions(conditions, **kwargs)
         estimates = self._estimate(**conditions, **kwargs)
         estimates = self._apply_inverse_adapter_to_estimates(estimates, **kwargs)

@@ -45,6 +68,33 @@ def sample(
         split: bool = False,
         **kwargs,
     ) -> dict[str, np.ndarray]:
+        """
+        Generate samples from point estimates based on provided conditions. These samples
+        will generally not correspond to samples from the fully Bayesian posterior, since
+        they will assume some parametric form (e.g., Gaussian in the case of mean score).
+
+        This method draws a specified number of samples according to the given conditions,
+        applies necessary transformations, and optionally splits the resulting arrays along the last axis.
+
+        Parameters
+        ----------
+        num_samples : int
+            The number of samples to generate.
+        conditions : dict[str, np.ndarray]
+            A dictionary mapping variable names to NumPy arrays representing the conditions
+            for the sampling process.
+        split : bool, optional
+            If True, the sampled arrays are split along the last axis, by default False.
+        **kwargs
+            Additional keyword arguments passed to underlying processing functions.
+
+        Returns
+        -------
+        dict[str, np.ndarray]
+            A dictionary where keys correspond to variable names and values are NumPy arrays
+            containing the generated samples.
+        """
+
         conditions = self._prepare_conditions(conditions, **kwargs)
         samples = self._sample(num_samples, **conditions, **kwargs)
         samples = self._apply_inverse_adapter_to_samples(samples, **kwargs)
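The "split along the last axis" behavior that both new docstrings describe can be illustrated with a plain-NumPy sketch. This is only an assumption about the intended semantics, not BayesFlow's actual implementation: the helper `split_last_axis` and the mock `estimates` dictionary below are hypothetical, chosen to show how an array of shape `(..., K)` would decompose into `K` single-component slices.

```python
import numpy as np

def split_last_axis(arrays: dict[str, np.ndarray]) -> dict[str, list[np.ndarray]]:
    # Hypothetical helper: split each array into single-column slices
    # along its last axis, keeping the variable-name keys intact.
    return {name: np.split(arr, arr.shape[-1], axis=-1) for name, arr in arrays.items()}

# Mock estimates: 4 datasets, 3 estimated quantities per dataset.
estimates = {"parameters": np.arange(12.0).reshape(4, 3)}

parts = split_last_axis(estimates)
print(len(parts["parameters"]))      # 3 slices, one per estimated quantity
print(parts["parameters"][0].shape)  # (4, 1)
```

Under this reading, `split=False` returns the arrays as-is, while `split=True` would hand back per-quantity slices that are convenient for plotting or per-parameter diagnostics.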
