4 | 4 | "cell_type": "markdown",
5 | 5 | "metadata": {},
6 | 6 | "source": [
7 | | - "# Moving from Bayesflow 1.0 to 2.0" |
| 7 | + "# Moving from Bayesflow 1.0 to 2.0\n", |
| 8 | + "\n", |
| 9 | + "_Author: Leona Odole_" |
8 | 10 | ] |
9 | 11 | }, |
10 | 12 | { |
11 | 13 | "cell_type": "markdown", |
12 | 14 | "metadata": {}, |
13 | 15 | "source": [ |
14 | | - "Current users of bayesflow will notice that with the update to 2.0 many things have changed and this short guide aims to clarify those changes. Users familiar with the previous Quickstart guide will notice that it follows a similar structure but assumes that users are already familiar with bayesflow so omits many of the the mathematical explaination in favor of demonstrating the differences in workflow. For a more detailed explaination of any of the bayesflow framework, users should read the linear regresion example notebook. \n", |
| 16 | + "Older users of bayesflow will notice that with the update to version 2.0 many things have changed. This short guide aims to clarify those changes. Users familiar with the previous Quickstart guide will notice that it follows a similar structure, but assumes that users are already familiar with bayesflow. So we omit many of the the mathematical explaination in favor of demonstrating the differences in workflow. For a more detailed explaination of any of the bayesflow framework, users should read, for example, the linear regresion example notebook. \n", |
15 | 17 | "\n", |
16 | | - "Additionally to avoid confusion, when necessary similarly named objects from _bayesflow1.0_ will have 1.0 after their name, whereas those from _bayesflow2.0_ will not. Finally a short table with a summary of the function call changes is provided at the end of the guide. " |
| 18 | + "Additionally to avoid confusion, similarly named objects from _bayesflow1.0_ will have 1.0 after their name, whereas those from _bayesflow2.0_ will not. Finally, a short table with a summary of the function call changes is provided at the end of the guide. " |
17 | 19 | ] |
18 | 20 | }, |
19 | 21 | { |
|
81 | 83 | "\n", |
82 | 84 | "def likelihood_model(theta, n_obs):\n", |
83 | 85 | " x = np.random.normal(loc=theta, size=(n_obs, theta.shape[0]))\n", |
84 | | - " return dict(x=x)\n" |
| 86 | + " return dict(x=x)" |
85 | 87 | ] |
86 | 88 | }, |
87 | 89 | { |
|
107 | 109 | "cell_type": "markdown", |
108 | 110 | "metadata": {}, |
109 | 111 | "source": [ |
110 | | - "Whereas the new framework directly uses the likelihood and prior functions directly in the simulator. We also a define a meta function which allows us to dynamically set the batch size. " |
| 112 | + "Whereas the new framework directly uses the likelihood and prior functions directly in the simulator. We also a define a meta function which allows us, for example, to dynamically set the number of observations per simulated dataset. " |
111 | 113 | ] |
112 | 114 | }, |
113 | 115 | { |
|
116 | 118 | "metadata": {}, |
117 | 119 | "outputs": [], |
118 | 120 | "source": [ |
119 | | - "def meta(batch_size):\n", |
| 121 | + "def meta():\n", |
120 | 122 | " return dict(n_obs=1)\n", |
121 | 123 | "\n", |
122 | 124 | "simulator = bf.make_simulator([theta_prior, likelihood_model], meta_fn=meta)" |
|
146 | 148 | "source": [ |
147 | 149 | "### 2. Adapter and Data Configuration\n", |
148 | 150 | "\n", |
149 | | - "In _bayesflow2.0_ we now need to specify the data configuration. For example we should specify which variables are `summary_variables` meaning observations that will be summarized in the summary network, the `inference_variables` meaning the prior draws on which we're interested in training the posterior network and the `inference_conditions` which specify our number of observations. Previously these things were inferred from the type of network used, but now they should be defined explictly with the `adapter`. This allows users to ??? " |
| 151 | + "In _bayesflow2.0_ we now need to specify the data configuration. For example we should specify which variables are `summary_variables` meaning observations that will be summarized in the summary network, the `inference_variables` meaning the prior draws on which we're interested in training the posterior network and the `inference_conditions` which specify our number of observations. Previously these things were inferred from the type of network used, but now they should be defined explictly with the `adapter`. The new approach is much more explicit and extensible. It also makes it easier to change individual settings, while keeping other settings at their defaults." |
150 | 152 | ] |
151 | 153 | }, |
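| | + {
| | + "cell_type": "markdown",
| | + "metadata": {},
| | + "source": [
| | + "As a rough, minimal sketch (assuming the simulator outputs are keyed `theta`, `x`, and `n_obs` as above), an adapter could look something like the following; the exact chain of transforms will depend on your model."
| | + ]
| | + },
| | + {
| | + "cell_type": "code",
| | + "execution_count": null,
| | + "metadata": {},
| | + "outputs": [],
| | + "source": [
| | + "# minimal adapter sketch (the keys 'theta', 'x', 'n_obs' are assumed from the simulator above)\n",
| | + "adapter = (\n",
| | + "    bf.adapters.Adapter()\n",
| | + "    .convert_dtype(\"float64\", \"float32\")\n",
| | + "    .concatenate([\"theta\"], into=\"inference_variables\")\n",
| | + "    .concatenate([\"x\"], into=\"summary_variables\")\n",
| | + "    .rename(\"n_obs\", \"inference_conditions\")\n",
| | + ")"
| | + ]
| | + },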
152 | 154 | { |
|
224 | 226 | "cell_type": "markdown", |
225 | 227 | "metadata": {}, |
226 | 228 | "source": [ |
227 | | - "Previously the actual training and amortization was done in two steps with two different objects the `Amortizer1.0` and `Trainer1.0` . First users would create an amortizer containing the summary and inference networks." |
| 229 | + "Previously the actual training and amortization was done in two steps with two different objects the `Amortizer1.0` and `Trainer1.0`. First, users would create an amortizer containing the summary and inference networks." |
228 | 230 | ] |
229 | 231 | }, |
230 | 232 | { |
|
266 | 268 | "cell_type": "markdown", |
267 | 269 | "metadata": {}, |
268 | 270 | "source": [ |
269 | | - "Whereas previously a `Trainer1.0` object for training, now users call fit on the `approximator` directly. For additional flexibility in training the `approximator` also has two additional arguments the `learning rate` and `optimizer`. The optimizer can be any keras optimizer." |
| 271 | + "Whereas previously a `Trainer1.0` object for training, now users call fit on the `approximator` directly. For additional flexibility in training the `approximator` also has two additional arguments the `learning_rate` and `optimizer`. The optimizer can be any keras optimizer." |
270 | 272 | ] |
271 | 273 | }, |
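| | + {
| | + "cell_type": "markdown",
| | + "metadata": {},
| | + "source": [
| | + "For instance, a keras optimizer could be set up roughly as follows (the learning rate value here is purely illustrative):"
| | + ]
| | + },
| | + {
| | + "cell_type": "code",
| | + "execution_count": null,
| | + "metadata": {},
| | + "outputs": [],
| | + "source": [
| | + "import keras\n",
| | + "\n",
| | + "# any keras optimizer works; the learning rate is only an illustrative value\n",
| | + "learning_rate = 1e-4\n",
| | + "optimizer = keras.optimizers.Adam(learning_rate=learning_rate)"
| | + ]
| | + },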
272 | 274 | { |
|
283 | 285 | "cell_type": "markdown", |
284 | 286 | "metadata": {}, |
285 | 287 | "source": [ |
286 | | - "Users must then compile the `approximator` in oder to ??? " |
| 288 | + "Users must then compile the `approximator` with the `optimizer` to make everything ready for training." |
287 | 289 | ] |
288 | 290 | }, |
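| | + {
| | + "cell_type": "markdown",
| | + "metadata": {},
| | + "source": [
| | + "A rough sketch of compiling and then fitting is shown below; the epoch, batch, and iteration numbers are illustrative, and the exact `fit` arguments depend on the chosen training strategy."
| | + ]
| | + },
| | + {
| | + "cell_type": "code",
| | + "execution_count": null,
| | + "metadata": {},
| | + "outputs": [],
| | + "source": [
| | + "# compile with the chosen optimizer, then train online from the simulator\n",
| | + "# (epochs, num_batches, and batch_size are illustrative values)\n",
| | + "approximator.compile(optimizer=optimizer)\n",
| | + "history = approximator.fit(\n",
| | + "    epochs=30,\n",
| | + "    num_batches=200,\n",
| | + "    batch_size=64,\n",
| | + "    simulator=simulator,\n",
| | + ")"
| | + ]
| | + },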
289 | 291 | { |
|
323 | 325 | "source": [ |
324 | 326 | "## 5.Diagnostics \n", |
325 | 327 | "Another change was made in the model diagnostics, much of the functionality remains the same, but the naming convention has changes. For example previously users would plot losses by using \n", |
326 | | - "`bf.diagnostics.plot_losses()` in bf 2.0 we instead have all the plotting function group together in `bf.diagnostics.plots` which means the corresponding function in 2.0 is `bf.diagnostics.plots.loss()`." |
| 328 | + "`bf.diagnostics.plot_losses()`. In *bayesflow2.0*, we instead have all the plotting function grouped together in `bf.diagnostics.plots`. This means, for example, that the loss function is now in `bf.diagnostics.plots.loss()`." |
327 | 329 | ] |
328 | 330 | }, |
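| | + {
| | + "cell_type": "markdown",
| | + "metadata": {},
| | + "source": [
| | + "For example, the training loss can be plotted from the history returned by `fit` (a minimal sketch, assuming `history` from the training step above):"
| | + ]
| | + },
| | + {
| | + "cell_type": "code",
| | + "execution_count": null,
| | + "metadata": {},
| | + "outputs": [],
| | + "source": [
| | + "# plot the training loss from the keras history returned by approximator.fit\n",
| | + "fig = bf.diagnostics.plots.loss(history)"
| | + ]
| | + },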
329 | 331 | { |
|
341 | 343 | "cell_type": "markdown", |
342 | 344 | "metadata": {}, |
343 | 345 | "source": [ |
344 | | - "This was done as we have also added diagnostic metrics such as calibration error, posterior contraction, and root mean squared error. These functions can accordingly be found in `bf.diagnostics.metrics` but for more information please see the API. " |
| 346 | + "This was done as we have also added diagnostic metrics such as calibration error, posterior contraction, and root mean squared error. These functions can accordingly be found in `bf.diagnostics.metrics`. For more information please see the documentation." |
345 | 347 | ] |
346 | 348 | }, |
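| | + {
| | + "cell_type": "markdown",
| | + "metadata": {},
| | + "source": [
| | + "As a hypothetical sketch (the exact function names and signatures should be checked against the documentation), such metrics could be computed from posterior draws and the corresponding ground-truth parameters:"
| | + ]
| | + },
| | + {
| | + "cell_type": "code",
| | + "execution_count": null,
| | + "metadata": {},
| | + "outputs": [],
| | + "source": [
| | + "# hypothetical sketch: 'samples' are posterior draws for simulated datasets,\n",
| | + "# 'theta_true' the parameters that generated them (both assumed to exist)\n",
| | + "contraction = bf.diagnostics.metrics.posterior_contraction(estimates=samples, targets=theta_true)\n",
| | + "rmse = bf.diagnostics.metrics.root_mean_squared_error(estimates=samples, targets=theta_true)"
| | + ]
| | + },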
347 | | - { |
348 | | - "cell_type": "markdown", |
349 | | - "metadata": {}, |
350 | | - "source": [ |
351 | | - "# Other New Features? " |
352 | | - ] |
353 | | - }, |
354 | | - { |
355 | | - "cell_type": "markdown", |
356 | | - "metadata": {}, |
357 | | - "source": [] |
358 | | - }, |
359 | 349 | { |
360 | 350 | "cell_type": "markdown", |
361 | 351 | "metadata": {}, |
362 | 352 | "source": [ |
363 | 353 | "# Summary Change Table \n", |
364 | 354 | "\n", |
365 | | - "| 1.0 | 2.0 Useage |\n", |
| 355 | + "| Bayesflow 1.0 | Bayesflow 2.0 useage |\n", |
366 | 356 | "| :--------| :---------| \n", |
367 | 357 | "| `Prior`, `Simulator` | Defunct and no longer standalone objects but incorporated into `simulator` | \n", |
368 | 358 | "|`GenerativeModel` | Defunct with it's functionality having been taken over by `simulations.make_simulator` | \n", |
369 | 359 | "| `training.configurator` | Functionality taken over by `Adapter` | \n", |
370 | 360 | "|`Trainer` | Functionality taken over by `fit` method of `Approximator` | \n", |
371 | 361 | "| `AmortizedPosterior`| Renamed to `Approximator` | " |
372 | 362 | ] |
373 | | - }, |
374 | | - { |
375 | | - "cell_type": "code", |
376 | | - "execution_count": null, |
377 | | - "metadata": {}, |
378 | | - "outputs": [], |
379 | | - "source": [] |
380 | 363 | } |
381 | 364 | ], |
382 | 365 | "metadata": { |
|