|
57 | 57 | "\n", |
58 | 58 | "In this tutorial, we will illustrate how to perform posterior inference on simple, stationary SIR-like models (complex models will be tackled in a subsequent notebook). SIR-like models make suitable illustrative examples, since they generate time series and their outputs arise from solving a system of ordinary differential equations (ODEs).\n", |
59 | 59 | "\n", |
60 | | - "The details for tackling stochastic epidemiological models are described in our corresponding paper, which you can consult for a more formal exposition and a more comprehensive treatment of neural architectures:\n", |
| 60 | + "The details for tackling stochastic epidemiological models with neural networks are described in our corresponding paper, which you can consult for a more formal exposition and a more comprehensive treatment of neural architectures:\n", |
61 | 61 | "\n", |
62 | 62 | "<em>OutbreakFlow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the COVID-19 pandemics in Germany</em> https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009472" |
63 | 63 | ] |
|
69 | 69 | "metadata": {}, |
70 | 70 | "outputs": [], |
71 | 71 | "source": [ |
72 | | - "RNG = np.random.default_rng(2022)" |
| 72 | + "RNG = np.random.default_rng(2023)" |
73 | 73 | ] |
74 | 74 | }, |
75 | 75 | { |
|
148 | 148 | "outputs": [], |
149 | 149 | "source": [ |
150 | 150 | "def model_prior():\n", |
151 | | - " \"\"\"Generates random draws from the prior.\"\"\"\n", |
| 151 | + " \"\"\"Generates a random draw from the joint prior.\"\"\"\n", |
152 | 152 | "\n", |
153 | 153 | " lambd = RNG.lognormal(mean=np.log(0.4), sigma=0.5)\n", |
154 | 154 | " mu = RNG.lognormal(mean=np.log(1 / 8), sigma=0.2)\n", |
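The two lognormal draws in this hunk pin the prior medians at 0.4 and 1/8, since a lognormal with log-mean `m` has median `exp(m)`. A minimal, self-contained sketch restricted to the two rates shown above (the full model in the notebook adds further parameters):

```python
import numpy as np

RNG = np.random.default_rng(2023)

def model_prior():
    """Generates a random draw from the joint prior (only the two rates shown above)."""
    lambd = RNG.lognormal(mean=np.log(0.4), sigma=0.5)  # transmission rate, prior median 0.4
    mu = RNG.lognormal(mean=np.log(1 / 8), sigma=0.2)   # recovery rate, prior median 1/8
    return np.array([lambd, mu])
```

Note that `mean` here refers to the mean of the underlying normal on the log scale, so the prior *mean* of `lambd` is actually `0.4 * exp(0.5**2 / 2)`, slightly above its median of 0.4.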
|
173 | 173 | "id": "retained-namibia", |
174 | 174 | "metadata": {}, |
175 | 175 | "source": [ |
176 | | - "During training, we will also standardize the prior draws, that is, ensure zero means and unit scale. We will do this purely for technical reasons - neural networks like scaled values. In addition, our current prior ranges differ vastly, so each parameter will contribute disproportionately to the loss function.\n", |
| 176 | + "During training, we will also standardize the prior draws, that is, ensure zero location and unit scale. We will do this purely for technical reasons - neural networks like scaled values. In addition, our current prior ranges differ vastly, so each parameter axis may contribute disproportionately to the loss function.\n", |
177 | 177 | "\n", |
178 | 178 | "Here, we will use the `estimate_means_and_stds()` method of a `Prior` instance, which will estimate the prior means and standard deviations from random draws. We could have also just taken the analytic means and standard deviations, but these may not be available in all settings (e.g., implicit priors).\n", |
179 | 179 | "\n", |
180 | | - "<strong>Caution:</strong> Make sure you have a seed or you set a seed whenever you are doing a Monte-Carlo estimation, since your results might differ slightly due to the empirical variation of the estimates!" |
| 180 | + "<strong>Caution:</strong> Make sure you have a seed or you set a seed whenever you are doing Monte Carlo estimation, since your results might differ slightly due to the empirical variation of the estimates!" |
181 | 181 | ] |
182 | 182 | }, |
183 | 183 | { |
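The standardization step described above can be sketched without any library support. This is an assumption-laden stand-in for what a `Prior` instance's `estimate_means_and_stds()` method does internally, shown only to make the Monte Carlo idea concrete:

```python
import numpy as np

def estimate_means_and_stds(prior_fn, n_draws=10_000):
    """Monte Carlo estimate of prior means and standard deviations
    (a sketch of what Prior.estimate_means_and_stds() might do)."""
    draws = np.array([prior_fn() for _ in range(n_draws)])
    return draws.mean(axis=0), draws.std(axis=0)

def standardize(theta, means, stds):
    """Maps parameter draws to (approximately) zero location and unit scale."""
    return (theta - means) / stds
```

Because the means and standard deviations are themselves Monte Carlo estimates, the caution about seeding applies here: rerunning without a fixed seed yields slightly different scaling constants.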
|
215 | 215 | "\n", |
216 | 216 | "def convert_params(mu, phi):\n", |
217 | 217 | " \"\"\"Helper function to convert mean/dispersion parameterization of a negative binomial to N and p,\n", |
218 | | - " as expected by numpy.\n", |
| 218 | + " as expected by numpy's negative_binomial.\n", |
219 | 219 | "\n", |
220 | 220 | " See https://en.wikipedia.org/wiki/Negative_binomial_distribution#Alternative_formulations\n", |
221 | 221 | " \"\"\"\n", |
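The conversion the docstring refers to can be written out explicitly. For a negative binomial with mean `mu` and dispersion `phi` (variance `mu + mu**2 / phi`), the standard identity from the linked Wikipedia section gives `n = phi` and `p = phi / (phi + mu)`:

```python
import numpy as np

def convert_params(mu, phi):
    """Converts the mean/dispersion parameterization (mu, phi) of a negative
    binomial to the (n, p) parameterization expected by numpy's
    negative_binomial, where the variance equals mu + mu**2 / phi."""
    p = phi / (phi + mu)  # success probability
    n = phi               # number of successes (dispersion)
    return n, p
```

As a quick sanity check, `RNG.negative_binomial(*convert_params(10.0, 2.0), size=...)` should produce samples with mean near 10 and variance near `10 + 10**2 / 2 = 60`.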
|
227 | 227 | "\n", |
228 | 228 | "\n", |
229 | 229 | "def stationary_SIR(params, N, T, eps=1e-5):\n", |
230 | | - " \"\"\"Performs a forward simulation from the stationary SIR model given a random draw from the prior,\"\"\"\n", |
| 230 | + " \"\"\"Performs a forward simulation from the stationary SIR model given a random draw from the prior.\"\"\"\n", |
231 | 231 | "\n", |
232 | 232 | " # Extract parameters and round I0 and D\n", |
233 | 233 | " lambd, mu, D, I0, psi = params\n", |
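To make the simulator's structure concrete, here is a hypothetical, self-contained sketch of a `stationary_SIR` forward pass: discrete-time SIR dynamics, a reporting delay of `D` days, and negative binomial observation noise with dispersion `psi` on the reported cases. The notebook's actual implementation may differ in detail (e.g., in how the delay and the observation model are handled), so treat this only as an illustration of the signature shown above:

```python
import numpy as np

RNG = np.random.default_rng(2023)

def stationary_SIR(params, N, T, eps=1e-5):
    """Sketch of a forward simulation from a stationary SIR model
    (assumed delay and observation model, not the notebook's exact code)."""
    # Extract parameters and round I0 and D
    lambd, mu, D, I0, psi = params
    D, I0 = int(round(D)), int(round(I0))

    # Discrete-time SIR dynamics for T steps
    S, I = N - I0, I0
    new_cases = np.zeros(T)
    for t in range(T):
        dI = lambd * S * I / N  # newly infected this step
        dR = mu * I             # newly recovered this step
        S, I = S - dI, I + dI - dR
        new_cases[t] = dI

    # Shift reports by D days, then add negative binomial noise with
    # mean new_cases and dispersion psi (eps keeps the mean positive)
    delayed = np.concatenate([np.zeros(D), new_cases])[:T]
    return RNG.negative_binomial(psi, psi / (psi + delayed + eps))
```

With `n = psi` and `p = psi / (psi + delayed + eps)`, the observation mean works out to `delayed + eps`, so the noise model reduces to the mean/dispersion conversion defined earlier.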
|