examples/samplers/MLDA_introduction.ipynb (1 addition, 1 deletion)
@@ -63,7 +63,7 @@
 "\n",
 "[Gravity surveying](./MLDA_gravity_surveying.ipynb): In this notebook, we use MLDA to solve a 2-dimensional gravity surveying inverse problem. Evaluating the likelihood requires solving a PDE, which we do using [scipy](https://www.scipy.org/). We also compare the performance of MLDA with other PyMC samplers (Metropolis, DEMetropolisZ).\n",
 "\n",
-"[Variance reduction 1](./MLDA_variance_reduction_linear_regression.ipynb) and [Variance reduction 2](https://github.com/alan-turing-institute/pymc/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_variance_reduction_groundwater.ipynb) (external link): These two notebooks demonstrate the variance reduction feature in a linear regression model and a groundwater flow model. This feature allows the user to define a quantity of interest that they need to estimate using the MCMC samples. It then collects those quantities of interest, as well as differences of these quantities between levels, during MLDA sampling. The collected quantities can then be used to produce an estimate which has lower variance than a standard estimate that uses samples from the fine chain only. The first notebook does not have external dependencies, while the second one requires FEniCS. Note that the second notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency.\n",
+"[Variance reduction 1](./MLDA_variance_reduction_linear_regression.ipynb) and [Variance reduction 2](https://github.com/alan-turing-institute/pymc3/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_variance_reduction_groundwater.ipynb) (external link): These two notebooks demonstrate the variance reduction feature in a linear regression model and a groundwater flow model. This feature allows the user to define a quantity of interest that they need to estimate using the MCMC samples. It then collects those quantities of interest, as well as differences of these quantities between levels, during MLDA sampling. The collected quantities can then be used to produce an estimate which has lower variance than a standard estimate that uses samples from the fine chain only. The first notebook does not have external dependencies, while the second one requires FEniCS. Note that the second notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency.\n",
 "\n",
 "[Adaptive error model](https://github.com/alan-turing-institute/pymc3/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_adaptive_error_model.ipynb) (external link): In this notebook we use MLDA to tackle another inverse problem: groundwater flow modeling. The aim is to infer the posterior distribution of model parameters (hydraulic conductivity) given data (measurements of hydraulic head). In this example we make use of PyTensor Ops in order to define a \"black box\" likelihood, i.e. a likelihood that uses external code. Specifically, our likelihood uses the [FEniCS](https://fenicsproject.org/) library to solve a PDE. This is a common scenario, as PDEs of this type are slow to solve with scipy or other standard libraries. Note that this notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency. We employ the adaptive error model (AEM) feature and compare the performance of basic MLDA with AEM-enhanced MLDA. The idea of the AEM is to estimate the mean and variance of the forward-model error between adjacent levels, i.e. to estimate the bias of the coarse forward model relative to the fine forward model, and to use those estimates to correct the coarse model. Using this technique should improve ESS/sec on the fine level.\n",
examples/samplers/MLDA_introduction.myst.md (1 addition, 1 deletion)
@@ -57,7 +57,7 @@ Please note that the MLDA sampler is new in PyMC. The user should be extra criti
 [Gravity surveying](./MLDA_gravity_surveying.ipynb): In this notebook, we use MLDA to solve a 2-dimensional gravity surveying inverse problem. Evaluating the likelihood requires solving a PDE, which we do using [scipy](https://www.scipy.org/). We also compare the performance of MLDA with other PyMC samplers (Metropolis, DEMetropolisZ).
-[Variance reduction 1](./MLDA_variance_reduction_linear_regression.ipynb) and [Variance reduction 2](https://github.com/alan-turing-institute/pymc/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_variance_reduction_groundwater.ipynb) (external link): These two notebooks demonstrate the variance reduction feature in a linear regression model and a groundwater flow model. This feature allows the user to define a quantity of interest that they need to estimate using the MCMC samples. It then collects those quantities of interest, as well as differences of these quantities between levels, during MLDA sampling. The collected quantities can then be used to produce an estimate which has lower variance than a standard estimate that uses samples from the fine chain only. The first notebook does not have external dependencies, while the second one requires FEniCS. Note that the second notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency.
+[Variance reduction 1](./MLDA_variance_reduction_linear_regression.ipynb) and [Variance reduction 2](https://github.com/alan-turing-institute/pymc3/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_variance_reduction_groundwater.ipynb) (external link): These two notebooks demonstrate the variance reduction feature in a linear regression model and a groundwater flow model. This feature allows the user to define a quantity of interest that they need to estimate using the MCMC samples. It then collects those quantities of interest, as well as differences of these quantities between levels, during MLDA sampling. The collected quantities can then be used to produce an estimate which has lower variance than a standard estimate that uses samples from the fine chain only. The first notebook does not have external dependencies, while the second one requires FEniCS. Note that the second notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency.
 [Adaptive error model](https://github.com/alan-turing-institute/pymc3/blob/mlda_all_notebooks/docs/source/notebooks/MLDA_adaptive_error_model.ipynb) (external link): In this notebook we use MLDA to tackle another inverse problem: groundwater flow modeling. The aim is to infer the posterior distribution of model parameters (hydraulic conductivity) given data (measurements of hydraulic head). In this example we make use of PyTensor Ops in order to define a "black box" likelihood, i.e. a likelihood that uses external code. Specifically, our likelihood uses the [FEniCS](https://fenicsproject.org/) library to solve a PDE. This is a common scenario, as PDEs of this type are slow to solve with scipy or other standard libraries. Note that this notebook is outside the core PyMC repository because FEniCS is not a PyMC dependency. We employ the adaptive error model (AEM) feature and compare the performance of basic MLDA with AEM-enhanced MLDA. The idea of the AEM is to estimate the mean and variance of the forward-model error between adjacent levels, i.e. to estimate the bias of the coarse forward model relative to the fine forward model, and to use those estimates to correct the coarse model. Using this technique should improve ESS/sec on the fine level.
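The "black box" likelihood mentioned above is worth sketching: an external solver is wrapped in a PyTensor `Op` so PyMC can evaluate it inside a model. The sketch below follows the standard `itypes`/`otypes` pattern from the PyMC black-box-likelihood example; `external_loglike` is a trivial stand-in for the FEniCS PDE solve, not the notebook's actual code:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.graph.op import Op


class BlackBoxLogLike(Op):
    """Wrap an arbitrary Python log-likelihood so PyMC can call it."""

    itypes = [pt.dvector]  # one input: the parameter vector theta
    otypes = [pt.dscalar]  # one output: the scalar log-likelihood

    def __init__(self, loglike_fn, data):
        self.loglike_fn = loglike_fn
        self.data = data

    def perform(self, node, inputs, outputs):
        (theta,) = inputs  # numpy array supplied by PyTensor at runtime
        outputs[0][0] = np.asarray(self.loglike_fn(theta, self.data))


def external_loglike(theta, data):
    # Stand-in for the expensive external solve (FEniCS in the notebook):
    # here, a Gaussian log-likelihood around a trivial "simulated" output.
    prediction = theta[0] + theta[1]
    return -0.5 * np.sum((data - prediction) ** 2)


rng = np.random.default_rng(42)
data = rng.normal(1.5, 1.0, size=20)

with pm.Model():
    theta = pm.Normal("theta", mu=0, sigma=10, shape=2)
    pm.Potential("loglike", BlackBoxLogLike(external_loglike, data)(theta))
```

The AEM correction itself is orthogonal to this wrapper: it estimates the mean and variance of the fine-minus-coarse model error and uses them to shift, and widen, the coarse-level likelihood.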
@@ -380,29 +384,33 @@ Our trace plot also indicates there is a small effect of social fragmentation on
 The payoff of all this work is that we can now visualize what it means to decompose the variance into explanatory, spatial, and unstructured parts. One way to make this vivid is to inspect each component of the model individually. We'll see what the model thinks NYC would look like if spatial effects were the only source of variance, then turn to the explanatory effect, and finally the random effect.
-We'll extract the means of several parameters to generate predictions. In the first case, we'll visualize only the predictions that come from the spatial component of the model. In other words, we are assuming $\rho = 1$ and we ignore $\theta$ and social fragmentation.
+In the first case, we'll visualize only the predictions that come from the spatial component of the model. In other words, we are assuming $\rho = 1$ and we ignore $\theta$ and social fragmentation.
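To make the $\rho = 1$ step concrete, here is a minimal NumPy sketch of recomposing predictions from one component of a BYM2-style convolution $\sigma(\sqrt{\rho}\,\phi + \sqrt{1-\rho}\,\theta)$. The variable names (`beta0`, `sigma`, `phi`, `theta`) and the use of posterior means are assumptions about the notebook's conventions, with random stand-ins for the fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_areas = 100

# Stand-ins for posterior means; in the notebook these would come from
# something like idata.posterior["phi"].mean(dim=("chain", "draw")).
beta0, sigma = -4.0, 0.5
phi = rng.normal(size=n_areas)    # spatially structured effect
theta = rng.normal(size=n_areas)  # unstructured (i.i.d.) effect

# BYM2 convolution: sigma * (sqrt(rho) * phi + sqrt(1 - rho) * theta).
# Setting rho = 1 keeps only the spatial term, dropping theta and the
# social-fragmentation covariate from the linear predictor.
spatial_only = np.exp(beta0 + sigma * phi)  # per-area accident rate
```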
@@ -413,40 +421,53 @@ Spatial smoothing is especially useful for forecasting. Imagine there was a low-
 We can notice that there are three neighborhoods of risk, represented by large yellow clusters, that are well captured. This suggests that much of the explanation for traffic accidents has to do with unidentified but spatially structured causes. By contrast, the social fragmentation index explains only a single neighborhood of risk in the bottom center of the map (with a few small pockets of success elsewhere).
 Finally, we might look at the unstructured variance by assuming $\rho = 0$. If our model managed to partition variance successfully, there should not be many spatial clusters left over in the unstructured variance; instead, variance should be scattered all over the map.
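Reusing the variables from the sketch above, the unstructured surface is the same recomposition with $\rho = 0$; under the same naming assumptions:

```python
# rho = 0 keeps only the unstructured effect; a map of this surface
# should look spatially random if the model separated the components.
unstructured_only = np.exp(beta0 + sigma * theta)
```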