
Commit fd9b13b

manual fix for toc anchor links
1 parent ccfd11a commit fd9b13b

File tree

1 file changed (+42, -4 lines)

examples/Intro_Amortized_Posterior_Estimation.ipynb

Lines changed: 42 additions & 4 deletions
Original file line number · Diff line number · Diff line change
@@ -8,7 +8,44 @@
88
},
99
"source": [
1010
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n",
11-
"<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href=\"#Defining-the-Generative-Model\" data-toc-modified-id=\"Defining-the-Generative-Model-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Defining the Generative Model</a></span><ul class=\"toc-item\"><li><span><a href=\"#Prior\" data-toc-modified-id=\"Prior-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Prior</a></span></li><li><span><a href=\"#Simulator\" data-toc-modified-id=\"Simulator-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Simulator</a></span></li><li><span><a href=\"#Generative-Model\" data-toc-modified-id=\"Generative-Model-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Generative Model</a></span></li></ul></li><li><span><a href=\"#Defining-the-Neural-Approximator\" data-toc-modified-id=\"Defining-the-Neural-Approximator-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Defining the Neural Approximator</a></span><ul class=\"toc-item\"><li><span><a href=\"#Summary-Network\" data-toc-modified-id=\"Summary-Network-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Summary Network</a></span></li><li><span><a href=\"#Inference-Network\" data-toc-modified-id=\"Inference-Network-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Inference Network</a></span></li><li><span><a href=\"#Amortized-Posterior\" data-toc-modified-id=\"Amortized-Posterior-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Amortized Posterior</a></span></li></ul></li><li><span><a href=\"#Defining-the-Trainer\" data-toc-modified-id=\"Defining-the-Trainer-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Defining the Trainer</a></span></li><li><span><a href=\"#Training-Phase\" data-toc-modified-id=\"Training-Phase-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Training Phase</a></span><ul class=\"toc-item\"><li><span><a href=\"#Online-Training\" data-toc-modified-id=\"Online-Training-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Online Training</a></span></li><li><span><a href=\"#Inspecting-the-Loss\" data-toc-modified-id=\"Inspecting-the-Loss-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Inspecting the Loss</a></span></li><li><span><a href=\"#Validating-Consistency\" data-toc-modified-id=\"Validating-Consistency-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Validating Consistency</a></span><ul class=\"toc-item\"><li><span><a href=\"#Latent-space-inspection\" data-toc-modified-id=\"Latent-space-inspection-5.3.1\"><span class=\"toc-item-num\">5.3.1&nbsp;&nbsp;</span>Latent space inspection</a></span></li><li><span><a href=\"#Simulation-Based-Calibration\" data-toc-modified-id=\"Simulation-Based-Calibration-5.3.2\"><span class=\"toc-item-num\">5.3.2&nbsp;&nbsp;</span>Simulation-Based Calibration</a></span></li><li><span><a href=\"#Posterior-z-score-and-contraction\" data-toc-modified-id=\"Posterior-z-score-and-contraction-5.3.3\"><span class=\"toc-item-num\">5.3.3&nbsp;&nbsp;</span>Posterior z-score and contraction</a></span></li></ul></li></ul></li><li><span><a href=\"#Inference-Phase\" data-toc-modified-id=\"Inference-Phase-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Inference Phase</a></span></li></ul></div>"
11+
"<div class=\"toc\">\n",
12+
" <ul class=\"toc-item\">\n",
13+
" <li><span><a href=\"#introduction\" data-toc-modified-id=\"introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Introduction</a></span></li>\n",
14+
" <li>\n",
15+
" <span><a href=\"#defining-the-generative-model\" data-toc-modified-id=\"defining-the-generative-model-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Defining the Generative Model</a></span>\n",
16+
" <ul class=\"toc-item\">\n",
17+
" <li><span><a href=\"#prior\" data-toc-modified-id=\"prior-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Prior</a></span></li>\n",
18+
" <li><span><a href=\"#simulator\" data-toc-modified-id=\"simulator-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Simulator</a></span></li>\n",
19+
" <li><span><a href=\"#generative-model\" data-toc-modified-id=\"generative-model-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Generative Model</a></span></li>\n",
20+
" </ul>\n",
21+
" </li>\n",
22+
" <li>\n",
23+
" <span><a href=\"#defining-the-neural-approximator\" data-toc-modified-id=\"defining-the-neural-approximator-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Defining the Neural Approximator</a></span>\n",
24+
" <ul class=\"toc-item\">\n",
25+
" <li><span><a href=\"#summary-network\" data-toc-modified-id=\"summary-network-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Summary Network</a></span></li>\n",
26+
" <li><span><a href=\"#inference-network\" data-toc-modified-id=\"inference-network-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Inference Network</a></span></li>\n",
27+
" <li><span><a href=\"#amortized-posterior\" data-toc-modified-id=\"amortized-posterior-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Amortized Posterior</a></span></li>\n",
28+
" </ul>\n",
29+
" </li>\n",
30+
" <li><span><a href=\"#defining-the-trainer\" data-toc-modified-id=\"defining-the-trainer-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Defining the Trainer</a></span></li>\n",
31+
" <li>\n",
32+
" <span><a href=\"#training-phase\" data-toc-modified-id=\"training-phase-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Training Phase</a></span>\n",
33+
" <ul class=\"toc-item\">\n",
34+
" <li><span><a href=\"#online-training\" data-toc-modified-id=\"online-training-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Online Training</a></span></li>\n",
35+
" <li><span><a href=\"#inspecting-the-loss\" data-toc-modified-id=\"inspecting-the-loss-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Inspecting the Loss</a></span></li>\n",
36+
" <li>\n",
37+
" <span><a href=\"#validating-consistency\" data-toc-modified-id=\"validating-consistency-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Validating Consistency</a></span>\n",
38+
" <ul class=\"toc-item\">\n",
39+
" <li><span><a href=\"#latent-space-inspection\" data-toc-modified-id=\"latent-space-inspection-5.3.1\"><span class=\"toc-item-num\">5.3.1&nbsp;&nbsp;</span>Latent space inspection</a></span></li>\n",
40+
" <li><span><a href=\"#simulation-based-calibration\" data-toc-modified-id=\"simulation-based-calibration-5.3.2\"><span class=\"toc-item-num\">5.3.2&nbsp;&nbsp;</span>Simulation-Based Calibration</a></span></li>\n",
41+
" <li><span><a href=\"#posterior-z-score-and-contraction\" data-toc-modified-id=\"posterior-z-score-and-contraction-5.3.3\"><span class=\"toc-item-num\">5.3.3&nbsp;&nbsp;</span>Posterior z-score and contraction</a></span></li>\n",
42+
" </ul>\n",
43+
" </li>\n",
44+
" </ul>\n",
45+
" </li>\n",
46+
" <li><span><a href=\"#inference-phase\" data-toc-modified-id=\"inference-phase-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Inference Phase</a></span></li>\n",
47+
" </ul>\n",
48+
"</div>\n"
1249
]
1350
},
1451
{
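The hunk above rewrites every TOC `href` and `data-toc-modified-id` from title case to lowercase, hyphen-separated anchors. The rule behind the manual fix can be sketched as a small helper (this function is illustrative, not part of the notebook or BayesFlow):

```python
import re

def toc_anchor(heading: str) -> str:
    """Approximate the anchor slug used by the fixed TOC links:
    lowercase the heading text and collapse runs of whitespace to hyphens."""
    return re.sub(r"\s+", "-", heading.strip().lower())

# Each notebook heading maps onto the slug used in the new hrefs.
print(toc_anchor("Defining the Generative Model"))      # defining-the-generative-model
print(toc_anchor("Posterior z-score and contraction"))  # posterior-z-score-and-contraction
```

Note that this simple rule preserves existing hyphens (e.g. "Simulation-Based Calibration" becomes `#simulation-based-calibration`), matching the corrected links in the diff.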
@@ -832,7 +869,7 @@
832869
"id": "departmental-preservation",
833870
"metadata": {},
834871
"source": [
835-
"We can inspect the evolution of the loss via a utility function ``plot_losses``, for which we have imported the ``diagnostics`` module from ``BayesFlow``."
872+
"Bayesian models can be complex and computationally intensive to train, and metrics such as the training and validation loss provide critical insight into the model's performance and stability. A steadily decreasing loss indicates that the model is learning effectively, while fluctuations or increases in loss can point to problems in the training process, such as overfitting, underfitting, or an inappropriate learning rate. We can inspect the evolution of the loss via the utility function ``plot_losses`` from the ``diagnostics`` module of ``BayesFlow``."
836873
]
837874
},
838875
{
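The added paragraph's point about reading loss curves can be illustrated with a minimal, library-free check; the window size here is an arbitrary choice for the sketch, not a BayesFlow default:

```python
def loss_is_decreasing(losses, window=10):
    """Coarse diagnostic: compare the mean loss over the first and last
    `window` steps; a lower tail mean suggests training is progressing."""
    head = sum(losses[:window]) / window
    tail = sum(losses[-window:]) / window
    return tail < head

# A smoothly shrinking synthetic loss history passes the check.
history = [1.0 / (1 + 0.1 * step) for step in range(100)]
print(loss_is_decreasing(history))  # True
```

In practice one would look at the full curve via ``plot_losses`` rather than a two-point summary, since fluctuations between the endpoints also matter.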
@@ -866,7 +903,7 @@
866903
"### Validating Consistency\n",
867904
"Validating the consistency of our model-amortizer coupling is an important step which should be performed before any real data are presented to the networks. In other words, the model should work in the ''small world'', before going out in the world of real data. This involves testing the model under known conditions and ensuring that it behaves logically and accurately. It is a critical step to avoid surprises when the model is later exposed to real and more complex data. In addition to a smooth loss reduction curve, we can use at least four handy diagnostic utilities. \n",
868905
"\n",
869-
"For a better illustration, we will start by generating some test simulations (not seen during training). Note, that we also use the default configurator to prepare these test simulations for interacting with the networks."
906+
"For a better illustration, we will start by generating some test simulations (not seen during training) using the simulator `model`. Note that we also use the default configurator to prepare these test simulations for interacting with the networks."
870907
]
871908
},
872909
{
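The added sentence above draws the test simulations from the notebook's generative `model`. As a stand-in for the shape of such data, here is a toy prior/simulator pair in plain Python; the names, distributions, and sizes are illustrative assumptions, not the notebook's actual model:

```python
import random

def prior():
    """Draw one parameter from a standard normal prior (toy choice)."""
    return random.gauss(0.0, 1.0)

def simulator(theta, n_obs=50):
    """Simulate n_obs noisy observations centered on theta (toy choice)."""
    return [random.gauss(theta, 1.0) for _ in range(n_obs)]

random.seed(0)
# 300 held-out test simulations, each pairing a parameter draw with its data.
test_sims = [{"theta": (t := prior()), "x": simulator(t)} for _ in range(300)]
print(len(test_sims), len(test_sims[0]["x"]))  # 300 50
```

In the notebook, the BayesFlow generative model plays this role and the configurator converts the resulting dictionaries into network-ready arrays.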
@@ -885,7 +922,8 @@
885922
"metadata": {},
886923
"source": [
887924
"#### Latent space inspection\n",
888-
"Since our training objective prescribes a unit Gaussian to the latent variable $\\boldsymbol{z}$ (see: https://arxiv.org/abs/2003.06281), we expect that, upon good convergence, the latent space will exhibit the prescribed probabilistic structure. Good convergence means that the model has learned an appropriate representation of the data in its latent space. We can quickly inspect this structure by calling the ``plot_latent_space_2d`` function from the `diagnostics` module."
925+
"\n",
926+
"Since our training objective prescribes a unit Gaussian to the latent variable $\\boldsymbol{z}$ (see: https://arxiv.org/abs/2003.06281), we expect that, upon good convergence, the latent space will exhibit the prescribed probabilistic structure. Good convergence means that the model has learned an appropriate representation of the data in its latent space. We can quickly inspect this structure by calling the ``plot_latent_space_2d`` function from the `diagnostics` module. "
889927
]
890928
},
891929
{
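The latent-space paragraph above expects the latent variable $z$ to follow a unit Gaussian after good convergence. A crude numeric stand-in for what ``plot_latent_space_2d`` shows visually (the samples here are simulated from the target distribution itself, not produced by a trained network):

```python
import random
import statistics

random.seed(1)
# Stand-in latent samples; a well-converged network's z-draws should
# be statistically indistinguishable from draws like these.
z = [random.gauss(0.0, 1.0) for _ in range(20_000)]

mean = statistics.fmean(z)
stdev = statistics.stdev(z)
# Both should be close to the prescribed unit-Gaussian values (0 and 1).
print(round(mean, 3), round(stdev, 3))
```

With real network output, systematic deviations of the sample mean from 0 or the standard deviation from 1 would signal that the latent space has not converged to the prescribed structure.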
