Commit 7114293

Formatting fix in notebook

1 parent a55612e commit 7114293

File tree

1 file changed: +2 -35 lines changed

examples/Lotka_Volterra_point_estimation_and_expert_stats.ipynb

Lines changed: 2 additions & 35 deletions
@@ -554,16 +554,14 @@
 "\n",
 "* To estimate **quantiles**, the following is a strictly proper scoring rule:\n",
 "$$L(\\hat \\theta, \\theta; \\tau) = (\\hat \\theta - \\theta)(\\mathbf{1}_{\\hat \\theta - \\theta > 0} - \\tau)$$\n",
-"Here we write an indicator function as $\\mathbf{1}_{\\hat \\theta - \\theta > 0}$ to evaluate to 1 for overestimation (positive $\\hat \\theta - \\theta$) and $0$ otherwise.\n",
 "\n",
-" For $\\tau=\\frac 1 2$, over- or underestimating a true posterior sample $\\theta$ is weighted equally. In fact, the quantile loss with $\\tau=\\frac 1 2$ is identical to the median loss (up to a scaling of $\\frac 1 2$). For the same reasons, both estimate the median of the posterior.\n",
+" Here we write an indicator function as $\\mathbf{1}_{\\hat \\theta - \\theta > 0}$ to evaluate to 1 for overestimation (positive $\\hat \\theta - \\theta$) and $0$ otherwise.\n",
 "\n",
+" For $\\tau=\\frac 1 2$, over- or underestimating a true posterior sample $\\theta$ is weighted equally. In fact, the quantile loss with $\\tau=\\frac 1 2$ is identical to the median loss (up to a scaling of $\\frac 1 2$). For the same reasons, both estimate the median of the posterior.\n",
 "\n",
 " More generally, $\\tau \\in (0,1)$ is the quantile level, that is the point where to evaluate the [quantile function](https://en.wikipedia.org/wiki/Quantile_function).\n",
 "\n",
 "\n",
-"\n",
-"\n",
 "* Note, that when approximating the full distribution in BayesFlow we score a **probability estimate** $\\hat p(\\theta|x)$ with the log-score,\n",
 "$$L(\\hat p(\\theta|x), \\theta) = \\log (\\hat p(\\theta)) $$\n",
 "which is also a strictly proper scoring rule.\n",
@@ -791,16 +789,6 @@
 "Just for fun and because we can, let us save the trained point approximator to disk."
 ]
 },
-{
-"cell_type": "code",
-"execution_count": 20,
-"id": "0de263ba-b9a9-4aca-bf8d-0b01b18ef4e8",
-"metadata": {},
-"outputs": [],
-"source": [
-"point_inference_workflow.approximator.build_from_data(adapter(training_data))"
-]
-},
 {
 "cell_type": "code",
 "execution_count": 21,
@@ -854,27 +842,6 @@
 "Since one point estimate already summarizes many posterior samples, we only have to do one forward pass with a point inference network, where we would have to make ~100 passes with a generative, full posterior approximator."
 ]
 },
-{
-"cell_type": "code",
-"execution_count": 23,
-"id": "2f3833f9-a155-49aa-9e0a-d1b264c72fda",
-"metadata": {},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"False"
-]
-},
-"execution_count": 23,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
-"source": [
-"point_inference_workflow.approximator.built"
-]
-},
 {
 "cell_type": "code",
 "execution_count": 24,

0 commit comments