
Commit 5159f61

Fix equation labels
1 parent 38e4395 commit 5159f61

1 file changed: +24 -63 lines changed

notebooks/5.0-Visualization-and-modeling.ipynb

Lines changed: 24 additions & 63 deletions
@@ -812,10 +812,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "## Linear fits\n",
@@ -835,10 +832,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "### Explicit formulas for uniform uncertainty\n",
@@ -853,16 +847,16 @@
 "which we can differentiate to find the extremal conditions\n",
 "\n",
 "\\begin{align}\n",
-"\\left.\\frac{\\partial{\\chi^2}}{\\partial m}\\right|_{\\hat{m},\\hat{c}} &= -\\frac{2}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})]x_k = 0,\\\\\n",
-"\\left.\\frac{\\partial{\\chi^2}}{\\partial c}\\right|_{\\hat{m},\\hat{c}} &= -\\frac{2}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})] = 0.\n",
+"\\left.\\frac{\\partial{\\chi^2}}{\\partial m}\\right|_{\\hat{m},\\hat{c}} &= -\\frac{2}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})]x_k = 0,\\label{eq:extremal-m}\\tag{4}\\\\\n",
+"\\left.\\frac{\\partial{\\chi^2}}{\\partial c}\\right|_{\\hat{m},\\hat{c}} &= -\\frac{2}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})] = 0.\\label{eq:extremal-c}\\tag{5}\n",
 "\\end{align}\n",
 "\n",
 "Solving this system of equations for $\\hat{m}$ and $\\hat{c}$ give *MU* Eqs. (5.1) and (5.2) (rewritten for consistency with the notation here),\n",
 "\n",
 "\\begin{align}\n",
-"\\hat{c} &= \\frac{\\left(\\sum_k x_k^2\\right)\\left(\\sum_k y_k\\right) - \\left(\\sum_k x_k\\right)\\left(\\sum_k x_k y_k\\right)}{\\Delta}, \\label{eq:c-hat}\\tag{4}\\\\\n",
-"\\hat{m} &= \\frac{N\\sum_k x_k y_k - \\left(\\sum_k x_k\\right)\\left(\\sum_k y_k\\right)}{\\Delta},\\quad\\text{with}\\\\\n",
-"\\Delta &= N\\sum_k x_k^2 - \\left(\\sum_k x_k\\right)^2. \\label{eq:m-hat}\\tag{5}\n",
+"\\hat{c} &= \\frac{\\left(\\sum_k x_k^2\\right)\\left(\\sum_k y_k\\right) - \\left(\\sum_k x_k\\right)\\left(\\sum_k x_k y_k\\right)}{\\Delta}, \\label{eq:c-hat}\\tag{6}\\\\\n",
+"\\hat{m} &= \\frac{N\\sum_k x_k y_k - \\left(\\sum_k x_k\\right)\\left(\\sum_k y_k\\right)}{\\Delta},\\quad\\text{with} \\label{eq:m-hat}\\tag{7}\\\\\n",
+"\\Delta &= N\\sum_k x_k^2 - \\left(\\sum_k x_k\\right)^2. \\label{eq:delta}\\tag{8}\n",
 "\\end{align}\n",
 "\n",
 "These equations express $\\hat{m}$ and $\\hat{c}$ *as unique, explicit functions of the data* that will *automatically* minimize $\\chi^2$ in Eq. (\\ref{eq:lsquniform}) for *any* values of $\\{x_k, y_k\\}$. This is one of the chief advantages of a linear fit: the least-squares fit parameters may be determined unambiguously from the data, without a search algorithm. It is important to recognize, though, that these equations will yield values even when $\\{x_k, y_k\\}$ do *not* satisfy a linear relationship, so we must always evaluate the quality of the fit before assigning much significance to the best-fit parameters obtained from it.\n",
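The explicit estimators renumbered in this hunk (now tags 6–8) are easy to verify numerically. A minimal Python sketch with synthetic data — the data, seed, and variable names here are illustrative, not from the notebook:

```python
import numpy as np

# Synthetic data from a known line y = 2x + 1 (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
alpha = 0.5                      # assumed uniform uncertainty on each y_k
y = 2.0 * x + 1.0 + rng.normal(0.0, alpha, x.size)

# Explicit least-squares estimators, per the renumbered Eqs. (6)-(8)
N = x.size
Delta = N * np.sum(x**2) - np.sum(x)**2
c_hat = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / Delta
m_hat = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / Delta

# Cross-check against NumPy's degree-1 polynomial fit
m_np, c_np = np.polyfit(x, y, 1)
assert np.isclose(m_hat, m_np) and np.isclose(c_hat, c_np)
```

Since both routes minimize the same quadratic, the explicit formulas and `np.polyfit` agree to floating-point precision for any data set.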
@@ -872,34 +866,34 @@
 "Now that we have expressed $\\hat{m}$ and $\\hat{c}$ as functions of $\\{x_k, y_k\\}$, we can use standard error propagation to compute the uncertainties in $\\hat{m}$ and $\\hat{c}$. To do this, we must treat each measurement $y_k$ as an independent variable in the functions $\\hat{m}$ and $\\hat{c}$, all with standard error $\\alpha$. Recall that we have assumed the $x_k$ variables as known, so we do not need to propagate their uncertainty.\n",
 "\n",
 "\\begin{align}\n",
-"\\alpha_\\hat{m}^2 &= \\sum_{l = 1}^N\\left(\\frac{\\partial\\hat{m}}{\\partial y_l}\\right)^2\\alpha^2 = \\frac{\\alpha^2}{\\Delta^2}\\sum_{l = 1}^N\\left(Nx_l - \\sum_{k=1}^N x_k\\right)^2 = \\frac{N}{\\Delta}\\alpha^2,\\\\\n",
-"\\alpha_\\hat{c}^2 &= \\sum_{l = 1}^N\\left(\\frac{\\partial\\hat{c}}{\\partial y_l}\\right)^2\\alpha^2 = \\frac{\\alpha^2}{\\Delta^2}\\sum_{l = 1}^N\\left(\\sum_{k=1}^N x_k^2 - x_l\\sum_{k=1}^N x_k\\right)^2 = \\frac{\\sum_{k=1}^N x_k^2}{\\Delta}\\alpha^2.\n",
+"\\alpha_\\hat{m}^2 &= \\sum_{l = 1}^N\\left(\\frac{\\partial\\hat{m}}{\\partial y_l}\\right)^2\\alpha^2 = \\frac{\\alpha^2}{\\Delta^2}\\sum_{l = 1}^N\\left(Nx_l - \\sum_{k=1}^N x_k\\right)^2 = \\frac{N}{\\Delta}\\alpha^2, \\label{eq:alpha-m-hat-sq}\\tag{9}\\\\\n",
+"\\alpha_\\hat{c}^2 &= \\sum_{l = 1}^N\\left(\\frac{\\partial\\hat{c}}{\\partial y_l}\\right)^2\\alpha^2 = \\frac{\\alpha^2}{\\Delta^2}\\sum_{l = 1}^N\\left(\\sum_{k=1}^N x_k^2 - x_l\\sum_{k=1}^N x_k\\right)^2 = \\frac{\\sum_{k=1}^N x_k^2}{\\Delta}\\alpha^2.\\label{eq:alpha-c-hat-sq}\\tag{10}\n",
 "\\end{align}\n",
 "\n",
 "Taking the square root of these expressions, we have\n",
 "\n",
 "\\begin{align}\n",
-"\\alpha_{\\hat{m}} &= \\alpha\\sqrt{\\frac{N}{\\Delta}}, & \\alpha_{\\hat{c}} &= \\alpha\\sqrt{\\frac{\\sum_{k=1}^N x_k^2}{\\Delta}}, \\label{eq:alpha-m-c}\\tag{6}\n",
+"\\alpha_{\\hat{m}} &= \\alpha\\sqrt{\\frac{N}{\\Delta}}, & \\alpha_{\\hat{c}} &= \\alpha\\sqrt{\\frac{\\sum_{k=1}^N x_k^2}{\\Delta}}, \\label{eq:alpha-m-c}\\tag{11}\n",
 "\\end{align}\n",
 "\n",
 "which are equivalent to *MU* Eqs. (5.3) and (5.4) if we substitute $\\alpha = \\alpha_\\text{CU}$, where\n",
 "\n",
 "$$\n",
-"\\alpha_{CU} = \\sqrt{\\frac{1}{N-2}\\sum_{k=1}^{N}[y_k - (\\hat{m}x_k + \\hat{c})]^2} \\label{eq:alpha-CU}\\tag{7}\n",
+"\\alpha_{CU} = \\sqrt{\\frac{1}{N-2}\\sum_{k=1}^{N}[y_k - (\\hat{m}x_k + \\hat{c})]^2} \\label{eq:alpha-CU}\\tag{12}\n",
 "$$\n",
 "\n",
 "is known as the *common uncertainty*. The distinction here is that $\\alpha$ is assumed to be *known*, just as we assume $\\{x_k\\}$ known, while we determine $\\alpha_\\text{CU}$ from the same data that we use to obtain the fit. \n",
 "\n",
 "Just as the data will vary from one set of measurements to another, so will $\\alpha_\\text{CU}$, while $\\alpha$ remains fixed. If your estimate for $\\alpha$ is accurate, then the expectation of $\\alpha_\\text{CU}$ (ie, the average over $N\\rightarrow\\infty$ data sets) will be\n",
 "\n",
 "$$\n",
-"\\left\\langle \\alpha_\\text{CU}^2\\right\\rangle = \\alpha^2,\n",
+"\\left\\langle \\alpha_\\text{CU}^2\\right\\rangle = \\alpha^2,\\label{eq:E-alpha-CU}\\tag{13}\n",
 "$$\n",
 "\n",
 "so the difference between $\\alpha_\\text{CU}$ and $\\alpha$ is usually small. But if your estimate for $\\alpha$ is *inaccurate*, then the difference between $\\alpha_\\text{CU}$ and $\\alpha$ can be large, so comparing the two provides a good consistency check. The standard way to make this comparison is to compute\n",
 "\n",
 "$$\n",
-"\\chi^2_\\text{min} = \\chi^2(\\hat{m}, \\hat{c}) = \\frac{1}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})]^2 = \\frac{\\alpha_{CU}^2}{\\alpha^2}(N-2),\n",
+"\\chi^2_\\text{min} = \\chi^2(\\hat{m}, \\hat{c}) = \\frac{1}{\\alpha^2}\\sum_{k=1}^N[y_k - (\\hat{m}x_k + \\hat{c})]^2 = \\frac{\\alpha_{CU}^2}{\\alpha^2}(N-2),\\label{eq:chi2}\\tag{14}\n",
 "$$\n",
 "\n",
 "and check that $\\chi^2_\\text{min}\\approx N - 2$. We compare to $N-2$ here because there are two free fit parameters, $\\hat{m}$ and $\\hat{c}$—in general, for a fit with $p$ fit parameters, we would compare to $N-p$. For more sophisticated ways to evaluate a fit, see the [Residual analysis](#Residual-analysis) section below.\n",
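The uncertainty and consistency-check formulas renumbered in this hunk (now tags 9–14) can likewise be sketched in Python. Here `alpha` plays the role of the known uniform uncertainty; the data are synthetic and not part of the notebook:

```python
import numpy as np

# Synthetic data with known uniform uncertainty alpha (illustrative only)
rng = np.random.default_rng(1)
alpha = 0.5
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, alpha, x.size)

# Explicit least-squares estimators (renumbered Eqs. 6-8)
N = x.size
Delta = N * np.sum(x**2) - np.sum(x)**2
m_hat = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / Delta
c_hat = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / Delta

# Parameter uncertainties, per the renumbered Eq. (11)
alpha_m = alpha * np.sqrt(N / Delta)
alpha_c = alpha * np.sqrt(np.sum(x**2) / Delta)

# Common uncertainty, Eq. (12), and chi-square consistency check, Eq. (14)
resid = y - (m_hat * x + c_hat)
alpha_cu = np.sqrt(np.sum(resid**2) / (N - 2))
chi2_min = np.sum(resid**2) / alpha**2   # equals (alpha_cu/alpha)^2 * (N - 2)
# For an accurate alpha, chi2_min should come out close to N - 2
```

The last identity holds exactly by construction, so comparing `chi2_min` to `N - 2` is the same consistency check as comparing `alpha_cu` to `alpha`.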
@@ -911,10 +905,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
@@ -990,10 +981,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "### Exercise 2\n",
@@ -1004,10 +992,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
@@ -1027,10 +1012,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "### Linear fits with `polyfit`\n",
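For the `polyfit` section touched by this hunk, note that NumPy's `np.polyfit` can return a parameter covariance matrix alongside the coefficients. A short sketch with synthetic data (not from the notebook):

```python
import numpy as np

# Synthetic data from a known line y = -1.5x + 4 (illustrative only)
rng = np.random.default_rng(2)
alpha = 0.3
x = np.linspace(0.0, 5.0, 30)
y = -1.5 * x + 4.0 + rng.normal(0.0, alpha, x.size)

# Degree-1 fit; cov=True additionally returns the 2x2 parameter
# covariance matrix, whose diagonal holds the squared uncertainties
# in the slope and intercept
p, cov = np.polyfit(x, y, 1, cov=True)
m_hat, c_hat = p
alpha_m, alpha_c = np.sqrt(np.diag(cov))
```

By default `cov=True` scales the covariance using the residuals (in effect, the common-uncertainty estimate) rather than a user-supplied `alpha`, so the quoted uncertainties do not require the true uncertainty to be known.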
@@ -1061,10 +1043,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
@@ -1084,10 +1063,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "### Exercise 3\n",
@@ -1098,10 +1074,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
@@ -1118,10 +1091,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "## Residual analysis\n",
@@ -1150,10 +1120,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
@@ -1198,10 +1165,7 @@
 {
 "cell_type": "markdown",
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "source": [
 "### Exercise 4\n",
@@ -1212,10 +1176,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"collapsed": false,
-"jupyter": {
-"outputs_hidden": false
-}
+"collapsed": false
 },
 "outputs": [],
 "source": [
