@@ -9,50 +9,60 @@ methods which have been implemented in parmest.
 
 1. Reduced Hessian Method
 
-When the objective function is "SSE":
+When the objective function is the sum of squared errors (SSE) between the
+observed and predicted values of the measured variables, the covariance matrix is:
 
 .. math::
     V_{\boldsymbol{\theta}} = 2 \sigma^2 \left(\frac{\partial^2 \text{SSE}}
     {\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
     = \boldsymbol{\theta}^*}
 
-When the objective function is "SSE_weighted":
+When the objective function is the weighted SSE (WSSE), the covariance matrix is:
 
 .. math::
-    V_{\boldsymbol{\theta}} = 2 \sigma^2 \left(\frac{\partial^2 \text{WSSE}}
+    V_{\boldsymbol{\theta}} = \left(\frac{\partial^2 \text{WSSE}}
     {\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
     = \boldsymbol{\theta}^*}
 
-Where SSE is the sum of squared errors between the observed (data) and predicted
-values of the measured variables, WSSE is the weighted SSE,
-:math:`\boldsymbol{\theta}` are the unknown parameters, :math:`\boldsymbol{\theta}^*`
-are the estimate of the unknown parameters, and :math:`\sigma^2` is the variance of
-the measurement error. When the standard deviation of the measurement error is not
-supplied by the user, parmest approximates the variance of the measurement error as
+Where :math:`V_{\boldsymbol{\theta}}` is the covariance matrix of the estimated
+parameters, :math:`\boldsymbol{\theta}` are the unknown parameters,
+:math:`\boldsymbol{\theta}^*` is the estimate of the unknown parameters, and
+:math:`\sigma^2` is the variance of the measurement error. When the standard
+deviation of the measurement error is not supplied by the user, parmest
+approximates the variance of the measurement error as
 :math:`\sigma^2 = \frac{1}{n-l} \sum e_i^2` where :math:`n` is the number of data
-points, :math:`l` is the number of fitted parameters, and :math:`e_i` is the residual
-for experiment :math:`i`.
+points, :math:`l` is the number of fitted parameters, and :math:`e_i` is the
+residual for experiment :math:`i`.
 
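The reduced Hessian formula above can be illustrated numerically. The following is a minimal NumPy sketch for a linear least-squares fit; the model, data, and all names are hypothetical illustrations, not parmest code. It estimates :math:`\sigma^2` from the residuals as shown above and then evaluates :math:`V_{\boldsymbol{\theta}} = 2 \sigma^2 H^{-1}`, where :math:`H` is the Hessian of the SSE:

```python
import numpy as np

# Hypothetical example: fit y = theta0 + theta1 * x by least squares,
# then estimate the parameter covariance from the SSE Hessian.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

X = np.column_stack([np.ones_like(x), x])      # design matrix
theta, *_ = np.linalg.lstsq(X, y, rcond=None)  # theta* minimizes SSE

n, l = len(y), len(theta)                      # data points, fitted parameters
e = y - X @ theta                              # residuals e_i
sigma2 = (e @ e) / (n - l)                     # sigma^2 = 1/(n-l) * sum e_i^2

H = 2.0 * X.T @ X                              # Hessian of SSE w.r.t. theta
V = 2.0 * sigma2 * np.linalg.inv(H)            # V_theta = 2 sigma^2 H^{-1}
```

For a linear model this reduces to the familiar :math:`\sigma^2 (X^{\mathrm{T}} X)^{-1}` covariance of ordinary least squares, since the SSE Hessian is exactly :math:`2 X^{\mathrm{T}} X`.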
 2. Finite Difference Method
 
+In this method, the covariance matrix, :math:`V_{\boldsymbol{\theta}}`, is
+calculated by applying the Gauss-Newton approximation to the Hessian,
+:math:`\frac{\partial^2 \text{SSE}}{\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}`
+or
+:math:`\frac{\partial^2 \text{WSSE}}{\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}`,
+leading to:
+
 .. math::
-    V_{\boldsymbol{\theta}} = \left( \sum_{r = 1}^n \mathbf{G}_{r}^{\mathrm{T}} \mathbf{W}
-    \mathbf{G}_{r} \right)^{-1}
+    V_{\boldsymbol{\theta}} = \left( \sum_{i = 1}^n \mathbf{G}_{i}^{\mathrm{T}} \mathbf{W}
+    \mathbf{G}_{i} \right)^{-1}
 
 This method uses central finite difference to compute the Jacobian matrix,
-:math:`\mathbf{G}_{r}`, which is the sensitivity of the measured variables with
-respect to the parameters, :math:`\boldsymbol{\theta}`. :math:`\mathbf{W}` is a diagonal
-matrix containing the inverse of the variance of the measurement errors,
-:math:`\sigma^2`.
+:math:`\mathbf{G}_{i}`, for experiment :math:`i`, which is the sensitivity of
+the measured variables with respect to the parameters, :math:`\boldsymbol{\theta}`.
+:math:`\mathbf{W}` is a diagonal matrix containing the inverse of the variance
+of the measurement errors, :math:`\sigma^2`.
 
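The Gauss-Newton formula above can also be sketched numerically. In this hedged NumPy illustration (the exponential model, data, step size, and all names are assumptions, not the parmest implementation), a central finite-difference Jacobian is built column by column and the experiments are stacked into a single Jacobian, so the sum over :math:`i` collapses into one :math:`\mathbf{G}^{\mathrm{T}} \mathbf{W} \mathbf{G}` product:

```python
import numpy as np

# Hypothetical model for illustration: y = theta0 * exp(theta1 * x).
def model(theta, x):
    return theta[0] * np.exp(theta[1] * x)

def jacobian_fd(theta, x, h=1e-6):
    """Central finite differences: G[j, k] = d model_j / d theta_k."""
    theta = np.asarray(theta, dtype=float)
    G = np.zeros((len(x), len(theta)))
    for k in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += h
        tm[k] -= h
        G[:, k] = (model(tp, x) - model(tm, x)) / (2 * h)
    return G

x = np.linspace(0.0, 1.0, 6)
theta_star = np.array([2.0, 0.5])  # assumed parameter estimate theta*
sigma = 0.1                        # assumed measurement-error std dev

G = jacobian_fd(theta_star, x)     # sensitivities evaluated at theta*
W = np.eye(len(x)) / sigma**2      # diagonal weights: 1 / sigma^2
V = np.linalg.inv(G.T @ W @ G)     # V_theta = (G^T W G)^{-1}
```

With a per-experiment Jacobian :math:`\mathbf{G}_i` (one row per measured variable in experiment :math:`i`), the stacked product here equals the sum :math:`\sum_i \mathbf{G}_i^{\mathrm{T}} \mathbf{W} \mathbf{G}_i` in the formula above.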
 3. Automatic Differentiation Method
 
+Similar to the finite difference method, the covariance matrix is calculated as:
+
 .. math::
-    V_{\boldsymbol{\theta}} = \left( \sum_{r = 1}^n \mathbf{G}_{\text{kaug},\, r}^{\mathrm{T}}
-    \mathbf{W} \mathbf{G}_{\text{kaug},\, r} \right)^{-1}
+    V_{\boldsymbol{\theta}} = \left( \sum_{i = 1}^n \mathbf{G}_{\text{kaug},\, i}^{\mathrm{T}}
+    \mathbf{W} \mathbf{G}_{\text{kaug},\, i} \right)^{-1}
 
-This method uses the model optimality (KKT) condition to compute the Jacobian matrix,
-:math:`\mathbf{G}_{\text{kaug},\, r}`.
+However, this method uses the model optimality (KKT) condition to compute the
+Jacobian matrix, :math:`\mathbf{G}_{\text{kaug},\, i}`, for experiment :math:`i`.
 
 In parmest, the covariance matrix can be calculated after defining the
 :class:`~pyomo.contrib.parmest.parmest.Estimator` object and estimating the unknown