Commit 1251c5b (parent 1086747)

Final updates to parmest.py, test_parmest.py, and covariance.rst

4 files changed: +246 additions, -274 deletions


doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
(15 additions, 9 deletions)
@@ -9,30 +9,36 @@ methods which have been implemented in parmest.
 
 1. Reduced Hessian Method
 
-When the objective function is the sum of squared errors (SSE) between the
-observed and predicted values of the measured variables, the covariance matrix is:
+When the objective function is the sum of squared errors (SSE):
+:math:`\text{SSE} = \sum_{i = 1}^n \left(y_{i} - \hat{y}_{i}\right)^2`,
+the covariance matrix is:
 
 .. math::
    V_{\boldsymbol{\theta}} = 2 \sigma^2 \left(\frac{\partial^2 \text{SSE}}
    {\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
    = \boldsymbol{\theta}^*}
 
-When the objective function is the weighted SSE (WSSE), the covariance matrix is:
+When the objective function is the weighted SSE (WSSE):
+:math:`\text{WSSE} = \frac{1}{2} \left(\mathbf{y} - f(\mathbf{x};\boldsymbol{\theta})\right)^\text{T}
+\mathbf{W} \left(\mathbf{y} - f(\mathbf{x};\boldsymbol{\theta})\right)`,
+the covariance matrix is:
 
 .. math::
    V_{\boldsymbol{\theta}} = \left(\frac{\partial^2 \text{WSSE}}
    {\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
    = \boldsymbol{\theta}^*}
 
 Where :math:`V_{\boldsymbol{\theta}}` is the covariance matrix of the estimated
-parameters, :math:`\boldsymbol{\theta}` are the unknown parameters,
-:math:`\boldsymbol{\theta^*}` are the estimates of the unknown parameters, and
-:math:`\sigma^2` is the variance of the measurement error. When the standard
+parameters, :math:`y` are the observed measured variables, :math:`\hat{y}` are the
+predicted measured variables, :math:`n` is the number of data points,
+:math:`\boldsymbol{\theta}` are the unknown parameters, :math:`\boldsymbol{\theta^*}`
+are the estimates of the unknown parameters, :math:`\mathbf{x}` are the decision
+variables, and :math:`\mathbf{W}` is a diagonal matrix containing the inverse of the
+variance of the measurement error, :math:`\sigma^2`. When the standard
 deviation of the measurement error is not supplied by the user, parmest
 approximates the variance of the measurement error as
-:math:`\sigma^2 = \frac{1}{n-l} \sum e_i^2` where :math:`n` is the number of data
-points, :math:`l` is the number of fitted parameters, and :math:`e_i` is the
-residual for experiment :math:`i`.
+:math:`\sigma^2 = \frac{1}{n-l} \sum e_i^2` where :math:`l` is the number of
+fitted parameters, and :math:`e_i` is the residual for experiment :math:`i`.
 
 In parmest, this method computes the inverse of the Hessian by scaling the
 objective function (SSE or WSSE) with a constant probability factor.
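The reduced Hessian covariance formula above can be checked on a small example. The sketch below is plain numpy, not the parmest API: for a linear model :math:`y = X\theta`, the SSE Hessian is exactly :math:`2X^\text{T}X`, so :math:`V_{\boldsymbol{\theta}} = 2\sigma^2 (2X^\text{T}X)^{-1}` reduces to the classic :math:`\sigma^2 (X^\text{T}X)^{-1}`, with :math:`\sigma^2` approximated from the residuals as in the text.

```python
# Minimal numpy sketch (NOT the parmest API) of the reduced Hessian
# covariance V = 2*sigma^2 * (d^2 SSE / dtheta^2)^{-1} for a linear model,
# where the SSE Hessian is exactly 2 * X^T X.
import numpy as np

rng = np.random.default_rng(0)
n, l = 50, 2  # number of data points, number of fitted parameters

# Linear model y = X @ theta with intercept and slope
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
theta_true = np.array([1.0, 2.0])
y = X @ theta_true + 0.1 * rng.standard_normal(n)

# Least squares estimate theta*
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# sigma^2 approximated from residuals: sum(e_i^2) / (n - l)
resid = y - X @ theta_hat
sigma2 = resid @ resid / (n - l)

# Reduced Hessian covariance: V = 2*sigma^2 * H^{-1}, H = 2 * X^T X
hessian = 2.0 * X.T @ X
V_theta = 2.0 * sigma2 * np.linalg.inv(hessian)
```

For this linear case `V_theta` equals `sigma2 * inv(X.T @ X)`; for a nonlinear model the Hessian would instead be evaluated (numerically or via automatic differentiation) at the estimate :math:`\boldsymbol{\theta}^*`.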

doc/OnlineDocs/explanation/analysis/parmest/overview.rst
(17 additions, 8 deletions)
@@ -41,23 +41,32 @@ that for most experiments, only small parts of :math:`x` will change
 from one experiment to the next.
 
 The following least squares objective can be used to estimate parameter
-values, where data points are indexed by :math:`s=1,\ldots,S`
+values assuming Gaussian independent and identically distributed measurement
+errors, where data points are indexed by :math:`s=1,\ldots,S`
 
 .. math::
 
    \min_{{\theta}} Q({\theta};{\tilde{x}}, {\tilde{y}}) \equiv \sum_{s=1}^{S}q_{s}({\theta};{\tilde{x}}_{s}, {\tilde{y}}_{s}) \;\;
 
-where
+where :math:`q_{s}({\theta};{\tilde{x}}_{s}, {\tilde{y}}_{s})` can be:
 
-.. math::
+1. Sum of squared errors
+
+   .. math::
+
+      q_{s}({\theta};{\tilde{x}}_{s}, {\tilde{y}}_{s}) =
+      \sum_{i=1}^{m}\left({\tilde{y}}_{s,i} - g_{i}({\tilde{x}}_{s};{\theta})\right)^{2}
+
+2. Weighted sum of squared errors
+
+   .. math::
 
-   q_{s}({\theta};{\tilde{x}}_{s}, {\tilde{y}}_{s}) = \sum_{i=1}^{m}w_{i}\left[{\tilde{y}}_{si} - g_{i}({\tilde{x}}_{s};{\theta})\right]^{2},
+      q_{s}({\theta};{\tilde{x}}_{s}, {\tilde{y}}_{s}) =
+      \sum_{i=1}^{m}\left(\frac{{\tilde{y}}_{s,i} - g_{i}({\tilde{x}}_{s};{\theta})}{w_i}\right)^{2}
 
 i.e., the contribution of sample :math:`s` to :math:`Q`, where :math:`w
-\in \Re^{m}` is a vector of weights for the responses. For
-multi-dimensional :math:`y`, this is the squared weighted :math:`L_{2}`
-norm and for univariate :math:`y` the weighted squared deviation.
-Custom objectives can also be defined for parameter estimation.
+\in \Re^{m}` is a vector containing the standard deviation of the measurement
+errors of :math:`y`. Custom objectives can also be defined for parameter estimation.
 
 In the applications of interest to us, the function :math:`g(\cdot)` is
 usually defined as an optimization problem with a large number of
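The two per-sample objectives in the updated hunk differ only in whether each residual is divided by the measurement-error standard deviation :math:`w_i`. A minimal numpy sketch (again, not the parmest API; `sse` and `wsse` are illustrative helper names):

```python
# Illustrative helpers for the two per-sample objectives q_s from the text
# (plain numpy, NOT the parmest API).
import numpy as np

def sse(y_obs, y_pred):
    """Sum of squared errors: sum_i (y_i - g_i)^2."""
    r = np.asarray(y_obs, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(r @ r)

def wsse(y_obs, y_pred, w):
    """Weighted SSE: sum_i ((y_i - g_i) / w_i)^2, where w_i is the
    standard deviation of the measurement error of response i."""
    r = np.asarray(y_obs, dtype=float) - np.asarray(y_pred, dtype=float)
    r = r / np.asarray(w, dtype=float)
    return float(r @ r)
```

Note that dividing by the standard deviation is equivalent to the :math:`\mathbf{W} = \text{diag}(1/\sigma^2)` weighting used in the WSSE covariance expression in covariance.rst, up to the constant factor of :math:`\tfrac{1}{2}` there.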
