@@ -10,7 +10,8 @@ for design optimization.
 
 Functionality in parmest includes:
 
-* Model based parameter estimation using experimental data
+* Model-based parameter estimation using experimental data
+* Covariance matrix estimation
 * Bootstrap resampling for parameter estimation
 * Confidence regions based on single or multi-variate distributions
 * Likelihood ratio
@@ -21,61 +22,56 @@ Background
 ----------
 
 The goal of parameter estimation is to estimate values for
-a vector, :math:`{\theta}`, to use in the functional form
+a vector, :math:`\boldsymbol{\theta}`, to use in the functional form
 
 .. math::
 
-   y = g(x; \theta)
-
-where :math:`x` is a vector containing measured data, typically in high
-dimension, :math:`{\theta}` is a vector of values to estimate, in much
-lower dimension, and the response vectors are given as :math:`y_{i},
-i=1,\ldots,m` with :math:`m` also much smaller than the dimension of
-:math:`x`. This is done by collecting :math:`S` data points, which are
-:math:`{\tilde{x}},{\tilde{y}}` pairs and then finding :math:`{\theta}`
-values that minimize some function of the deviation between the values
-of :math:`{\tilde{y}}` that are measured and the values of
-:math:`g({\tilde{x}};{\theta})` for each corresponding
-:math:`{\tilde{x}}`, which is a subvector of the vector :math:`x`. Note
-that for most experiments, only small parts of :math:`x` will change
-from one experiment to the next.
+   \boldsymbol{y}_i = \boldsymbol{f}\left(\boldsymbol{x}_{i}, \boldsymbol{\theta}\right) +
+   \boldsymbol{\varepsilon}_i \quad \forall \; i \in \left\{1, \ldots, n\right\}
+
+where :math:`\boldsymbol{y}_{i} \in \mathbb{R}^m` are observations of the measured or output variables,
+:math:`\boldsymbol{f}` is the model function, :math:`\boldsymbol{x}_{i} \in \mathbb{R}^{q}` are the decision
+or input variables, :math:`\boldsymbol{\theta} \in \mathbb{R}^p` are the model parameters,
+:math:`\boldsymbol{\varepsilon}_{i} \in \mathbb{R}^m` are measurement errors, and :math:`n` is the number of
+experiments.
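The data-generating model above can be sketched numerically. The following is a minimal illustration with NumPy; the model function ``f``, the parameter values, and the noise level are all hypothetical and not part of parmest:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-parameter model f(x, theta) = theta[0] * exp(-theta[1] * x)
def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)

theta_true = np.array([2.0, 0.5])  # "true" parameters (assumed)
x = np.linspace(0.0, 5.0, 11)      # input variable values for n = 11 experiments
sigma = 0.05                       # measurement error std. dev. (assumed)

# y_i = f(x_i, theta) + eps_i, with eps_i ~ N(0, sigma^2)
y = f(x, theta_true) + rng.normal(0.0, sigma, size=x.shape)
```

Parameter estimation then amounts to recovering ``theta_true`` from the ``(x, y)`` pairs.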
 
 The following least squares objective can be used to estimate parameter
 values assuming Gaussian independent and identically distributed measurement
-errors, where data points are indexed by :math:`s=1,\ldots,S`
+errors:
 
 .. math::
 
-   \min_{{\theta}} Q({\theta};{\tilde{x}},{\tilde{y}}) \equiv \sum_{s=1}^{S} q_{s}({\theta};{\tilde{x}}_{s},{\tilde{y}}_{s})
+   \min_{\boldsymbol{\theta}} \, g(\boldsymbol{x}, \boldsymbol{y}; \boldsymbol{\theta})
 
-where :math:`q_{s}({\theta};{\tilde{x}}_{s},{\tilde{y}}_{s})` can be:
+where :math:`g(\boldsymbol{x}, \boldsymbol{y}; \boldsymbol{\theta})` can be:
 
 1. Sum of squared errors
 
    .. math::
 
-      q_{s}({\theta};{\tilde{x}}_{s},{\tilde{y}}_{s}) =
-      \sum_{i=1}^{m} \left({\tilde{y}}_{s,i} - g_{i}({\tilde{x}}_{s};{\theta})\right)^{2}
+      g(\boldsymbol{x}, \boldsymbol{y}; \boldsymbol{\theta}) =
+      \sum_{i=1}^{n} \left(\boldsymbol{y}_{i} - \boldsymbol{f}(\boldsymbol{x}_{i};\boldsymbol{\theta})
+      \right)^\text{T} \left(\boldsymbol{y}_{i} - \boldsymbol{f}(\boldsymbol{x}_{i};\boldsymbol{\theta})\right)
 
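As a sketch, objective (1) can be evaluated for a candidate :math:`\boldsymbol{\theta}` as below; the model function and data are illustrative assumptions, not the parmest API:

```python
import numpy as np

def sse(theta, x, y, f):
    """Sum of squared errors: sum_i (y_i - f(x_i, theta))^T (y_i - f(x_i, theta))."""
    residuals = y - f(x, theta)
    return float(np.sum(residuals ** 2))

# Illustrative linear model y = theta[0]*x + theta[1] and data (assumed)
f = lambda x, theta: theta[0] * x + theta[1]
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])

print(sse(np.array([2.0, 1.0]), x, y, f))  # exact fit -> 0.0
```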
 2. Weighted sum of squared errors
 
    .. math::
 
-      q_{s}({\theta};{\tilde{x}}_{s},{\tilde{y}}_{s}) =
-      \sum_{i=1}^{m} \left(\frac{{\tilde{y}}_{s,i} - g_{i}({\tilde{x}}_{s};{\theta})}{w_i}\right)^{2}
+      g(\boldsymbol{x}, \boldsymbol{y}; \boldsymbol{\theta}) =
+      \frac{1}{2} \sum_{i=1}^{n} \left(\boldsymbol{y}_{i} - \boldsymbol{f}(\boldsymbol{x}_{i};\boldsymbol{\theta})
+      \right)^\text{T} \boldsymbol{\Sigma}_{\boldsymbol{y}}^{-1} \left(\boldsymbol{y}_{i} -
+      \boldsymbol{f}(\boldsymbol{x}_{i};\boldsymbol{\theta})\right)
 
-   i.e., the contribution of sample :math:`s` to :math:`Q`, where :math:`w
-   \in \Re^{m}` is a vector containing the standard deviation of the measurement
-   errors of :math:`y`. Custom objectives can also be defined for parameter estimation.
+where :math:`\boldsymbol{\Sigma}_{\boldsymbol{y}}` is the covariance matrix of the measurement
+errors of :math:`\boldsymbol{y}`. Custom objectives can also be defined
+for parameter estimation.
 
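Objective (2) can likewise be sketched with a diagonal measurement error covariance matrix; all names, model, and values below are illustrative assumptions:

```python
import numpy as np

def weighted_sse(theta, x, y, f, cov):
    """(1/2) * sum_i r_i^T Sigma_y^{-1} r_i, where r_i = y_i - f(x_i, theta)."""
    cov_inv = np.linalg.inv(cov)
    total = 0.0
    for xi, yi in zip(x, y):
        r = yi - f(xi, theta)
        total += 0.5 * r @ cov_inv @ r
    return float(total)

# Illustrative model with two outputs per experiment (assumed)
f = lambda xi, theta: np.array([theta[0] * xi, theta[1] * xi])
x = np.array([1.0, 2.0])
y = np.array([[1.0, 2.0], [2.0, 4.0]])
cov = np.diag([0.1, 0.2])  # measurement error covariance Sigma_y (assumed)

print(weighted_sse(np.array([1.0, 2.0]), x, y, f, cov))  # exact fit -> 0.0
```

With equal error variances, this reduces to a scaled version of objective (1); unequal variances down-weight the noisier outputs.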
 In the applications of interest to us, the function :math:`g(\cdot)` is
 usually defined as an optimization problem with a large number of
 (perhaps constrained) optimization variables, a subset of which are
-fixed at values :math:`{\tilde{x}}` when the optimization is performed.
-In other applications, the values of :math:`{\theta}` are fixed
+fixed at values :math:`\boldsymbol{x}` when the optimization is performed.
+In other applications, the values of :math:`\boldsymbol{\theta}` are fixed
 parameter values, but for the problem formulation above, the values of
-:math:`{\theta}` are the primary optimization variables. Note that in
+:math:`\boldsymbol{\theta}` are the primary optimization variables. Note that in
 general, the function :math:`g(\cdot)` will have a large set of
-parameters that are not included in :math:`{\theta}`. Often, the
-:math:`y_{is}` will be vectors themselves, perhaps indexed by time with
-index sets that vary with :math:`s`.
+parameters that are not included in :math:`\boldsymbol{\theta}`.