From f1dc25168c33200de099b18572edada75111e5e0 Mon Sep 17 00:00:00 2001
From: slilonfe5
Date: Tue, 25 Nov 2025 13:12:14 -0500
Subject: [PATCH 1/2] Updated covariance.rst file

---
 .../analysis/parmest/covariance.rst           | 84 ++++++++++---------
 1 file changed, 46 insertions(+), 38 deletions(-)

diff --git a/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst b/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
index fa6c001d868..7869e543e56 100644
--- a/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
+++ b/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
@@ -9,43 +9,45 @@ methods which have been implemented in parmest.
 
 1. Reduced Hessian Method
 
-    When the objective function is the sum of squared errors (SSE):
-    :math:`\text{SSE} = \sum_{i = 1}^n \left(y_{i} - \hat{y}_{i}\right)^2`,
+    When the objective function is the sum of squared errors (SSE) defined as
+    :math:`\text{SSE} = \sum_{i = 1}^n
+    \left(\boldsymbol{y}_{i} - \hat{\boldsymbol{y}}_{i}\right)^2`,
     the covariance matrix is:
 
     .. math::
-        V_{\boldsymbol{\theta}} = 2 \sigma^2 \left(\frac{\partial^2 \text{SSE}}
-        {\partial \boldsymbol{\theta} \partial
-        \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
-        = \boldsymbol{\theta}^*}
+        \boldsymbol{V}_{\boldsymbol{\theta}} = 2 \sigma^2 \left(\frac{\partial^2 \text{SSE}}
+        {\partial \boldsymbol{\theta} \partial
+        \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
+        = \hat{\boldsymbol{\theta}}}
 
-    When the objective function is the weighted SSE (WSSE):
-    :math:`\text{WSSE} = \frac{1}{2} \left(\mathbf{y} - f(\mathbf{x};\boldsymbol{\theta})\right)^\text{T}
-    \mathbf{W} \left(\mathbf{y} - f(\mathbf{x};\boldsymbol{\theta})\right)`,
+    Similarly, when the objective function is the weighted SSE (WSSE) defined as
+    :math:`\text{WSSE} = \frac{1}{2} \sum_{i=1}^{n}
+    \left(\boldsymbol{y}_{i} - \hat{\boldsymbol{y}}_{i}\right)^\text{T}
+    \boldsymbol{\Sigma}_{\boldsymbol{y}}^{-1}
+    \left(\boldsymbol{y}_{i} - \hat{\boldsymbol{y}}_{i}\right)`,
     the covariance matrix is:
 
     .. math::
-        V_{\boldsymbol{\theta}} = \left(\frac{\partial^2 \text{WSSE}}
+        \boldsymbol{V}_{\boldsymbol{\theta}} = \left(\frac{\partial^2 \text{WSSE}}
         {\partial \boldsymbol{\theta} \partial
         \boldsymbol{\theta}}\right)^{-1}_{\boldsymbol{\theta}
-        = \boldsymbol{\theta}^*}
-
-    Where :math:`V_{\boldsymbol{\theta}}` is the covariance matrix of the estimated
-    parameters, :math:`y` are the observed measured variables, :math:`\hat{y}` are the
-    predicted measured variables, :math:`n` is the number of data points,
-    :math:`\boldsymbol{\theta}` are the unknown parameters, :math:`\boldsymbol{\theta^*}`
-    are the estimates of the unknown parameters, :math:`\mathbf{x}` are the decision
-    variables, and :math:`\mathbf{W}` is a diagonal matrix containing the inverse of the
-    variance of the measurement error, :math:`\sigma^2`. When the standard
-    deviation of the measurement error is not supplied by the user, parmest
-    approximates the variance of the measurement error as
-    :math:`\sigma^2 = \frac{1}{n-l} \sum e_i^2` where :math:`l` is the number of
-    fitted parameters, and :math:`e_i` is the residual for experiment :math:`i`.
+        = \hat{\boldsymbol{\theta}}}
+
+    Where :math:`\boldsymbol{V}_{\boldsymbol{\theta}}` is the covariance matrix of the estimated
+    parameters :math:`\hat{\boldsymbol{\theta}}`, :math:`\boldsymbol{y}` are observations of the measured variables,
+    :math:`\hat{\boldsymbol{y}}` are model predictions of the measured variables,
+    :math:`n` is the number of experiments, :math:`\boldsymbol{x}` are the decision variables, and
+    :math:`\boldsymbol{\Sigma}_{\boldsymbol{y}}` is the measurement error covariance matrix, whose leading diagonal
+    contains the variances of the measurement errors, :math:`\sigma^2`. When the standard deviation of
+    the measurement error is not supplied by the user, parmest approximates :math:`\sigma^2` as:
+    :math:`\hat{\sigma}^2 = \frac{1}{n-l} \sum_{i=1}^{n} e_i^2`, where :math:`l` is the number of fitted parameters,
+    and :math:`e_i` is the residual between :math:`\boldsymbol{y}` and :math:`\hat{\boldsymbol{y}}`
+    for experiment :math:`i`.
 
     In parmest, this method computes the inverse of the Hessian by scaling the
-    objective function (SSE or WSSE) with a constant probability factor.
+    objective function (SSE or WSSE) with a constant probability factor, :math:`\frac{1}{n}`.
 
 2. Finite Difference Method
 
-    In this method, the covariance matrix, :math:`V_{\boldsymbol{\theta}}`, is
+    In this method, the covariance matrix, :math:`\boldsymbol{V}_{\boldsymbol{\theta}}`, is
     calculated by applying the Gauss-Newton approximation to the Hessian,
     :math:`\frac{\partial^2 \text{SSE}}{\partial \boldsymbol{\theta} \partial
     \boldsymbol{\theta}}` or
@@ -53,34 +55,40 @@ methods which have been implemented in parmest.
     leading to:
 
     .. math::
-        V_{\boldsymbol{\theta}} = \left(\sum_{i = 1}^n \mathbf{G}_{i}^{\mathrm{T}} \mathbf{W}
-        \mathbf{G}_{i} \right)^{-1}
+        \boldsymbol{V}_{\boldsymbol{\theta}} = \left(\sum_{i = 1}^n \boldsymbol{G}_{i}^{\text{T}}
+        \boldsymbol{\Sigma}_{\boldsymbol{y}}^{-1} \boldsymbol{G}_{i} \right)^{-1}
 
     This method uses central finite difference to compute the Jacobian matrix,
-    :math:`\mathbf{G}_{i}`, for experiment :math:`i`, which is the sensitivity of
-    the measured variables with respect to the parameters, :math:`\boldsymbol{\theta}`.
-    :math:`\mathbf{W}` is a diagonal matrix containing the inverse of the variance
-    of the measurement errors, :math:`\sigma^2`.
+    :math:`\boldsymbol{G}_{i}`, for experiment :math:`i`, which is the sensitivity of
+    the measured variables :math:`\boldsymbol{y}` with respect to the parameters, :math:`\boldsymbol{\theta}`.
+    :math:`\boldsymbol{\Sigma}_{\boldsymbol{y}}` is the measurement error covariance matrix, whose leading
+    diagonal contains the variances of the measurement errors, :math:`\sigma^2` (a small
+    sketch of this covariance assembly is shown after this list).
 
 3. Automatic Differentiation Method
 
     Similar to the finite difference method, the covariance matrix is calculated as:
 
     .. math::
-        V_{\boldsymbol{\theta}} = \left( \sum_{i = 1}^n \mathbf{G}_{\text{kaug},\, i}^{\mathrm{T}}
-        \mathbf{W} \mathbf{G}_{\text{kaug},\, i} \right)^{-1}
+        \boldsymbol{V}_{\boldsymbol{\theta}} = \left( \sum_{i = 1}^n \boldsymbol{G}_{\text{kaug},\, i}^{\text{T}}
+        \boldsymbol{\Sigma}_{\boldsymbol{y}}^{-1} \boldsymbol{G}_{\text{kaug},\, i} \right)^{-1}
 
-    However, this method uses the model optimality (KKT) condition to compute the
-    Jacobian matrix, :math:`\mathbf{G}_{\text{kaug},\, i}`, for experiment :math:`i`.
+    However, this method uses the model optimality (KKT) condition to compute the Jacobian matrix,
+    :math:`\boldsymbol{G}_{\text{kaug},\, i}`, for experiment :math:`i`.
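+
+As a rough illustration of the finite difference and automatic differentiation
+formulas above (a sketch, not parmest's actual implementation), the covariance
+assembly can be written in plain NumPy. The dimensions, Jacobians, variances,
+and residuals below are made-up stand-ins:
+
+.. code-block:: python
+
+    import numpy as np
+
+    n, m, l = 10, 2, 3  # experiments, measured variables, fitted parameters
+    rng = np.random.default_rng(0)
+
+    # stand-ins for the per-experiment Jacobians G_i (m x l), i.e., the
+    # sensitivities of the measured variables with respect to the parameters
+    G = [rng.normal(size=(m, l)) for _ in range(n)]
+
+    # measurement error covariance matrix: variances on the leading diagonal
+    sigma = np.array([0.1, 0.05])  # assumed standard deviations of the errors
+    Sigma_y_inv = np.diag(1.0 / sigma**2)
+
+    # V_theta = ( sum_i G_i^T Sigma_y^{-1} G_i )^{-1}
+    V_theta = np.linalg.inv(sum(Gi.T @ Sigma_y_inv @ Gi for Gi in G))
+
+    # when the measurement error standard deviations are not supplied, the
+    # variance can instead be approximated from the residuals as
+    # sum(e_i^2) / (n - l)
+    e = rng.normal(size=n)  # stand-in residuals, one per experiment
+    sigma2_hat = (e**2).sum() / (n - l)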
 
 The covariance matrix calculation is only supported with the built-in objective
 functions "SSE" or "SSE_weighted".
 
-In parmest, the covariance matrix can be calculated after defining the
-:class:`~pyomo.contrib.parmest.parmest.Estimator` object and estimating the unknown
-parameters using :class:`~pyomo.contrib.parmest.parmest.Estimator.theta_est`. To
-estimate the covariance matrix, with the default method being "finite_difference", call
-the :class:`~pyomo.contrib.parmest.parmest.Estimator.cov_est` function, e.g.,
+In parmest, the covariance matrix can be computed after creating the
+:class:`~pyomo.contrib.parmest.experiment.Experiment` class and the list of
+:class:`~pyomo.contrib.parmest.experiment.Experiment` objects,
+defining the :class:`~pyomo.contrib.parmest.parmest.Estimator` object, and estimating the unknown parameters using
+:class:`~pyomo.contrib.parmest.parmest.Estimator.theta_est` (these steps were addressed in the
+:ref:`driversection` section).
+
+To estimate the covariance matrix, with the default method being "finite_difference", call
+the :class:`~pyomo.contrib.parmest.parmest.Estimator.cov_est` function as follows:
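+
+A condensed sketch of the overall call sequence is shown below; the experiment
+list and the ``cov_est`` keyword arguments (such as the method name) are
+illustrative assumptions based on the text above, not a verbatim API reference:
+
+.. code-block:: python
+
+    import pyomo.contrib.parmest.parmest as parmest
+
+    # exp_list is the list of Experiment objects created as described in the
+    # parameter estimation section
+    pest = parmest.Estimator(exp_list, obj_function="SSE")
+
+    # estimate the unknown parameters first
+    obj, theta = pest.theta_est()
+
+    # then compute the covariance matrix of the estimated parameters;
+    # "finite_difference" is the default method
+    cov = pest.cov_est(method="finite_difference")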
 
 .. testsetup:: *
     :skipif: not __import__('pyomo.contrib.parmest.parmest').contrib.parmest.parmest.parmest_available

From f645ea400876c36d904c55bd6f36332b320f86b3 Mon Sep 17 00:00:00 2001
From: slilonfe5
Date: Tue, 25 Nov 2025 13:36:25 -0500
Subject: [PATCH 2/2] Minor updates to covariance.rst file

---
 .../explanation/analysis/parmest/covariance.rst | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst b/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
index 7869e543e56..93333b0980e 100644
--- a/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
+++ b/doc/OnlineDocs/explanation/analysis/parmest/covariance.rst
@@ -34,13 +34,12 @@ methods which have been implemented in parmest.
     Where :math:`\boldsymbol{V}_{\boldsymbol{\theta}}` is the covariance matrix of the estimated
     parameters :math:`\hat{\boldsymbol{\theta}}`, :math:`\boldsymbol{y}` are observations of the measured variables,
     :math:`\hat{\boldsymbol{y}}` are model predictions of the measured variables,
-    :math:`n` is the number of experiments, :math:`\boldsymbol{x}` are the decision variables, and
-    :math:`\boldsymbol{\Sigma}_{\boldsymbol{y}}` is the measurement error covariance matrix, whose leading diagonal
-    contains the variances of the measurement errors, :math:`\sigma^2`. When the standard deviation of
-    the measurement error is not supplied by the user, parmest approximates :math:`\sigma^2` as:
-    :math:`\hat{\sigma}^2 = \frac{1}{n-l} \sum_{i=1}^{n} e_i^2`, where :math:`l` is the number of fitted parameters,
-    and :math:`e_i` is the residual between :math:`\boldsymbol{y}` and :math:`\hat{\boldsymbol{y}}`
-    for experiment :math:`i`.
+    :math:`n` is the number of experiments, and :math:`\boldsymbol{\Sigma}_{\boldsymbol{y}}` is the measurement error
+    covariance matrix, whose leading diagonal contains the variances of the measurement errors,
+    :math:`\sigma^2`. When the standard deviation of the measurement error is not supplied by the user, parmest
+    approximates :math:`\sigma^2` as: :math:`\hat{\sigma}^2 = \frac{1}{n-l} \sum_{i=1}^{n} e_i^2`, where :math:`l` is
+    the number of fitted parameters, and :math:`e_i` is the residual between the data and model prediction for
+    experiment :math:`i`.
 
     In parmest, this method computes the inverse of the Hessian by scaling the
     objective function (SSE or WSSE) with a constant probability factor, :math:`\frac{1}{n}`.