
Covariance matrices: general least squares

The general least-squares treatment requires that the generalized sum of squares of the residuals, the variance σ², be minimized. By the geometry of error space, this is tantamount to the requirement that the residual vector be orthogonal to fit space, which is guaranteed when the scalar products of all fit vectors (the rows of Xᵀ) with the residual vector ê vanish, XᵀM⁻¹ê = 0, where M⁻¹ is the metric of error space. The successful least-squares treatment [34] yields the following minimum-variance linear unbiased estimators for the variables, their covariance matrix, the variance of the fit, the residuals, and their covariance matrix ... [Pg.73]
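The excerpt breaks off before the estimators themselves. For a linear model y = Xβ + e with error covariance M (so that M⁻¹ is the metric of error space referred to above), the standard generalized least-squares results take the form below; this is the textbook statement, not necessarily the exact notation of Ref. [34]:

```latex
\hat{\beta} = \bigl(X^{\mathsf{T}} M^{-1} X\bigr)^{-1} X^{\mathsf{T}} M^{-1} y ,
\qquad
\operatorname{Cov}(\hat{\beta}) = \bigl(X^{\mathsf{T}} M^{-1} X\bigr)^{-1} ,
\\[4pt]
\hat{e} = y - X\hat{\beta} ,
\qquad
\operatorname{Cov}(\hat{e}) = M - X\bigl(X^{\mathsf{T}} M^{-1} X\bigr)^{-1} X^{\mathsf{T}} ,
\qquad
s^{2} = \frac{\hat{e}^{\mathsf{T}} M^{-1} \hat{e}}{n - p}
```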

In this section, we present an iterative algorithm in the spirit of the generalized least squares approach (Goodwin and Payne, 1977), for simultaneous estimation of an FSF process model and an autoregressive (AR) noise model. The unique features of our algorithm are the application of the PRESS statistic introduced in Chapter 3 to both process and noise model structure selection, to ensure whiteness of the residuals, and the use of covariance matrix information to derive statistical confidence bounds for the final process step response estimates. An important assumption in this algorithm is that the noise term v(k) can be described by an AR time series model given by... [Pg.119]
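The full FSF/PRESS procedure lies beyond this excerpt, but the generalized least squares iteration it builds on is easy to sketch: fit the process model by ordinary least squares, fit an AR model to the residuals, prewhiten the data with that AR polynomial, and refit. The sketch below assumes a linear-in-parameters regressor matrix Phi; all names are illustrative, and this is not the authors' algorithm.

```python
import numpy as np

def fit_ar(e, order):
    """Least-squares fit of an AR model e[k] = a1*e[k-1] + ... + a_p*e[k-p] + w[k]."""
    X = np.column_stack([e[order - i - 1:len(e) - i - 1] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, e[order:], rcond=None)
    return a

def prewhiten(x, a):
    """Apply the inverse-noise filter: z[k] = x[k] - a1*x[k-1] - ... - a_p*x[k-p]."""
    order = len(a)
    z = x[order:].astype(float).copy()
    for i in range(order):
        z -= a[i] * x[order - i - 1:len(x) - i - 1]
    return z

def iterative_gls(Phi, y, ar_order=2, n_iter=5):
    """Alternate LS fits of the process model and an AR model of its residuals."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # initial OLS estimate
    for _ in range(n_iter):
        a = fit_ar(y - Phi @ theta, ar_order)              # AR noise model
        Phi_f = np.column_stack([prewhiten(Phi[:, j], a) for j in range(Phi.shape[1])])
        theta, *_ = np.linalg.lstsq(Phi_f, prewhiten(y, a), rcond=None)
    return theta, a
```

In the actual algorithm, the AR order and the process model structure would each be chosen by the PRESS statistic rather than fixed in advance as here.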

The adjustment of measurements to compensate for random errors involves solving a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of the measurements, with the covariance matrix of the measurement errors as weights. Thus, this matrix is essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in estimating this matrix lies in the analysis of the serial and cross correlation of the data. [Pg.25]
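In the linear case the reconciliation problem has a closed-form solution. A minimal sketch, assuming linear balance constraints A x = 0 and a known measurement-error covariance Sigma (names and data are illustrative):

```python
import numpy as np

def reconcile(x_meas, A, Sigma):
    """Closed-form linear data reconciliation: the adjusted x minimizes
    (x - x_meas)' Sigma^-1 (x - x_meas) subject to A @ x = 0."""
    r = A @ x_meas                          # balance residuals of the raw data
    S = A @ Sigma @ A.T                     # covariance of those residuals
    return x_meas - Sigma @ A.T @ np.linalg.solve(S, r)

# Example: one mass balance, feed - product1 - product2 = 0
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([0.10, 0.05, 0.05])         # measurement-error covariance
x = np.array([10.3, 6.1, 4.4])              # raw measurements (imbalance -0.2)
print(reconcile(x, A, Sigma))               # adjusted flows close the balance
```

The adjustment is apportioned among the measurements according to their variances, which is why a trustworthy covariance matrix is central to the method.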

The methods of Chapter 6 are not appropriate for multiresponse investigations unless the responses have known relative precisions and independent, unbiased normal distributions of error. These restrictions come from the error model in Eq. (6.1-2). Single-response models were treated under these assumptions by Gauss (1809, 1823) and, less completely, by Legendre (1805), co-discoverer of the method of least squares. Aitken (1935) generalized weighted least squares to multiple responses with a specified error covariance matrix; his method was extended to nonlinear parameter estimation by Bard and Lapidus (1968) and Bard (1974). However, least squares is not suitable for multiresponse problems unless information is given about the error covariance matrix; we may consider such applications at another time. [Pg.141]
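Aitken's estimator has the compact closed form β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y. A minimal numpy sketch, computed by Cholesky whitening for numerical stability, assuming a known positive-definite error covariance Sigma:

```python
import numpy as np

def aitken_gls(X, y, Sigma):
    """Aitken's generalized least squares, beta = (X'S^-1 X)^-1 X'S^-1 y,
    computed via Cholesky whitening instead of explicit matrix inversion."""
    L = np.linalg.cholesky(Sigma)          # Sigma = L L'
    Xw = np.linalg.solve(L, X)             # whitened design L^-1 X
    yw = np.linalg.solve(L, y)             # whitened response L^-1 y
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    cov_beta = np.linalg.inv(Xw.T @ Xw)    # = (X' Sigma^-1 X)^-1
    return beta, cov_beta
```

Whitening reduces the problem to ordinary least squares on L⁻¹X and L⁻¹y, since the transformed errors have identity covariance.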

Meiler et al. [27] adopted the general method of optimizing nonlinear experimental designs by minimizing the covariance matrix of the least-squares... [Pg.592]
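Since a matrix cannot be minimized directly, such designs minimize a scalar function of the covariance matrix; a common choice is the D-criterion, det[(XᵀX)⁻¹], equivalently maximizing det(XᵀX). A small illustration of evaluating that criterion for candidate designs (not Meiler et al.'s specific procedure):

```python
import numpy as np

def d_criterion(X):
    """D-optimality score det(X'X); maximizing it minimizes the 'size'
    (generalized variance) of the least-squares covariance matrix."""
    return np.linalg.det(X.T @ X)

def design(levels):
    """Design matrix for a one-factor quadratic model y = b0 + b1*x + b2*x^2."""
    x = np.asarray(levels, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])

print(d_criterion(design([-1.0, 0.0, 1.0])))   # well-spread levels: score 4.0
print(d_criterion(design([-0.2, 0.0, 0.2])))   # clustered levels: far smaller score
```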

The Poisson regression model is an example of the generalized linear model. The maximum likelihood estimates of the coefficients of the predictors can be found by iteratively reweighted least squares. This also yields the covariance matrix of the normal distribution that matches the curvature of the likelihood... [Pg.228]
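A minimal sketch of IRLS for Poisson regression with the canonical log link; the returned covariance is the inverse Fisher information (XᵀWX)⁻¹ evaluated at the fit (illustrative code, not from the source):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Iteratively reweighted least squares for Poisson regression (log link).
    Returns the ML coefficients and their approximate covariance (X'WX)^-1."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                  # fitted means under the log link
        z = eta + (y - mu) / mu           # working response
        XtW = X.T * mu                    # X' diag(W), with W = mu for Poisson
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    XtW = X.T * np.exp(X @ beta)
    cov = np.linalg.inv(XtW @ X)          # inverse Fisher information at the fit
    return beta, cov
```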

Under quite general assumptions on the noise, some regularity conditions on the model F(u₀(t), θ), and the excitation (choice of u₀(t)), consistency of the least squares estimator is proven. Asymptotically (for the number of data points going to infinity), the covariance matrix C of the estimated model parameters is given by ... [Pg.29]
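The expression itself is truncated in the excerpt. For white measurement noise of variance σ², the standard asymptotic covariance of a (nonlinear) least-squares estimate takes the form below, with J the Jacobian of the model output with respect to the parameters; this is the generic textbook result, not necessarily the source's exact expression:

```latex
C \;\approx\; \sigma^{2}\,\bigl(J^{\mathsf{T}} J\bigr)^{-1} ,
\qquad
J_{ij} \;=\; \left.\frac{\partial F\bigl(u_{0}(t_{i}),\theta\bigr)}{\partial \theta_{j}}\right|_{\hat{\theta}}
```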

The standard error values provide only approximate confidence-interval ranges for general nonlinear least squares data fitting. They are estimated from the so-called covariance matrix. In this approach, one parameter at a time is varied while the remaining parameters are re-optimized to minimize the squared-error criterion. This is the only easy calculation one can make in the general nonlinear case with an unknown distribution of errors. For a detailed discussion of standard errors the reader is referred to the literature (such as Numerical Recipes in C, Cambridge University Press, 1988). [Pg.382]
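In practice these standard errors are read off the diagonal of the estimated covariance matrix. A minimal sketch using scipy's curve_fit, whose returned pcov is exactly such a covariance estimate (the model and data here are purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    """Illustrative exponential-decay model y = a * exp(-k t)."""
    return a * np.exp(-k * t)

t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(0)
y = model(t, 2.0, 0.8) + 0.05 * rng.standard_normal(t.size)

popt, pcov = curve_fit(model, t, y, p0=[1.0, 1.0])
stderr = np.sqrt(np.diag(pcov))            # standard errors from the covariance matrix
for name, p, s in zip(["a", "k"], popt, stderr):
    print(f"{name} = {p:.3f} +/- {s:.3f}")
```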

