Big Chemical Encyclopedia


Variance, general least squares

The general least-squares treatment requires that the generalized sum of squares of the residuals, the variance σ², be minimized. By the geometry of error space, this is tantamount to the requirement that the residual vector be orthogonal to fit space, which is guaranteed when the scalar products of all fit vectors (the rows of Xᵀ) with the residual vector e vanish, XᵀM⁻¹e = 0, where M⁻¹ is the metric of error space. The successful least-squares treatment [34] yields the following minimum-variance linear unbiased estimators for the variables, their covariance matrix, the variance of the fit, the residuals, and their covariance matrix ... [Pg.73]
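A minimal sketch of the estimators that follow from this orthogonality condition, in generic GLS notation (y for the observations, X the design matrix, n observations, p parameters; these symbols are assumed here and are not necessarily those of [34]):

```latex
% Orthogonality of the residual vector e = y - X\hat{\beta} to fit space
X^{\mathsf{T}} M^{-1}\bigl(\mathbf{y} - X\hat{\boldsymbol\beta}\bigr) = \mathbf{0}
\;\Longrightarrow\;
\hat{\boldsymbol\beta} = \bigl(X^{\mathsf{T}} M^{-1} X\bigr)^{-1} X^{\mathsf{T}} M^{-1}\,\mathbf{y},
\qquad
\operatorname{Cov}\bigl(\hat{\boldsymbol\beta}\bigr) = \hat{\sigma}^{2}\bigl(X^{\mathsf{T}} M^{-1} X\bigr)^{-1},
\qquad
\hat{\sigma}^{2} = \frac{\mathbf{e}^{\mathsf{T}} M^{-1}\mathbf{e}}{n - p}
```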

The particular choice of a residual variance model should be based on the nature of the response function. Sometimes φ is unknown and must be estimated from the data. Once a structural model and residual variance model are chosen, the choice then becomes how to estimate θ, the structural model parameters, and φ, the residual variance model parameters. One commonly advocated method is the method of generalized least-squares (GLS). First it will be assumed that φ is known and then that assumption will be relaxed. In the simplest case, assume that θ is known, in which case the weights are given by... [Pg.132]
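A minimal sketch of such a GLS iteration, assuming a linear structural model and a power-of-the-mean variance model (the model, starting weights, and fitting choices below are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

# GLS sketch: alternate between estimating theta (structural model) and
# phi (residual variance model), starting from unweighted least squares.
# Assumed variance model: Var(y_i) = sigma^2 * mu_i**(2*phi), with mu_i > 0.
def fit_gls(x, y, n_iter=5):
    w = np.ones_like(y)                      # step 0: unit weights (OLS)
    for _ in range(n_iter):
        # weighted fit of y = theta0 + theta1*x
        # (np.polyfit minimizes sum(w**2 * r**2), hence the sqrt)
        theta1, theta0 = np.polyfit(x, y, 1, w=np.sqrt(w))
        mu = theta0 + theta1 * x             # predicted means
        resid = y - mu
        # estimate phi by regressing log|residual| on log(mean)
        phi = np.polyfit(np.log(mu), np.log(np.abs(resid) + 1e-12), 1)[0]
        w = mu ** (-2.0 * phi)               # updated weights ~ 1/Var(y_i)
    return (theta0, theta1), phi
```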

Understanding the distribution allows us to calculate the expected values of random variables that are normally and independently distributed. In least squares multiple regression, or in calibration work in general, there is a basic assumption that the error in the response variable is random and normally distributed, with a variance that follows a χ² distribution. [Pg.202]
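A quick numerical check of that claim with simulated data: for normal, independent errors the scaled sample variance (n − 1)s²/σ² follows a χ² distribution with n − 1 degrees of freedom (the sample size and σ below are arbitrary):

```python
import numpy as np

# Simulate many small samples of N(0, sigma^2) errors and check that
# (n-1)*s^2/sigma^2 has mean n-1 and variance 2*(n-1), as chi-squared predicts.
rng = np.random.default_rng(0)
n, sigma = 10, 0.5
stat = [(n - 1) * np.var(rng.normal(0.0, sigma, n), ddof=1) / sigma**2
        for _ in range(100_000)]
print(np.mean(stat), np.var(stat))   # approximately 9 and 18
```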

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ); the other general adjustment method has been transformation of the data, and the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
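A small illustration of the two adjustments mentioned, weighting and the log-log transformation, using made-up calibration data (the 1/x² weighting is one common choice, assumed here rather than taken from the cited work):

```python
import numpy as np

# Illustrative calibration data (concentration x, response y)
x = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
y = np.array([0.12, 0.48, 1.05, 4.9, 10.3, 49.0])

# (a) weighted least squares with 1/x^2 weights
# (np.polyfit minimizes sum(w**2 * r**2), so w = 1/x gives 1/x^2 weighting)
slope_w, intercept_w = np.polyfit(x, y, 1, w=1.0 / x)

# (b) log-log transformation, then ordinary least squares (fits y = a * x^b)
b, log_a = np.polyfit(np.log(x), np.log(y), 1)

print(slope_w, intercept_w, np.exp(log_a), b)
```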

Least squares (LS) estimation minimizes the sum of squared deviations, comparing observed values to values predicted by a curve with particular parameter values. Weighted LS (WLS) can take into account differences in the variances of residuals; generalized LS (GLS) can take into account covariances of residuals as well as differences in weights. Cases of LS estimation include the following ... [Pg.35]
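Written out in a generic notation (observations yᵢ, model f(xᵢ; θ), weights wᵢ, residual covariance matrix V, all assumed here for illustration), the three objective functions are:

```latex
S_{\mathrm{LS}}(\theta)  = \sum_i \bigl(y_i - f(x_i;\theta)\bigr)^{2}, \qquad
S_{\mathrm{WLS}}(\theta) = \sum_i w_i \bigl(y_i - f(x_i;\theta)\bigr)^{2}, \qquad
S_{\mathrm{GLS}}(\theta) = \bigl(\mathbf{y} - \mathbf{f}(\theta)\bigr)^{\mathsf{T}} V^{-1} \bigl(\mathbf{y} - \mathbf{f}(\theta)\bigr)
```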

Traditionally, the determination of a difference in costs between groups has been made using Student's t-test or analysis of variance (ANOVA) (univariate analysis) and ordinary least-squares regression (multivariable analysis). The recent proposal of the generalized linear model promises to improve the predictive power of multivariable analyses. [Pg.49]
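As a hedged sketch of what such a generalized-linear-model analysis might look like (fabricated toy data; the gamma family with a log link is a common, but here merely assumed, choice for right-skewed cost data):

```python
import numpy as np
import statsmodels.api as sm

# Toy cost data: a binary treatment indicator and right-skewed costs
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 200).astype(float)
cost = rng.gamma(shape=2.0, scale=np.exp(6.0 + 0.3 * group))
X = sm.add_constant(group)

ols = sm.OLS(cost, X).fit()      # classical multivariable analysis
glm = sm.GLM(cost, X,            # generalized linear model
             family=sm.families.Gamma(sm.families.links.Log())).fit()
print(ols.params, glm.params)
```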

Analysis of variance appropriate for a crossover design on the pharmacokinetic parameters, using the general linear models procedures of SAS or an equivalent program, should be performed, with examination of period, sequence, and treatment effects. The 90% confidence intervals for the estimates of the difference between the test and reference least squares means for the pharmacokinetic parameters (AUC0-t, AUC0-inf, Cmax) should be calculated using the two one-sided t-tests procedure. [Pg.370]
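A sketch of the confidence-interval step on log-transformed parameters (a simple two-group layout is assumed here; a real crossover analysis would take the residual error from the ANOVA just described):

```python
import numpy as np
from scipy import stats

def ci90_ratio(log_test, log_ref):
    """90% CI for the test/reference ratio of least-squares means, log scale;
    equivalent to the two one-sided t-tests (TOST) at the 5% level."""
    n1, n2 = len(log_test), len(log_ref)
    diff = np.mean(log_test) - np.mean(log_ref)
    s2 = ((n1 - 1) * np.var(log_test, ddof=1)
          + (n2 - 1) * np.var(log_ref, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(s2 * (1.0 / n1 + 1.0 / n2))
    t = stats.t.ppf(0.95, n1 + n2 - 2)
    return np.exp(diff - t * se), np.exp(diff + t * se)  # back-transformed
```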

The parameters A, k, and b must be estimated from the data. The general problem of parameter estimation is to estimate a parameter, θ, given a number of samples, xᵢ, drawn from a population that has a probability distribution P(x, θ). It can be shown that there is a minimum variance bound (MVB), known as the Cramér-Rao inequality, that limits the accuracy of any method of estimating θ [55]. There are a number of methods that approach the MVB and give unbiased estimates of θ for large sample sizes [55]. Among the more popular of these methods are maximum likelihood estimation (MLE) and least-squares estimation (LS). The MLE... [Pg.34]
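For a single parameter the Cramér-Rao inequality can be stated as follows (L(θ) denotes the likelihood of the sample; the notation is assumed here, not taken from [55]):

```latex
\operatorname{Var}\!\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = -\,\mathrm{E}\!\left[\frac{\partial^{2}\ln L(\theta)}{\partial\theta^{2}}\right],
\qquad
L(\theta) = \prod_{i=1}^{n} P(x_i,\theta)
```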

General Aspects: The third basic assumption in regression analysis is that the variance should be constant over the calibration range. This is called homoskedasticity. In analytical chemistry, the variance often increases with increasing concentration. When the variance is not constant over the calibration range, the regression parameters for the slope and the intercept are still unbiased and consistent, but the least squares solution is no longer efficient. This means that the standard... [Pg.143]
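A short simulation illustrating the point (toy data with standard deviation proportional to concentration): the unweighted slope remains unbiased but scatters more than the properly weighted one.

```python
import numpy as np

# Compare OLS and WLS slope estimates when sd(y) grows with x
rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 20)
slopes_ols, slopes_wls = [], []
for _ in range(5000):
    y = 2.0 * x + rng.normal(0.0, 0.1 * x)                 # heteroskedastic errors
    slopes_ols.append(np.polyfit(x, y, 1)[0])              # unweighted
    slopes_wls.append(np.polyfit(x, y, 1, w=1.0 / x)[0])   # 1/x^2 weighting
print(np.mean(slopes_ols), np.std(slopes_ols))  # unbiased, larger spread
print(np.mean(slopes_wls), np.std(slopes_wls))  # unbiased, smaller spread
```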

The average component numbers from these unweighted fits are in close agreement with the results from the weighted least squares analysis, as expected. The standard deviation, and hence the unreliability of the results from a single series, is generally higher in the estimation of component number from the slope than from the intercept. This observation is attributed to the variances in slope and intercept which, in turn, are functions of the peak capacities and the number of counted peaks (6). The variance in the estimation from the intercept increases with the value m, and a reversal of the trend is observed in Set E. [Pg.22]

Equation (4.2) is called a residual variance model, but it is not a very general one. In this case, the model states that random, unexplained variability is a constant. Two methods are usually used to estimate θ: least-squares (LS) and maximum likelihood (ML). In the case where ε ~ N(0, σ²), the LS estimates are equivalent to the ML estimates. This chapter will deal with the case of more general variance models, when a constant variance does not apply. Unfortunately, most of the statistical literature deals with estimation and model selection theory for the structural model, and there is far less theory regarding the choice of and model selection for residual variance models. [Pg.125]
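The equivalence is easy to see from the normal log-likelihood (generic notation assumed): θ enters only through the residual sum of squares, so maximizing L over θ is the same as minimizing that sum.

```latex
-\ln L(\theta,\sigma^{2}) \;=\; \frac{n}{2}\,\ln\!\bigl(2\pi\sigma^{2}\bigr)
  \;+\; \frac{1}{2\sigma^{2}} \sum_{i=1}^{n} \bigl(y_i - f(x_i;\theta)\bigr)^{2}
```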

Carroll and Ruppert (1988) and Davidian and Giltinan (1995) present comprehensive overviews of parameter estimation in the face of heteroscedasticity. In general, three methods are used to provide precise, unbiased parameter estimates: weighted least-squares (WLS), maximum likelihood, and data and/or model transformations. Johnston (1972) has shown that as the departure from constant variance increases, the benefit from using methods that deal with heteroscedasticity increases. The difficulty in using WLS or variations of WLS is that additional burdens are placed on the model, in that the method makes the additional assumption that the variance of the observations is either known or can be estimated. In WLS, the goal is not to minimize the OLS objective function, i.e., the residual sum of squares,... [Pg.132]
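Presumably the truncated sentence continues toward the weighted objective, in which each squared residual is scaled by its (known or estimated) variance; in generic notation:

```latex
S_{\mathrm{OLS}}(\theta) = \sum_{i=1}^{n} \bigl(y_i - f(x_i;\theta)\bigr)^{2}
\qquad\text{versus}\qquad
S_{\mathrm{WLS}}(\theta) = \sum_{i=1}^{n} \frac{\bigl(y_i - f(x_i;\theta)\bigr)^{2}}{\operatorname{Var}(y_i)}
```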

