
Residual variance model least-squares

Equation (4.2) is called a residual variance model, but it is not a very general one. In this case, the model states that the random, unexplained variability is a constant. Two methods are usually used to estimate θ: least-squares (LS) and maximum likelihood (ML). In the case where ε ~ N(0, σ²), the LS estimates are equivalent to the ML estimates. This chapter will deal with the case of more general variance models, where a constant variance does not apply. Unfortunately, most of the statistical literature deals with estimation and model selection theory for the structural model; there is far less theory regarding the choice of, and model selection for, residual variance models. [Pg.125]
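
A minimal sketch of the equivalence noted above: fitting a straight-line structural model with constant error variance both by minimizing the residual sum of squares and by maximizing a normal likelihood gives the same structural parameter estimates. The data, model, and starting values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data from a simple structural model y = a + b*x with constant error variance
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

def sse(theta):
    # Ordinary least-squares objective: the residual sum of squares
    a, b = theta
    return np.sum((y - (a + b * x)) ** 2)

def negloglik(params):
    # Normal negative log-likelihood with constant variance sigma^2 (constant terms dropped)
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + y.size * np.log(sigma)

ls_fit = minimize(sse, x0=[0.0, 0.0])
ml_fit = minimize(negloglik, x0=[0.0, 0.0, 0.0])
print("LS estimates:", ls_fit.x)        # intercept and slope
print("ML estimates:", ml_fit.x[:2])    # agree with the LS values under constant variance
```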

RESIDUAL VARIANCE MODEL PARAMETER ESTIMATION USING WEIGHTED LEAST-SQUARES... [Pg.132]

However, when the number of replicates is small, as is usually the case, the estimated variance can be quite erroneous and unstable. Nonlinear regression estimates obtained with this approach are more variable than their unweighted least-squares counterparts unless the number of replicates at each level is 10 or more. For this reason this method cannot be supported; the danger of unstable variance estimates can be avoided if a parametric residual variance model can be found. [Pg.132]

The particular choice of a residual variance model should be based on the nature of the response function. Sometimes φ is unknown and must be estimated from the data. Once a structural model and a residual variance model are chosen, the choice then becomes how to estimate θ, the structural model parameters, and φ, the residual variance model parameters. One commonly advocated method is the method of generalized least-squares (GLS). First it will be assumed that φ is known and then that assumption will be relaxed. In the simplest case, assume that θ is known, in which case the weights are given by... [Pg.132]
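
A rough sketch of the GLS idea outlined here, under illustrative assumptions: the structural model is an exponential function, the residual variance is taken to vary as a power of the predicted mean, and the power is re-estimated from the residuals at each pass before the structural parameters are refit with the corresponding weights. The model, the power-of-the-mean variance form, and the update scheme are assumptions chosen for the example, not the source's specific equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, theta1, theta2):
    # Illustrative structural model (exponential decay)
    return theta1 * np.exp(-theta2 * x)

def gls_fit(x, y, n_iter=3):
    # Stage 0: unweighted (OLS) fit of the structural model parameters theta
    theta, _ = curve_fit(f, x, y, p0=[10.0, 0.5])
    phi = 1.0                                        # residual variance model parameter (power of the mean)
    for _ in range(n_iter):
        yhat = f(x, *theta)
        # Crude estimate of phi: slope of log|residual| against log(prediction)
        resid = np.abs(y - yhat) + 1e-12
        phi = np.polyfit(np.log(yhat), np.log(resid), 1)[0]
        # Refit theta with weights proportional to 1 / yhat^(2*phi)
        theta, _ = curve_fit(f, x, y, p0=theta, sigma=yhat ** phi, absolute_sigma=False)
    return theta, phi

# Usage with simulated proportional-error data
rng = np.random.default_rng(1)
x = np.linspace(0.5, 10, 40)
true = f(x, 12.0, 0.4)
y = true + rng.normal(scale=0.1 * true)
print(gls_fit(x, y))
```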

Another common fitting algorithm found in the pharmacokinetic literature is extended least-squares (ELS), wherein θ, the structural model parameters, and φ, the residual variance model parameters, are estimated simultaneously (Sheiner and Beal, 1985). The objective function in ELS is the same as the objective function in PL... [Pg.134]
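
The sketch below shows what an ELS-type objective can look like: a sum of squared residuals, each scaled by the modeled variance, plus the log of that variance, minimized over θ and φ jointly. The structural model, the variance model g, and the starting values are assumptions made for the example; the exact objective used by Sheiner and Beal may differ in constants and parameterization.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, theta):
    # Illustrative structural model
    return theta[0] * np.exp(-theta[1] * x)

def g(x, theta, phi):
    # Illustrative residual variance model: Var(e) = exp(phi0) * |f(x, theta)|^phi1
    return np.exp(phi[0]) * np.abs(f(x, theta)) ** phi[1]

def els_objective(params, x, y):
    # ELS-type objective (proportional to -2 log-likelihood under normality):
    # weighted squared residuals plus the log of the modeled variance
    theta, phi = params[:2], params[2:]
    var = g(x, theta, phi)
    resid = y - f(x, theta)
    return np.sum(resid ** 2 / var + np.log(var))

# Usage with simulated proportional-error data and assumed starting values
rng = np.random.default_rng(2)
x = np.linspace(0.5, 10, 60)
y = f(x, [10.0, 0.3]) * (1 + rng.normal(scale=0.15, size=x.size))
fit = minimize(els_objective, x0=[8.0, 0.2, np.log(0.02), 2.0], args=(x, y), method="Nelder-Mead")
print(fit.x)   # theta1, theta2, phi0, phi1 estimated simultaneously
```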

Certain assumptions underlie least squares computations, such as the independence of the unobservable errors εᵢ, a constant error variance, and lack of error in the x's (Draper and Smith, 1998). If the model represents the data adequately, the residuals should possess characteristics that agree with these basic assumptions. The analysis of residuals is thus a way of checking that one or more of the assumptions underlying least squares optimization is not violated. For example, if the model fits well, the residuals should be randomly distributed about the value of y predicted by the model. Systematic departures from randomness indicate that the model is unsatisfactory; examination of the patterns formed by the residuals can provide clues about how the model can be improved (Box and Hill, 1967; Draper and Hunter, 1967). [Pg.60]
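
A short sketch of the residual analysis described above: fit a line by ordinary least squares and plot the residuals against the fitted values. A patternless horizontal band is consistent with the assumptions, while curvature or a funnel shape suggests lack of fit or non-constant variance. The simulated data are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 80)
y = 1.5 + 2.0 * x + rng.normal(scale=1.0, size=x.size)

# Fit a straight-line model by ordinary least squares
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
residuals = y - fitted

# Residuals-versus-fitted plot: systematic patterns indicate an unsatisfactory model
plt.scatter(fitted, residuals)
plt.axhline(0.0, linestyle="--")
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.title("Residual diagnostic plot")
plt.show()
```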

For fitting such a set of existing data, a much more reasonable approach has been used (P2). For the naphthalene oxidation system, major reactants and products are symbolized in Table III. In this table, letters in bold type represent species for which data were used in estimating the frequency factors and activation energies contained in the body of the table. Note that the rate equations have been reparameterized (Section III,B) to allow a better estimation of the two parameters. For the first entry of the table, then, a model involving only the first-order decomposition of naphthalene to phthalic anhydride and naphthoquinone was assumed. The parameter estimates obtained by a nonlinear least-squares fit of these data are seen to be relatively precise when compared to the standard errors of these estimates, s0. The residual mean square, using these best parameter estimates, is contained in the last column of the table. This quantity should estimate the variance of the experimental error if the model adequately fits the data (Section IV). The remainder of Table III, then, presents similar results for increasingly complex models, each of which entails several first-order decompositions. [Pg.119]

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables Mean PLR or Mean PLA and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, the errors are uncorrelated with each other, and they are homoscedastic (constant variance among residuals) with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
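
A minimal sketch of that workflow with invented data (not the PLA/PLR measurements): fit an OLS multiple regression, then check that the residuals have mean near zero and are uncorrelated with the predicted values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)   # simulated response

# Ordinary least squares via a design matrix with an intercept column
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ beta
residuals = y - predicted

print("coefficients               :", beta)
print("mean of residuals          :", residuals.mean())                            # ~0
print("corr(residuals, predicted) :", np.corrcoef(residuals, predicted)[0, 1])     # ~0 for OLS
```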

For the model in the previous exercise, what is the probability limit of s² = (1/(n − 1)) Σ (yᵢ − ȳ)²? Note that this is the least squares estimate of the residual variance. It is also n times the conventional estimator of the variance of the OLS estimator, Est.Var[ȳ] = s²(X′X)⁻¹ = s²/n. How does this compare to the true value you found in part (b) of Exercise 1? Does the conventional estimator produce the correct estimate of the true asymptotic variance of the least squares estimator? [Pg.41]

The ordinary least squares estimates are given above. The estimates of the disturbance variances are based on the residuals from this regression. For the three models, the disturbance variances are estimated as follows ... [Pg.45]

For the model in Exercise 1, suppose ε is normally distributed with mean zero and variance σ²(1 + (γx)²). Show that σ² and γ² can be consistently estimated by a regression of the least squares residuals on a constant and x². Is this estimator efficient? [Pg.45]
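
A simulation sketch of the estimator suggested in this exercise, interpreting it as a regression of the squared OLS residuals on a constant and x²: under Var(ε) = σ²(1 + (γx)²), the intercept of that auxiliary regression estimates σ² and the slope estimates σ²γ², so γ² can be recovered as their ratio. The data-generating values are invented.

```python
import numpy as np

# Simulate the heteroscedastic model: Var(e_i) = sigma^2 * (1 + (gamma * x_i)^2)
rng = np.random.default_rng(5)
n, sigma, gamma = 5000, 1.5, 0.8
x = rng.uniform(0, 3, size=n)
e = rng.normal(scale=sigma * np.sqrt(1 + (gamma * x) ** 2))
y = 2.0 + 1.0 * x + e

# Step 1: ordinary least squares fit and its residuals
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2: regress the squared residuals on a constant and x^2.
# Under the model, E[e^2] = sigma^2 + (sigma^2 * gamma^2) * x^2.
Z = np.column_stack([np.ones(n), x ** 2])
a, *_ = np.linalg.lstsq(Z, resid ** 2, rcond=None)
sigma2_hat = a[0]
gamma2_hat = a[1] / a[0]
print("sigma^2 estimate:", sigma2_hat, "(true", sigma ** 2, ")")
print("gamma^2 estimate:", gamma2_hat, "(true", gamma ** 2, ")")
```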

The most commonly used PCA algorithm involves sequential determination of each principal component (or each matched pair of score and loading vectors) via an iterative least squares process, followed by subtraction of that component's contribution to the data. Each sequential PC is determined such that it explains the most remaining variance in the X-data. This process continues until the number of PCs (A) equals the number of original variables (M), at which time 100% of the variance in the data is explained. However, data compression does not really occur unless the user chooses a number of PCs that is much lower than the number of original variables (A << M). This necessarily involves ignoring a small fraction of the variation in the original X-data, which is contained in the PCA model residual matrix E. [Pg.245]
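
A sketch of this sequential extract-and-deflate scheme in the spirit of the NIPALS algorithm (an assumption about which iterative least-squares variant is meant): each pass alternates least-squares updates of a score and a loading vector, the converged component is subtracted from X, and whatever remains after the chosen number of components forms the residual matrix E.

```python
import numpy as np

def nipals_pca(X, n_components, n_iter=500, tol=1e-10):
    """Sequential PCA: extract one score/loading pair at a time, then deflate."""
    X = X - X.mean(axis=0)                         # mean-center the data
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, [np.argmax(X.var(axis=0))]]       # start from the highest-variance column
        for _ in range(n_iter):
            p = X.T @ t / (t.T @ t)                # loading: least-squares fit of X on t
            p = p / np.linalg.norm(p)              # normalize the loading vector
            t_new = X @ p                          # score: least-squares fit of X on p
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - t @ p.T                            # deflation: remove this component's contribution
        scores.append(t)
        loadings.append(p)
    return np.hstack(scores), np.hstack(loadings)

# Usage: the residual matrix E holds whatever the chosen components do not explain
data = np.random.default_rng(6).normal(size=(30, 5))
T, P = nipals_pca(data, n_components=2)
Xc = data - data.mean(axis=0)
E = Xc - T @ P.T
print("fraction of variance explained:", 1 - (E ** 2).sum() / (Xc ** 2).sum())
```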

Our choice of model differs from that of Tschernitz et al. (1946), who preferred Model d over Model h on the basis of a better fit. The difference lies in the weightings used. Tschernitz et al. transformed each model to get a linear least-squares problem (a necessity for their desk calculations) but inappropriately used weights of 1 for the transformed observations and response functions. For comparison, we refitted the data with the same linearized models, but with weights w_u derived for each model and each event according to the variance expression in Eq. (6.8-1) for ln r. The residual sums of squares thus found were comparable to those in Table 6.5, confirming the superiority of Model h among those tested. [Pg.122]

The results of the study are summarized in Table 7.5, along with a brief account of the features of each p_j-parameter model. Each model was fitted by least squares to 283 observations of the functions ln N_iu, where N_iu is the measured axial flux of species i in the uth event, in g-moles per second per cm² of particle cross section. This corresponds to using the same variance for each response function ln N_iu. Lacking replicates, we compare the models according to Eq. (7.5-16) with a variance estimate s² = 0.128/(283 − 6), the residual mean-square deviation of the observations... [Pg.160]

The objective of any modeling exercise is to place a calculated line (based on some relevant mathematical model) as close to the collected data as possible. The difference between an individual data point and the calculated line (in a vertical direction, i.e., with no error in the x terms) is called the residual. The sum of residuals could be zero even with very large residuals for individual points if the negative residuals canceled the positive values. An absolute residual might solve this problem, but more usefully, the squared residual will also achieve the desired result. This is the least-squares criterion. An extension of this is to weight each data point by the inverse of its estimated variance. This yields the objective function, WSS, shown in Eq. (1), calculated for n data points ... [Pg.2758]
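
A small sketch of the WSS objective just described, with each squared residual weighted by the inverse of its estimated variance; the numerical values and the proportional-variance weighting are illustrative, and the exact form of Eq. (1) should be taken from the source.

```python
import numpy as np

def weighted_ss(y_obs, y_pred, variances):
    """Weighted sum of squares: each squared residual divided by its estimated variance."""
    residuals = np.asarray(y_obs) - np.asarray(y_pred)   # vertical distance from the calculated line
    weights = 1.0 / np.asarray(variances)                # weight = inverse of the estimated variance
    return np.sum(weights * residuals ** 2)

# Usage with illustrative numbers
y_obs = np.array([10.2, 5.1, 2.6, 1.2])
y_pred = np.array([10.0, 5.0, 2.5, 1.25])
var = 0.1 * y_pred ** 2                                  # e.g., a proportional (constant-CV) variance model
print(weighted_ss(y_obs, y_pred, var))
```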

Full factorial designs: Such designs are the best choice when the number of variables is four or less. A full four-variable factorial design gives estimates of all main effects and two-variable interaction effects, and also an estimate of the experimental error variance. This is obtained from the residual sum of squares after a least squares fit of a second-order interaction model, see (Example: Catalytic hydrogenation, p. 112). A full factorial design should be used if individual estimates of the interaction effects are desired. Otherwise, it is recommended first to run a half fraction 2^(4-1) (I = 1234), and then run the complementary fraction, if necessary (see Example: Synthesis of a semicarbazide, p. 135). [Pg.203]
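
A sketch of the computation described: build the 16-run 2^4 design in coded units, fit a second-order (two-factor interaction) model by least squares, and use the residual sum of squares divided by its degrees of freedom as the error variance estimate. The simulated responses stand in for real experimental data.

```python
import itertools
import numpy as np

# Full 2^4 factorial design in coded (-1, +1) units: 16 runs
design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

# Model matrix: intercept, 4 main effects, 6 two-factor interactions (11 columns)
cols = [np.ones(len(design))] + [design[:, i] for i in range(4)]
cols += [design[:, i] * design[:, j] for i, j in itertools.combinations(range(4), 2)]
X = np.column_stack(cols)

# Simulated responses (in a real study these are the measured responses)
rng = np.random.default_rng(7)
y = 50 + 3 * design[:, 0] - 2 * design[:, 1] + 1.5 * design[:, 0] * design[:, 1] \
    + rng.normal(scale=1.0, size=len(design))

# Least-squares fit; the residual sum of squares estimates the error variance
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
dof = len(y) - X.shape[1]                     # 16 runs - 11 parameters = 5 degrees of freedom
print("coefficients:", np.round(b, 2))
print("error variance estimate:", resid @ resid / dof)
```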

Let Q̂_m and U_m, m = 1,..., M, be the M complete supplemented-data estimates and their associated variances for a parameter Q, calculated from the M data sets completed by repeated supplementations under one model. For instance, Q̂ = β̂, the least squares estimate of β, and U is the weighted residual mean square error. The repeated supplementation estimate of Q is the mean of the complete-data estimates ... [Pg.834]
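
A sketch of the pooling step with invented numbers: the combined estimate is the mean of the M complete-data estimates, and the accompanying variance is formed from the within- and between-supplementation components in the usual Rubin-type way (the total-variance formula is assumed here, since the excerpt breaks off before giving it).

```python
import numpy as np

def pool_estimates(q_hats, u_vars):
    """Combine M complete-data estimates and their variances (standard multiple-imputation pooling)."""
    q_hats = np.asarray(q_hats, dtype=float)
    u_vars = np.asarray(u_vars, dtype=float)
    M = len(q_hats)
    q_bar = q_hats.mean()                      # repeated-supplementation estimate: mean of the estimates
    w = u_vars.mean()                          # within-supplementation variance
    b = q_hats.var(ddof=1)                     # between-supplementation variance
    total_var = w + (1 + 1 / M) * b            # assumed Rubin-type total variance
    return q_bar, total_var

# Usage with illustrative numbers for M = 5 supplemented data sets
print(pool_estimates([1.02, 0.97, 1.05, 0.99, 1.01], [0.04, 0.05, 0.04, 0.05, 0.04]))
```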

The fact that the same result was obtained with the OLS estimates depends on the assumption of normality and on the residual variance not depending on the model parameters. Different assumptions, or a variance model that depends on the value of the observation, would lead to different ML estimates. Least squares estimates focus completely on the structural model in finding the best parameter estimates, whereas ML estimates are a compromise between fitting the structural model well and fitting the variance model well. ML estimates are desirable because they have the following properties (among others) ... [Pg.60]

Carroll and Ruppert (1988) and Davidian and Giltinan (1995) present comprehensive overviews of parameter estimation in the face of heteroscedasticity. In general, three methods are used to provide precise, unbiased parameter estimates: weighted least-squares (WLS), maximum likelihood, and data and/or model transformations. Johnston (1972) has shown that as the departure from constant variance increases, the benefit from using methods that deal with heteroscedasticity increases. The difficulty in using WLS or variations of WLS is that it places an additional burden on the model: the method makes the further assumption that the variance of the observations is either known or can be estimated. In WLS, the goal is not to minimize the OLS objective function, i.e., the residual sum of squares,... [Pg.132]

A determinant criterion is used to obtain least-squares estimates of model parameters. This entails minimizing the determinant of the matrix of cross products of the various residuals. The maximum likelihood estimates of the model parameters are thus obtained without knowledge of the variance-covariance matrix. The residuals e_i, e_j, and e_k correspond to the differences between predicted and actual values of the dependent variables at the different values of the independent variable (u = t0 to u = tn), for the ith, jth, and kth experiments (A, B, and C), respectively. It is possible to construct an error covariance matrix with elements v_ij ... [Pg.30]
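
A sketch of the determinant criterion for a multiresponse fit, using an invented two-response first-order kinetic model: form the residual matrix with one column per response and minimize the determinant of its cross-product matrix rather than a simple sum of squares.

```python
import numpy as np
from scipy.optimize import minimize

def model(t, theta):
    # Illustrative two-response kinetic model A -> B -> C; predicted A and B concentrations
    k1, k2 = theta
    A = np.exp(-k1 * t)
    B = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.column_stack([A, B])

def det_criterion(theta, t, Y):
    # Residual matrix E (one column per response); minimize det(E'E)
    E = Y - model(t, theta)
    return np.linalg.det(E.T @ E)

# Simulated multiresponse data and a determinant-criterion fit
rng = np.random.default_rng(8)
t = np.linspace(0.2, 10, 25)
Y = model(t, [0.7, 0.3]) + rng.normal(scale=0.01, size=(t.size, 2))
fit = minimize(det_criterion, x0=[0.6, 0.2], args=(t, Y), method="Nelder-Mead")
print("parameter estimates:", fit.x)
```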

