
Assumption least-squares

Understanding the distribution allows us to calculate the expected values of random variables that are normally and independently distributed. In least squares multiple regression, or in calibration work in general, there is a basic assumption that the error in the response variable is random and normally distributed, with a variance that follows a χ² (chi-squared) distribution. [Pg.202]
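
A minimal Python sketch of this assumption, using invented calibration data: an ordinary least-squares straight line is fitted to response versus concentration, and the residuals, which under the assumption should scatter normally about zero with constant variance, are summarized.

```python
import numpy as np

# Hypothetical calibration data: concentration x and instrument response y.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.02, 1.05, 1.98, 3.10, 3.95, 5.03])

# Ordinary least squares assumes the error in y is random, normally
# distributed and of constant variance; x is taken as error-free.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

print("slope =", slope, "intercept =", intercept)
print("residual standard deviation =", residuals.std(ddof=2))
```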

The generalized inverse method represents another formulation of multilinear least-squares analysis. All the usual assumptions involved with least squares apply. [Pg.428]

It is usually advisable to plot the observed pairs of y versus x to support the linearity assumption and to detect potential outliers. Suspected outliers can be omitted from the least-squares fit and then subsequently tested on the basis of the resulting fit. [Pg.502]
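
A hedged sketch of this omit-and-test procedure with made-up data; the three-sigma cut-off on the standardized residual is an assumed rule of thumb, not one prescribed by the source.

```python
import numpy as np

# Hypothetical data with one suspected outlier (index 3).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.0, 6.1, 11.5, 9.9, 12.2])

suspect = 3
keep = np.arange(len(x)) != suspect

# Refit with the suspected outlier omitted ...
slope, intercept = np.polyfit(x[keep], y[keep], 1)
resid = y[keep] - (slope * x[keep] + intercept)
s = resid.std(ddof=2)                          # residual scatter of the trimmed fit

# ... then test the omitted point against that fit (assumed 3-sigma rule).
r = y[suspect] - (slope * x[suspect] + intercept)
print("standardized residual of suspect point:", r / s)
```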

The influence coefficient method examines relative displacements rather than absolute displacements. No assumptions about perfect balancing conditions are made. Its effectiveness is not influenced by damping, by motions of the locations at which readings are taken, or by initially bent rotors. The least-squares technique for data processing is applied to find an... [Pg.595]

The validity of least squares model fitting depends on four principal assumptions concerning the random error term, which is inherent in the use of least squares. The assumptions, as illustrated by Bacon and Downie [6], are as follows ... [Pg.174]

Assumption 3 The variance of the random error term is constant over the ranges of the operating variables used to collect the data. When the variance of the random error term varies over the operating range, either weighted least squares must be used or a transformation of the data must be made. Note, however, that this assumption may also be violated by certain transformations of the model. [Pg.175]
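
A minimal sketch of the weighted alternative mentioned in Assumption 3, assuming hypothetical data whose error standard deviation grows in proportion to x; each point is weighted by the reciprocal of its assumed standard deviation so that the weighted residuals have constant variance.

```python
import numpy as np

# Hypothetical data: the scatter in y grows with x (non-constant variance).
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([1.1, 2.3, 3.8, 8.5, 15.0])
sigma = 0.05 * x              # assumed: error standard deviation proportional to x

# Weighted least squares: weights 1/sigma make the weighted errors homoscedastic.
slope, intercept = np.polyfit(x, y, 1, w=1.0 / sigma)
print("slope =", slope, "intercept =", intercept)
```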

The standard way to answer the above question would be to compute the probability distribution of the parameter and, from it, to compute, for example, the 95% confidence region on the parameter estimate obtained. We would, in other words, find a region of parameter values such that the probability that we are correct in asserting that the true value of the parameter lies in that region is 95%. If we assumed that the parameter estimates are at least approximately normally distributed around the true parameter value (which is asymptotically true in the case of least squares under some mild regularity assumptions), then it would be sufficient to know the parameter dispersion (variance-covariance matrix) in order to be able to compute approximate ellipsoidal confidence regions. [Pg.80]
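
A sketch of that last step, assuming a two-parameter estimate and a variance-covariance matrix that are both invented here: the approximate ellipsoidal 95% confidence region is the set of parameter values whose quadratic-form distance from the estimate does not exceed the corresponding chi-squared quantile.

```python
import numpy as np
from scipy.stats import chi2

theta_hat = np.array([2.0, 0.5])              # hypothetical parameter estimates
cov = np.array([[0.04, 0.01],                 # hypothetical variance-covariance
                [0.01, 0.09]])                # (dispersion) matrix

# Approximate 95% region:
#   (theta - theta_hat)' inv(cov) (theta - theta_hat) <= chi2(0.95, p)
p = len(theta_hat)
bound = chi2.ppf(0.95, df=p)

def in_confidence_region(theta):
    d = theta - theta_hat
    return d @ np.linalg.solve(cov, d) <= bound

print(in_confidence_region(np.array([2.1, 0.6])))   # inside the ellipse -> True
```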

However, the requirement of exact knowledge of all covariance matrices (Σi, i = 1, 2,..., N) is rather unrealistic. Fortunately, in many situations of practical importance, we can make certain quite reasonable assumptions about the structure of Σi that allow us to obtain the ML estimates using Equation 2.21. This approach can actually aid us in establishing guidelines for the selection of the weighting matrices Qi in least squares estimation. [Pg.17]

The determinant criterion is very powerful and it should be used to refine the parameter estimates obtained with least squares estimation if our assumptions about the covariance matrix are suspect. [Pg.19]

The converged parameter values represent the Least Squares (LS), Weighted LS or Generalized LS estimates depending on the choice of the weighting matrices Qi. Furthermore, if certain assumptions regarding the statistical distribution of the residuals hold, these parameter values could also be the Maximum Likelihood (ML) estimates. [Pg.53]
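
The effect of the weighting-matrix choice can be sketched as follows for a linear model; the helper gls() and the data are hypothetical, but setting Q to the identity, to a diagonal matrix, or to an inverse error covariance matrix corresponds to the LS, Weighted LS and Generalized LS choices mentioned above.

```python
import numpy as np

def gls(X, y, Q=None):
    """Minimize (y - X theta)' Q (y - X theta).
    Q = I            -> ordinary least squares
    Q diagonal       -> weighted least squares
    Q = inv(cov(e))  -> generalized least squares
    """
    if Q is None:
        Q = np.eye(len(y))
    XtQ = X.T @ Q
    return np.linalg.solve(XtQ @ X, XtQ @ y)

# Hypothetical linear model y = X theta + error.
X = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]])
y = np.array([1.2, 1.9, 2.4, 3.1])

print(gls(X, y))                                    # LS
print(gls(X, y, Q=np.diag([1.0, 1.0, 4.0, 4.0])))   # WLS with assumed weights
```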

The major disadvantage of the integral method is the difficulty in computing an estimate of the standard error in the estimation of the specific rates. Obviously, all linear least squares estimation routines automatically provide the standard error of estimate and other statistical information. However, the computed statistics are based on the assumption that there is no error present in the independent variable. [Pg.125]

This assumption can be relaxed when the experimental error in the independent variable is much smaller than the error present in the measurements of the dependent variable. In our case the assumption of simple linear least squares implies that ∫Xv dt is known precisely. Although we do know that there are errors in the measurement of Xv, the polynomial fitting and the subsequent integration provide a certain amount of data filtering, which could allow us to assume that the experimental error in ∫Xv dt is negligible compared to that present in S(ti) or P(ti). [Pg.126]
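
A brief sketch of this filtering step with invented Xv(t) measurements: a polynomial is fitted to the cell-concentration data and integrated analytically to give ∫Xv dt at each sampling time.

```python
import numpy as np

# Hypothetical viable-cell concentration measurements Xv(t).
t  = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # time, h
xv = np.array([0.5, 0.9, 1.6, 2.6, 3.9, 5.1])     # Xv, g/L

# Polynomial smoothing of Xv(t), then analytic integration of the fitted
# polynomial to obtain the filtered quantity: the integral of Xv dt.
coeffs = np.polyfit(t, xv, 3)
integ  = np.polyint(coeffs)
int_xv = np.polyval(integ, t) - np.polyval(integ, t[0])
print(int_xv)
```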

This is equivalent to assuming a constant standard error in the measurement of the jth response variable, and at the same time that the standard errors of different response variables are proportional to the average value of the variables. This is a "safe" assumption when no other information is available, and least squares estimation pays equal attention to the errors from different response variables (e.g., concentration versus pressure or temperature measurements). [Pg.147]

This is equivalent to assuming that the standard error in the ith measurement of the response variable is proportional to its value, again a rather "safe" assumption as it forces least squares to pay equal attention to all data points. [Pg.148]
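
Both weighting conventions (per-response-variable average value in the previous excerpt, per-measurement value here) amount to choosing the weights as reciprocals of the assumed variances, as in the following sketch with invented concentration and pressure data.

```python
import numpy as np

# Hypothetical measurements of two response variables on very different scales.
conc  = np.array([0.10, 0.25, 0.50, 0.80])      # mol/L
press = np.array([101.0, 140.0, 185.0, 230.0])  # kPa

# Standard error proportional to the average value of each response variable:
w_conc_avg,  w_press_avg = 1.0 / conc.mean()**2, 1.0 / press.mean()**2

# Standard error proportional to each individual measured value:
w_conc_pt,   w_press_pt  = 1.0 / conc**2,        1.0 / press**2

print(w_conc_avg, w_press_avg)
print(w_conc_pt, w_press_pt)
```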

The unweighted least squares analysis is based on the assumption that the best value of the rate constant k is the one that minimizes the sum of the squares of the residuals. In the general case one should regard the zero-time point as an adjustable constant in order to avoid undue weighting of the initial point. An analysis of this type gives the following expressions for first- and second-order rate constants... [Pg.55]
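
A minimal sketch of such a fit for the first-order case, with invented concentration-time data; the zero-time concentration is treated as an adjustable constant alongside k, and scipy's general-purpose curve_fit stands in for the closed-form expressions referred to above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order decay data.
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])      # min
c = np.array([1.00, 0.61, 0.37, 0.14, 0.05])    # mol/L

# The zero-time concentration c0 is an adjustable constant, not fixed at c[0],
# to avoid giving the initial point undue weight.
def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, t, c, p0=[1.0, 0.1])
print("c0 =", c0, "k =", k)
```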

For frequency calculations one usually starts out with a set of approximate existing force constants (e.g., taken over from similar, already-treated molecules under the preliminary, tentative assumption of transferability), and subsequently varies the force constants in a systematic way by means of a least-squares procedure until the calculated frequencies (square roots of the eigenvalues of Eq. (10)) agree satisfactorily with the experimental values. Clearly, if necessary, the analytical form of the force field must also be modified in the course of this fitting process. [Pg.172]
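
The refinement loop can be sketched schematically for a hypothetical two-coordinate problem; the G matrix, the force-field parameterization and the "experimental" frequencies below are all invented for illustration, and the square roots of the eigenvalues of G F play the role of the calculated frequencies.

```python
import numpy as np
from scipy.optimize import least_squares

G = np.diag([1.0, 0.5])                 # assumed (fixed) kinetic-energy matrix
nu_exp = np.array([1.8, 0.9])           # "experimental" frequencies, arbitrary units

def calc_freqs(params):
    f, f12 = params
    F = np.array([[f, f12], [f12, f]])  # trial force-constant matrix
    lam = np.linalg.eigvals(G @ F).real # eigenvalues analogous to Eq. (10)
    lam = np.clip(lam, 0.0, None)       # guard against negative trial eigenvalues
    return np.sqrt(np.sort(lam)[::-1])

fit = least_squares(lambda p: calc_freqs(p) - nu_exp, x0=[2.0, 0.1])
print("refined force constants:", fit.x)
```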

Certain assumptions underlie least squares computations, such as the independence of the unobservable errors εi, a constant error variance, and lack of error in the x's (Draper and Smith, 1998). If the model represents the data adequately, the residuals should possess characteristics that agree with these basic assumptions. The analysis of residuals is thus a way of checking that none of the assumptions underlying least squares optimization is violated. For example, if the model fits well, the residuals should be randomly distributed about the value of y predicted by the model. Systematic departures from randomness indicate that the model is unsatisfactory; examination of the patterns formed by the residuals can provide clues about how the model can be improved (Box and Hill, 1967; Draper and Hunter, 1967). [Pg.60]
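
A small sketch of such a residual check: a straight line is fitted to deliberately curved, invented data, and the systematic (U-shaped) residual pattern that signals an inadequate model becomes visible.

```python
import numpy as np

# Invented data generated from a quadratic trend plus small noise.
x = np.linspace(0.0, 10.0, 11)
y = 0.5 * x**2 + np.array([0.3, -0.2, 0.1, 0.4, -0.3,
                           0.2, -0.1, 0.3, -0.4, 0.1, 0.2])

# Fit an (inadequate) straight-line model and inspect the residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Randomly scattered residuals would support the model; here they run
# positive - negative - positive, a systematic departure from randomness.
print(np.round(residuals, 2))
```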

The assumption that all variables are measured is usually not true, as in practice some of them are not measured and must be estimated. In the previous section the decomposition of the linear data reconciliation problem involving only measured variables was discussed, leading to a reduced least squares problem. In the following section,... [Pg.99]
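
One common way to arrive at such a reduced problem is to project out the unmeasured variables; the sketch below (hypothetical constraint matrices, projection through a null-space basis) is one such construction and is not necessarily the specific decomposition used in the source.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical linear constraints A1 x + A2 u = 0, with x measured and u unmeasured.
A1 = np.array([[1.0, -1.0,  0.0],
               [0.0,  1.0, -1.0]])
A2 = np.array([[-1.0],
               [ 0.0]])

# Projection matrix P with P A2 = 0 (rows span the left null space of A2).
P = null_space(A2.T).T

# Reduced constraints involving only the measured variables x.
A_red = P @ A1
print(A_red)
```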

As was shown, the conventional method for data reconciliation is that of weighted least squares, in which the adjustments to the data are weighted by the inverse of the measurement noise covariance matrix so that the model constraints are satisfied. The main assumption of the conventional approach is that the errors follow a normal (Gaussian) distribution. When this assumption is satisfied, conventional approaches provide unbiased estimates of the plant states. The presence of gross errors violates the assumptions of the conventional approach and makes the results invalid. [Pg.218]
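
A minimal sketch of this conventional reconciliation for linear constraints, with invented balance equations and measurement covariances; the adjustment is the closed-form weighted least-squares (Lagrange-multiplier) solution.

```python
import numpy as np

# Hypothetical linear mass balances A x = 0 with all stream flows measured.
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
x_meas = np.array([10.2, 6.1, 3.6, 6.3])        # measured flows
Sigma  = np.diag([0.04, 0.02, 0.02, 0.02])      # measurement-noise covariance

# Minimize (x - x_meas)' inv(Sigma) (x - x_meas) subject to A x = 0:
r = A @ x_meas                                  # constraint residuals
x_hat = x_meas - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, r)

print("reconciled flows:", np.round(x_hat, 3))
print("balance check   :", np.round(A @ x_hat, 10))
```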

If in addition to the binary assumption on P x, sensor errors are normally distributed and independent across the data sets, the problem becomes our typical nonlinear least squares data reconciliation problem ... [Pg.220]

The principle of using least squares may still be applicable in fitting the best curve, if the assumptions of normality, independence, and reasonably error-free measurement of response are valid. [Pg.936]

