
Properties of the Least-Squares Estimation

The expected value of b0 is E[b0] = β0, and the expected value of b1 is E[b1] = β1. The least-squares estimators b0 and b1 are unbiased and have the minimum variance among all possible linear unbiased estimators. [Pg.31]
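To make the unbiasedness property concrete, the following minimal sketch (not from the source; the parameter values and data are simulated and hypothetical) checks numerically that the averages of b0 and b1 over many simulated samples approach the true β0 and β1.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 5.0, -0.4, 0.3   # hypothetical true parameters
x = np.array([0.0, 15.0, 30.0, 45.0, 60.0])

b0s, b1s = [], []
for _ in range(10_000):
    # simulate one data set from the straight-line model with normal errors
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=x.size)
    # closed-form least-squares slope and intercept
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    b0s.append(b0)
    b1s.append(b1)

# averages approach 5.0 and -0.4, i.e. E[b0] = beta0 and E[b1] = beta1
print(np.mean(b0s), np.mean(b1s))
```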

Example 2.1 An experimenter challenges a benzalkonium chloride disinfectant with 1 × 10 Staphylococcus aureus bacteria in a series of timed exposures. As noted earlier, exponential microbial colony counts are customarily linearized via a log10 scale transformation, which has been performed in this example. The resultant data are presented in Table 2.1. [Pg.31]

The researcher would like to perform regression analysis on the data to construct a chemical microbial inactivation curve, where x is the exposure time in seconds and y is the log10 colony-forming units recovered. [Pg.31]

Note that the data are replicated in triplicate for each exposure time, x. First, we compute the slope of the data (see the sketch below). [Pg.31]
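Table 2.1 is not reproduced in this excerpt, so the sketch below uses hypothetical triplicate log10 counts purely to illustrate the computation; the formulas b1 = Sxy/Sxx and b0 = ȳ − b1·x̄ are the standard least-squares expressions.

```python
import numpy as np

# exposure times in seconds, each replicated in triplicate
x = np.repeat([0, 15, 30, 45, 60], 3).astype(float)
# hypothetical log10 CFU recovered (placeholders for Table 2.1)
y = np.array([6.1, 6.0, 6.2,  5.0, 5.1, 4.9,  4.1, 4.0, 4.2,
              3.0, 3.1, 2.9,  2.0, 2.1, 1.9])

Sxy = np.sum((x - x.mean()) * (y - y.mean()))
Sxx = np.sum((x - x.mean()) ** 2)
b1 = Sxy / Sxx                  # slope
b0 = y.mean() - b1 * x.mean()   # intercept
print(b0, b1)                   # b1 < 0: counts fall with exposure time
```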

The negative sign of b1 means the regression line estimated by ŷ descends from the y-intercept as x increases. [Pg.32]


Finite-Sample Properties of the Least Squares Estimator... [Pg.7]

Since a and b are statistical estimates based on a sample, each has an error term (the standard error of the estimate) associated with it. Among all estimates of A and B, the least squares estimates have the smallest error terms. This is the third important property of the least squares estimates. [Pg.392]

The first three assumptions are required to obtain an understanding of the properties of the least-squares estimate, while the last assumption allows for hypothesis testing to be performed on the estimates, as well as making the regression estimates equal to the maximum-likelihood parameter estimates. [Pg.94]

The statistical properties of the least squares estimate θ of the FSF model parameters given in Equation (4.26) are used in this section to develop statistical confidence bounds for frequency response and step response models estimated using the FSF model structure. We begin by stating the key assumptions on which all of this analysis is based and then summarize the key properties of the least squares estimator, which may be found in Ljung (1987). [Pg.115]

Based on these assumptions, we can state the following properties of the least squares estimate θ obtained from Equation (4.26) ... [Pg.115]
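The FSF-specific property list is truncated in this excerpt. As a stand-in, the following sketch applies the standard least-squares results for a generic linear model y = Xθ + e: the estimated parameter covariance s²(XᵀX)⁻¹ and the resulting 95% confidence bounds. The model, data, and dimensions are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 50, 3
# hypothetical regressor matrix with an intercept column
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
theta_true = np.array([1.0, 0.5, -0.3])
y = X @ theta_true + rng.normal(0.0, 0.2, size=n)

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta_hat
s2 = resid @ resid / (n - p)            # unbiased noise-variance estimate
cov = s2 * np.linalg.inv(X.T @ X)       # estimated parameter covariance
half = stats.t.ppf(0.975, n - p) * np.sqrt(np.diag(cov))
for th, h in zip(theta_hat, half):
    print(f"{th:.3f} +/- {h:.3f}")      # 95% confidence bounds per parameter
```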

The estimates L1-HB9)1 or L1-KS2.1)1 have standard errors less than 9 percent larger than the least squares estimate when the errors have a Gaussian distribution, and much smaller standard errors than least squares for the more typical longer-tailed distributions. For very large data sets where the L1 estimate may be prohibitively expensive, the estimates LS-KB9)6 or LS-KS2.1)6 might be useful alternatives. While the properties of these fitting techniques have not yet been explored for situations of any substantial complexity, their performance is, so far, very encouraging. It would thus seem sensible to try them in more complex situations and see how they perform. [Pg.43]

Based on the assumptions that the data (the y's) have a normal distribution and that the standard deviation of the y's is the same at each x, the least squares estimates a and b of A and B are unique and unbiased. It is these properties that make the least squares estimates of A and B attractive. [Pg.392]

The least-squares solution itself does not depend on the probability distribution of y; it is simply a minimum-distance estimate. Later in this chapter, it will be shown, however, that its sampling properties are most easily described when the measurements are normally distributed. [Pg.250]

The dimensionality of the model, a, is estimated so as to give the model the best possible predictive properties. Geometrically, this corresponds to fitting an a-dimensional hyperplane to the object points in the measurement space. The fitting is done using the least squares criterion, i.e., the sum of squared residuals is minimized for the class data set. [Pg.85]
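A minimal sketch of this idea, under the assumption that the a-dimensional hyperplane is spanned by the first a principal components of the centered class data (all data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))        # hypothetical class data (objects x variables)
a = 2                               # chosen model dimensionality

Xc = X - X.mean(axis=0)             # center the measurement space
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:a]                          # loadings spanning the a-dim hyperplane
residuals = Xc - (Xc @ P.T) @ P     # part of the data off the hyperplane
print(np.sum(residuals ** 2))       # the minimized sum of squared residuals
```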

Though the maximum likelihood principle is no less intuitive than the least squares method itself, it enables statisticians to derive estimation criteria for any known distribution, and to prove in general that the estimates have nice properties such as asymptotic unbiasedness (ref. 1). In particular, the method of least absolute deviations introduced in Section 1.8.2 is also a maximum likelihood estimator, assuming a different distribution for the error. [Pg.142]
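As an illustration (the straight-line model and simulated Laplace errors are assumptions, not the source's example), the sketch below fits the same data by least squares and by least absolute deviations, the maximum-likelihood criterion for Laplace-distributed errors:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.7 * x + rng.laplace(0.0, 0.5, size=x.size)   # long-tailed errors

# Least squares: minimize the sum of squared residuals (closed form).
b1_ls, b0_ls = np.polyfit(x, y, 1)

# Least absolute deviations: minimize the sum of |residuals| numerically
# (the objective is non-smooth, so a derivative-free method is used).
lad = minimize(lambda b: np.sum(np.abs(y - b[0] - b[1] * x)),
               x0=[b0_ls, b1_ls], method="Nelder-Mead")
print((b0_ls, b1_ls), tuple(lad.x))
```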

Since the final form of a maximum likelihood estimator depends on the assumed error distribution, we have partially answered the question of why there are different criteria in use, but we have to go further. Maximum likelihood estimates are only guaranteed to have their expected properties if the error distribution behind the sample is the one assumed in the derivation of the method, but in many cases they are relatively insensitive to deviations from it. Since the error distribution is known only in rare circumstances, this property of robustness is very desirable. The least squares method is relatively robust, and hence its use is not restricted to normally distributed errors. Thus, we can drop condition (vi) when talking about the least squares method, though it is then no longer associated with the maximum likelihood principle. There exist, however, more robust criteria that are superior for errors with distributions deviating significantly from the normal one, as we will discuss... [Pg.142]

In order to compute a two step GLS estimate, we can use either the original variance estimates based on the separate least squares estimates or those obtained above in doing the LM test. Since both pairs are consistent, both FGLS estimators will have all of the desirable asymptotic properties. For our estimator, we... [Pg.59]
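The source's specific model is not reproduced here; the following sketch assumes a simple groupwise-heteroscedastic setup to show the two-step logic: separate least-squares fits give consistent variance estimates, which then supply the weights for the feasible GLS fit.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
X = np.column_stack([np.ones(2 * n), rng.normal(size=2 * n)])
beta = np.array([1.0, 2.0])
sig = np.repeat([0.5, 2.0], n)          # two hypothetical variance regimes
y = X @ beta + rng.normal(0.0, sig)

# Step 1: separate OLS fits give consistent variance estimates per group.
s2 = []
for g in (slice(0, n), slice(n, 2 * n)):
    b, *_ = np.linalg.lstsq(X[g], y[g], rcond=None)
    r = y[g] - X[g] @ b
    s2.append(r @ r / (n - X.shape[1]))

# Step 2: weight each observation by 1/sigma_g and refit (FGLS).
w = 1.0 / np.sqrt(np.repeat(s2, n))
beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta_fgls)
```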

Even if the hypotheses that allowed us to deduce the method of least squares from the method of maximum likelihood are not verified, it may be proved that least squares estimates possess very interesting properties (see, for example, ref. 202). [Pg.311]

A least squares estimate is no guarantee whatsoever that the model parameters will have good properties, i.e., that they will measure what we want them to measure, viz. the influence of the variables. The quality of the model parameters is governed by the properties of the dispersion matrix (XᵀX)⁻¹, and hence it depends ultimately on the experimental design used to determine the model. The requirements for a good design will be discussed in Chapter 5. [Pg.58]
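A small numeric sketch of this point (the two designs are hypothetical): for a straight-line model, the diagonal of (XᵀX)⁻¹ gives the parameter-variance factors, and a design with well-spread points yields smaller values than one with bunched points.

```python
import numpy as np

def dispersion(x):
    X = np.column_stack([np.ones_like(x), x])   # straight-line model matrix
    return np.linalg.inv(X.T @ X)               # dispersion matrix (X'X)^(-1)

narrow = np.array([4.0, 4.5, 5.0, 5.5, 6.0])    # design points bunched together
wide = np.array([0.0, 2.5, 5.0, 7.5, 10.0])     # design points spread out

print(np.diag(dispersion(narrow)))   # larger values: poorer parameter quality
print(np.diag(dispersion(wide)))     # smaller variances from the better design
```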

The fact that the same result was obtained with the OLS estimates depends on the assumption of normality and on the residual variance being independent of the model parameters. Different assumptions, or a variance model that depends on the value of the observation, would lead to different ML estimates. Least squares estimates focus entirely on the structural model in finding the best parameter estimates, whereas ML estimates are a compromise between fitting the structural model and fitting the variance model. ML estimates are desirable because they have the following properties (among others) ... [Pg.60]
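To illustrate the compromise (the proportional-error variance model below is an assumption, not the source's example), this sketch fits a one-parameter structural model by maximizing the normal likelihood with Var(y) = (σμ)², and compares the result with the OLS estimate, which ignores the variance model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(1, 10, 60)
mu_true = 3.0 * x
y = mu_true * (1 + rng.normal(0.0, 0.15, size=x.size))   # proportional error

def negloglik(params):
    b, sigma = params
    if sigma <= 0:
        return np.inf                  # keep the scale parameter positive
    mu = b * x                         # structural model
    var = (sigma * mu) ** 2            # variance model: spread grows with mu
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

ml = minimize(negloglik, x0=[1.0, 0.5], method="Nelder-Mead")
b_ols = np.sum(x * y) / np.sum(x * x)  # OLS slope ignores the variance model
print(ml.x, b_ols)
```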

