
Least squares estimate

The fact that the same result was obtained with the OLS estimates depends on the assumption of normality and on the residual variance not depending on the model parameters. Different assumptions, or a variance model that depends on the value of the observation, would lead to different ML estimates. Least-squares estimates focus entirely on the structural model in finding the best parameter estimates, whereas ML estimates are a compromise between fitting the structural model and fitting the variance model. ML estimates are desirable because they have the following properties (among others) ... [Pg.60]
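
As a rough sketch of this distinction (not from the source; the linear model, the proportional-error variance model, and all values below are illustrative assumptions), OLS fits only the structural model, while ML under a non-constant variance model couples the structural and variance parameters:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative data: straight-line structural model, with a variance model
# in which the standard deviation is proportional to the mean.
x = np.linspace(1, 10, 50)
mu = 2.0 + 0.5 * x
y = mu + rng.normal(scale=0.1 * mu)      # sd = 10% of the mean

# OLS: fits the structural model only (constant-variance assumption).
slope_ols, intercept_ols = np.polyfit(x, y, 1)

# ML under the proportional-error variance model: the likelihood couples
# the structural parameters (a, b) to the variance parameter (cv).
def neg_log_lik(theta):
    a, b, cv = theta
    m = a + b * x
    s = cv * m                           # variance model: sd proportional to mean
    if cv <= 0 or np.any(s <= 0):
        return np.inf
    return np.sum(np.log(s) + 0.5 * ((y - m) / s) ** 2)

ml = minimize(neg_log_lik, x0=[1.0, 1.0, 0.2], method="Nelder-Mead")
print("OLS slope, intercept:", slope_ols, intercept_ols)
print("ML (a, b, cv):", ml.x)
```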

Another interesting item worth commenting on is the significant difference between the Weibull modulus m for the three-point flexure bar rupture data (m = 11.96, σθ = 612.7 MPa) and that of the thermally shocked disk (m = 6.91, σθ = 345.9 MPa) experimental rupture data (shown in Figure 5) as determined by CARES/Life maximum likelihood parameter estimation. Least squares regression (using an Excel spreadsheet) on the CARES/Life disk predictions... [Pg.458]
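
For illustration, a two-parameter Weibull fit by maximum likelihood can be sketched with scipy; the strength values below are invented, and CARES/Life itself is not involved:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical flexure-strength data, MPa (illustrative values only).
strengths = np.array([520., 555., 580., 600., 615., 630., 640., 660., 675., 700.])

# Two-parameter Weibull fit by maximum likelihood: fixing the location at 0
# makes the shape parameter the Weibull modulus m and the scale parameter
# the characteristic strength sigma_theta.
m, loc, sigma_theta = weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma_theta:.1f} MPa")
```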

For this example, the model given by eqn (3.11) was estimated (least-squares method) as ... [Pg.192]

Constants A and c are estimated by the least-squares fitting procedure. [Pg.50]

Let us consider that the number of echoes M and the incident wavelet (e.g., a normalized corner echo) are known. The least-squares approach for estimating the parameter vector x requires the solution of the nonlinear least-squares problem ... [Pg.175]
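
A hedged sketch of such a nonlinear least-squares echo fit follows; the Gaussian-modulated wavelet, the two-echo signal, and the parameter layout (one amplitude and one arrival time per echo) are illustrative assumptions rather than the formulation in the source:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 500)

def wavelet(tt):
    # Assumed known incident wavelet: a Gaussian-modulated cosine pulse.
    return np.exp(-(tt ** 2) / 0.1) * np.cos(20 * tt)

def model(params, t, M):
    # Superposition of M echoes, each with amplitude a_k and arrival time tau_k.
    y = np.zeros_like(t)
    for k in range(M):
        a, tau = params[2 * k], params[2 * k + 1]
        y += a * wavelet(t - tau)
    return y

M = 2
rng = np.random.default_rng(1)
true = np.array([1.0, 2.0, 0.5, 6.0])          # (a1, tau1, a2, tau2)
signal = model(true, t, M) + 0.02 * rng.standard_normal(t.size)

def residuals(params):
    return model(params, t, M) - signal

fit = least_squares(residuals, x0=[0.8, 1.8, 0.4, 5.5])
print("estimated (a1, tau1, a2, tau2):", fit.x)
```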

If the source fingerprints, for each of n sources are known and the number of sources is less than or equal to the number of measured species (n < m), an estimate for the solution to the system of equations (3) can be obtained. If m > n, then the set of equations is overdetermined, and least-squares or linear programming techniques are used to solve for L. This is the basis of the chemical mass balance (CMB) method (20,21). If each source emits a particular species unique to it, then a very simple tracer technique can be used (5). Examples of commonly used tracers are lead and bromine from mobile sources, nickel from fuel oil, and sodium from sea salt. The condition that each source have a unique tracer species is not often met in practice. [Pg.379]
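
As a numerical sketch of the overdetermined CMB solve (the fingerprint matrix and measured concentrations below are invented), ordinary or nonnegative least squares can be applied directly:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: source fingerprints (fraction of each of m species emitted by
# each of n sources); here m = 5 species, n = 3 sources, so m > n.
F = np.array([[0.40, 0.05, 0.10],
              [0.30, 0.10, 0.05],
              [0.10, 0.50, 0.05],
              [0.10, 0.05, 0.60],
              [0.10, 0.30, 0.20]])

c = np.array([2.1, 1.6, 2.8, 3.9, 2.0])    # measured ambient concentrations

# Ordinary least squares on the overdetermined system F s ~= c ...
s_ls, *_ = np.linalg.lstsq(F, c, rcond=None)
# ... or nonnegative least squares, since source contributions cannot be < 0.
s_nn, _ = nnls(F, c)
print("LS contributions:", s_ls)
print("NNLS contributions:", s_nn)
```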

A straight line may be fitted to the (log X, log Y) pairs of data when plotted on log-log graph paper, from which the slope N and the intercept log K (at X = 1) may be read. Alternatively, the method of least squares may be used to estimate the values of K and N giving the best fit to the available data. [Pg.819]
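
Since Y = K X^N becomes log Y = log K + N log X after taking logarithms, the least-squares fit is an ordinary straight-line regression on the logged data; a minimal sketch with invented values:

```python
import numpy as np

# Invented data roughly following Y = K * X**N with K = 3, N = 0.7.
X = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
Y = np.array([3.1, 4.8, 8.2, 13.0, 21.5])

# Straight-line least squares on the logged data: slope = N, intercept = log K.
N, logK = np.polyfit(np.log10(X), np.log10(Y), 1)
K = 10 ** logK
print(f"N = {N:.3f}, K = {K:.3f}")
```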

A weighted least-squares analysis is used for a better estimate of the rate-law parameters when the variance is not constant throughout the range of measured variables. If the error in measurement is constant, then the relative error in the dependent variable will vary as the independent variable increases or decreases. [Pg.173]

The weighted least-squares analysis is important for estimating parameters involving exponents. Examples are the concentration-time data... [Pg.174]
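
A minimal sketch of weighted linear least squares, assuming a proportional (10%) measurement error so that each point is weighted by the reciprocal of its standard deviation (the data are invented):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.2, 4.1, 5.8, 8.3, 9.9])
sd = 0.1 * y                         # assumed proportional measurement error

# Weighted least squares: np.polyfit applies w to each residual, so
# w = 1/sd gives the usual 1/variance weighting of the squared residuals.
slope, intercept = np.polyfit(x, y, 1, w=1.0 / sd)
print(slope, intercept)
```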

Marquardt, D. W., An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11, 431, 1963. [Pg.909]

The size-dependent agglomeration kernels suggested by both Smoluchowski and Thompson fit the experimental data very well. For the case of a size-independent agglomeration kernel, and for the estimation without disruption (only nucleation, growth, and agglomeration), the least-squares fits deviate substantially from the experimental data (not shown). For this reason, further investigations are carried out with the theoretically based size-dependent kernel suggested by Smoluchowski, which fitted the data best ... [Pg.185]

A reading of Section 2.2 shows that all of the methods for determining reaction order can lead also to estimates of the rate constant, and very commonly the order and rate constant are determined concurrently. However, the integrated rate equations are the most widely used means for rate constant determination. These equations can be solved analytically, graphically, or by least-squares regression analysis. [Pg.31]

x_ij is the ith observation of variable x_j. y_i is the ith observation of variable y. ŷ_i is the ith value of the dependent variable calculated with the model function and the final least-squares parameter estimates. [Pg.42]

Least-squares regression of ln c on t then gives estimates of ln c₀ and k. Because time is usually measured with much greater accuracy than is concentration, we need only consider the uncertainty in the dependent variable. [Pg.45]
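
For first-order decay, c = c₀ exp(−kt), so the regression of ln c on t has intercept ln c₀ and slope −k; a minimal sketch with invented concentration-time data:

```python
import numpy as np

t = np.array([0., 10., 20., 30., 40., 50.])          # time, s
c = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22])   # concentration, M

# Straight-line least squares on (t, ln c): slope = -k, intercept = ln c0.
slope, intercept = np.polyfit(t, np.log(c), 1)
k = -slope
c0 = np.exp(intercept)
print(f"k = {k:.4f} 1/s, c0 = {c0:.3f} M")
```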

It can be argued that the main advantage of least-squares analysis is not that it provides the best fit to the data, but rather that it provides estimates of the uncertainties of the parameters. Here we sketch the basis of the method by which variances of the parameters are obtained. This is an abbreviated treatment following Bennett and Franklin. We use the normal equations (2-73) as an example. Equation (2-73a) is solved for a₀ ... [Pg.46]
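
Bennett and Franklin's derivation is not reproduced here, but the standard result it leads to, a parameter covariance matrix s²(XᵀX)⁻¹ with s² the residual variance, can be sketched for a straight-line fit (data invented):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

# Design matrix for y = a0 + a1*x (columns: intercept, slope).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

n, p = X.shape
s2 = np.sum((y - X @ beta) ** 2) / (n - p)   # residual variance
cov = s2 * np.linalg.inv(X.T @ X)            # parameter covariance matrix
se = np.sqrt(np.diag(cov))                   # standard errors of a0 and a1
print("estimates:", beta)
print("standard errors:", se)
```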

Thus, a can be calculated (it is sometimes negligible), and the rate constants are evaluated graphically or by least-squares analysis; the estimated rate constants must be consistent with the known stability constant. [Pg.151]

Examination of Table 6-1 reveals how the weighting treatment takes into account the reliability of the data. The intermediate point, which has the poorest precision, is severely discounted in the least-squares fit. The most interesting features of Table 6-2 are the large uncertainties in the estimates of A and E. These would be reduced if more data points covering a wider temperature range were available; nevertheless, it is common to find the uncertainty in E to be comparable to RT. The uncertainty of A is a consequence of the extrapolation to 1/T = 0, which, in effect, is how ln A is determined. In this example, the data cover the range 0.00323 to 0.00341 in 1/T, and the extrapolation is from 0.00323 to zero; thus about 95% of the line constitutes an extrapolation over unstudied temperatures. Estimates of A and E are correlated, as are their uncertainties. [Pg.249]
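
A sketch of how this correlation and the long extrapolation show up numerically (the rate constants below are invented; the temperatures span a narrow range, as in the text):

```python
import numpy as np

T = np.array([293., 298., 303., 308.])          # K: a narrow range of 1/T
k = np.array([1.1e-3, 1.7e-3, 2.6e-3, 3.9e-3])  # invented rate constants
R = 8.314                                        # J/(mol K)

# ln k = ln A - (E/R)(1/T): a straight line in 1/T.
X = np.column_stack([np.ones_like(T), 1.0 / T])
y = np.log(k)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
lnA, E = beta[0], -beta[1] * R

s2 = np.sum((y - X @ beta) ** 2) / (len(T) - 2)
cov = s2 * np.linalg.inv(X.T @ X)                # covariance of (ln A, -E/R)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print("ln A =", lnA, " E (J/mol) =", E)
print("correlation of the two estimates:", corr)  # close to -1: strongly correlated
```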

Conductivity at 298 K calculated from least-squares-fitted parameters given in the reference. Conductivity estimated from graphical data provided in... [Pg.62]

FIGURE 11.10 Removal of outlier points to achieve curve fits. (a) The least-squares fitting procedure cannot fit a sigmoidal curve to the data points because of the ordinate value at 20 μM. Removal of this point allows an estimate of the curve. (b) The outlier point at 2 μM causes a capricious and obviously errant fit to the complete data set. Removal of this point gives a clearer view of the relationship between concentration and response. [Pg.239]

Once a linear relationship has been shown to have a high probability by the value of the correlation coefficient (r), then the best straight line through the data points has to be estimated. This can often be done by visual inspection of the calibration graph but in many cases it is far better practice to evaluate the best straight line by linear regression (the method of least squares). [Pg.145]
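
A sketch of that workflow, checking r before accepting the regression line (the calibration data are invented):

```python
import numpy as np
from scipy.stats import linregress

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])         # standard concentrations
signal = np.array([0.02, 0.21, 0.39, 0.62, 0.80, 1.01])  # instrument response

fit = linregress(conc, signal)
print(f"r = {fit.rvalue:.4f}")                            # check linearity first
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}")
```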

We now use CLS to generate calibrations from our two training sets, A1 and A2. For each training set, we will get matrices, K1 and K2, respectively, containing the best least-squares estimates for the spectra of pure components 1-3, and matrices, K1cal and K2cal, each containing 3 rows of calibration coefficients, one row for each of the 3 components we will predict. First, we will compare the estimated pure-component spectra to the actual spectra we started with. Next, we will see how well each calibration matrix is able to predict the concentrations of the samples that were used to generate that calibration. Finally, we will see how well each calibration is able to predict the... [Pg.54]
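
As a hedged sketch of the CLS estimation step, assuming the usual model A = CK for training spectra A and concentrations C (all matrices below are small invented stand-ins for the A1/A2 training sets, not the book's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented "true" pure-component spectra: 3 components x 20 wavelengths.
K_true = np.abs(rng.standard_normal((3, 20)))
# Training concentrations: 10 mixtures x 3 components.
C = rng.uniform(0.1, 1.0, size=(10, 3))
# Measured training spectra A = C K, plus a little noise.
A = C @ K_true + 0.01 * rng.standard_normal((10, 20))

# CLS estimate of the pure-component spectra: K_hat = (C^T C)^-1 C^T A.
K_hat, *_ = np.linalg.lstsq(C, A, rcond=None)

# Prediction for a new mixture spectrum a: solve K_hat^T c = a for c.
a_new = np.array([0.5, 0.2, 0.3]) @ K_true
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
print("predicted concentrations:", c_hat)
```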

A computer is very helpful for direct processing of the raw experimental data and for calculating the correlation coefficient and the least-squares estimate of the rate constant. [Pg.59]

The reliability factor R was 0.276 after the first refinement and 0.211 after the fourth refinement. The parameters from the third and fourth refinements differed very little from one another. The final values are given in Table 1. As large systematic errors were introduced in the refinement process by the unavoidable use of very poor atomic form factors, the probable errors in the parameters as obtained in the refinement were considered to be of questionable significance. For this reason they are not given in the table. The average error was, however, estimated to be 0.001 for the positional parameters and 5% for the compositional parameters. The scattering power of the two atoms of type A was given by the least-squares refinement as only 0.8 times that of aluminum (the fraction... [Pg.608]

If we consider the relative merits of the two forms of the optimal reconstructor, Eqs. 16 and 17, we note that both require a matrix inversion. Computationally, the size of the matrix inversion is important: Eq. 16 inverts an M x M (measurements) matrix and Eq. 17 a P x P (parameters) matrix. In a traditional least-squares system there are fewer parameters estimated than there are measurements, i.e., M > P, indicating Eq. 16 should be used. In a Bayesian framework we are trying to reconstruct more modes than we have measurements, i.e., P > M, so Eq. 17 is more convenient. [Pg.380]
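
Eqs. 16 and 17 are not reproduced in this excerpt; the sketch below uses the standard minimum-variance estimator pair, which differs in exactly this M x M versus P x P way and is algebraically equivalent by the matrix inversion lemma (all symbols, sizes, and covariances below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
M, P = 8, 12                        # Bayesian regime: more parameters than measurements
H = rng.standard_normal((M, P))     # measurement matrix
Cx = np.eye(P)                      # prior covariance of the parameters
Cn = 0.01 * np.eye(M)               # measurement-noise covariance
y = rng.standard_normal(M)

# "Eq. 16"-style form: inverts an M x M matrix.
R16 = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)
# "Eq. 17"-style form: inverts a P x P matrix.
R17 = (np.linalg.inv(H.T @ np.linalg.inv(Cn) @ H + np.linalg.inv(Cx))
       @ H.T @ np.linalg.inv(Cn))

print(np.allclose(R16, R17))        # identical results; the inversion cost differs
print(R16 @ y)                      # reconstructed parameter estimates
```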

If we decide to estimate only a finite number of basis modes, we implicitly assume that the coefficients of all the other modes are zero and that the covariance of the modes estimated is very large. Thus QNQᵀ becomes large relative to C, and in this case Eq. 16 simplifies to a weighted least-squares formula ... [Pg.381]

As a simple rule of thumb, if a least-squares estimate is employed, the number of modes estimated should be half the number of measurements; if a Bayesian approach is employed, the number of modes estimated should be at least the number of measurements. [Pg.393]

Overdetermination of the system of equations is at the heart of regression analysis; that is, one determines more than the absolute minimum of two coordinate pairs (x₁, y₁) and (x₂, y₂) necessary to calculate a and b by classical algebra. The unknown coefficients are then estimated by invoking a further model. Just as with the univariate data treated in Chapter 1, the least-squares model is chosen, which yields an unbiased best-fit line subject to the restriction ... [Pg.95]






Related entries



Constrained Least Squares Estimation

Explicit Least Squares Estimation

Finite-Sample Properties of the Least Squares Estimator

Generalized Least Squares (GLS) Estimation

Implicit Least Squares Estimation


Least squares estimate statistical properties

Least squares estimation

Least squares estimation neural networks

Least squares estimation principal component analysis

Least squares method estimated standard deviation

Linear Least Squares Estimation

Ordinary least-squares estimated using

Parameter estimation weighted least-squares method

Properties of the Least-Squares Estimation

Residual Variance Model Parameter Estimation Using Weighted Least-Squares

Sample Properties of the Least Squares and Instrumental Variables Estimators

Simplified Constrained Least Squares Estimation

Variances and covariances of the least-squares parameter estimates

Weighted Least Squares (WLS) Estimation

Weighted Linear Least Squares Estimation (WLS)

Weighted least-squares estimator
