
Least-square estimation

To bring to light the principles underlying the well-known least squares method, this method will be deduced from the powerful statistical principle of maximum likelihood. [Pg.309]

This means that measurements are made without systematic errors. If this is not the case, a correct model may fail to account for the experimental results or, conversely, a wrong model may fit them. This point is, of course, of capital importance and, alas, such bias can only be detected by a careful physico-chemical analysis of the results, not by statistical procedures. [Pg.310]

A normal distribution for experimental errors is frequently encountered in physical chemistry and can be shown to be expected theoretically by means of the central limit theorem of Liapounov, assuming that the errors result from the combined effect of many small factors, which is often the case. [Pg.310]

With these hypotheses, the probability density of y_u is written as [Pg.310]

The maximum of L is obtained when the objective function or performance criterion, [Pg.310]
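The two missing expressions can be sketched in generic notation (y_u for the u-th measured value, ŷ_u for the corresponding model prediction, σ_u for its standard deviation, n for the number of measurements; these symbols are assumptions, not the source's own notation):

```latex
% Probability density of the u-th measurement, assuming normally distributed errors
f(y_u) \;=\; \frac{1}{\sigma_u\sqrt{2\pi}}\,
  \exp\!\left[-\,\frac{\left(y_u-\hat{y}_u\right)^{2}}{2\sigma_u^{2}}\right]

% Likelihood of n independent measurements
L \;=\; \prod_{u=1}^{n} f(y_u)

% L is maximal when the weighted sum of squared deviations is minimal
S \;=\; \sum_{u=1}^{n}\frac{\left(y_u-\hat{y}_u\right)^{2}}{\sigma_u^{2}}
  \;\longrightarrow\; \min
```

With equal variances this reduces to the familiar unweighted sum of squared residuals, i.e. the ordinary least squares criterion.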


Marquardt, D. W., An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11, 431, 1963. [Pg.909]

We now use CLS to generate calibrations from our two training sets, A1 and A2. For each training set, we will get matrices, K1 and K2, respectively, containing the best least-squares estimates for the spectra of pure components 1-3, and matrices, K1cal and K2cal, each containing 3 rows of calibration coefficients, one row for each of the 3 components we will predict. First, we will compare the estimated pure component spectra to the actual spectra we started with. Next, we will see how well each calibration matrix is able to predict the concentrations of the samples that were used to generate that calibration. Finally, we will see how well each calibration is able to predict the... [Pg.54]
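The CLS calibration and prediction steps described above can be sketched in a few lines of numpy; the matrix names (C1 for the known concentrations, A1 for the training spectra, K1 for the estimated pure-component spectra) and the simulated data are illustrative, not taken from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 10 mixture spectra over 50 wavelengths, 3 components
C1 = rng.uniform(0.1, 1.0, size=(10, 3))            # known concentrations (samples x components)
K_true = rng.uniform(0.0, 1.0, size=(3, 50))         # "true" pure-component spectra
A1 = C1 @ K_true + 0.01 * rng.standard_normal((10, 50))  # measured spectra = C K + noise

# CLS calibration: best least-squares estimate of the pure-component spectra
K1, *_ = np.linalg.lstsq(C1, A1, rcond=None)          # solves C1 @ K1 ~ A1

# Prediction for the calibration samples themselves:
# regress each measured spectrum onto the estimated pure spectra
C_pred, *_ = np.linalg.lstsq(K1.T, A1.T, rcond=None)
C_pred = C_pred.T                                     # back to (samples x components)

print(np.abs(C_pred - C1).max())                      # small if the additive linear model holds
```

The same two least-squares solves, applied to the second training set, would give K2 and its predictions.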

The use of a computer is very helpful for carrying out a direct processing of the raw experimental data and for calculating the correlation coefficient and the least squares estimate of the rate constant. [Pg.59]

As a simple rule of thumb, if a simple least squares estimate is employed, the number of modes estimated should be half the number of measurements. If a Bayesian approach is employed, the number of modes estimated should be at least the number of measurements. [Pg.393]

If the graph of y vs. x suggests a certain functional relation, there are often several alternative mathematical formulations that might apply, e.g., y = a/x, y = a·(1 - exp(b·(x + c))), and y = a·(1 - 1/(x + b)); choosing one over the others on sparse data may mean faulty interpretation of results later on. An interesting example is presented in Ref. 115 (cf. Section 2.3.1). An important aspect is whether a function lends itself to linearization (see Section 2.3.1), to direct least-squares estimation of the coefficients, or whether iterative techniques need to be used. [Pg.129]
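When a candidate function cannot be linearized, the coefficients are usually obtained by iterative (nonlinear) least squares. A minimal sketch with scipy.optimize.curve_fit for one of the forms above, using invented data and starting values:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # one of the candidate forms discussed above: y = a * (1 - exp(b * (x + c)))
    return a * (1.0 - np.exp(b * (x + c)))

# made-up data for illustration
x = np.linspace(0.0, 5.0, 20)
y = model(x, 2.0, -1.5, 0.3) + 0.02 * np.random.default_rng(1).standard_normal(x.size)

# iterative least-squares fit; p0 is the initial guess such fits usually need
popt, pcov = curve_fit(model, x, y, p0=[1.0, -1.0, 0.0])
print(popt)                      # estimated a, b, c
print(np.sqrt(np.diag(pcov)))    # approximate standard errors of the coefficients
```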

Figure: points are experimental results; the line is the least-squares fit of Equation 12 with fitted parameter values 1391 and 0.78, ... [Pg.408]

The ordinary least-squares estimate (OLSE) θ̂ of θ minimizes (globally over θ ∈ Θ)... [Pg.79]

Supposing D = Cov(e) to be known, we would possibly improve our estimation procedure by weighting more heavily those points of which we are more certain, that is, those whose associated errors have the least variance, taking also into account the correlations among the errors. We may then denote by θ̂ the weighted least-squares estimator (WLSE), which is the value of θ minimizing... [Pg.79]
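For a linear model y = Xθ + e the contrast between the OLSE and the WLSE can be sketched as follows, assuming D = Cov(e) is known and, in this example, diagonal; all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# linear model y = X @ theta + e with heteroscedastic, uncorrelated errors
n = 30
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
theta_true = np.array([1.0, 0.5])
sigma = 0.05 + 0.05 * np.linspace(0, 10, n)          # error std grows with x
y = X @ theta_true + sigma * rng.standard_normal(n)

# ordinary least squares: minimize ||y - X theta||^2
theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# weighted least squares: minimize (y - X theta)^T D^{-1} (y - X theta)
D_inv = np.diag(1.0 / sigma**2)                       # here D is diagonal (no correlation)
theta_wls = np.linalg.solve(X.T @ D_inv @ X, X.T @ D_inv @ y)

print(theta_ols, theta_wls)
```

With correlated errors, D would simply be the full covariance matrix rather than a diagonal one.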

The CLS method hinges on accurately modelling the calibration spectra as a weighted sum of the spectral contributions of the individual analytes. For this to work the concentrations of all the constituents in the calibration set have to be known. The implication is that constituents not of direct interest should be modelled as well and their concentrations should be under control in the calibration experiment. Unexpected constituents, physical interferents, non-linearities of the spectral responses or interaction between the various components all invalidate the simple additive, linear model underlying controlled calibration and classical least squares estimation. [Pg.356]

The P-matrix is chosen to fit best, in a least-squares sense, the concentrations in the calibration data. This is called inverse regression, since usually we fit a random variable prone to error (y) by something we know and control exactly (x). The least-squares estimate P is given by... [Pg.357]

The regression coefficients Q follow immediately by least-squares estimation... [Pg.359]

Another class of methods, such as Maximum Entropy, Maximum Likelihood and Least Squares Estimation, does not attempt to undo damage which is already in the data. The data themselves remain untouched. Instead, the information in the data is reconstructed by repeatedly taking a revised trial data set f(x) (e.g. a spectrum or chromatogram), which is then degraded as it would have been measured by the original instrument. This requires that the damaging process which causes the broadening of the measured peaks is known. Thus an estimate ĝ(x) is calculated from a trial spectrum f(x) by convolution with a supposedly known point-spread function h(x). The residuals e(x) = g(x) - ĝ(x) are inspected and compared with the noise n(x). Criteria to evaluate these residuals are Maximum Entropy (see Section 40.7.2) and Maximum Likelihood (Section 40.7.1). [Pg.557]
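The shared mechanics of these restoration methods, i.e. degrading a trial spectrum f(x) with the known point-spread function h(x) and inspecting the residuals against the noise, can be sketched as follows; the quality criterion (entropy or likelihood) used to revise the trial is omitted, and all data are invented:

```python
import numpy as np

def degrade(f, h):
    # convolve the trial spectrum with the known point-spread function
    return np.convolve(f, h, mode="same")

# illustrative measured signal g(x): a narrow peak broadened by h(x) plus noise
x = np.arange(200)
h = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
h /= h.sum()
true_f = np.exp(-0.5 * ((x - 100) / 1.5) ** 2)
rng = np.random.default_rng(3)
noise_sigma = 0.01
g = degrade(true_f, h) + noise_sigma * rng.standard_normal(x.size)

# one iteration of the trial-and-revise scheme: propose f, degrade it, inspect residuals
f_trial = np.exp(-0.5 * ((x - 100) / 2.5) ** 2)       # a (deliberately too broad) trial spectrum
g_hat = degrade(f_trial, h)
residuals = g - g_hat

# if the residuals are much larger than the noise, the trial spectrum is revised
print(residuals.std(), "vs noise level", noise_sigma)
```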

The amounts of EMA and HEMA (derivatized as MEMA) are determined based upon external standard calibration. A nonweighted linear least-squares estimate of... [Pg.359]

The amount of NIPA is determined based upon external standard calibration. A non-weighted linear least-squares estimate of the calibration curve is used to calculate the amount of NIPA in the unknowns. The response of any given sample must not exceed the response of the most concentrated standard. If this occurs, dilution of the sample will be necessary. [Pg.367]

The calibration curve is generated by plotting the peak area of each analyte in a calibration standard against its concentration. Least-squares estimates of the data points are used to define the calibration curve. Linear, exponential, or quadratic calibration curves may be used, but the analyte levels for all the samples from the same protocol must be analyzed with the same curve fit. In the event that analyte responses exceed the upper range of the standard calibration curve by more than 20%, the samples must be reanalyzed with extended standards or diluted into the existing calibration range. [Pg.383]
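A minimal sketch of such a calibration fit with numpy, including the check against the calibrated range; the data and the exact form of the 20% check are illustrative assumptions:

```python
import numpy as np

# peak areas of one analyte in the calibration standards vs. their concentrations (made-up data)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                 # standard concentrations, e.g. ug/mL
area = np.array([120.0, 250.0, 495.0, 1230.0, 2480.0])       # peak areas

# linear least-squares calibration: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, deg=1)

# a quadratic fit is the drop-in alternative when the response curves off
quad_coeffs = np.polyfit(conc, area, deg=2)

def conc_from_area(sample_area):
    """Invert the linear calibration and flag responses beyond the calibrated range."""
    if sample_area > 1.2 * area.max():                        # more than 20 % above the top standard
        raise ValueError("response exceeds calibration range; dilute or extend standards")
    return (sample_area - intercept) / slope

print(conc_from_area(800.0))
```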

Murnane RJ, Cochran JK, Sarmiento JL (1994) Estimates of particle- and thorium-cycling rates in the northwest Atlantic Ocean. J Geophys Res 99:3373-3392
Murnane RJ, Cochran JK, Buesseler KO, Bacon MP (1996) Least-squares estimates of thorium, particle and nutrient cycling rate constants from the JGOFS North Atlantic Bloom Experiment. Deep-Sea Res I 43(2):239-258... [Pg.491]

However, the requirement of exact knowledge of all covariance matrices (Σ_i, i = 1, 2, ..., N) is rather unrealistic. Fortunately, in many situations of practical importance, we can make certain quite reasonable assumptions about the structure of Σ_i that allow us to obtain the ML estimates using Equation 2.21. This approach can actually aid us in establishing guidelines for the selection of the weighting matrices Q_i in least squares estimation. [Pg.17]

The determinant criterion is very powerful and it should be used to refine the parameter estimates obtained with least squares estimation if our assumptions about the covariance matrix are suspect. [Pg.19]

Furthermore, as a first approximation one can use implicit least squares estimation to obtain very good estimates of the parameters (Englezos et al., 1990). Namely, the parameters are obtained by minimizing the following Implicit Least Squares (ILS) objective function,... [Pg.21]

Solution of the above linear equation yields the least squares estimates of the parameter vector, k, ... [Pg.28]

The least squares estimator has several desirable properties. Namely, the parameter estimates are normally distributed, unbiased (i.e., E(k̂) = k) and their covariance matrix is given by... [Pg.32]
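For the linear case these properties take the familiar closed form below, where X denotes the design (sensitivity) matrix and σ² the measurement error variance; this notation is assumed for illustration rather than quoted from the source:

```latex
\hat{\mathbf{k}} = \left(\mathbf{X}^{\mathsf T}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf T}\mathbf{y},
\qquad
E\!\left(\hat{\mathbf{k}}\right) = \mathbf{k},
\qquad
\mathrm{Cov}\!\left(\hat{\mathbf{k}}\right) = \sigma^{2}\left(\mathbf{X}^{\mathsf T}\mathbf{X}\right)^{-1}
```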

By constructing a plot of S(t_i) versus the integral ∫Xv dt, we can visually identify distinct time periods during the culture where the specific uptake rate (qs) is "constant" and estimates of qs are to be determined. Thus, by using the linear least squares estimation capabilities of any spreadsheet calculation program, we can readily estimate the specific uptake rate over any user-specified time period. The estimated... [Pg.124]
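The same calculation is easily reproduced outside a spreadsheet; a sketch with numpy, using invented culture data, where the negative slope of S(t_i) versus ∫Xv dt over the chosen window gives qs:

```python
import numpy as np

# illustrative culture data: time (h), substrate S (g/L), viable biomass Xv (g/L)
t  = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
S  = np.array([10.0, 9.1, 8.0, 6.8, 5.5, 4.1, 2.6])
Xv = np.array([0.5, 0.7, 1.0, 1.3, 1.6, 2.0, 2.4])

# cumulative integral of Xv dt (trapezoidal rule), same role as the spreadsheet column
int_Xv = np.concatenate(([0.0], np.cumsum(0.5 * (Xv[1:] + Xv[:-1]) * np.diff(t))))

# choose a time window where S vs. int_Xv looks linear, e.g. 4-12 h
mask = (t >= 4.0) & (t <= 12.0)

# linear least squares over that window: S = S0 - qs * int_Xv, so qs = -slope
slope, intercept = np.polyfit(int_Xv[mask], S[mask], deg=1)
qs = -slope
print(qs)
```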

The major disadvantage of the integral method is the difficulty in computing an estimate of the standard error in the estimation of the specific rates. Obviously, all linear least squares estimation routines automatically provide the standard error of estimate and other statistical information. However, the computed statistics are based on the assumption that there is no error present in the independent variable. [Pg.125]

Instead of a detailed presentation of the effect of extreme values and outliers on least squares estimation, the following common sense approach is recommended in the analysis of engineering data ... [Pg.134]

A suitable transformation of the model equations can simplify the structure of the model considerably, and thus initial guess generation becomes a trivial task. The most interesting case, which is also often encountered in engineering applications, is that of transformably linear models. These are nonlinear models that reduce to simple linear models after a suitable transformation is performed. These models have been used extensively in engineering, particularly before the wide availability of computers, so that the parameters could easily be obtained with linear least squares estimation. Even today, such models are used to reveal characteristics of the model's behavior in graphical form. [Pg.136]

Initial estimates for the parameters can be readily obtained using linear least squares estimation with the transformed model. [Pg.137]
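A classic transformably linear example is the Arrhenius expression k = A·exp(-E/RT): taking logarithms gives a straight line in 1/T, and ordinary linear least squares on the transformed data supplies initial guesses for A and E. A sketch with invented data:

```python
import numpy as np

R = 8.314                                                 # gas constant, J/(mol K)

# invented rate-constant data at several temperatures
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])          # K
k = np.array([1.2e-4, 6.5e-4, 2.9e-3, 1.1e-2, 3.6e-2])     # 1/s

# transform: ln k = ln A - (E/R) * (1/T)  ->  linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(k), deg=1)

A0 = np.exp(intercept)        # initial guess for the pre-exponential factor
E0 = -slope * R               # initial guess for the activation energy (J/mol)
print(A0, E0)
```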

In engineering we often encounter conditionally linear systems. These were defined in Chapter 2 and it was indicated that special algorithms can be used which exploit their conditional linearity (see Bates and Watts, 1988). In general, we need to provide initial guesses only for the nonlinear parameters since the conditionally linear parameters can be obtained through linear least squares estimation. [Pg.138]

SELECTION OF WEIGHTING MATRIX Q IN LEAST SQUARES ESTIMATION... [Pg.147]

This is equivalent to assuming a constant standard error in the measurement of the j-th response variable, and at the same time that the standard errors of different response variables are proportional to the average value of the variables. This is a "safe" assumption when no other information is available, and least squares estimation pays equal attention to the errors from different response variables (e.g., concentration versus pressure or temperature measurements). [Pg.147]
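Under this assumption the weighting matrix takes a simple diagonal form, e.g. (with ȳ_j the average measured value of the j-th response variable; the notation is assumed for illustration):

```latex
\mathbf{Q}_i = \mathrm{diag}\!\left(\frac{1}{\bar{y}_1^{\,2}},\ \frac{1}{\bar{y}_2^{\,2}},\ \dots,\ \frac{1}{\bar{y}_m^{\,2}}\right),
\qquad i = 1,\dots,N
```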

The above model equation now satisfies all the criteria for least squares estimation. As an initial guess for k we can use our estimated parameter values when it was assumed that no correlation was present. Of course, in the second step we have to include ρ in the list of the parameters to be determined. [Pg.156]

Let us consider constrained least squares estimation of unknown parameters in algebraic equation models first. The problem can be formulated as follows ... [Pg.159]
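A generic sketch of such a formulation, with f the algebraic model, Q_i the weighting matrices and φ the constraint function (for instance the liquid phase stability criterion discussed later); the symbols are illustrative rather than the source's:

```latex
\min_{\mathbf{k}} \; S(\mathbf{k}) \;=\;
\sum_{i=1}^{N}\bigl[\mathbf{y}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\bigr]^{\mathsf T}
\mathbf{Q}_i
\bigl[\mathbf{y}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\bigr]
\quad\text{subject to}\quad
\varphi(\mathbf{x},\mathbf{k}) \ge 0
```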

The error in variables method can be simplified to weighted least squares estimation if the independent variables are assumed to be known precisely or if they have a negligible error variance compared to those of the dependent variables. In practice however, the VLE behavior of the binary system dictates the choice of the pairs (T,x) or (T,P) as independent variables. In systems with a... [Pg.233]

It is well known that cubic equations of state may predict erroneous binary vapor-liquid equilibria when using interaction parameter estimates from an unconstrained regression of binary VLE data (Schwartzentruber et al., 1987; Englezos et al., 1989). In other words, the liquid phase stability criterion is violated. Modell and Reid (1983) discuss the phase stability criteria extensively. A general method to alleviate the problem is to perform the least squares estimation subject to satisfying the liquid phase stability criterion. In other... [Pg.236]

Copp and Everett (1953) have presented 33 experimental VLE data points at three temperatures. The diethylamine-water system demonstrates the problem that may arise when using the simplified constrained least squares estimation due to an inadequate number of data points. In such a case there is a need to interpolate the data points and to perform the minimization subject to the constraint of Equation 14.28 instead of Equation 14.26 (Englezos and Kalogerakis, 1993). First, unconstrained LS estimation was performed by using the objective function defined by Equation 14.23. The parameter values together with their standard deviations that were obtained are shown in Table 14.5. The covariances are also given in the table. The other parameter values are zero. [Pg.250]

Figure 14.6 The stability function calculated with interaction parameters from unconstrained least squares estimation.
If the estimated best set of interaction parameters is found to be the same for each type of data, then use the entire database and perform least squares estimation. [Pg.257]

