Big Chemical Encyclopedia


Least squares methods criterion

While you will use the least squares method in most cases, do not forget that by selecting an estimation criterion you are making assumptions about the error structure, even if you have no wish to be involved with this problem. It is therefore better to be explicit on this issue, for the sake of consistency in the further steps of the estimation. [Pg.143]

It may be tempting to believe that the NFM reduces to a simple least-squares method, in which only the relative weights are used to rank the entire Pareto domain, when the three thresholds are either all set to zero or all set to very high values. This is not the case: the three threshold values play an important role in the ranking of the Pareto domain over the whole range of threshold values. The role of the thresholds is to use the distance between two values of a given criterion to create a zone of preference around each solution of the Pareto domain and to identify the solutions that are systematically better than the others. [Pg.201]

From the point-by-point solution for the data provided by the nonlinear least-squares method, it was estimated that the error in determining the optical rotation (a) for the foregoing run was constant at about 0.013°. This corresponds to an average observational error of 0.06% in a. Excellent agreement was achieved by three different methods of calculation, and thus the criterion suggested by Collins and Lietzke was satisfied. Two separate runs gave similar results. Of the fourteen separate kinetic runs, only one was discarded, and this was one of four determinations of 4 for 115. The results of the calculations of 4 for the discarded run are ... [Pg.70]

The minimization process uses the ICP (Iterative Closest Point) algorithm described by Greenspan [8] and Besl et al. [9]. Defining D as the set of data points of the surface S1 and M as the set of points of the model surface S2, the method establishes a matching between the points of D and M: for each point of D there is a (nearest) point of the model M. From the correspondence established in this way, the transformation that minimizes the distance criterion is calculated and applied to the points of the set D, and the overall error is computed with the least-squares method. [Pg.11]
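As a concrete illustration (not taken from the cited source), one matching-and-transform cycle can be sketched in Python. The least-squares rigid transform uses the standard closed-form SVD construction, and point matching is brute-force nearest neighbour; all names are illustrative.

```python
import numpy as np

def best_rigid_transform(D, M):
    # Closed-form least-squares rotation R and translation t mapping the
    # matched points D onto M (SVD construction on the cross-covariance).
    cD, cM = D.mean(axis=0), M.mean(axis=0)
    H = (D - cD).T @ (M - cM)              # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cM - R @ cD

def icp(D, M, n_iter=50, tol=1e-8):
    # Iterate: match each data point to its nearest model point, solve the
    # least-squares transform, apply it, and recompute the overall error.
    D = D.copy()
    prev = np.inf
    for _ in range(n_iter):
        dists = np.linalg.norm(D[:, None, :] - M[None, :, :], axis=2)
        matched = M[dists.argmin(axis=1)]  # nearest model point per data point
        R, t = best_rigid_transform(D, matched)
        D = D @ R.T + t
        err = np.mean(np.sum((D - matched) ** 2, axis=1))
        if abs(prev - err) < tol:
            break
        prev = err
    return D, err
```

With a reasonable initial alignment the correspondences stabilize and the error decreases monotonically; practical implementations replace the brute-force search with a k-d tree.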

Kirchhoff's equations for the circuit in Figure 7.14, together with the material constants obtained for the resonator material and castor oil, allowed us to calculate the frequency dependences of the real and imaginary parts of the electrical impedance of the resonator loaded by the film under study for given values of C and L°, where L° is the thickness of the oil layer. Then, by varying these parameters with the help of the least-squares method, we found the minimum of the criterion function, at which the theoretical and experimental data were closest. Figures 7.12 and 7.13 show by the... [Pg.178]

Amplitude variable in variational analysis. Parameter in the least-squares method minimization statement. Ratio of eddy size to bubble size (—). Small threshold value in the convergence criterion. Surface roughness of pipe (m)... [Pg.1587]

The simplest procedure is merely to assume reasonable values for A∞ and to make plots according to Eq. (2-52). That value of A∞ yielding the best straight line is taken as the correct value. (Notice how essential it is that the reaction be accurately first-order for this method to be reliable.) Williams and Taylor have shown that the standard deviation about the line shows a sharp minimum at the correct A∞. Holt and Norris describe an efficient search strategy for this procedure, using as their criterion minimization of the weighted sum of squares of residuals. (Least-squares regression is treated later in this section.) [Pg.36]
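The endpoint search described above can be reconstructed as follows (an illustrative Python sketch, not the authors' code): for each trial value of the endpoint reading A∞, fit a straight line to ln(At − A∞) versus t and keep the trial value that minimizes the standard deviation of the residuals about the line.

```python
import numpy as np

def line_residual_sd(t, y):
    # Standard deviation of residuals about the least-squares line y = a + b*t.
    A = np.vstack([np.ones_like(t), t]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return (y - A @ coef).std(ddof=2)

def best_A_inf(t, At, candidates):
    # Endpoint search: for each trial A_inf, fit ln(At - A_inf) vs t and keep
    # the value giving the straightest line (sharp minimum in residual sd).
    sds = []
    for Ainf in candidates:
        z = At - Ainf
        if np.any(z <= 0):            # trial endpoint not physically consistent
            sds.append(np.inf)
            continue
        sds.append(line_residual_sd(t, np.log(z)))
    return candidates[int(np.argmin(sds))]
```

For data that are accurately first-order, the residual standard deviation drops sharply at the correct endpoint, exactly as the text describes.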

The purpose of Partial Least Squares (PLS) regression is to find a small number A of relevant factors that (i) are predictive for Y and (ii) utilize X efficiently. The method effectively achieves a canonical decomposition of X into a set of orthogonal factors which are used for fitting Y. In this respect PLS is comparable with CCA, RRR and PCR, the difference being that the factors are chosen according to yet another criterion. [Pg.331]
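A minimal NIPALS-style PLS1 sketch illustrates how each factor is chosen by a covariance criterion and then deflated out (an illustration for intuition, not the exact algorithm of any particular package; all names are illustrative):

```python
import numpy as np

def pls1_nipals(X, y, n_factors):
    # PLS1 via NIPALS: each weight vector w maximizes the covariance between
    # the X-score t = X w and y; X and y are then deflated before the next factor.
    X, y = X.astype(float).copy(), y.astype(float).copy()
    X -= X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_factors):
        w = X.T @ y
        w /= np.linalg.norm(w)             # covariance-maximizing weights
        t = X @ w
        tt = t @ t
        p = X.T @ t / tt                   # X loadings
        qa = y @ t / tt                    # y loading
        X -= np.outer(t, p)                # deflation
        y = y - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients expressed in terms of the (centered) original X.
    return W @ np.linalg.solve(P.T @ W, q)
```

With as many factors as X has columns, PLS1 reproduces the ordinary least-squares solution, which is one way of seeing that it lies between OLS and PCR.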

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (PCR, variance criterion). This has inspired Stone and Brooks [15] to devise a method in such a way that a continuum of models can be generated embracing OLS, PLS and PCR. To this end the PLS covariance criterion, cov(t, y) = s_t s_y r, is modified into a more general criterion T. (For... [Pg.342]

Table 2.3 classifies the differing systems of equations encountered in chemical reactor applications and the normal method of parameter identification. As shown, the optimal values of the system parameters can be estimated using a suitable error criterion, such as the methods of least squares, maximum likelihood, or probability density function. [Pg.112]

It is well known that cubic equations of state may predict erroneous binary vapor-liquid equilibria when using interaction parameter estimates from an unconstrained regression of binary VLE data (Schwartzentruber et al., 1987; Englezos et al., 1989). In other words, the liquid phase stability criterion is violated. Modell and Reid (1983) discuss the phase stability criteria extensively. A general method to alleviate the problem is to perform the least squares estimation subject to satisfying the liquid phase stability criterion. In other... [Pg.236]

Firstly, it has been found that the estimation of all of the amplitudes of the LI spectrum cannot be made with a standard least-squares based fitting scheme for this ill-conditioned problem. One of the solutions to this problem is a numerical procedure called regularization [55]. In this method, the optimization criterion includes the misfit plus an extra term. Specifically in our implementation, the quantity to be minimized can be expressed as follows [53] ... [Pg.347]
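A minimal sketch of the regularization idea (assuming the common Tikhonov form of the extra term, a squared-norm penalty; the actual functional used in [53] differs in detail):

```python
import numpy as np

def regularized_lstsq(A, b, lam):
    # Minimize ||A x - b||^2 + lam * ||x||^2 (the misfit plus an extra term),
    # via the augmented normal equations. The penalty stabilizes the solution
    # of ill-conditioned problems where plain least squares fails.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

With lam = 0 this reduces to the ordinary normal equations; increasing lam trades misfit for a smaller, smoother amplitude vector.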

Figure 12.8 displays an organization chart of various quantitative methods, in an effort to better understand their similarities and differences. Note that the first discriminator between these methods is the direct versus inverse property. Inverse methods, such as MLR and partial least squares (PLS), have had a great deal of success in PAT over the past few decades. However, direct methods, such as classical least squares (CLS) and extensions thereof, have seen a recent resurgence [46-51]. The criterion used to distinguish between a direct and an inverse method is the general form of the model, as shown below ... [Pg.377]
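The two general model forms can be contrasted with a small synthetic example (illustrative Python, not from the source): the direct (CLS) model writes the mixture spectra as A = C K, with K the matrix of pure-component spectra, while the inverse model regresses concentration directly on the measured spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.uniform(0.1, 1.0, size=(3, 20))   # pure-component spectra (3 x 20 wavelengths)
C = rng.uniform(0.0, 1.0, size=(15, 3))   # concentrations for 15 calibration mixtures
A = C @ K                                 # direct (CLS) model: A = C K

# Direct (CLS) calibration: estimate K from known C, then predict C from spectra.
K_hat = np.linalg.lstsq(C, A, rcond=None)[0]
C_pred = np.linalg.lstsq(K_hat.T, A.T, rcond=None)[0].T

# Inverse (MLR-style) calibration: regress one component directly on A.
b = np.linalg.lstsq(A, C[:, 0], rcond=None)[0]
c0_pred = A @ b
```

In this noise-free example both routes recover the concentrations exactly; their practical differences appear when noise and unmodeled components are present.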

The IR methods have progressed from hand-drawn baselines and peak height or area for quantitation, to spectral subtraction, to least-squares methods. Least-squares analysis eliminates the reliance on single peaks for quantitation and the subjectivity of spectral subtraction. However, negative concentration coefficients are a problem in least-squares analysis, since they have no physical meaning. Negative components can be omitted according to some criterion and the least-squares process iterated until only... [Pg.49]
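The iterated-omission idea can be sketched as follows (a simplified illustration, not a full non-negative least-squares algorithm such as NNLS): refit with the offending component removed until every remaining coefficient is non-negative.

```python
import numpy as np

def lstsq_nonneg_by_omission(A, b, tol=1e-12):
    # Least-squares mixture analysis where components whose coefficients come
    # out negative (physically meaningless concentrations) are omitted and the
    # fit repeated until only non-negative coefficients remain.
    active = list(range(A.shape[1]))
    while active:
        coef, *_ = np.linalg.lstsq(A[:, active], b, rcond=None)
        if np.all(coef >= -tol):
            break
        active.pop(int(np.argmin(coef)))   # drop the most negative component, refit
    x = np.zeros(A.shape[1])
    if active:
        x[active] = np.clip(coef, 0.0, None)
    return x
```

Dropping one component at a time keeps each intermediate fit interpretable as a reduced mixture model.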

[Lang and Laakso, 1994] Lang, M. and Laakso, T.I. (1994). Simple and robust method for the design of allpass filters using least-squares phase error criterion. IEEE Trans. Circuits and Systems, 41(1):40-48. [Pg.551]

This comparison is performed on the basis of an optimality criterion, which allows one to adapt the model to the data by changing the values of the adjustable parameters. Thus, the optimality criteria and the objective functions of maximum likelihood and of weighted least squares are derived from the concept of conditional probability. Then, optimization techniques are discussed for linear and nonlinear explicit models and for nonlinear implicit models, which are very often encountered in chemical kinetics. Finally, a short account of the methods of statistical analysis of the results is given. [Pg.4]

The actual noise distribution in Y is often unknown, so a normal distribution is generally assumed. White noise signifies that the experimental standard deviations of all individual measurements y are the same and uncorrelated. The least-squares criterion applied to the residuals delivers the most likely parameters only under this condition of so-called white noise. However, even if this prerequisite is not fulfilled, it is usually still useful to perform a least-squares fit, which makes it the most commonly applied method for data fitting. [Pg.237]
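When the individual standard deviations are known but unequal, the maximum-likelihood fit becomes a weighted least-squares fit; a minimal sketch (illustrative, assuming known per-point sigma values):

```python
import numpy as np

def weighted_lstsq(A, y, sigma):
    # Maximum-likelihood fit when each y_i has its own known standard
    # deviation sigma_i: minimize sum(((y_i - (A x)_i) / sigma_i)^2).
    # With equal sigmas (white noise) this reduces to ordinary least squares.
    w = 1.0 / np.asarray(sigma)
    x, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return x
```

Scaling each row by 1/sigma_i converts the weighted problem into an ordinary one, which is why the white-noise case needs no weights at all.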

The convergence criterion in the alternating least-squares optimization is based on the comparison of the fit obtained in two consecutive iterations. When the relative difference in fit is below a threshold value, the optimization is finished. Sometimes a maximum number of iterative cycles is used as the stop criterion. This method is very flexible and can be adapted to very diverse real examples, as shown in Section 11.7. [Pg.440]
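The alternating least-squares loop with this convergence criterion can be sketched as follows (a bare-bones bilinear illustration, without the non-negativity or other constraints used in practice; all names are illustrative):

```python
import numpy as np

def als(D, n_comp, rtol=1e-8, max_iter=500, seed=0):
    # Alternating least squares for the bilinear model D ~ C @ S.T:
    # solve for C given S, then for S given C, and stop when the relative
    # difference in fit between two consecutive iterations is below rtol
    # (or after max_iter cycles, the alternative stop criterion).
    rng = np.random.default_rng(seed)
    S = rng.uniform(0.1, 1.0, size=(D.shape[1], n_comp))
    prev = None
    for _ in range(max_iter):
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T   # C given S
        S = np.linalg.lstsq(C, D, rcond=None)[0].T     # S given C
        fit = np.linalg.norm(D - C @ S.T)              # residual norm
        if prev is not None and abs(prev - fit) <= rtol * max(prev, 1e-30):
            break
        prev = fit
    return C, S, fit
```

Because each half-step is an exact least-squares solve, the residual decreases monotonically, so the relative-difference test is a safe stop criterion.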

In the original version of the r0-method, ground-state inertial moments calculated for all isotopomers in terms of internal coordinates are least-squares fitted to the experimental moments I°. The internal coordinates represent a reference system which is identical for all isotopomers, and the resulting r0 structure is obtained as the final set of internal coordinates determined by the criterion of optimum fit. All atomic positions must either be included in the list of those to be determined, or estimated values must be supplied and then kept fixed in the fit; the result depends on these assumed values. Schwendeman has suggested a useful r0-derived variant [6], the p-Kr method, where the isotopic differences between the calculated inertial moments of the isotopomers and the parent species are fitted to the respective experimental differences, in an attempt to compensate the (isotopomer-independent) part of the rovib contribution. The same result is achieved explicitly by the r/e-method, a r0-derived variant presented later in this chapter, where the calculated inertial moments plus three isotopomer-independent rovib contributions eg are fitted to the experimental ground-state moments I°g. [Pg.66]

The maximum entropy method (MEM) is based on the philosophy of using a number of trial spectra generated by the computer to fit the observed FID by a least squares criterion. Because noise is present, there may be a number of spectra that provide a reasonably good fit, and the distinction is made within the computer program by looking for the one with the maximum entropy as defined in information theory, which means the one with the minimum information content. This criterion ensures that no extraneous information (e.g., additional spectral... [Pg.74]

Fortunately, there is such a method, which is both simple and generally applicable, even to mixtures of polyprotic acids and bases. It is based on the fact that we have available a closed-form mathematical expression for the progress of the titration. We can simply compare the experimental data with an appropriate theoretical curve in which the unknown parameters (the sample concentration, and perhaps also the dissociation constant) are treated as variables. By trial and error we can then find values for those variables that will minimize the sum of the squares of the differences between the theoretical and the experimental curve. In other words, we use a least-squares criterion to fit a theoretical curve to the experimental data, using the entire data set. Here we will demonstrate this method for the same system that we have used so far the titration of a single monoprotic acid with a single, strong monoprotic base. [Pg.142]
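For the titration of a monoprotic weak acid with a strong monoprotic base, the closed-form expression gives the titrant volume as a function of measured pH, and the least-squares comparison can be illustrated by scanning trial parameter values (a simple grid-search sketch; a real implementation would use a proper optimizer, and all names are illustrative):

```python
import numpy as np

Kw = 1e-14  # ion product of water at 25 C

def vb_model(pH, Ca, Va, Cb, Ka):
    # Closed-form titrant volume for a monoprotic weak acid (Ca, Va)
    # titrated with a strong monoprotic base (Cb), as a function of pH.
    H = 10.0 ** (-pH)
    delta = H - Kw / H            # proton excess [H+] - [OH-]
    alpha = Ka / (H + Ka)         # fraction of acid dissociated
    return Va * (Ca * alpha - delta) / (Cb + delta)

def fit_Ca_Ka(pH, Vb, Va, Cb, Ca_grid, Ka_grid):
    # Fit the whole curve: scan trial (Ca, Ka) pairs and keep the pair that
    # minimizes the sum of squared differences between theoretical and
    # experimental titrant volumes.
    best, best_ssr = None, np.inf
    for Ca in Ca_grid:
        for Ka in Ka_grid:
            ssr = np.sum((Vb - vb_model(pH, Ca, Va, Cb, Ka)) ** 2)
            if ssr < best_ssr:
                best, best_ssr = (Ca, Ka), ssr
    return best
```

Because the model is exact for the entire data set, every measured point contributes to the fit, not just the region near the equivalence point.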

In the least-squares local energy method, the criterion that serves as a basis for this procedure applies equally well to sums over finite sets of points as to integrals. Theoretically, the local energy method can be applied using... [Pg.56]

