Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Least-squares coefficients

The coefficients and the exponents are found by least-squares fitting, in which the overlap between the Slater-type function and the Gaussian expansion is maximised. Thus, for the 1s Slater-type orbital we seek to maximise the following integral ... [Pg.88]
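As a hedged illustration of this kind of fit, the sketch below solves for the linear coefficients of a three-Gaussian expansion of an (unnormalized) 1s Slater function exp(-r) on a radial grid. The exponents are illustrative round-offs of STO-3G-like values, not an actual published parameter set, and a weighted grid least-squares stands in for the analytic overlap integral.

```python
import numpy as np

# Sketch: least-squares fit of exp(-r) to a sum of Gaussians
# exp(-alpha_i * r^2) with FIXED, illustrative exponents; only the
# linear coefficients are solved for via weighted normal equations.
r = np.linspace(0.0, 10.0, 2000)
w = r**2                              # radial volume weight r^2 dr
slater = np.exp(-r)                   # unnormalized 1s Slater orbital
alphas = [0.11, 0.41, 2.23]           # illustrative (rounded) exponents
G = np.exp(-np.outer(r**2, alphas))   # Gaussian basis, one column each

# Minimize  sum_r  w * (slater - G @ c)^2  over the coefficients c
A = (G * w[:, None]).T @ G
b = (G * w[:, None]).T @ slater
c = np.linalg.solve(A, b)

fit = G @ c
rms = np.sqrt(np.sum(w * (slater - fit) ** 2) / np.sum(w))
```

A three-term expansion already reproduces the Slater function closely everywhere except at the nuclear cusp, which no finite Gaussian sum can match.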

Dichromate-permanganate determination is an artificial problem because the matrix of coefficients can be obtained as the slopes of A vs. x from four univariate least-squares regression treatments, one on solutions containing only at... [Pg.84]

Fig. 2. Least-squares plot showing the determinants for the coefficient of determination: A, total deviation Σ(y − ȳ)²; B, unexplained deviation Σ(y − ŷ)²; and ...
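The quantities named in the caption combine into the coefficient of determination; a minimal pure-Python sketch:

```python
# The total deviation sum(y - ybar)^2 (A in the figure) and the
# unexplained deviation sum(y - yhat)^2 (B) give r^2 = 1 - B/A.
def coefficient_of_determination(y, y_hat):
    y_bar = sum(y) / len(y)
    ss_total = sum((yi - y_bar) ** 2 for yi in y)                     # A
    ss_unexplained = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # B
    return 1.0 - ss_unexplained / ss_total

# Perfect predictions leave no unexplained deviation, so r^2 = 1
y = [1.0, 2.0, 3.0, 4.0]
print(coefficient_of_determination(y, y))  # → 1.0
```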
The least-squares technique can be extended to any number of variables as long as the equation is linear in its coefficients. The linear correlation of y vs. x can be extended to the correlation of y vs. multiple independent variables, generating an equation of the form ... [Pg.245]
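A short sketch of that extension: because the model stays linear in its coefficients, the same normal equations (XᵀX)b = Xᵀy apply with two independent variables as with one (the data and coefficient values below are synthetic).

```python
import numpy as np

# Model y = b0 + b1*x1 + b2*x2, still linear in the coefficients b.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
y = 2.0 + 3.0 * x1 - 1.0 * x2        # noise-free, known coefficients

X = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix w/ intercept
b = np.linalg.solve(X.T @ X, X.T @ y)
# b recovers [2, 3, -1] up to round-off
```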

If the UCKRON expression is simplified to the form recommended for reactions controlled by adsorption of reactant, and if the original true coefficients are used, the result is about a 40% error. If the coefficients are instead selected by a least-squares approach, the approximation improves significantly, but the numerical values lose their theoretical significance. In conclusion, the formalities of classical kinetics are useful for retaining the basic character of kinetics, but the best-fitting coefficients have no theoretical significance. [Pg.121]

Once the value of the correlation coefficient (r) has shown a linear relationship to be highly probable, the best straight line through the data points has to be estimated. This can often be done by visual inspection of the calibration graph, but in many cases it is far better practice to evaluate the best straight line by linear regression (the method of least squares). [Pg.145]
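A minimal sketch of that workflow, using the textbook closed-form expressions for the slope, intercept, and correlation coefficient of a calibration line y = a + b·x:

```python
import math

def linear_regression(x, y):
    """Least-squares line y = a + b*x, plus correlation coefficient r."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    syy = sum(yi * yi for yi in y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    r = (n * sxy - sx * sy) / math.sqrt(
        (n * sxx - sx * sx) * (n * syy - sy * sy)
    )
    return a, b, r

# An exactly linear calibration: intercept 1, slope 2, r = 1
a, b, r = linear_regression([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

In practice one inspects r first and fits the line only when the linearity it indicates is high.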

We now use CLS to generate calibrations from our two training sets, A1 and A2. For each training set we will get matrices, K1 and K2, respectively, containing the best least-squares estimates for the spectra of pure components 1-3, and matrices, K1cal and K2cal, each containing 3 rows of calibration coefficients, one row for each of the 3 components we will predict. First, we will compare the estimated pure-component spectra to the actual spectra we started with. Next, we will see how well each calibration matrix is able to predict the concentrations of the samples that were used to generate that calibration. Finally, we will see how well each calibration is able to predict the... [Pg.54]
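The CLS step described above can be sketched as follows. With the mixture model A = C·K (absorbances = concentrations × pure spectra), the least-squares estimate of the pure spectra is K̂ = (CᵀC)⁻¹CᵀA, and concentrations of an unknown spectrum are then predicted by least squares against K̂. All data and dimensions below are synthetic stand-ins, not the A1/A2 training sets of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = rng.uniform(0.1, 1.0, size=(3, 20))   # 3 pure spectra, 20 wavelengths
C = rng.uniform(0.0, 1.0, size=(10, 3))        # 10 training mixtures
A = C @ K_true                                  # noise-free training spectra

# Least-squares estimate of the pure-component spectra
K_hat = np.linalg.solve(C.T @ C, C.T @ A)

# Predict concentrations of an "unknown" mixture from its spectrum
a_unknown = np.array([0.2, 0.5, 0.3]) @ K_true
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)
```

With noise-free synthetic data the estimated spectra and the predicted concentrations recover the true values exactly, which is the self-prediction check the excerpt describes.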

A computer is very helpful for processing the raw experimental data directly and for calculating the correlation coefficient and the least-squares estimate of the rate constant. [Pg.59]
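A sketch of such direct processing for a first-order rate constant: for C = C₀·exp(−k·t), a least-squares fit of ln C vs. t gives k as minus the slope (the data below are synthetic, generated with k = 0.5).

```python
import math

t = [0.0, 1.0, 2.0, 3.0, 4.0]
C = [math.exp(-0.5 * ti) for ti in t]   # synthetic decay, C0 = 1, k = 0.5
y = [math.log(ci) for ci in C]          # linearized: ln C = ln C0 - k*t

n = len(t)
st, sy = sum(t), sum(y)
slope = (n * sum(ti * yi for ti, yi in zip(t, y)) - st * sy) / (
    n * sum(ti * ti for ti in t) - st * st
)
k = -slope
print(round(k, 6))  # → 0.5
```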

The coefficients listed were determined by nonlinear least-squares fitting of the data of Ref 79, and have dimensions appropriate for xj in mm, tj in μsec, and Px in kbar... [Pg.583]

Thermodynamic Functions of the Gases. To apply Eqs. (1-10), the free energies of formation, ΔG°f, of all gaseous species as a function of temperature are required. Tabulated data were fit by a least-squares procedure to derive an analytical equation for ΔG° of each vapor species. For the plutonium oxide vapor species, data calculated from spectroscopic data (3) were used; for O(g) and O2(g), the JANAF data (5) were used; and for Pu(g), data from the compilation of Oetting et al. (6) were used. The coefficients of the equations for ΔG° of the gaseous species are included in Table I. [Pg.130]
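The fitting step can be sketched generically. The two-term form ΔG° = a + b·T and the numbers below are purely illustrative, not the functional form or coefficients of Table I:

```python
import numpy as np

# Least-squares fit of tabulated free-energy data to an analytical form.
T = np.array([1000.0, 1200.0, 1400.0, 1600.0, 1800.0])   # K
dG = 50.0 - 0.02 * T                                      # synthetic table

X = np.column_stack([np.ones_like(T), T])   # columns for a and b
a, b = np.linalg.solve(X.T @ X, X.T @ dG)   # normal equations
```

Once the coefficients are tabulated, ΔG° at any temperature in the fitted range follows by evaluating the analytic expression rather than interpolating the table.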

If we decide to estimate only a finite number of basis modes, we implicitly assume that the coefficients of all the other modes are zero and that the covariance of the estimated modes is very large. Thus QᵀNQ becomes large relative to C, and in this case Eq. 16 simplifies to a weighted least-squares formula [Pg.381]
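A minimal sketch of the limiting weighted least-squares estimate, b = (XᵀWX)⁻¹XᵀWy with W the inverse data covariance; the matrices here are illustrative toys, not those of Eq. 16:

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])        # exactly y = 1 + x
W = np.diag([1.0, 0.5, 2.0, 1.0])         # inverse-variance weights

# Weighted normal equations
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Because the data are exactly linear, every choice of positive weights returns the same coefficients [1, 1]; the weights matter only when the residuals are nonzero.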

In the next section we derive the Taylor expansion of the coupled cluster cubic response function in its frequency arguments and the equations for the required expansions of the cluster amplitude and Lagrangian multiplier responses. For the experimentally important isotropic averages γ∥, γ⊥ and γK we give explicit expressions for the A and higher-order coefficients in terms of the coefficients of the Taylor series. In Sec. 4 we present an application of the developed approach to the second hyperpolarizability of the methane molecule. We test the convergence of the hyperpolarizabilities with respect to the order of the expansion and investigate the sensitivity of the coefficients to basis sets and correlation treatment. The results are compared with dispersion coefficients derived by least-squares fits to experimental hyperpolarizability data or to pointwise calculated hyperpolarizabilities of other ab initio studies. [Pg.114]

Our results indicate that dispersion coefficients obtained from fits of pointwise given frequency-dependent hyperpolarizabilities to low-order polynomials can be strongly affected by the inclusion of high-order terms. A and B coefficients derived from a least-squares fit of experimental frequency-dependent hyperpolarizability data to a quadratic function in ωL² are therefore not strictly comparable to dispersion coefficients calculated by analytical differentiation or from fits to higher-order polynomials. Ab initio calculated dispersion curves should therefore be compared with the original frequency-dependent experimental data. [Pg.142]

When comparisons are to be drawn among scales derived with different criteria of physical validity, we believe this point to be especially appropriate. The SD is the explicit variable in the least-squares procedure, after all, while the correlation coefficient is a derivative providing at best a nonlinear acceptability scale, with good and bad correlations often crowded in the range 0.9-1.0. The present work further provides strong confirmation of this conclusion. [Pg.16]

Overdetermination of the system of equations is at the heart of regression analysis; that is, one determines more than the absolute minimum of two coordinate pairs (x1/y1) and (x2/y2) necessary to calculate a and b by classical algebra. The unknown coefficients are then estimated by invoking a further model. Just as with the univariate data treated in Chapter 1, the least-squares model is chosen, which yields an unbiased best-fit line subject to the restriction ... [Pg.95]
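A sketch of such an overdetermined fit: five coordinate pairs instead of the algebraic minimum of two, with the check that the least-squares line leaves residuals summing to zero, the "unbiased" restriction just mentioned.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.0]   # roughly y = 2x, with scatter

n = len(x)
sx, sy = sum(x), sum(y)
b = (n * sum(xi * yi for xi, yi in zip(x, y)) - sx * sy) / (
    n * sum(xi * xi for xi in x) - sx * sx
)
a = (sy - b * sx) / n
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print(abs(sum(residuals)) < 1e-9)  # → True
```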

If the graph y vs. x suggests a certain functional relation, there are often several alternative mathematical formulations that might apply, e.g., y = a/x, y = a·(1 − exp(b·(x + c))), and y = a·(1 − 1/(x + b)); choosing one over the others on sparse data may mean faulty interpretation of results later on. An interesting example is presented in Ref. 115 (cf. Section 2.3.1). An important aspect is whether a function lends itself to linearization (see Section 2.3.1), to direct least-squares estimation of the coefficients, or whether iterative techniques need to be used. [Pg.129]
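A sketch of the linearization route, using a model not in the list above: y = a·exp(b·x) is nonlinear in a and b, but ln y = ln a + b·x can be fitted by direct (linear) least squares, avoiding iteration (the data are synthetic, generated with a = 2, b = −0.3).

```python
import math

a_true, b_true = 2.0, -0.3
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [a_true * math.exp(b_true * xi) for xi in x]
ly = [math.log(yi) for yi in y]          # linearized response ln y

n = len(x)
sx, sly = sum(x), sum(ly)
b = (n * sum(xi * li for xi, li in zip(x, ly)) - sx * sly) / (
    n * sum(xi * xi for xi in x) - sx * sx
)
a = math.exp((sly - b * sx) / n)         # back-transform the intercept
```

Note that the log transform also reweights the errors, which is one reason iterative nonlinear fitting may still be preferred for noisy data.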

The experiments were carried out in random order and the responses analyzed with the program X-STAT (11), which runs on an IBM PC. The model was the standard quadratic polynomial, and the coefficients were determined by a linear least-squares regression. [Pg.78]

Compression may be achieved if some regions of the time-frequency space in which the data are decomposed do not contain much information. The square of each wavelet coefficient is proportional to the least-squares error of approximation incurred by neglecting that coefficient in the reconstruction ... [Pg.249]
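A minimal sketch of that statement, using the orthonormal Haar transform as the simplest wavelet (and assuming a power-of-two signal length): because the transform is orthonormal, zeroing a coefficient adds exactly its square to the least-squares reconstruction error.

```python
import math

def haar(signal):
    """Forward orthonormal Haar transform (pyramid algorithm)."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            s, d = tmp[2 * i], tmp[2 * i + 1]
            out[i] = (s + d) / math.sqrt(2)          # smooth (average) part
            out[half + i] = (s - d) / math.sqrt(2)   # detail coefficient
        n = half
    return out

def ihaar(coeffs):
    """Inverse of haar()."""
    out = list(coeffs)
    n = 1
    while n < len(out):
        rec = [0.0] * (2 * n)
        for i in range(n):
            a, d = out[i], out[n + i]
            rec[2 * i] = (a + d) / math.sqrt(2)
            rec[2 * i + 1] = (a - d) / math.sqrt(2)
        out[: 2 * n] = rec
        n *= 2
    return out

x = [4.0, 4.1, 3.9, 4.0, 8.0, 8.1, 7.9, 8.0]
c = haar(x)
threshold = 0.2
kept = [ci if abs(ci) > threshold else 0.0 for ci in c]    # compression
dropped = sum(ci * ci for ci in c if abs(ci) <= threshold)
err = sum((u - v) ** 2 for u, v in zip(x, ihaar(kept)))
# err equals dropped: the squared error is exactly the energy of the
# discarded coefficients (Parseval, orthonormal transform)
```

Here six of the eight coefficients fall below the threshold and are discarded, yet the reconstruction error stays tiny because those low-information regions carried little energy.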


See other pages where Least-squares coefficients is mentioned: [Pg.166]    [Pg.153]    [Pg.156]    [Pg.714]    [Pg.722]    [Pg.725]    [Pg.244]    [Pg.256]    [Pg.327]    [Pg.328]    [Pg.504]    [Pg.504]    [Pg.211]    [Pg.849]    [Pg.445]    [Pg.171]    [Pg.71]    [Pg.83]    [Pg.179]    [Pg.225]    [Pg.129]    [Pg.139]    [Pg.716]    [Pg.739]    [Pg.131]    [Pg.40]    [Pg.308]    [Pg.96]    [Pg.224]    [Pg.260]   
See also in source #XX -- [ Pg.345 ]







Least squares correlation coefficients

Ordinary least-squares linear regression coefficients

Partial least squares coefficient matrix

Partial least squares regression coefficients
