
Least squared differences

In this least squares method example the object is to calculate the terms β0, β1, and β2 which produce a prediction model yielding the smallest or least squared differences, or residuals, between the actual analyte value ci and the predicted or expected concentration ŷi. To calculate the multiplier terms or regression coefficients βj for the model we can begin with the matrix notation ... [Pg.30]
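For concreteness, here is a minimal sketch of that matrix calculation in Python. The absorbance matrix A and the concentration vector c below are invented purely for illustration; they are not from the source.

```python
import numpy as np

# Matrix form of least squares: for a design matrix X (a column of
# ones for the intercept plus one column per predictor) and measured
# concentrations c, beta = (X^T X)^-1 X^T c minimizes the sum of
# squared residuals. np.linalg.lstsq solves the same problem more
# stably than forming the inverse explicitly.

A = np.array([[0.12, 0.30],
              [0.25, 0.41],
              [0.31, 0.55],
              [0.42, 0.60],
              [0.55, 0.78]])                 # two predictors, five samples
c = np.array([1.0, 1.9, 2.6, 3.1, 4.2])      # measured analyte values

X = np.column_stack([np.ones(len(c)), A])    # prepend intercept column
beta, residuals, rank, sv = np.linalg.lstsq(X, c, rcond=None)

c_hat = X @ beta                             # predicted concentrations
print("beta0, beta1, beta2:", beta)
print("sum of squared residuals:", np.sum((c - c_hat) ** 2))
```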

Since the chosen blends varied considerably about the lines of intended correspondence, mathematical optimization, minimizing the least-squares difference from a chosen gradation, was employed thereafter. [Pg.148]

Significance of the data was evaluated by analysis of variance with appropriate contrasts, and least square difference techniques. A probability value of less than 0.05 was judged to be statistically significant. [Pg.306]

Numerical simulations of the data were conducted with the algorithms discussed above, with the added twist of optimizing the model to fit the data collected in the laboratory by adjusting the collision efficiency and the fractal dimension (no independent estimate of fractal dimension was made). Thus, a numerical solution was produced, then compared with the experimental data via a least squares approach. The best fit was achieved by minimizing the least squared difference between model solution and experimental data, and estimating the collision efficiency and fractal dimension in the process. The best model fit achieved for the data in Fig. 10a is plotted in Fig. 10b, and that for Fig. 11a is shown in Fig. 11b. The collision efficiencies estimated were 1 × 10⁻⁴ and 2 × 10⁻⁴, and the fractal dimensions were 1.5 and 1.4, respectively. As expected, collision efficiency and fractal dimension were inversely correlated. However, the values of the estimates are, in both cases, lower than might be expected. The lower values were attributed to the following ... [Pg.537]
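A sketch of that two-parameter fitting step, under stated assumptions: `run_coagulation_model` is a hypothetical stand-in for the population balance solver (it is not from the source or any library), and the "measured" data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def run_coagulation_model(alpha, df, times):
    # Placeholder: a real implementation would integrate the
    # population balance with a fractal collision kernel.
    return alpha * 1e4 * np.exp(-times / (10.0 * df))

t_data = np.linspace(0.0, 60.0, 13)                 # sampling times
y_data = run_coagulation_model(2e-4, 1.4, t_data)   # stand-in "measurements"

def sse(params):
    # Sum of squared differences between model solution and data,
    # as a function of collision efficiency (alpha) and fractal
    # dimension (df).
    alpha, df = params
    return np.sum((run_coagulation_model(alpha, df, t_data) - y_data) ** 2)

result = minimize(sse, x0=[1e-3, 2.0], method="Nelder-Mead")
alpha_fit, df_fit = result.x
print(f"collision efficiency ~ {alpha_fit:.1e}, fractal dimension ~ {df_fit:.2f}")
```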

Note: Values are expressed as mean ± SEM. One-way analysis of variance indicated a highly significant difference between groups, both for the TLS and TGS (p < 0.001). Subsequent multiple range tests, using the least-squares difference procedure, demonstrated that, with respect to the TLS, EPA, DHA, and LA differed from OA and control. With respect to the TGS, EPA and DHA differed from LA and OA, and the latter two, in turn, from the control. [Pg.73]

As these calculations provide relative, not absolute, values for the midpoints, the calculated midpoints have been adjusted by addition of a constant to minimize the least-squares differences with the experimental data. The contributions of the individual interactions have also been adjusted. The difference between the calculated midpoints and the sum of the 3 contributors shown represents interactions of the hemes with uncharged, polar residues and the protein backbone. ... [Pg.52]
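A small illustration of that adjustment: if the calculated midpoints are defined only up to an additive constant, the constant that minimizes the sum of squared differences from experiment is simply the mean of the experimental-minus-calculated residuals (set the derivative of the sum of squares with respect to the constant to zero). The midpoint values below are invented for illustration.

```python
import numpy as np

E_exp  = np.array([-60.0, -105.0, -220.0, -350.0])  # measured midpoints (mV)
E_calc = np.array([ 40.0,   -8.0, -115.0, -260.0])  # relative calculated values

offset = np.mean(E_exp - E_calc)   # least-squares optimal shift
E_adj = E_calc + offset
print("offset (mV):", offset)
print("residuals (mV):", E_exp - E_adj)
```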

In the program CVFIT a tolerance must be stipulated. The tolerance is a criterion for ending the fitting procedure. It is defined here as the largest difference between the least-squares values of any two simulations based on the Np + 1 parameter sets that are continually generated by the simplex procedure (Np = the number of parameters to be fitted). The tolerance is based on the current data sets, which are in amperes. The tolerance required for a reasonable fit is somewhat a matter of trial and error, and it depends on the... [Pg.138]
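A sketch of that stopping rule (not the CVFIT source itself): a Nelder-Mead simplex over Np parameters carries Np + 1 parameter sets, and iteration stops once the spread of their least-squares values falls below the stipulated tolerance.

```python
def converged(sse_values, tolerance):
    """sse_values: the least-squares value for each of the Np + 1
    simplex vertices; stop when their spread is below tolerance."""
    return max(sse_values) - min(sse_values) < tolerance

# Example: three vertices for a two-parameter (Np = 2) fit, with the
# sums of squares in amperes-squared as described above.
print(converged([4.1e-18, 4.3e-18, 5.0e-18], tolerance=1e-18))  # True
```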

Automatic fitting of EPR spectra is possible, although most techniques have problems with false minima and some degree of manual steering is necessary. One of the frequently encountered problems is that the least-squares difference is rarely an adequate measure of goodness of fit. In practice it is usually necessary to obtain a reasonable fit manually before automatic fitting is capable of further optimizing the spin Hamiltonian parameters. [Pg.167]

Sections 9A.2-9A.6 introduce different multivariate data analysis methods, including Multiple Linear Regression (MLR), Principal Component Analysis (PCA), Principal Component Regression (PCR) and Partial Least Squares regression (PLS). [Pg.444]

This method, because it involves minimizing the sum of squares of the deviations xi − μ, is called the method of least squares. We have encountered the principle before in our discussion of the most probable velocity of an individual particle (atom or molecule), given a Gaussian distribution of particle velocities. It is very powerful, and we shall use it in a number of different settings to obtain the best approximation to a data set of scalars (arithmetic mean), the best approximation to a straight line, and the best approximation to parabolic and higher-order data sets of two or more dimensions. [Pg.61]
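As a one-line check of the scalar case mentioned above (the arithmetic mean as the least-squares best approximation), setting the derivative of the sum of squared deviations to zero recovers the mean:

```latex
\frac{d}{d\mu}\sum_{i=1}^{N}(x_i-\mu)^2
  = -2\sum_{i=1}^{N}(x_i-\mu) = 0
\quad\Longrightarrow\quad
\mu = \frac{1}{N}\sum_{i=1}^{N} x_i .
```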

If X is the dependent variable, the definition is modified by considering horizontal instead of vertical deviations. In general these two definitions lead to different least-squares curves. [Pg.208]
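A quick numerical illustration of this point, with invented data: regressing y on x (vertical deviations) and x on y (horizontal deviations) gives two different least-squares lines, which coincide only when the correlation is perfect.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

b_yx = np.polyfit(x, y, 1)[0]   # slope minimizing vertical deviations
b_xy = np.polyfit(y, x, 1)[0]   # slope minimizing horizontal deviations

print("y-on-x slope:", b_yx)
print("x-on-y line, re-expressed as a slope in y vs x:", 1.0 / b_xy)
# The product b_yx * b_xy equals r^2, so the two lines agree only
# when |r| = 1.
print("b_yx * b_xy =", b_yx * b_xy, " r^2 =", np.corrcoef(x, y)[0, 1] ** 2)
```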

Once a significant difference has been demonstrated by an analysis of variance, a modified version of the t-test, known as Fisher's least significant difference, can be used to determine which analyst or analysts are responsible for the difference. The test statistic for comparing the mean values X̄1 and X̄2 is the t-test described in Chapter 4, except that s_pool is replaced by the square root of the within-sample variance obtained from an analysis of variance. [Pg.696]
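A sketch of that comparison, assuming the usual two-sample form of the statistic; the means, sample sizes, and within-sample variance below are invented for illustration.

```python
import math

def lsd_t(mean1, mean2, n1, n2, ms_within):
    # Fisher LSD test statistic: the two-sample t statistic with the
    # pooled standard deviation replaced by the square root of the
    # within-sample (error) variance from the ANOVA.
    s_w = math.sqrt(ms_within)
    return abs(mean1 - mean2) / (s_w * math.sqrt(1.0 / n1 + 1.0 / n2))

t = lsd_t(mean1=10.24, mean2=10.01, n1=6, n2=6, ms_within=0.0094)
print(f"t = {t:.2f}")  # compare with the critical t at the ANOVA's error df
```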

Once the form of the correlation is selected, the values of the constants in the equation must be determined so that the differences between calculated and observed values are within the range of assumed experimental error for the original data. However, when there is some scatter in a plot of the data, the best line that can be drawn representing the data must be determined. If it is assumed that all experimental errors (ε) are in the y values and the x values are known exactly, the least-squares technique may be applied. In this method the constants of the best line are those that minimise the sum of the squares of the residuals, i.e., the differences, ε, between the observed values, y, and the calculated values, Y. In general, this sum of the squares of the residuals, R, is represented by... [Pg.244]
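The expression itself is truncated in this excerpt, but for a straight line Y = a + bx it amounts to R = Σ(yᵢ − Yᵢ)². A minimal sketch, with invented data, of minimizing R via the closed-form normal equations (obtained by setting dR/da = dR/db = 0):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Normal equations for Y = a + b*x.
n = len(x)
b = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a = (np.sum(y) - b * np.sum(x)) / n

Y = a + b * x                    # calculated values
R = np.sum((y - Y) ** 2)         # sum of squared residuals
print(f"Y = {a:.3f} + {b:.3f} x,  R = {R:.4f}")
```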

The response produced by Eq. (8-26), c(t), can be found by inverting the transfer function, and it is also shown in Fig. 8-21 for a set of model parameters, K, τ, and θ, fitted to the data. These parameters are calculated using optimization to minimize the squared difference between the model predictions and the data, i.e., a least squares approach. Let each measured data point be represented by cj (measured response), tj (time of measured response), j = 1 to n. Then the least squares problem can be formulated as ... [Pg.724]
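A sketch of that least-squares formulation, assuming the K, τ, θ model is the usual first-order-plus-dead-time step response; the measured (tj, cj) pairs below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def fopdt(t, K, tau, theta):
    # First-order-plus-dead-time step response:
    # c(t) = K * (1 - exp(-(t - theta)/tau)) for t >= theta, else 0.
    dt = np.clip(t - theta, 0.0, None)      # zero before the dead time
    return K * (1.0 - np.exp(-dt / tau))

t_j = np.linspace(0.0, 30.0, 31)
c_j = fopdt(t_j, K=2.0, tau=5.0, theta=3.0) \
      + np.random.default_rng(1).normal(0.0, 0.02, t_j.size)

def objective(p):
    # Sum over j of (model(t_j) - c_j)^2, the least squares problem above.
    K, tau, theta = p
    tau = max(tau, 1e-6)                    # keep the time constant positive
    return np.sum((fopdt(t_j, K, tau, theta) - c_j) ** 2)

res = minimize(objective, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print("K, tau, theta:", res.x)
```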

Different calibration models, such as classical least squares and multivariate calibration approaches have been considered. [Pg.141]

Most of the 2D QSAR methods are based on graph theoretic indices, which have been extensively studied by Randic [29] and Kier and Hall [30,31]. Although these structural indices represent different aspects of molecular structures, their physicochemical meaning is unclear. Successful applications of these topological indices combined with multiple linear regression (MLR) analysis are summarized in Ref. 31. On the other hand, parameters derived from various experiments through chemometric methods have also been used in the study of peptide QSAR, where partial least squares (PLS) [32] analysis has been employed [33]. [Pg.359]

Figure 7 shows Eg for GaAs and Ga0.82Al0.18As as a function of temperature T to about 900 K. Additional measurements on samples having differing Al contents would generate a family of curves. The solid line is a least-squares fit to a semi-empirical relation that describes the temperature variation of semiconductor energy gaps ... [Pg.397]
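The relation itself is truncated in this excerpt. A widely used semi-empirical form for the temperature variation of semiconductor energy gaps, which may well be the one intended, is the Varshni equation, where α and β are the fitted constants:

```latex
E_g(T) \;=\; E_g(0) \;-\; \frac{\alpha T^{2}}{T + \beta}
```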

To extract the agglomeration kernels from PSD data, the inverse problem mentioned above has to be solved. The population balance is therefore solved for different values of the agglomeration kernel, the results are compared with the experimental distributions, and the non-linear least-squares sums are calculated. The calculated distribution with the minimum sum of squares fits the experimental distribution best. [Pg.185]
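A sketch of that inverse-problem loop, under stated assumptions: `solve_population_balance` is a hypothetical stand-in for the actual solver, and the measured PSD is synthetic.

```python
import numpy as np

def solve_population_balance(kernel, sizes):
    # Placeholder: a real solver would integrate the discretized
    # population balance equation with this agglomeration kernel.
    return np.exp(-sizes / (1.0 + 50.0 * kernel))

sizes = np.linspace(1.0, 100.0, 50)                   # size classes
psd_measured = solve_population_balance(3e-2, sizes)  # stand-in data

# Solve for trial kernel values, score each solution by its sum of
# squared differences from the measured PSD, and keep the minimum.
kernels = np.logspace(-4, 0, 60)
sums = [np.sum((solve_population_balance(k, sizes) - psd_measured) ** 2)
        for k in kernels]
best = kernels[int(np.argmin(sums))]
print(f"best-fit agglomeration kernel ~ {best:.2e}")
```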

