Big Chemical Encyclopedia


Squared difference

Unfortunately, many commonly used methods for parameter estimation give only point estimates of the parameters and no measure of their uncertainty. Estimation is usually carried out by calculating the dependent variable at each experimental point, summing the squared differences between the calculated and measured values, and adjusting the parameters to minimize this sum. Such methods routinely ignore errors in the measured independent variables. For example, in vapor-liquid equilibrium data reduction, errors in the liquid-phase mole fraction and temperature measurements are often assumed to be absent. The total pressure is then calculated as a function of the estimated parameters, the measured temperature, and the measured liquid-phase mole fraction. [Pg.97]

The sum of the squared differences between calculated and measured pressures is minimized as a function of the model parameters. This method, often called Barker's method (Barker, 1953), ignores the information contained in vapor-phase mole fraction measurements; such information is normally used only for consistency tests, as discussed by Van Ness et al. (1973). Nevertheless, when high-quality experimental data are available, Barker's method often gives excellent results (Abbott and Van Ness, 1975). [Pg.97]
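The idea of Barker-type data reduction can be sketched numerically. The snippet below is a minimal illustration, not Barker's actual procedure: it assumes a hypothetical one-parameter Margules activity model and invented pure-component vapor pressures, and recovers the parameter by minimizing the sum of squared pressure differences with a crude grid search in place of a proper optimizer.

```python
import math

# Hypothetical pure-component vapor pressures (arbitrary units)
P1SAT, P2SAT = 100.0, 60.0

def p_calc(x1, A):
    """Total pressure from a one-parameter Margules activity model."""
    x2 = 1.0 - x1
    g1 = math.exp(A * x2 ** 2)   # activity coefficient gamma_1
    g2 = math.exp(A * x1 ** 2)   # activity coefficient gamma_2
    return x1 * g1 * P1SAT + x2 * g2 * P2SAT

# Synthetic "measurements" generated with A = 0.9
xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
p_meas = [p_calc(x, 0.9) for x in xs]

def sse(A):
    """Sum of squared differences between calculated and measured P."""
    return sum((p_calc(x, A) - p) ** 2 for x, p in zip(xs, p_meas))

# Crude 1-D minimisation by grid search over the parameter
A_best = min((a / 1000 for a in range(0, 2001)), key=sse)
print(A_best)  # recovers the generating value, 0.9
```

Because the "data" were generated from the same model, the sum of squares reaches zero at the true parameter; with real measurements it would only reach a minimum.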

The standard deviation is the square root of the average of the squared differences between the individual observations and the population mean ... [Pg.196]
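As a minimal illustration of that definition, with made-up data:

```python
import math

data = [4.0, 5.0, 6.0, 5.0]
mu = sum(data) / len(data)   # population mean: 5.0
# average of the squared differences from the mean, then the square root
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
print(sigma)  # sqrt(0.5)
```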

Another method for determining rate-law parameters is to search for those parameter values that minimize the sum of the squared differences between the measured and calculated reaction rates. In performing N experiments, one can determine the parameter values (e.g., the activation energy E and the rate-law constants) that minimize the quantity ... [Pg.173]
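A sketch of such a parameter search, using invented concentration-rate data for a hypothetical power-law rate r = k C**alpha. Linearizing (ln r = ln k + alpha ln C) and fitting by ordinary least squares is one simple way to minimize the squared differences; a nonlinear optimizer would be used in practice.

```python
import math

# Synthetic rate data generated from r = k * C**alpha with k = 2, alpha = 1.5
conc = [0.5, 1.0, 2.0, 4.0]
rate = [2.0 * c ** 1.5 for c in conc]

# Linearise: ln r = ln k + alpha * ln C, then ordinary least squares
X = [math.log(c) for c in conc]
Y = [math.log(r) for r in rate]
n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
         / sum((x - xbar) ** 2 for x in X))
k = math.exp(ybar - alpha * xbar)
print(k, alpha)  # recovers k = 2, alpha = 1.5
```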

The phase structure function for a separation r is defined as the mean squared phase difference over all pairs of points with that separation. For wave-fronts affected by Kolmogorov turbulence it is given by... [Pg.185]

Figure 12. The relationship between the logarithm of the relative hydrogenation rate over CFP-supported rhodium nanoclusters, with respect to the polymer-stabilized nanostructured catalyst, for a number of alkenes, as a function of their affinity to the support (expressed as the squared difference of the solubility parameters of the support and of the substrate). (Reprinted from Ref. [33], 1991, with permission from the American Chemical Society.)
These weights depend on several characteristics of the data. To understand which ones, let us first consider the univariate case (Fig. 33.7). Two classes, K and L, have to be distinguished using a single variable, x1. It is clear that the discrimination will be better when the distance between the mean values (centroids) of x1 for classes K and L is large and the width of the distributions is small or, in other words, when the ratio of the squared difference between the means to the variance of the distributions is large. Analytical chemists would be tempted to say that the resolution should be as large as possible. [Pg.216]
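This ratio can be computed directly; the two class samples below are invented for illustration.

```python
# Two classes, K and L, measured on a single variable x1
K = [1.0, 1.2, 0.9, 1.1]
L = [2.0, 2.1, 1.9, 2.2]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# ratio of the squared difference between class means to the pooled variance:
# large values mean well-separated, narrow distributions
pooled = (var(K) + var(L)) / 2.0
ratio = (mean(K) - mean(L)) ** 2 / pooled
print(ratio)  # 60.0 for these data
```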

Prior work on the use of critical point data to estimate binary interaction parameters employed the minimization of a summation of squared differences between experimental and calculated critical temperatures and/or pressures (Equation 14.39). During that minimization the EoS uses the current parameter estimates to compute the critical pressure and/or the critical temperature. However, the initial estimates are often far from the optimum and, as a consequence, such iterative computations are difficult to converge and the overall computational requirements are significant. [Pg.261]

A central concept of statistical analysis is variance, which is simply the average squared deviation from the mean, or the square of the standard deviation. Since the analyst can take only a limited number n of samples, the variance is estimated as the sum of the squared deviations from the mean, divided by n - 1. Analysis of variance asks whether groups of samples are drawn from the same overall population or from different populations. The simplest example of analysis of variance is the F-test (and the closely related t-test), in which one takes the ratio of two variances and compares the result with tabulated values to decide whether it is probable that the two samples came from the same population. Linear regression is also a form of analysis of variance, since one is asking whether the variance around the mean is equivalent to the variance around the least-squares fit. [Pg.34]
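A minimal sketch of the sample variance (with the n - 1 divisor) and the F ratio of two variances, using invented data:

```python
a = [9.0, 10.0, 11.0]
b = [8.0, 10.0, 12.0]

def sample_var(v):
    """Sum of squared deviations from the mean, divided by n - 1."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# F statistic: ratio of the two sample variances (larger over smaller)
F = max(sample_var(a), sample_var(b)) / min(sample_var(a), sample_var(b))
print(sample_var(a), sample_var(b), F)  # 1.0 4.0 4.0
```

The computed F would then be compared with a tabulated critical value for the appropriate degrees of freedom.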

Figure 7-3. Active-site properties of CAII from SCC-DFTB/MM-GSBP simulations [91]. (a) The root-mean-square differences between the RMSFs calculated from GSBP simulations (WT-20 and WT-25 have inner radii of 20 and 25 A, respectively) and those from the Ewald simulation, for atoms within a certain distance from the zinc, plotted as functions of distance from the zinc ion; the center of the sphere in the GSBP simulations is the position of the zinc ion in the starting (crystal) structure. (b) The diffusion constant for TIP3P water molecules as a function of the distance from the zinc ion in different simulations...
In this least-squares method example the object is to calculate the terms β0, β1, and β2 which produce a prediction model yielding the smallest, or least, squared differences (residuals) between the actual analyte value c and the predicted or expected concentration ŷ. To calculate the multiplier terms, or regression coefficients, βj for the model we can begin with the matrix notation ... [Pg.30]

Equation 59-9 is the square root of the ratio of the sum of the squared differences between each predicted X and the mean of all X, to the sum of the squared differences between the individual X values and the mean of all X. [Pg.387]

The mathematics of fitting a polynomial by least squares is relatively straightforward, and we present a derivation here, one that follows Arden but is rather generic, as we shall see. Starting from equation 66-4, we want to find the coefficients (the a_i) that minimize the sum-squared difference between the data and the function's estimate of that data, given a set of values of X. Therefore we first form the differences ... [Pg.442]
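A quadratic least-squares fit can be sketched with NumPy's polyfit, which minimizes exactly this sum-squared difference; the data below are invented so that the fit is exact.

```python
import numpy as np

# Data drawn from y = 1 + 2x + 3x^2; a degree-2 fit should recover the
# coefficients by minimising the sum of squared residuals
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x + 3.0 * x ** 2

coeffs = np.polyfit(x, y, 2)   # highest power first: [3, 2, 1]
print(coeffs)
```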

Relating factor | Absolute differences (I) | Squared differences (II) | Meaning [Pg.267]

Summation of absolute differences (I) results in a mean error (ME) in which all differences have the same statistical weight. Summation of squared differences (II) is the more common practice and gives a mean squared error (MSE) in which large deviations have higher weight than small ones. In order to make the metric independent of the number N of observations, the error sum must be related to N or an equivalent sum of the observations ... [Pg.267]
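The two error measures can be contrasted on a small invented data set; note how the squared form weights the deviations differently.

```python
obs  = [1.0, 2.0, 3.0, 4.0]
pred = [1.5, 1.5, 3.5, 3.5]

N = len(obs)
# (I) mean of absolute differences: every deviation weighted equally
me = sum(abs(o - p) for o, p in zip(obs, pred)) / N
# (II) mean of squared differences: large deviations weighted more heavily
mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / N
print(me, mse)  # 0.5 0.25
```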

There are several reasons why the sum of squares, i.e. the sum of squared differences between the measured and modelled data, is used to define the quality of a fit and is thus minimised as a function of the parameters. It is instructive to consider alternatives to the sum of squares. (a) Minimal sum of differences: not an option, as positive and negative differences cancel each other out; huge deviations in both directions can result in a zero sum. [Pg.102]
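Point (a) is easy to demonstrate: signed residuals can cancel to zero while the squared residuals do not.

```python
measured = [1.0, 3.0]
modelled = [2.0, 2.0]   # one residual is -1, the other +1

residuals = [m - c for m, c in zip(measured, modelled)]
plain_sum = sum(residuals)                     # 0.0: deviations cancel
squared_sum = sum(r ** 2 for r in residuals)   # 2.0: both deviations count
print(plain_sum, squared_sum)
```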

We repeat: the task of linear regression is to determine those values of the vector a for which the product vector y_calc = F a is as close as possible to the actual measurements y. Closeness, of course, is defined by the sum of the squared differences between y and y_calc. [Pg.115]
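A minimal sketch using the normal equations, a = (FᵀF)⁻¹Fᵀy, with invented data chosen so the fit is exact:

```python
import numpy as np

# Design matrix F (columns: intercept, x) and measurements y
x = np.array([0.0, 1.0, 2.0, 3.0])
F = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x                      # exact line, so the fit is perfect

# Normal equations: solve (F^T F) a = F^T y, then y_calc = F a
a = np.linalg.solve(F.T @ F, F.T @ y)
y_calc = F @ a
print(a, np.sum((y - y_calc) ** 2))    # [1. 2.] and a residual near zero
```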

PRESS, the prediction sum of squares, is a measure of the accuracy of prediction. It is the sum, over all objects, of the squared differences between the cross-validation-predicted and the true, known values. [Pg.305]
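PRESS can be sketched with a leave-one-out loop around a straight-line fit; the data below are invented, and the fitting model (a line via np.polyfit) is just one simple choice.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.1])   # roughly y = x

press = 0.0
for i in range(len(x)):
    mask = np.arange(len(x)) != i
    # refit the line without point i, then predict the left-out point
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    y_pred = slope * x[i] + intercept
    press += (y[i] - y_pred) ** 2
print(press)
```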

Distance measures were already discussed in Section 2.4. The most widely used distance measure for cluster analysis is the Euclidean distance. The Manhattan distance is less dominated by far-outlying objects, since it is based on absolute rather than squared differences. The Minkowski distance is a generalization of both measures; it allows adjusting the power applied to the distances along the coordinates. None of these distance measures is scale invariant: variables on a larger scale have more influence on the distance measure than variables on a smaller scale. If this effect is not wanted, the variables need to be scaled to equal variance. [Pg.268]
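All three measures collapse into one formula, with the Minkowski power p selecting among them:

```python
def minkowski(u, v, p):
    """Minkowski distance; p = 1 gives Manhattan, p = 2 gives Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

u, v = (0.0, 0.0), (3.0, 4.0)
print(minkowski(u, v, 1))   # Manhattan: 7.0
print(minkowski(u, v, 2))   # Euclidean: 5.0
```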

The variogram function, V(j), is defined as one-half the average squared difference in heterogeneity contributions between pairs of increments, as a function of the lag j ... [Pg.67]
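A minimal sketch of such a variogram for a one-dimensional series of heterogeneity contributions (values invented; this is one common empirical form, half the mean squared difference at lag j):

```python
def variogram(h, j):
    """Half the average squared difference between values lag j apart."""
    pairs = [(h[i + j] - h[i]) ** 2 for i in range(len(h) - j)]
    return 0.5 * sum(pairs) / len(pairs)

h = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0]   # alternating series
print(variogram(h, 1), variogram(h, 2))   # 0.5 at lag 1, 0.0 at lag 2
```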

Blanco proposed the use of the mean square difference between two consecutive spectra, plotted against the blending time, to identify the time at which mixture homogeneity is reached. [Pg.480]

The semi-variogram is one-half the expected squared difference of an increment, [Z(x1) - Z(x2)], that is ... [Pg.205]

Semi-variogram Models. The semi-variogram is a function of distance h. That is, the semi-variogram at h is one-half the expected squared difference between a pair of observations Z(x) and Z(x + h) that are separated by a distance h (see Equation 1). This function (or model) must be conditionally positive definite, so that the variance of any linear functional of Z(x) is greater than or equal to zero. Five of the common semi-variogram models which satisfy this condition are ... [Pg.212]
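One of those common models, the spherical model, can be sketched as follows; the sill and range values are hypothetical.

```python
def spherical(h, sill, rng):
    """Spherical semi-variogram model: rises from 0 to `sill` at the
    range `rng`, then stays flat (a conditionally positive-definite choice)."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

print(spherical(0.0, 2.0, 10.0))    # 0.0 at zero lag
print(spherical(5.0, 2.0, 10.0))    # 1.375 partway out
print(spherical(10.0, 2.0, 10.0))   # reaches the sill at the range
```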

We estimate Y as follows: the mean square difference <dx^2> of the x coordinates of a pair of points whose y coordinates differ by y is calculated for two cases, y << Y and y >> Y; Y is then estimated as the value of y for which these two estimates are equal. A more rigorous calculation based on the autocorrelation of x coordinates as a function of y, described in ref 12, gives essentially the same result. It is assumed that the elongation is very large compared with the typical value, so that Kh >> 1. [Pg.76]

There is a relationship between the overlap of two normalized functions and their mean-squared difference ... [Pg.77]

Take the average of these squared differences, but with the average calculated by dividing by n - 1, not n; the resulting quantity is called the variance (with units of (mmol/L)^2 for the data in our example). [Pg.28]

Since the chosen blends varied considerably about the lines of intended correspondence, mathematical optimization by minimizing the least-squares difference from a chosen gradation was employed thereafter. [Pg.148]


See other pages where Squared difference is mentioned: [Pg.517]    [Pg.183]    [Pg.609]    [Pg.372]    [Pg.405]    [Pg.91]    [Pg.272]    [Pg.59]    [Pg.88]    [Pg.129]    [Pg.320]    [Pg.68]    [Pg.441]    [Pg.203]    [Pg.133]    [Pg.96]    [Pg.170]    [Pg.159]    [Pg.246]    [Pg.68]   
See also in sourсe #XX -- [ Pg.53 , Pg.60 ]







Deviation root-mean-square difference

Least squared differences

Partial least-squares analysis between different

Root mean square difference (RMSD)

Root-mean-square difference

Squares of difference
