Variance of parameters

W and V are values for sample weights and variances of parameter measurements at states 1 and 2, respectively. [Pg.1757]

The art of experimental design is made richer by a knowledge of how the placement of experiments in factor space affects the quality of information in the fitted model. The basic concepts underlying this interaction between experimental design and information quality were introduced in Chapters 7 and 8. Several examples showed the effect of the location of one experiment (in an otherwise fixed design) on the variance and covariance of parameter estimates in simple single-factor models. [Pg.279]
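
As a hedged illustration of this interaction (not taken from the chapters cited above), the sketch below shows how moving a single experiment in a one-factor straight-line design changes the parameter variance-covariance matrix σ²(XᵀX)⁻¹; the factor levels and the numpy-based calculation are invented for this example.

    import numpy as np

    def param_covariance(x_levels, sigma2=1.0):
        # Variance-covariance matrix of (intercept, slope) estimates for the
        # straight-line model y = b0 + b1*x with error variance sigma2.
        X = np.column_stack([np.ones(len(x_levels)), np.asarray(x_levels, float)])
        return sigma2 * np.linalg.inv(X.T @ X)

    design_a = [0.0, 1.0, 2.0, 3.0]    # reference design
    design_b = [0.0, 1.0, 2.0, 10.0]   # same design with one experiment moved

    for name, x in [("A", design_a), ("B", design_b)]:
        cov = param_covariance(x)
        print(name, "var(b0):", cov[0, 0], "var(b1):", cov[1, 1],
              "cov(b0, b1):", cov[0, 1])

Moving the last experiment to a more extreme factor level reduces the slope variance markedly and also changes the intercept-slope covariance, which is the kind of effect the cited examples explore.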

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]

The diagonal elements of this matrix approximate the variances of the corresponding parameters. The square roots of these variances are estimates of the standard errors in the parameters and, in effect, are a measure of the uncertainties of those parameters. [Pg.102]
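
As a minimal sketch of how these two statements translate into practice, with a scipy-based fit standing in for the data-reduction procedure described above (the model, data and noise level are invented):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        # a two-parameter model used purely for illustration
        return a * np.exp(-b * x)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 20)
    y = model(x, 2.5, 0.8) + rng.normal(scale=0.05, size=x.size)

    params, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
    variances = np.diag(pcov)        # diagonal elements approximate the parameter variances
    std_errors = np.sqrt(variances)  # their square roots estimate the standard errors
    for name, p, se in zip(("a", "b"), params, std_errors):
        print(f"{name} = {p:.4f} +/- {se:.4f}")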

This sum, when divided by the number of data points minus the number of adjustable parameters (the degrees of freedom), approximates the overall variance of errors. It is a measure of the overall fit of the equation to the data. Thus, two different models with the same number of adjustable parameters yield different values for this variance when fit to the same data with the same estimated standard errors in the measured variables. Similarly, the same model, fit to different sets of data, yields different values for the overall variance. The differences in these variances are the basis for many standard statistical tests for model and data comparison. Such statistical tests are discussed in detail by Crow et al. (1960) and Brownlee (1965). [Pg.108]
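
To make the comparison concrete, the sketch below computes the overall error variance SSres/(n − p) for two competing models fitted to the same simulated data and compares them by their ratio; here the models happen to differ in the number of adjustable parameters, which the (n − p) denominator accounts for, and the data are made up.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 4.0, 30)
    y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=0.2, size=x.size)

    def overall_variance(degree):
        # Fit a polynomial of the given degree and return SS_res / (n - p),
        # where p is the number of adjustable parameters.
        coeffs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coeffs, x)
        n, p = x.size, degree + 1
        return resid @ resid / (n - p)

    s2_lin, s2_quad = overall_variance(1), overall_variance(2)
    print("overall variance, linear   :", s2_lin)
    print("overall variance, quadratic:", s2_quad)
    print("variance ratio             :", s2_lin / s2_quad)

A ratio well above one points to the quadratic model fitting these data better; a formal F-test would supply the threshold.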

The form of the equations here is given to provide good accuracy when many terms are used and to provide the variances of the parameters. Another form of the equations for a and b is simpler, but is sometimes inaccurate unless many significant digits are kept in the calculations. The minimization of χ² when σi is the same for all i gives the following equations for a and b... [Pg.502]

If the normalized method is used in addition, the value of Sjj is 3.8314 × 10 /σ², where σ² is the variance of the measurement of y. The values of a and b are, of course, the same. The variances of a and b are σa² = 0.2532σ² and σb² = 2.610 × 10 σ². The correlation coefficient is 0.996390, which indicates that there is a positive correlation between x and y. The small value of the variance for b indicates that this parameter is determined very well by the data. The residuals show no particular pattern, and the predictions are plotted along with the data in Fig. 3-58. If the variance of the measurements of y is known through repeated measurements, then the variance of the parameters can be made absolute. [Pg.502]
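
The standard closed-form expressions behind such a straight-line fit, y = a + bx with equal measurement variance σ² at every point, can be sketched as follows; the numbers below are invented and will not reproduce the handbook example.

    import numpy as np

    def straight_line_fit(x, y, sigma2=None):
        # Least-squares a (intercept) and b (slope) for y = a + b*x with equal
        # weights, plus their variances.  If sigma2, the measurement variance
        # of y, is known from repeated measurements, the parameter variances
        # are absolute; otherwise sigma2 is estimated from the residuals with
        # n - 2 degrees of freedom.
        x, y = np.asarray(x, float), np.asarray(y, float)
        n, xbar, ybar = x.size, x.mean(), y.mean()
        Sxx = np.sum((x - xbar) ** 2)
        b = np.sum((x - xbar) * (y - ybar)) / Sxx
        a = ybar - b * xbar
        if sigma2 is None:
            resid = y - (a + b * x)
            sigma2 = resid @ resid / (n - 2)
        var_a = sigma2 * (1.0 / n + xbar ** 2 / Sxx)
        var_b = sigma2 / Sxx
        return a, b, var_a, var_b

    x = [0.0, 1.0, 2.0, 3.0, 4.0]
    y = [0.1, 1.1, 1.9, 3.2, 3.9]
    print(straight_line_fit(x, y))               # sigma2 estimated from the residuals
    print(straight_line_fit(x, y, sigma2=0.01))  # sigma2 known: absolute variances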

Solving this set of equations gives the parameters, which maximize the likelihood. The variance of aj is... [Pg.502]

It can be argued that the main advantage of least-squares analysis is not that it provides the best fit to the data, but rather that it provides estimates of the uncertainties of the parameters. Here we sketch the basis of the method by which variances of the parameters are obtained. This is an abbreviated treatment following Bennett and Franklin. We use the normal equations (2-73) as an example. Equation (2-73a) is solved for a0... [Pg.46]

The calculated values ŷ of the dependent variable are then found, for the xi corresponding to the experimental observations, from the model equation (2-71). The quantity σ², the variance of the observations y, is calculated with Eq. (2-90), where the denominator is the degrees of freedom of a system with n observations and four parameters. [Pg.47]

Both the mean and variance of the Poisson distribution are equal to the parameter λ. [Pg.122]
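
A quick numerical check of this property (the value of λ is arbitrary):

    import numpy as np

    lam = 3.7
    sample = np.random.default_rng(2).poisson(lam, size=1_000_000)
    print("sample mean    :", sample.mean())   # both should be close to 3.7
    print("sample variance:", sample.var())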

The last example brings out very clearly that knowledge of only the mean and variance of a distribution is often not sufficient to tell us much about the shape of the probability density function. In order to partially alleviate this difficulty, one sometimes tries to specify additional parameters or attributes of the distribution. One of the most important of these is the notion of the modality of the distribution, which is defined to be the number of distinct maxima of the probability density function. The usefulness of this concept is brought out by the observation that a unimodal distribution (such as the Gaussian) will tend to have its area concentrated about the location of the maximum, thus guaranteeing that the mean and variance will be fairly reasonable measures of the center and spread of the distribution. Conversely, if it is known that a distribution is multimodal (has more than one... [Pg.123]
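
A small simulation (with invented numbers) makes the point: a bimodal Gaussian mixture can share its mean and variance with a unimodal Gaussian while having almost no probability mass near the mean.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000
    # Bimodal mixture: two narrow Gaussians centred at -3 and +3.
    bimodal = np.where(rng.integers(0, 2, size=n) == 0,
                       rng.normal(-3.0, 0.5, size=n),
                       rng.normal(+3.0, 0.5, size=n))
    # Unimodal Gaussian matched to the same mean and variance.
    unimodal = rng.normal(bimodal.mean(), bimodal.std(), size=n)

    for name, s in (("bimodal", bimodal), ("unimodal", unimodal)):
        frac_near_mean = np.mean(np.abs(s - s.mean()) < 1.0)
        print(f"{name:8s}  mean={s.mean():+.3f}  var={s.var():.3f}  "
              f"P(|x - mean| < 1)={frac_near_mean:.3f}")

The two samples agree in mean and variance, yet the bimodal one has essentially no mass near its mean, so those two numbers alone say little about the shape.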

Finally we need to compare the variance of our estimator with the best attainable. The Cramér-Rao lower bound (CRLB) is a lower bound on the variance of any unbiased estimator (Kay, 1993). The quantities estimated can be fixed parameters with unknown values, random variables, or a signal; essentially we are finding the best estimate we can possibly make. [Pg.389]
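
As a standard special case, not taken from the cited text: for n independent Gaussian observations with known variance σ², the CRLB on the variance of any unbiased estimator of the mean is σ²/n, and the sample mean attains it. The sketch below checks this by simulation (all numbers invented).

    import numpy as np

    rng = np.random.default_rng(4)
    mu_true, sigma, n = 2.0, 1.5, 25
    crlb = sigma**2 / n     # Cramer-Rao lower bound for estimating the mean

    # Variance of the sample-mean estimator over many repeated experiments.
    sample_means = rng.normal(mu_true, sigma, size=(100_000, n)).mean(axis=1)
    print("CRLB               :", crlb)
    print("var of sample mean :", sample_means.var())
    # The sample mean is efficient here: its variance attains the bound.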

Figure 7 shows that for the maximum likelihood estimator the variance in the slope estimate decreases as the telescope aperture size increases. For the centroid estimator the variance of the slope estimate also decreases with increasing aperture size when the telescope aperture is less than the Fried parameter r0 (Fried, 1966), but saturates when the aperture size is greater than this value. [Pg.391]

There are two problems with the above procedure, however. The first is that it is not efficient, because the intersubject parameter variance it computes is actually the variance of the parameters between subjects plus the variance of the estimate of a single-subject parameter. The second drawback is that often, in real-life applications, a complete data set, with sufficiently many points to reliably estimate all model parameters, is not available for each experimental subject. A frequent situation is that observations are available in a haphazard, scattered fashion, are often expensive to gather, and for a number of reasons (availability of manpower, cost, environmental constraints, etc.) are usually much fewer than we would like. [Pg.96]
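
The first point can be demonstrated with a small simulation (all quantities invented): the variance of individually fitted single-subject parameters is approximately the true between-subject variance plus the variance of a single-subject estimate.

    import numpy as np

    rng = np.random.default_rng(5)
    n_subjects, n_obs = 200, 10
    omega2 = 0.5    # true between-subject variance of the parameter
    sigma2 = 2.0    # residual variance of each observation

    true_params = rng.normal(10.0, np.sqrt(omega2), size=n_subjects)
    # Each subject's parameter is estimated from n_obs noisy observations
    # (here simply as their mean), so the estimation variance is sigma2/n_obs.
    estimates = true_params + rng.normal(0.0, np.sqrt(sigma2 / n_obs),
                                         size=n_subjects)

    print("naive intersubject variance:", estimates.var(ddof=1))
    print("true between-subject var   :", omega2)
    print("between-subject + estimate :", omega2 + sigma2 / n_obs)

The naive two-stage estimate is inflated by the single-subject estimation variance, which is the inefficiency described above.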

Statistical testing of model adequacy and significance of parameter estimates is a very important part of kinetic modelling. Only those models with a positive evaluation in statistical analysis should be applied in reactor scale-up. The statistical analysis presented below is restricted to linear regression and normal or Gaussian distribution of experimental errors. If the experimental error has zero mean and constant variance and is independently distributed, its variance can be evaluated by dividing SSres by the number of degrees of freedom, i.e... [Pg.545]

A difficulty with Hansch analysis is to decide which parameters and functions of parameters to include in the regression equation. This problem of selection of predictor variables has been discussed in Section 10.3.3. Another problem is due to the high correlations between groups of physicochemical parameters. This is the multicollinearity problem which leads to large variances in the coefficients of the regression equations and, hence, to unreliable predictions (see Section 10.5). It can be remedied by means of multivariate techniques such as principal components regression and partial least squares regression, applications of which are discussed below. [Pg.393]
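
A compact sketch of the problem, and of one of the remedies just mentioned (principal components regression), using simulated collinear predictors; it is not taken from the applications discussed below.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 60
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.05, size=n)    # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.normal(scale=0.5, size=n)

    # Ordinary least squares: the coefficient covariance ~ sigma^2 (X'X)^-1,
    # whose diagonal blows up when the predictors are highly correlated.
    print("OLS variance multipliers:", np.diag(np.linalg.inv(X.T @ X)))

    # Principal components regression: regress y on the leading component only.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]                          # scores on the first PC
    gamma = scores @ (y - y.mean()) / (scores @ scores)
    print("PCR coefficients        :", gamma * Vt[0])

The PCR coefficients stay stable because the near-singular direction of X is discarded before the regression step; partial least squares achieves a similar effect with components that also account for y.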

The expression xᵀ(j)P(j − 1)x(j) in eq. (41.4) represents the variance of the predictions, y(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we included the values of the innovation, variance-covariance matrix, gain vector and estimated parameters. The variance of the experimental error of all observations y is 25 × 10 absorbance units, which corresponds to r = 25 × 10 au for all j. The recursive estimation is started with a high value (10 ) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]
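
A minimal sketch of a recursive least-squares estimation of this kind (straight line, scalar response) is given below; the update formulas follow the standard recursion rather than being copied from eqs. (41.3)-(41.4), and the noise variance and starting values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)
    true_intercept, true_slope, r = 0.0, 1.0, 1e-4   # r: variance of the measurement error
    b = np.zeros(2)                 # current estimates [intercept, slope]
    P = np.diag([1e6, 1e6])         # start with large values on the diagonal of P

    for xj in np.linspace(0.0, 1.0, 20):
        x = np.array([1.0, xj])                   # regressor for y = b0 + b1*x
        y = true_intercept + true_slope * xj + rng.normal(scale=np.sqrt(r))
        innovation = y - x @ b                    # observed minus predicted response
        denom = x @ P @ x + r                     # prediction variance + error variance (a scalar)
        gain = P @ x / denom                      # gain vector
        b = b + gain * innovation                 # update of the parameter estimates
        P = P - np.outer(gain, x @ P)             # update of the variance-covariance matrix

    print("estimated intercept and slope:", b)
    print("final variance-covariance matrix of the parameters:")
    print(P)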

