Big Chemical Encyclopedia


Variance of observations

To obtain some information on the magnitude of turbidity measurement uncertainty, the analysis of variance (ANOVA) method [11] was used to identify individual random effects in measurement so that they could be properly taken into account. The first estimate was the within-instrument component of variance (that is, the variance of observations made on the same instrument), denoted s₁. The second estimate, s₂, was the pooled estimate of variance obtained... [Pg.62]
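The two variance components described above can be sketched numerically. The instrument labels and turbidity readings below are hypothetical; s1_sq is the pooled within-instrument variance and s2_sq is the variance of the per-instrument means:

```python
import statistics

# Hypothetical turbidity readings (NTU): three instruments, five repeats each.
readings = {
    "A": [4.1, 4.3, 4.2, 4.0, 4.4],
    "B": [4.6, 4.5, 4.7, 4.8, 4.6],
    "C": [3.9, 4.0, 4.1, 3.8, 4.0],
}

# s1: within-instrument component (variance of repeats on one instrument,
# pooled over the instruments).
within = [statistics.variance(v) for v in readings.values()]
s1_sq = sum(within) / len(within)

# s2: spread of the per-instrument means (between-instrument component).
means = [statistics.mean(v) for v in readings.values()]
s2_sq = statistics.variance(means)
```

Here the between-instrument spread dominates the repeatability, which is exactly the kind of effect the ANOVA decomposition is meant to expose.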

V_i being the variance of observation i. More details about weighted least-squares procedures can be found in the monograph by Bevington and Robinson [1]. [Pg.599]
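A minimal weighted least-squares sketch along these lines, with hypothetical data and weight 1/V_i for each observation (the variances V are assumed known):

```python
import numpy as np

# Illustrative data; V[i] is the (assumed known) variance of observation y[i].
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
V = np.array([0.04, 0.04, 0.25, 0.25, 1.0])

W = np.diag(1.0 / V)                        # weight w_i = 1 / V_i
X = np.column_stack([np.ones_like(x), x])   # model: y = a + b*x

# Weighted normal equations: (X^T W X) beta = X^T W y
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Points with large variance contribute little to the fit, so the precisely measured early points dominate the estimates.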

With the help of variance analysis, it is determined to what extent the variance of an observed variable X can be traced back to suspected influence factors. These influence factors may be qualitative or quantitative variables. Variance analysis is based on the assumption that, in addition to data on the observed variable X, data on other suspected influence factors are also present in a measuring series, and that these influence factors can be classified in such a way that each observed value of X can be associated with a class i. In the case of a simple variance analysis with one additional influence quantity, the following equation results for X ... [Pg.33]

Cramer's coefficient for the variance in the activity; F, ratio between the variances of observed and calculated activities... [Pg.183]

In all QSAR equations reported here, n is the number of data points, r is the correlation coefficient, s is the standard deviation, q is Cramer's coefficient to account for the variance in the activity [69], and the data within the parentheses are 95% confidence intervals. F is the F-ratio between the variances of observed and calculated activities. [Pg.192]
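Taken literally, this F-ratio is just the ratio of the two sample variances; the observed and model-calculated activity values below are hypothetical:

```python
import statistics

# Hypothetical observed vs. model-calculated activities for six compounds.
observed   = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
calculated = [5.0, 4.9, 5.9, 5.6, 5.0, 5.5]

# F-ratio between the variances of observed and calculated activities.
F = statistics.variance(observed) / statistics.variance(calculated)
```

(In regression contexts F is often defined instead via explained vs. residual variance; the simple ratio here follows the wording of the text.)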

Seismic isolation systems are ideally suited for implementation within a performance-based framework because (a) robust characterizations of their behavior can be made through experimentation, (b) the variance of observed behavior from expected is often low relative to conventional structural elements, and (c) it can be challenging or even impossible to achieve an enhanced performance objective without the use of seismic isolation. Compared to conventional structural systems for seismic resistance, isolation provides a unique and reliable means of simultaneously reducing earthquake damage in both... [Pg.419]

Let u be a vector-valued stochastic variable with dimension D x 1 and with covariance matrix R_u of size D x D. The key idea is to linearly transform all observation vectors, u_i, to new variables, z_i = W u_i, and then solve the optimization problem (1) where we replace u_i by z_i. We choose the transformation so that the covariance matrix of z is diagonal and (more importantly) none of its eigenvalues are too close to zero. (Loosely speaking, the eigenvalues close to zero are those responsible for the large variance of the OLS solution.) In order to find the desired transformation, a singular value decomposition of R_u is performed, yielding... [Pg.888]
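A small numerical sketch of this idea (the dimension D = 3 and the eigenvalue cut-off are arbitrary choices for illustration, not taken from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 observation vectors with D = 3; the third coordinate nearly duplicates
# the first, so the covariance matrix R_u is close to singular.
u = rng.normal(size=(200, 3))
u[:, 2] = u[:, 0] + 1e-6 * rng.normal(size=200)

R_u = np.cov(u, rowvar=False)
U, s, _ = np.linalg.svd(R_u)      # for symmetric R_u this is an eigendecomposition
keep = s > 1e-3 * s.max()         # drop directions with near-zero eigenvalues
W = U[:, keep]
z = u @ W                         # transformed observations: diagonal covariance
```

The discarded direction is exactly the one that would have inflated the variance of the OLS solution.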

The t test can be applied to differences between pairs of observations. Perhaps only a single pair can be performed at one time, or possibly one wishes to compare two methods using samples of differing analytical content. It is still necessary that the two methods possess the same inherent standard deviation. An average difference d̄ is calculated, and individual deviations from d̄ are used to evaluate the variance of the differences. [Pg.199]
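A minimal paired-t sketch with hypothetical results from two methods applied to the same five samples:

```python
import math
import statistics

method_a = [10.2, 9.8, 11.1, 10.5, 9.9]
method_b = [10.0, 9.5, 10.8, 10.6, 9.6]

d = [a - b for a, b in zip(method_a, method_b)]
d_bar = statistics.mean(d)                # average difference
s_d = statistics.stdev(d)                 # from individual deviations about d_bar
t = d_bar / (s_d / math.sqrt(len(d)))     # compared against t with n-1 dof
```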

It is possible to compare the means of two relatively small sets of observations when the variances within the sets can be regarded as the same, as indicated by the F test. One can consider the distribution involving estimates of the true variance. With s₁² determined from a first group of N₁ observations and s₂² from a second group of N₂ observations, the distribution of the ratio of the sample variances is given by the F statistic ... [Pg.204]
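A sketch of the variance-ratio test with two hypothetical groups (by convention the larger variance goes in the numerator, so that F >= 1):

```python
import statistics

group1 = [4.2, 4.5, 4.1, 4.4, 4.3, 4.6]   # N1 = 6 -> 5 degrees of freedom
group2 = [4.0, 4.8, 4.4, 4.9, 4.1]        # N2 = 5 -> 4 degrees of freedom

s1_sq = statistics.variance(group1)
s2_sq = statistics.variance(group2)

F = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)
# Compare F against the tabulated critical value for the corresponding
# numerator/denominator degrees of freedom.
```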

Consider the problem of assessing the accuracy of a series of measurements. If the measurements are independent, identically distributed observations, then the errors are independent and uncorrelated. Then ȳ, the experimentally determined mean, varies about E(y), the true mean, with variance σ²/n, where n is the number of observations in ȳ. Thus, if one measures something several times today, and each day, and the measurements have the same distribution, then the variance of the means decreases with the number of samples in each day's measurement, n. Of course, other factors (weather, weekends) may make the observations on different days not distributed identically. [Pg.505]
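The shrinking of the variance of the mean with sample size is easy to check by simulation (the distribution and sample counts below are arbitrary choices):

```python
import random
import statistics

random.seed(1)

def daily_mean(n, mu=10.0, sigma=0.3):
    """Mean of n identically distributed measurements taken on one 'day'."""
    return statistics.mean(random.gauss(mu, sigma) for _ in range(n))

# Empirical variance of the daily mean over many simulated days.
var_n1  = statistics.variance(daily_mean(1)  for _ in range(2000))
var_n16 = statistics.variance(daily_mean(16) for _ in range(2000))
# var_n16 comes out roughly var_n1 / 16, i.e. sigma^2 / n.
```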

The calculated values ŷ of the dependent variable are then found, for the x_i corresponding to the experimental observations, from the model equation (2-71). The quantity σ², the variance of the observations y, is calculated with Eq. (2-90), where the denominator is the degrees of freedom of a system with n observations and four parameters. [Pg.47]

The weight of the ith observation is inversely proportional to the variance of the observation; we will use Eq. (2-82) for this quantity, n being the number of observations. [Pg.248]

The mean μ and the variance σ² of a random variable are constants characterizing the random variable's average value and dispersion about its mean. The mean and variance can be derived from the pdf of the random variable. If the pdf is unknown, however, the mean and the variance can be estimated on the basis of a random sample of observations on the random variable. Let X₁, X₂, ..., Xₙ denote a random sample of n observations on X. [Pg.562]
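These sample estimates are the familiar formulas, sketched here with the n-1 divisor that makes the variance estimate unbiased:

```python
def sample_mean_var(xs):
    """Estimate mu and sigma^2 from a random sample x_1, ..., x_n."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # divide by n-1, not n
    return mean, var

m, v = sample_mean_var([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```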

Variance The mean square of deviations, or errors, of a set of observations; the sum of square deviations, or errors, of individual observations with respect to their arithmetic mean, divided by the number of observations less one (degrees of freedom); the square of the standard deviation, or standard error. [Pg.645]

The last example brings out very clearly that knowledge of only the mean and variance of a distribution is often not sufficient to tell us much about the shape of the probability density function. In order to partially alleviate this difficulty, one sometimes tries to specify additional parameters or attributes of the distribution. One of the most important of these is the notion of the modality of the distribution, which is defined to be the number of distinct maxima of the probability density function. The usefulness of this concept is brought out by the observation that a unimodal distribution (such as the Gaussian) will tend to have its area concentrated about the location of the maximum, thus guaranteeing that the mean and variance will be fairly reasonable measures of the center and spread of the distribution. Conversely, if it is known that a distribution is multimodal (has more than one... [Pg.123]

Nonanalytic Nonlinearities.—A somewhat different kind of nonlinearity has been recognized in recent years, as the result of observations on the behavior of control systems. It was observed long ago that control systems that appear to be reasonably linear, if considered from the point of view of their differential equations, often exhibit self-excited oscillations, a fact that is at variance with the classical theory asserting that in linear systems self-excited oscillations are impossible. Thus, for instance, in the van der Pol equation... [Pg.389]

There are two problems with the above procedure, however. The first is that it is not efficient, because the intersubject parameter variance it computes is actually the variance of the parameters between subjects plus the variance of the estimate of a single-subject parameter. The second drawback is that often, in real-life applications, a complete data set, with sufficiently many points to reliably estimate all model parameters, is not available for each experimental subject. A frequent situation is that observations are available in a haphazard, scattered fashion, are often expensive to gather, and for a number of reasons (availability of manpower, cost, environmental constraints, etc.) are usually much fewer than we would like. [Pg.96]

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we included the values of the innovation, variance-covariance matrix, gain vector and estimated parameters. The variance of the experimental error of all observations y is 25 10 absorbance units, which corresponds to r = 25 10 au for all j. The recursive estimation is started with a high value (10 ) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]
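The recursive cycle can be sketched as follows; the noise level, number of points, and initialization are illustrative stand-ins for the values in Table 41.1, but the innovation / gain / covariance updates follow the standard recursive least-squares form:

```python
import numpy as np

r = 25e-6                        # assumed variance of the experimental error (au^2)
theta = np.zeros(2)              # [intercept, slope]; true values are 0 and 1
P = np.array([[1e6, 1.0],        # start: high diagonal, low off-diagonal elements
              [1.0, 1e6]])

rng = np.random.default_rng(3)
for xi in np.linspace(0.0, 1.0, 20):
    h = np.array([1.0, xi])                 # design row for this observation
    y = xi + rng.normal(0.0, r ** 0.5)      # noisy observation of the true line
    innovation = y - h @ theta              # prediction error
    k = P @ h / (r + h @ P @ h)             # gain vector
    theta = theta + k * innovation
    P = P - np.outer(k, h) @ P              # updated variance-covariance matrix
```

After the 20 updates theta is close to the true intercept 0 and slope 1, and the diagonal of P has shrunk by many orders of magnitude.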

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model was derived by Kalman and is known as the Kalman filter. The assumptions are that the measurement noise v(j) and the system noise w(j) are random, independent, normally distributed, white and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Eq. (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state grows as a function of time (here in j units) if no observations were made. Q(j - 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
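A scalar sketch of the extrapolate/update cycle (the numbers are illustrative; the symbols follow the standard Kalman formulation rather than Table 41.10 exactly):

```python
import numpy as np

F_sys, H = 1.0, 1.0   # system and measurement models (scalar, constant state)
Q, R = 1e-6, 0.04     # system-noise and measurement-noise variances
x, P = 0.0, 1.0       # initial state estimate and its variance

rng = np.random.default_rng(7)
true_state = 2.5
for _ in range(50):
    z = true_state + rng.normal(0.0, R ** 0.5)   # new observation
    # Extrapolation: the system equation propagates state and uncertainty;
    # without observations the uncertainty would grow by Q at each step.
    x_pred = F_sys * x
    P_pred = F_sys * P * F_sys + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x = x_pred + K * (z - H * x_pred)
    P = (1.0 - K * H) * P_pred
```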

Uncertainties relating to the determination of accurate quantitative results are not relevant in these experiments. The observed experimental variance of the INAA results is a summation of the variances of homogeneity and the relevant analytical components as shown in Equation (4.8) ... [Pg.135]

The observed elemental variances of each measurement experiment and the components of the analytical variances discussed above are used to calculate variances due to heterogeneity for each element, which are converted to relative uncertainties R_j. These relative uncertainties then provide the input for the two relevant equations (4.3 and 4.4) that are commonly used to express the elemental homogeneity of a sample as a function of sample mass (w). [Pg.136]



© 2024 chempedia.info