Big Chemical Encyclopedia


Nonrandom error

Caused by the presence of a single outlier in the data, which is probably the result of nonrandom error. [Pg.301]

Reduces selection bias and observer bias (nonrandom error). [Pg.66]

Errors of measurement can be classified as determinate or indeterminate. The latter is random and can be treated statistically (Gaussian statistics); the former is not, and the source of the nonrandom error should be found and eliminated. [Pg.73]

Ruf, H., B. J. Gould, and W. Haase. 2000. Effect of nonrandom errors on the results from regularized inversions of dynamic light scattering data. Langmuir 16, no. 2: 471–480. [Pg.413]

Initial attempts to model these data using a simple citric acid cycle network gave large nonrandom errors and poorly determined values for the flux parameters since the model predicted a common steady state value for YG4, YGj, YGj, YA3 and YAj. In order to allow convergence with a reasonable standard error, it was necessary to assume the presence of a small isotopically non-exchangeable pool of aspartate. The size of this pool was allowed to be an unknown parameter,... [Pg.396]

The accuracy of the results of a test can be determined analytically from an application of random statistics, provided the variations in data are the result of small random independent effects. The methods of the test program and the resultant data were studied for errors of method and nonrandom errors. In tests 6 and 7, the pressure differential across the tank constituted a nonrandom error. As a result, the data from these tests were discarded from the accuracy analysis. [Pg.531]

Since the data containing any known source of significant nonrandom error were eliminated from consideration, the scatter of the remaining data was considered to be random. In Fig. 3, each data point is the arithmetic mean, T, of the data taken during a test run. The curve represents the mean of these arithmetic means and is referred to as the grand mean. It is the best estimate of the mean, T, which would have been found had an infinite number of data points been taken. The object of the accuracy analysis is to determine the interval about the curve in which it can be stated with a certain confidence that T exists. [Pg.531]
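The grand-mean construction described above can be sketched numerically. The following Python fragment (function name, data values, and t-value all illustrative, not from the cited source) computes the grand mean of per-run arithmetic means and a Student-t confidence half-width about it:

```python
import math
import statistics

def grand_mean_interval(run_means, t_value):
    """Grand mean of per-run arithmetic means and the half-width of a
    confidence interval about it. t_value is the Student-t critical
    value for the chosen confidence level with len(run_means) - 1
    degrees of freedom (all names here are illustrative)."""
    gm = statistics.fmean(run_means)   # mean of the run means
    s = statistics.stdev(run_means)    # scatter of the run means
    half_width = t_value * s / math.sqrt(len(run_means))
    return gm, half_width

# Four hypothetical run means; t = 3.182 for ~95% confidence, 3 dof
gm, hw = grand_mean_interval([10.0, 12.0, 11.0, 13.0], 3.182)
```

With only random scatter remaining, the true mean T can then be stated to lie within gm ± hw at the chosen confidence level.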

At low pressures, it is often permissible to neglect nonidealities of the vapor phase. If these nonidealities are not negligible, they can have the effect of introducing a nonrandom trend into the plotted residuals similar to that introduced by systematic error. Experience here has shown that application of vapor-phase corrections for nonidealities gives a better representation of the data by the model, even when these corrections... [Pg.106]

An apparent systematic error may be due to an erroneous value of one or both of the pure-component vapor pressures, as discussed by several authors (Van Ness et al., 1973; Fabries and Renon, 1975; Abbott and Van Ness, 1977). In some cases, highly inaccurate estimates of binary parameters may occur. Fabries and Renon recommend that when no pure-component vapor-pressure data are given, or if the given values appear to be of doubtful validity, then the unknown vapor pressure should be included as one of the adjustable parameters. If, after making these corrections, the residuals again display a nonrandom pattern, then it is likely that there is systematic error present in the measurements. [Pg.107]

By proper design of experiments, guided by a statistical approach, the effects of experimental variables may be found more efficiently than by the traditional approach of holding all variables constant but one and systematically investigating each variable in turn. Trends in data may be sought to track down nonrandom sources of error. [Pg.191]

In the previous development it was assumed that only random, normally distributed measurement errors, with zero mean and known covariance, are present in the data. In practice, process data may also contain other types of errors, which are caused by nonrandom events. For instance, instruments may not be adequately compensated, measuring devices may malfunction, or process leaks may be present. These biases are usually referred to as gross errors. The presence of gross errors invalidates the statistical basis of data reconciliation procedures. It is also impossible, for example, to prepare an adequate process model on the basis of erroneous measurements or to assess production accounting correctly. In order to avoid these shortcomings we need to check for the presence of gross systematic errors in the measurement data. [Pg.128]
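A common check of this kind is a global chi-square test on the measurement residuals. The sketch below (function name and numbers are illustrative, not taken from the cited source) assumes independent residuals with known variances:

```python
def global_test(residuals, variances, chi2_crit):
    """Global test for gross errors: under the null hypothesis of
    purely random, zero-mean measurement errors, the weighted sum of
    squared residuals is chi-square distributed; exceeding the
    critical value chi2_crit flags a probable gross (nonrandom)
    error somewhere in the data. Illustrative sketch only."""
    stat = sum(r * r / v for r, v in zip(residuals, variances))
    return stat, stat > chi2_crit

# Two residuals with unit variance; chi-square critical value 5.99
# (95% level, 2 degrees of freedom)
stat, flagged = global_test([1.0, -2.0], [1.0, 1.0], 5.99)
```

If the test flags the data set, individual measurements are then examined (e.g. by serial elimination) to locate the offending gross error.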

To investigate whether run order influenced the results for this example, the component A concentration residuals are plotted versus the order in which the reference values for component A were measured (Figure 5.17). The first sample is the one with the unusually high error, which may indicate problems with the startup of the instrument used to determine the reference concentration of component A. What appears to be a nonrandom pattern over time may indicate instrumental fluctuations in the reference method determination. [Pg.105]

Measurement Residuals Plot (Sample and Variable Diagnostic) There is nonrandom behavior in the spectral residuals, indicating inadequacies in the model (see Figure 5-31). This is consistent with the statistical prediction errors being an order of magnitude larger than the ideal value. Several preprocessing... [Pg.292]

Calibration Measurement Residuals Plot (Model Diagnostic) The calibration spectral residuals shown in Figure 5-53 are still structured, but are a factor of 4 smaller than the residuals when temperature was not part of the model. Comparing with Figure 5-51, the residuals structure resembles the estimated pure spectrum of temperature. Recall that the calibration spectral residuals are a function of model error as well as errors in the concentration matrix (see Equation 5.18). Either of these errors can cause nonrandom features in the spectral residuals. The temperature measurement is less precise relative to the chemical concentrations and, therefore, the hypothesis is that the structure in the residuals is due to temperature errors rather than an inadequacy in the model. [Pg.301]

The error model used in the minimization is based on the hypothesis that the residuals have zero mean and are normally distributed. The first is easily checked; the latter is only possible when sufficient data points are available and a distribution histogram can be constructed. An adequate model also follows the experimental data well, so if the residuals are plotted as a function of the dependent or independent variable(s) a random distribution around zero should be observed. Nonrandom trends in the residuals mean that systematic deviations exist and indicate that the model is not completely able to follow the course of the experimental data, as a good model should do. This residual trending can also be evaluated numerically by correlation calculations, but visual inspection is much more powerful. An example is given in Fig. 12 for the initial rate data of the metathesis of propene into ethene and 2-butene [60]. One expression was based on a dual-site Langmuir-Hinshelwood model, whereas the other... [Pg.318]

We continue to call y the observations, and p the variables. The Jacobian X is a rectangular, in general "high", matrix (n > m). For further treatment it has to have maximum rank (= m), which requires that its columns be independent. The columns of X, the "fit vectors", span the m-dimensional "fit space", a subspace of the n-dimensional space of the observations and their errors. The Jacobian X is a constant (nonrandom) matrix which depends on the functional type but not on the measured value of each of the observations. [Pg.73]

While the relations between the inertial and planar moments are strictly linear and constant, the relation between the increments of the rotational constants Bg and the moments, say Ig, is a truncated series expression and only approximately linear, ΔIg = −(f/Bg²)ΔBg. Also, the transformation coefficients f/Bg² are not strictly constant (nonrandom), although usually afflicted with only a very small relative error. The approximations are, in general, good enough to satisfy the requirements for Eq. 22, and for the above statement rt = rp = rB to be true for all practical purposes. [Pg.94]

We have seen that two sources of sampling error arise from the variation of the material as a short-range or localized phenomenon: the FE and the GSE. Variations such as cycles, long-range trends, and nonrandom changes result from differences in the material over time. Changes in the process, either intentional or incidental, result in variation, and samples taken sufficiently far apart in time may differ from each other substantially in the properties of interest. If we do not characterize the process variation relative to the material variation, our ability to understand and control the process or to reduce its variation will be limited and, in some cases, futile. [Pg.58]

The precision of a result is its reproducibility; the accuracy is its nearness to the truth. A systematic error causes a loss of accuracy; it may or may not impair the precision, depending on whether the error is constant or variable. A random error causes a lowering of precision, but with sufficient observations the scatter can (within limits) be overcome, so that the accuracy is not necessarily affected. Statistical treatment can be applied properly only to random errors. The objection might be raised that it is not known in advance whether the errors are truly random, but here again the laws of probability can be applied to determine whether nonrandomness (trends, discontinuities, clustering, or the like) is a factor. If it is, an effort should be made to locate and correct or make allowance for the systematic causes. [Pg.534]
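One simple probability-based check for the trends and clustering mentioned above is a runs-type count on the residual signs. The sketch below (function name and data illustrative, not from the cited source) counts runs of like sign; for random errors with balanced signs the expected number of runs is roughly n/2 + 1, so far fewer runs suggests trends or clustering and far more suggests oscillation:

```python
def sign_runs(residuals):
    """Count runs of like sign in a residual sequence, ignoring exact
    zeros. Too few runs relative to the random expectation indicates
    trends/clustering; too many indicates oscillation (illustrative
    sketch of a runs-type nonrandomness check)."""
    signs = [r > 0 for r in residuals if r != 0]
    if not signs:
        return 0
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Signs are ++, --, + : three runs
runs = sign_runs([0.4, 0.2, -0.1, -0.5, 0.3])
```

A formal version compares the observed count against the runs-test sampling distribution at a chosen significance level.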

The previous errors addressed heterogeneity on a small scale. Now we examine heterogeneity on a large scale: the scale of the lot over time or space. The long-range nonperiodic heterogeneity fluctuation error is nonrandom and results in trends or shifts in the measured characteristic of interest as we track it over time or over the extent of the lot in space. For example, measured characteristics of a chemical product may decrease due to catalyst deterioration. Particle size distribution may be altered due to machine wear. Samples from different parts of the lot may show trends due to lack of mixing. [Pg.25]

The long-range periodic heterogeneity fluctuation error is the result of nonrandom periodic changes in the critical component as we track it over time or over the entirety of the lot in space. For instance, certain measurements may show periodic fluctuations due to different control of the manufacturing process by various shifts of operators. Batch processes that alternate raw materials from two different suppliers may show periodicity in measurements. [Pg.26]
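Periodic fluctuation of this kind can be detected with a lag autocorrelation of the measurement series. A minimal Python sketch (names and data illustrative, not from the cited source): a pronounced autocorrelation at some lag k points to a cycle of period k, such as the shift-to-shift or alternating-supplier patterns described above.

```python
import statistics

def lag_autocorrelation(series, lag):
    """Lag-k autocorrelation of a measurement series: a pronounced
    value at some lag k points to periodic (nonrandom) fluctuation
    with period k. Illustrative sketch only."""
    m = statistics.fmean(series)
    num = sum((series[i] - m) * (series[i + lag] - m)
              for i in range(len(series) - lag))
    den = sum((v - m) ** 2 for v in series)
    return num / den

# A series alternating every sample repeats with period 2,
# so the lag-2 autocorrelation is strongly positive
r2 = lag_autocorrelation([1.0, -1.0, 1.0, -1.0, 1.0, -1.0], 2)
```

Scanning over several lags and inspecting the resulting correlogram is the usual way to find an unknown period.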

Determinate or systematic errors are nonrandom and occur when something is wrong with the measurement. [Pg.66]


