Random component

Precision is defined as the closeness of agreement between independent test results obtained under stipulated conditions (Fleming et al. [1996b]; Prichard et al. [2001]). Precision characterizes the random component of the measurement error and, therefore, does not relate to the true value. [Pg.203]

In random functions, each realization of the function can be conceived as the sum of a structured component and an erratic or stochastic component. The structured component is the one ensuring that the observed values show systematic variation; in other words, if we are, for instance, in an area where several measurements above a normal value have been obtained, there is a high probability that additional measurements will be high as well. The random component, on the other hand, is what makes exact prediction of these hypothetical measurements difficult, since unpredictable fluctuations exist. [Pg.344]

For the simulations we use a 2D TFM as described in the previous sections. The simulation conditions are specified in Table V. The gas flow enters at the bottom through a porous distributor. The initial gas volume fraction in each fluid cell is set to an average value of 0.4 with a random variation of ±5%. For the boundary condition at the bottom, we likewise use a uniform gas velocity with a superimposed random component (10%), following Goldschmidt et al. (2004). [Pg.128]
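A minimal sketch of how such an initialization might look in practice; the grid size, inlet velocity, and NumPy calls are illustrative assumptions, only the 0.4, ±5%, and 10% figures come from the text:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Initial gas volume fraction: mean 0.4 with a ±5% random variation per cell
nx, ny = 60, 200                      # hypothetical grid dimensions
eps_g = 0.4 * (1.0 + rng.uniform(-0.05, 0.05, size=(nx, ny)))

# Bottom inlet: uniform gas velocity with a superimposed 10% random component
u_inlet = 1.2                          # m/s, illustrative value
u_bottom = u_inlet * (1.0 + rng.uniform(-0.10, 0.10, size=nx))
```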

Estimate x given that the previous measurement model holds and, further, that the set of linear constraints is satisfied with an additive random component w ... [Pg.119]
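The measurement model and the constraint equation themselves are not reproduced in the excerpt, so the following is only a generic sketch of this type of problem: a linear measurement model plus linear constraints that hold up to an additive random component w, solved as a weighted least-squares (data-reconciliation) estimate. Every matrix and noise level here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurement model y = C x + v and constraints D x = e + w,
# where v and w are zero-mean random components with covariances R and W.
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.array([[1.0, -1.0]])
x_true = np.array([2.0, 1.5])

R = 0.05 * np.eye(3)                   # measurement noise covariance
W = 0.01 * np.eye(1)                   # covariance of the constraint's random component
y = C @ x_true + rng.multivariate_normal(np.zeros(3), R)
e = D @ x_true + rng.multivariate_normal(np.zeros(1), W)

# Stack model and constraints and solve the weighted least-squares problem
A = np.vstack([C, D])
b = np.concatenate([y, e])
Sigma = np.block([[R, np.zeros((3, 1))], [np.zeros((1, 3)), W]])
Wgt = np.linalg.inv(Sigma)
x_hat = np.linalg.solve(A.T @ Wgt @ A, A.T @ Wgt @ b)
print(x_hat)   # estimate of x honoring both the measurements and the constraints
```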

Because the velocity u contains the random component u′, the concentration c is a stochastic function since, by virtue of Eq. (2.2), c is a function of u. The mean value of c, as expressed in Eq. (2.5), is an ensemble mean formed by averaging c over the entire ensemble of identical experiments. Temporal and spatial mean values, by contrast, are obtained by averaging values from a single member of the ensemble over a period or area, respectively. The ensemble mean, which we have denoted by the angle brackets ( ), is the easiest to deal with mathematically. Unfortunately, ensemble means are not measurable quantities, although under the conditions of the ergodic theorem they can be related to observable temporal or spatial averages. In Eq. (2.7) the mean concentration (c) represents a true ensemble mean, whereas if we decompose c as... [Pg.216]
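A small numerical illustration of the distinction between the two averages; the surrogate stochastic "concentration" below is an assumption made only to show the averaging operations, not the model in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

n_realizations, n_times = 500, 2000
# Surrogate stochastic field: a deterministic part plus random fluctuations
c = 1.0 + 0.1 * np.sin(np.linspace(0, 10, n_times)) \
        + 0.3 * rng.standard_normal((n_realizations, n_times))

ensemble_mean = c.mean(axis=0)     # <c>(t): average over identical experiments
temporal_mean = c[0].mean()        # time average of a single realization

# Under ergodic-like conditions the long-time average of one member
# approaches the time-averaged ensemble mean.
print(temporal_mean, ensemble_mean.mean())
```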

The random component of the particle velocity in stationary, homogeneous turbulence is normally distributed with mean (1 − a)⟨u⟩ and mean square (1 − a²)⟨u′²⟩ ... [Pg.290]

To implement the method, it is necessary to decide how to handle the case when a particle encounters either vertical boundary. Following Fig. 11, if a particle is predicted to cross the boundary, its position can be reflected. At the next step, n + 1 to n + 2, the velocity at step n + 1 can be taken as the reflected value (line a in Fig. 11), as the previous value unreflected (line b), or as zero with only the random component (case c). In Fig. 12, dimensionless surface concentrations are shown corresponding to each of these three options. As expected, the lowest concentrations... [Pg.292]

Fig. 11. Three ways of treating particles that intersect a boundary in a Monte Carlo calculation. In case a the velocity is reflected at step n + 2. In case b the velocity is unaltered at step n + 2. In case c the velocity at step n + 2 is assumed to have a random component only.
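A minimal sketch of the kind of Monte Carlo step being described: a Langevin-type velocity update whose random increment follows the distribution quoted above (mean (1 − a)⟨u⟩, variance (1 − a²)⟨u′²⟩), with one of the three boundary treatments (reflection, case a) applied. The time step, Lagrangian time scale, and turbulence intensity are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T_L = 0.1, 1.0                 # time step and Lagrangian time scale (assumed)
a = np.exp(-dt / T_L)              # velocity autocorrelation over one step
u_mean, u_var = 0.0, 0.25          # <u> and <u'^2> (assumed)

z, u = 5.0, 0.0                    # initial particle position and velocity
z_lo, z_hi = 0.0, 10.0             # vertical boundaries

for n in range(1000):
    # Random component: normal with mean (1 - a)<u> and variance (1 - a**2)<u'^2>
    w = rng.normal((1 - a) * u_mean, np.sqrt((1 - a**2) * u_var))
    u = a * u + w
    z += u * dt
    # Boundary treatment, case (a): reflect position and velocity
    if z < z_lo:
        z, u = 2 * z_lo - z, -u
    elif z > z_hi:
        z, u = 2 * z_hi - z, -u
```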
The remaining errors in the data are usually described as random, their properties ultimately attributable to the nature of our physical world. Random errors do not lend themselves easily to quantitative correction. However, certain aspects of random error exhibit a consistency of behavior in repeated trials under the same experimental conditions, which allows more probable values of the data elements to be obtained by averaging processes. The behavior of random phenomena is common to all experimental data and has given rise to the well-known branch of mathematical analysis known as statistics. Statistical quantities, unfortunately, cannot be assigned definite values. They can only be discussed in terms of probabilities. Because (random) uncertainties exist in all experimentally measured quantities, a restoration with all the possible constraints applied cannot yield an exact solution. The best that may be obtained in practice is the solution that is most probable. Actually, whether an error is classified as systematic or random depends on the extent of our knowledge of the data and the influences on them. All unaccounted errors are generally classified as part of the random component. Further knowledge determines many errors to be systematic that were previously classified as random. [Pg.263]

The error of an analytical result is related to the (in)accuracy of an analytical method and consists of a systematic component and a random component [14]. Precision and bias studies form the basis for evaluation of the accuracy of an analytical method [18]. The accuracy of results only relates to the fitness for purpose of an analytical system as assessed by method validation. Reliability of results, however, has to do with more than method validation alone. MU is more than just a single-figure expression of accuracy. It covers all sources of error which are relevant for all analyte concentration levels. MU is a key indicator of both fitness for purpose and reliability of results, binding together the ideas of fitness for purpose and quality control (QC) and thus covering the whole QA system [4,37]. [Pg.751]

The model form implies that variations in X1 and X2 cause variations in Y, but that some variation in Y is due to a random component (e) of Y. Since "e" has an expected value of zero, it is ordinarily not referred to in listing the estimated equation. [Pg.299]
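A short sketch of fitting such a model, Y = b0 + b1*X1 + b2*X2 + e, where e is the zero-mean random component; the data and coefficients below are fabricated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 200
X1, X2 = rng.uniform(0, 10, n), rng.uniform(0, 5, n)
e = rng.normal(0.0, 0.5, n)                  # random component, expected value zero
Y = 2.0 + 0.8 * X1 - 1.3 * X2 + e

# Ordinary least squares; the fitted equation omits e because E[e] = 0
A = np.column_stack([np.ones(n), X1, X2])
b0, b1, b2 = np.linalg.lstsq(A, Y, rcond=None)[0]
print(f"Y = {b0:.2f} + {b1:.2f}*X1 + {b2:.2f}*X2")
```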

The ISO Guide 3534-1 [57] defines accuracy as "the closeness of agreement between a test result and the accepted reference value", with a note stating that the term accuracy, when applied to a set of test results, involves "a combination of random components and a common systematic error or bias component". Accuracy is expressed, then, as two components: trueness and precision. Trueness is defined as "the closeness of agreement between the average value obtained from a large set of test results and an accepted reference value" and is normally expressed in terms of bias. Finally, precision is defined as "the closeness of agreement between independent test results obtained under stipulated conditions". [Pg.225]

Expressivity. Here we define expressivity as the variation of the spectrum and time evolution of a signal for musical purposes. That variation is usually considered to have two components, a deterministic component and a random component. The deterministic element of expressivity is the change in spectrum and time evolution controlled by the user during performance. For example, hitting a piano key harder makes the note louder and brighter (more high frequency content). The random component is the change from note to note that is not possible to control by the musician. Two piano notes played in succession, for example, are never identical no matter how hard the musician attempts to create duplicate notes. While the successive waveforms will always be identified as a piano note, careful examination shows that the waveform details are different from note to note, and that the differences are perceivable. [Pg.173]
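A toy sketch of the idea, assuming a simple additive-synthesis "note" in which loudness and brightness depend deterministically on a strike velocity while small per-note random components keep repeated notes from being identical; all parameters are invented for illustration and do not describe any particular instrument model:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, dur, f0 = 44100, 1.0, 220.0
t = np.linspace(0, dur, int(fs * dur), endpoint=False)

def note(velocity):
    # Deterministic component: harder strike -> louder and brighter (more harmonics)
    amp = velocity
    n_harmonics = 3 + int(5 * velocity)
    # Random component: small, uncontrollable note-to-note variation
    detune = 1.0 + rng.normal(0, 0.001)
    decay = 3.0 * (1.0 + rng.normal(0, 0.05))
    y = sum(amp / (k + 1) * np.sin(2 * np.pi * f0 * detune * (k + 1) * t)
            for k in range(n_harmonics))
    return y * np.exp(-decay * t)

a, b = note(0.8), note(0.8)   # same gesture, yet a != b because of the random part
```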

There is, however, a constraint on the types of samples that will give reasonable results for this abundance estimation approach. For example, in this particular analysis, there is no presumption of spatial organization or sample orientation. A measurement along any axis is assumed to be equivalent, resulting in an equivalent random component distribution. For example, this analysis would not deliver sensible results for samples that may have drug cores, coatings or other time-release mechanisms and other approaches would be required. Even with a randomly... [Pg.41]

Overall uncertainty can be estimated by identifying all factors which contribute to the uncertainty. Their contributions are estimated as standard deviations, either from repeated observations (for random components), or from other sources of information (for systematic components). The combined standard uncertainty is calculated by combining the variances of the uncertainty components, and is expressed as a standard deviation. The combined standard uncertainty is multiplied by a coverage factor of 2 to give a 95% level of confidence (approximately). [Pg.297]
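A brief numeric sketch of that procedure; the component values are made up, while the combination of variances and the coverage factor of 2 follow the text:

```python
import math

# Standard uncertainties of individual contributions (illustrative values)
u_repeatability = 0.12     # from repeated observations (random component)
u_calibration   = 0.08     # from other information, e.g. a certificate (systematic)
u_volume        = 0.05

# Combined standard uncertainty: combine the variances, express as a std. deviation
u_c = math.sqrt(u_repeatability**2 + u_calibration**2 + u_volume**2)

# Expanded uncertainty with coverage factor k = 2 (approximately 95% confidence)
U = 2 * u_c
print(f"u_c = {u_c:.3f}, U (k=2) = {U:.3f}")
```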

For the motion of a gas-solid suspension in the riser, both the gas and particle velocities have local averaged and random components. Thus, it is desirable to develop a mechanistic model which incorporates a variety of interactive effects due to both the gas and particle velocity components (see Chapter 5) as given in the following [Sinclair and Jackson, 1989] ... [Pg.452]

Equation (1.35) describes a pulse of coherent light, where E(z, t) is represented by an analytic function. In cases of partially coherent light either the phase or the amplitude acquires a random component, and an analytic expression for E(z, t) no longer exists. Appropriate descriptions of partially incoherent light interacting with molecules are discussed in Section 5.3. [Pg.7]
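As a rough numerical picture of that statement, one can take a deterministic Gaussian pulse and give its phase a random component, after which no single analytic expression reproduces E(z, t) for every realization. The pulse parameters below are arbitrary illustrative choices and are not Eq. (1.35) itself:

```python
import numpy as np

rng = np.random.default_rng(6)

t = np.linspace(-5, 5, 2001)          # time in arbitrary units
omega0, tau = 20.0, 1.0               # carrier frequency and pulse width (assumed)

# Coherent pulse: fully deterministic analytic form
E_coherent = np.exp(-t**2 / tau**2) * np.exp(-1j * omega0 * t)

# Partially coherent pulse: the phase acquires a random component
phase_noise = np.cumsum(rng.normal(0, 0.05, t.size))   # slowly wandering phase
E_partial = np.exp(-t**2 / tau**2) * np.exp(-1j * (omega0 * t + phase_noise))
```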

An alternate source of decoherence lies in the nature of the laser used to irradiate the system. Specifically, if the laser has random components, then it introduces a degree of randomness into the system, reducing the phase information content and hence decohering the system.

This requires a sampling plan that reflects the data quality objectives and an analytical measurement subjected to the laboratory quality system (Swyngedouw and Lessard, 2007). The measurement uncertainty can be controlled and evaluated (Eurachem, 2000). The sampling variance may contain systematic and random components of error arising from population representation and the sampling protocol. Note that these errors are separate and additive, which means that the laboratory cannot compensate for sampling errors. [Pg.24]

It is important to note that the displacement of sample components in all the above processes is described by the basic mass transport equations developed earlier. However, some special considerations are needed to properly account for the random component of transport. Thus the basic equation of flow and transport, Eq. 3.35, is now expressed as... [Pg.95]

