Statistical notions

In this geometry the points are considered not as primary entities but rather as lumps of primordial elements that are not further resolvable. Here, the concept of probability is introduced so that the same two objects are sometimes treated as identical and sometimes as distinguishable. In this way Menger solved the Poincaré dilemma of distinguishing between transitive mathematical and intransitive physical relations of equality. These lumps may be the seat of elementary particles or be of the size of strings. In this geometry we have two basic notions: (1) the concept of hazy or fuzzy lumps and (2) the statistical notion. [Pg.611]

The term hierarchy will be used here for the statistical notion of nestedness, and the term nestedness will be used in the strict (chemometric) sense. In this terminology, PCA models are nested in the number of components, because the first R components remain exactly the same upon going to an (R + 1)-component model. PARAFAC models are hierarchical because an R-component model can be derived from an (R + 1)-component model in the sense of the true parameters, by setting the true (R + 1)th component parameters to zero. PARAFAC models are not nested: in general, the estimates of the R components change when going to an (R + 1)-component model. [Pg.90]
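
The nestedness of PCA in the number of components can be seen directly from the fact that an R-component PCA model is a truncated singular value decomposition. The following is a minimal NumPy sketch of this point (my own illustration, not taken from the cited text); the data matrix is arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))      # arbitrary example data
Xc = X - X.mean(axis=0)                # column-center before PCA

# By the Eckart-Young theorem, the best rank-R least-squares model is the
# truncated SVD, so the loadings of the R- and (R+1)-component models coincide.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

R = 3
loadings_R  = Vt[:R]                   # loadings of the R-component model
loadings_R1 = Vt[:R + 1]               # loadings of the (R+1)-component model

print(np.allclose(loadings_R, loadings_R1[:R]))   # True: first R components unchanged
```

No analogous shortcut exists for PARAFAC, whose R-component least-squares estimates must be refitted when a component is added.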

From a mathematical standpoint, McLean notes that there is little danger of actually running into your parallel-world doppelganger, as the number of sub-Earths where you are not born so vastly outweighs the ones where you are born. (Still, an infinite number of your equivalents exist on an Infinity World.) And, although the infinity effect means that some extremely bizarre sub-Earths will exist, some version of the central limit theorem (the statistical notion that repeated combinations of random observations tend to cluster around a single mean) will cause Earths to cluster around some average behavior. [Pg.31]

A more rigorous definition of uncertainty (Type A) relies on the statistical notion of confidence intervals and the Central Limit Theorem. The confidence interval is based on the calculation of the standard error of the mean, s_x̄, which is derived from a random sample of the population. The entire population has a mean μ and a variance σ². A sample with a random distribution has a sample mean and a sample standard deviation of x̄ and s, respectively. The Central Limit Theorem holds that the standard error of the mean equals the sample standard deviation divided by the square root of the number of samples, s_x̄ = s/√n. [Pg.33]
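
As a brief, self-contained illustration of this calculation (not from the cited text; the replicate values below are invented), the standard error of the mean and a 95% confidence interval can be computed as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g., titration volumes in mL)
x = np.array([10.12, 10.08, 10.15, 10.11, 10.09, 10.13])

n    = x.size
mean = x.mean()
s    = x.std(ddof=1)        # sample standard deviation
se   = s / np.sqrt(n)       # standard error of the mean: s_xbar = s / sqrt(n)

# 95% confidence interval for the population mean, using Student's t
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"mean = {mean:.3f}, standard error = {se:.4f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```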

So basic is the notion of a statistical estimate of a physical parameter that statisticians use Greek letters for the parameters and Latin letters for the estimates. For many purposes, one uses the variance, which for the sample is s² and for the entire population is σ². The variance s² of a finite sample is an unbiased estimate of σ², whereas the standard deviation s is not an unbiased estimate of σ. [Pg.197]
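
The bias mentioned here is easy to check numerically. The short simulation below (my own sketch, not from the cited text) draws many small samples from a normal population with σ = 2: the average of s² matches σ², but the average of s falls short of σ.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                       # true population standard deviation
n, trials = 5, 200_000            # small samples make the bias of s visible

samples = rng.normal(0.0, sigma, size=(trials, n))
s2 = samples.var(axis=1, ddof=1)  # sample variance with (n - 1) in the denominator
s  = np.sqrt(s2)                  # sample standard deviation

print(np.mean(s2), sigma**2)      # ~4.00 vs 4.00 -> s² is unbiased for σ²
print(np.mean(s),  sigma)         # ~1.88 vs 2.00 -> s underestimates σ on average
```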

In the light of the preceding discussion, it is tempting to extend the notion of statistical independence by calling a group of three or more events statistically independent if... [Pg.153]
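
The excerpt is cut off here; for reference, the standard definition it leads up to requires the product rule for every subcollection of events, not just for pairs (standard textbook condition, not quoted from the source):

```latex
P\!\left(\bigcap_{i \in S} A_i\right) = \prod_{i \in S} P(A_i)
\qquad \text{for every } S \subseteq \{1,\dots,n\},\ |S| \ge 2 .
```

Pairwise independence alone does not guarantee this stronger condition, which is presumably why the extension is described as merely "tempting".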

The static properties of an isolated chain constitute a good starting point for studying polymer dynamics: many of the features of the chain in a quiescent fluid can be extrapolated to the kinetic theories of molecular coil deformation. As a matter of fact, it has been pointed out that the equations of chain statistics and chain dynamics are intimately related through the simplest notions of graph theory [16]. [Pg.78]

Another example of slight conceptual inaccuracy is given by the Wigner function (12) and the Feynman path integral (13). Both are useful ways to look at the wave function. However, because of the prominence of classical particles in these concepts, they suggest the view that QM is a variant of statistical mechanics and that it is a theory built on top of NM. This is unfortunate, since one wants to convey the notion that NM can be recovered as an integral part of QM pertaining to macroscopic systems. [Pg.26]

A measurement technique such as titration is employed that provides a single result which, on repetition, scatters somewhat around the expected value. If the difference between the expected and the observed value is so large that a deviation must be suspected, and no other evidence such as gross operator error or instrument malfunction is available to reject this notion, a statistical test is applied. (Note: under GMP, a deviant result may be rejected if and when there is sufficient documented evidence of such an error.)... [Pg.45]
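
A common choice for such a test is a one-sample t-test of the replicate results against the expected value. The sketch below (my own illustration; the data and the 5% significance level are assumptions, not taken from the source) shows the idea.

```python
import numpy as np
from scipy import stats

expected = 10.00                                    # expected (nominal) value
x = np.array([10.12, 10.08, 10.15, 10.11, 10.09])   # hypothetical replicate results

# Two-sided one-sample t-test: is the observed mean consistent with 'expected'?
t_stat, p_value = stats.ttest_1samp(x, popmean=expected)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:   # assumed significance level
    print("Deviation from the expected value is statistically significant.")
else:
    print("No statistically significant deviation detected.")
```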

Visualizing Data: as the reader may have guessed from previous sections, graphical display contributes much toward understanding the data and the statistical analysis. This notion is correct, and graphics become more important as the dimensionality of the data rises, especially to three and more dimensions. Bear in mind that ... [Pg.133]

The first theoretical attempts in the field of time-resolved X-ray diffraction were entirely empirical. More precise theoretical work appeared only in the late 1990s and is due to Wilson et al. [13-16]. However, this theoretical work still remained preliminary. A really satisfactory approach must be statistical. In fact, macroscopic transport coefficients such as the diffusion constant or the chemical rate constant break down at ultrashort time scales. Even the notion of a molecule becomes ambiguous: at which interatomic distance can the atoms A and B of a molecule A-B be considered to be free? Another element of consideration is that the electric field of the laser pump is strong, and that its interaction with matter is nonlinear. What is needed is thus a statistical theory reminiscent of those from time-resolved optical spectroscopy. A theory of this sort was elaborated by Bratos and co-workers and was published over the last few years [17-19]. [Pg.265]

This is a law about the equilibrium state, when macroscopic change has ceased; it is the state, according to the law, of maximum entropy. It is not really a law about nonequilibrium per se, not in any quantitative sense, although the law does introduce the notion of a nonequilibrium state constrained with respect to structure. By implication, entropy is perfectly well defined in such a nonequilibrium macrostate (otherwise, how could it increase?), and this constrained entropy is less than the equilibrium entropy. Entropy itself is left undefined by the Second Law, and it was only later that Boltzmann provided the physical interpretation of entropy as the number of molecular configurations in a macrostate. This gave birth to his probability distribution and hence to equilibrium statistical mechanics. [Pg.2]
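
Boltzmann's interpretation is usually summarized by his famous relation (standard form, added here for reference rather than quoted from the excerpt), with W the number of molecular configurations compatible with the macrostate and k_B Boltzmann's constant:

```latex
S = k_{\mathrm{B}} \ln W
```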

Because the focus is on a single, albeit rather general, theory, only a limited historical review of the nonequilibrium field is given (see Section IA). That is not to say that other work is not mentioned in context in other parts of this chapter. An effort has been made to identify where results of the present theory have been obtained by others, and in these cases some discussion of the similarities and differences is made, using the nomenclature and perspective of the present author. In particular, the notion and notation of constraints and exchange with a reservoir that form the basis of the author's approach to equilibrium thermodynamics and statistical mechanics [9] are used as well for the present nonequilibrium theory. [Pg.3]

The concept of affine deformation is central to the theory of rubber elasticity. The foundations of the statistical theory of rubber elasticity were laid down by Kuhn (1), by Guth and James (2), and by Flory and Rehner (3), who introduced the notion of affine deformation, namely, that the values of the Cartesian components of the end-to-end chain vectors in a network vary according to the same strain tensor which characterizes the macroscopic bulk deformation. To account for apparent deviations from affine deformation, refinements have been proposed by Flory (4) and by Ronca and Allegra (5) which take into account effects such as chain-junction entanglements. [Pg.279]
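
The affine assumption is easy to state in code. The sketch below (my own illustration, not from the cited work) applies a volume-conserving uniaxial stretch λ to a set of Gaussian end-to-end vectors and checks that the mean-square end-to-end distance changes by the affine factor (λ² + 2/λ)/3.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Gaussian network: end-to-end vectors of N chains
N = 100_000
r0 = rng.standard_normal((N, 3))

# Affine deformation: every chain vector follows the macroscopic strain tensor.
lam = 2.0
F = np.diag([lam, lam**-0.5, lam**-0.5])   # volume-conserving uniaxial stretch
r = r0 @ F.T                               # r' = F r for each chain vector

# Mean-square end-to-end distance changes exactly by the affine factor
ratio = (r**2).sum(axis=1).mean() / (r0**2).sum(axis=1).mean()
print(ratio, (lam**2 + 2.0 / lam) / 3.0)   # both ≈ (λ² + 2/λ)/3
```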

For a metal, the negative of the work function gives the position of the Fermi level with respect to the vacuum outside the metal. Similarly, the negative of the work function of an electrochemical reaction is referred to as the Fermi level E_F(redox) of this reaction, measured with respect to the vacuum; in this context Fermi level is used as a synonym for electrochemical potential. If the same reference point is used for the metal and the redox couple, the equilibrium condition for the redox reaction is simply E_F(metal) = E_F(redox). So the notion of a Fermi level for a redox couple is a convenient concept; however, this terminology does not imply that there are free electrons in the solution which obey Fermi-Dirac statistics, a misconception sometimes found in the literature. [Pg.17]
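
For orientation, the vacuum-referenced Fermi level of a redox couple is often estimated from its standard electrode potential using the commonly quoted value of about 4.44 V for the absolute potential of the standard hydrogen electrode (an approximate relation added here for reference, not part of the excerpt):

```latex
E_{\mathrm{F}}(\text{redox}) \approx -4.44\,\text{eV} - e\,E^{0}(\text{vs. SHE})
```

Here E⁰ is the standard potential in volts versus SHE and e is the elementary charge; the 4.44 V figure carries some experimental uncertainty.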

A more precise definition would include conditioning on the random initial velocity and compositions as well, i.e., a conditional PDF of the form f(V, ψ | V₀, ψ₀, y). However, only the conditioning on the initial location is needed in order to relate the Lagrangian and Eulerian PDFs. Nevertheless, the initial conditions (U₀, φ₀) for a notional particle must have the same one-point statistics as the random variables U(Y, t₀) and φ(Y, t₀). [Pg.307]

The superscript used in the coefficient matrices in (6.192) is a reminder that the statistics must be evaluated at the notional-particle location. For example, ε* = ε(X*, t), and the scalar standard-deviation matrix and the scalar correlation matrix ρ are computed from the location-conditioned scalar second moments evaluated at (X*, t). [Pg.316]

We have seen that Lagrangian PDF methods allow us to express our closures in terms of SDEs for notional particles. Nevertheless, as discussed in detail in Chapter 7, these SDEs must be simulated numerically and are non-linear and coupled to the mean fields through the model coefficients. The numerical methods used to simulate the SDEs are statistical in nature (i.e., Monte-Carlo simulations). The results will thus be subject to statistical error, the magnitude of which depends on the sample size, and deterministic error or bias (Xu and Pope 1999). The purpose of this section is to present a brief introduction to the problem of particle-field estimation. A more detailed description of the statistical error and bias associated with particular simulation codes is presented in Chapter 7. [Pg.317]

Ideally, one would like to choose Np and M large enough that the error e is dominated by the statistical error, which can then be reduced through the use of multiple independent simulations. In any case, for fixed Np and M, the relative magnitudes of the errors will depend on the method used to estimate the mean fields from the notional-particle data. We will explore this in detail below after introducing the so-called empirical PDF. [Pg.319]
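
As a stand-alone illustration of the statistical-error contribution (my own sketch, unrelated to the estimators developed in Chapter 7), estimating a mean field in a single cell from Np notional-particle values gives an error that decays like 1/√Np, which is why averaging over multiple independent simulations reduces it.

```python
import numpy as np

rng = np.random.default_rng(3)

true_mean = 1.0     # "exact" mean-field value in one grid cell (hypothetical)
sigma = 0.5         # spread of the particle values about that mean (hypothetical)

for Np in (100, 1_000, 10_000):
    # Many independent simulations, each estimating the cell mean from Np particles
    n_sim = 2_000
    estimates = rng.normal(true_mean, sigma, size=(n_sim, Np)).mean(axis=1)

    stat_error = estimates.std()                  # observed statistical error
    print(Np, stat_error, sigma / np.sqrt(Np))    # observed vs sigma / sqrt(Np)
```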

