Big Chemical Encyclopedia


Probability distribution definition

Finally, each coefficient was standardized by dividing it by the sum of all coefficients (2). This definition also allows the co-occurrence matrix to be regarded as a probability distribution, and it can be represented by an image of KxK dimensions. [Pg.232]
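The normalization described above can be sketched as follows; a minimal example assuming NumPy, with a hypothetical K = 4 matrix of raw counts standing in for the real co-occurrence data:

```python
import numpy as np

# Hypothetical K x K co-occurrence matrix of raw counts (illustrative only).
K = 4
rng = np.random.default_rng(0)
counts = rng.integers(0, 10, size=(K, K))

# Dividing every coefficient by the sum of all coefficients turns the
# matrix into a joint probability distribution over the K x K cells.
p = counts / counts.sum()

assert np.isclose(p.sum(), 1.0)   # probabilities sum to one
assert (p >= 0).all()
```

Each entry p[i, j] can then be read as the probability of the corresponding co-occurrence, which is exactly what lets the matrix be treated as a K×K probability image.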

Let a(t) denote a time-dependent random process. a(t) is a random process because at time t the value of a(t) is not definitely known but is given instead by a probability distribution function W(a, t), where a is the value a(t) can have at time t with probability determined by W(a, t). W(a, t) is the first of an infinite collection of distribution functions describing the process a(t) [7, H]. The first two are defined by... [Pg.692]

The two exponential terms are complex conjugates of one another, so that all structure amplitudes must be real and their phases can therefore be only zero or π. (Nearly 40% of all known structures belong to monoclinic space group P2₁/c. The systematic absences of (0k0) reflections when k is odd and of (h0l) reflections when l is odd identify this space group and show that it is centrosymmetric.) Even in the absence of a definitive set of systematic absences it is still possible to infer the (probable) presence of a centre of symmetry. A. J. C. Wilson [21] first observed that the probability distribution of the magnitudes of the structure amplitudes would be different if the amplitudes were constrained to be real than if they could be complex. Wilson and co-workers established a procedure by which the frequencies of suitably scaled values of |F| could be compared with the theoretical distributions for centrosymmetric and noncentrosymmetric structures. (Note that Wilson named the statistical distributions centric and acentric. These were not intended to be synonyms for centrosymmetric and noncentrosymmetric, but they have come to be used that way.) [Pg.1375]
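A sketch of the comparison Wilson's statistics make possible, assuming NumPy and using the standard textbook forms of the two distributions of the normalized structure-factor magnitude |E| (not taken from this excerpt): centric P(|E|) = √(2/π) exp(−E²/2) and acentric P(|E|) = 2E exp(−E²).

```python
import numpy as np

E = np.linspace(1e-6, 4.0, 4000)
dE = E[1] - E[0]

# Theoretical Wilson distributions of the normalized amplitude |E|;
# both are normalized and have unit mean-square amplitude <|E|^2> = 1.
centric  = np.sqrt(2.0 / np.pi) * np.exp(-E**2 / 2.0)   # centrosymmetric ("centric")
acentric = 2.0 * E * np.exp(-E**2)                      # noncentrosymmetric ("acentric")

for p in (centric, acentric):
    assert abs((p * dE).sum() - 1.0) < 5e-3             # normalized
    assert abs((E**2 * p * dE).sum() - 1.0) < 1e-2      # <|E|^2> = 1

# The centric distribution puts more weight on weak reflections, which is
# what lets the two cases be told apart from suitably scaled intensities.
weak = E < 0.5
assert (centric[weak] * dE).sum() > (acentric[weak] * dE).sum()
```

Because both densities share the same mean-square amplitude, it is the different shape, especially the excess of weak reflections in the centric case, that carries the diagnostic information.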

The appropriate quantum mechanical operator form of the phase has been the subject of numerous efforts. At present, one can only speak of the best approximate operator, and this also is the subject of debate. A personal historical account by Nieto of various operator definitions for the phase (and of its probability distribution) is in [27] and in companion articles, for example, [130-132] and others, that have appeared in Volume 48 of Physica Scripta T (1993), which is devoted to this subject. (For an introduction to the unitarity requirements placed on a phase operator, one can refer to [133].) In 1927, Dirac proposed a quantum mechanical phase operator φ, defined in terms of the creation and destruction operators [134], but London [135] showed that this is not Hermitian. (A further source is [136].) Another candidate, e^{iφ}, is not unitary. [Pg.103]

The conditional probability distribution function of the random variables ξ₁, ..., ξₙ given that the random variables ξₙ₊₁, ..., ξₙ₊ₘ have assumed the values xₙ₊₁, ..., xₙ₊ₘ, respectively, can be defined, in most cases of interest to us, by means of the following procedure. To simplify the discussion, we shall only present the details of the derivation for the case of two random variables ξ₁ and ξ₂. We begin by using the definition, Eq. (3-159), to write... [Pg.151]

There is thus assumed to be a one-to-one correspondence between the most probable distribution and the thermodynamic state. The equilibrium ensemble corresponding to any given thermodynamic state is then used to compute averages over the ensemble of other (not necessarily thermodynamic) properties of the systems represented in the ensemble. The first step in developing this theory is thus a suitable definition of the probability of a distribution in a collection of systems. In classical statistics we are familiar with the fact that the logarithm of the probability of a distribution w(n) is −Σₙ w(n) ln w(n), and that the classical expression for entropy in the ensemble is... [Pg.466]

In physical chemistry, entropy has been introduced as a measure of disorder or lack of structure. For instance, the entropy of a solid is lower than that of a fluid, because the molecules are more ordered in a solid than in a fluid. In terms of probability, this also means that in solids the probability distribution of finding a molecule at a given position is narrower than in fluids. This illustrates that entropy has to do with probability distributions and thus with uncertainty. One of the earliest definitions of entropy is the Shannon entropy, which is equivalent to the definition of Shannon's uncertainty (see Chapter 18). By way of illustration we... [Pg.558]

It should be noted that, besides the definition of the characteristic timescale as an integral relaxation time that is widely used in the literature, an intrawell relaxation time has recently been proposed [42]. It represents an effective averaging of the MFPT over the steady-state probability distribution and therefore gives the slowest timescale of a transition to a steady state, but a description of this approach is not within the scope of the present review. [Pg.359]

We assume a knowledge of the possible state covariances P generated by the tracking system. This knowledge is statistical and is represented by a probability distribution F(P) over the space of all positive definite matrices. [Pg.279]
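One simple way to realize a distribution over the space of positive definite matrices, sketched here with NumPy (the Wishart-style construction and the dimension d = 3 are illustrative assumptions, not the tracking system's actual F(P)):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_spd(d):
    """Draw a random positive definite d x d matrix: G G^T is symmetric
    and positive definite whenever G is nonsingular (Wishart-style draw)."""
    G = rng.standard_normal((d, d))
    return G @ G.T + 1e-9 * np.eye(d)   # tiny jitter guards against a singular G

P = sample_spd(3)
assert np.allclose(P, P.T)               # symmetric
assert np.all(np.linalg.eigvalsh(P) > 0) # strictly positive eigenvalues
```

Any probability distribution F(P) supported on matrices of this form automatically respects the positive-definiteness constraint on state covariances.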

The information entropy of a probability distribution is defined as S[{pᵢ}] = −Σᵢ pᵢ ln pᵢ, where {pᵢ} forms the set of probabilities of a distribution. For continuous probability distributions such as momentum densities, the information entropy is given by S[γ] = −∫ γ(p) ln γ(p) d³p, with an analogous definition in position space... [Pg.68]
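The discrete definition can be sketched directly; a minimal NumPy implementation, with the standard convention 0 ln 0 = 0:

```python
import numpy as np

def shannon_entropy(p):
    """Information entropy S[{p_i}] = -sum_i p_i ln p_i of a discrete
    probability distribution, with the convention 0 ln 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # drop zero entries: 0 ln 0 -> 0
    return -np.sum(p * np.log(p))

# A uniform distribution over n outcomes maximizes the entropy: S = ln n.
assert np.isclose(shannon_entropy([0.25] * 4), np.log(4))
# A certain outcome carries no uncertainty: S = 0.
assert shannon_entropy([1.0, 0.0, 0.0]) == 0.0
```

The continuous case replaces the sum by the integral −∫ γ(p) ln γ(p) d³p, which in practice is evaluated by quadrature over the density.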

We start by considering an arbitrary measurable one-point scalar function of the random fields U and φ: Q(U, φ). Note that, based on this definition, Q is also a random field parameterized by x and t. For each realization of a turbulent flow, Q will be different, and we can define its expected value using the probability distribution for the ensemble of realizations. Nevertheless, the expected value of the convected derivative of Q can be expressed in terms of partial derivatives of the one-point joint velocity, composition PDF... [Pg.264]

Definition The information I(S) of a statement S about probability distributions is the greatest lower bound of the set of informational integrals... [Pg.45]

Remark. A logician might raise the following objection. In section 1 stochastic variables were defined as objects consisting of a range and a probability distribution. Algebraic operations with such objects are therefore also matters of definition rather than to be derived. He is welcome to regard the addition in this section and the transformations in the next one as definitions, provided that he then shows that the properties of these operations that were obvious to us are actually consequences of these definitions. [Pg.15]
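The point that addition of stochastic variables is an operation on distributions (not on values) can be illustrated concretely; a small NumPy sketch using two fair dice as an assumed example:

```python
import numpy as np

# Distribution of a single fair die over its range {1, ..., 6}.
die = np.full(6, 1.0 / 6.0)

# The distribution of the *sum* of two independent dice is the convolution
# of their individual distributions: the "addition" defined for stochastic
# variables acts on the distributions, not on single outcomes.
two_dice = np.convolve(die, die)          # index k corresponds to sum k + 2

assert np.isclose(two_dice.sum(), 1.0)    # still a probability distribution
assert np.isclose(two_dice[5], 6.0 / 36.0)  # P(sum = 7) = 6/36
assert two_dice.argmax() == 5             # 7 is the most probable sum
```

That the convolution again sums to one, and reproduces the familiar 6/36 for a total of seven, is exactly the kind of consequence the logician would demand of the definition.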

According to our definition the end-to-end distances of the network chains have an a priori probability distribution which is Gaussian. The effect of finite extensibility of the chains will be postponed to Chapter IV, because it is a special aspect of non-Gaussian behaviour. [Pg.33]
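The Gaussian a priori distribution referred to above can be written down and checked numerically; a sketch assuming NumPy and the standard radial form for a chain with mean-square end-to-end distance ⟨r²⟩ (here set to 1 in arbitrary units for illustration):

```python
import numpy as np

r2_mean = 1.0                       # assumed <r^2> in arbitrary units
r = np.linspace(1e-6, 6.0, 60000)
dr = r[1] - r[0]

# Gaussian distribution of the end-to-end distance r (radial density):
# P(r) = (3 / (2 pi <r^2>))^{3/2} * 4 pi r^2 * exp(-3 r^2 / (2 <r^2>))
P = (3.0 / (2.0 * np.pi * r2_mean)) ** 1.5 * 4.0 * np.pi * r**2 \
    * np.exp(-3.0 * r**2 / (2.0 * r2_mean))

assert abs((P * dr).sum() - 1.0) < 1e-3            # normalized
assert abs((r**2 * P * dr).sum() - r2_mean) < 1e-3 # reproduces <r^2>
```

This form assigns nonzero probability to arbitrarily large r, which is precisely why finite extensibility requires the non-Gaussian treatment deferred to Chapter IV.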

The average free volume per particle is ⟨v_free⟩ = V_free/N = k/0. Therefore, comparison of the average free volume to its definition from a probability distribution,... [Pg.231]

In the general approach to classical statistical mechanics, each particle is considered to occupy a point in phase space, i.e., to have a definite position and momentum, at a given instant. The probability that the point corresponding to a particle will fall in any small volume of the phase space is taken proportional to the volume. The probability of a specific arrangement of points is proportional to the number of ways that the total ensemble of molecules could be permuted to achieve the arrangement. When this is done, and it is further required that the number of molecules and their total energy remain constant, one can obtain a description of the most probable distribution of the molecules in phase space. The Maxwell-Boltzmann distribution law results. [Pg.1539]
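The resulting Maxwell-Boltzmann speed distribution can be written out and checked numerically; a sketch assuming NumPy, with T = 300 K and a molecular mass of about that of N₂ chosen purely for illustration:

```python
import numpy as np

kB, T, m = 1.380649e-23, 300.0, 4.65e-26   # J/K; K; kg (roughly an N2 molecule)

v = np.linspace(1.0, 3000.0, 300000)       # speed grid, m/s
dv = v[1] - v[0]

# Maxwell-Boltzmann speed distribution:
# f(v) = 4 pi v^2 (m / (2 pi kB T))^{3/2} exp(-m v^2 / (2 kB T))
a = m / (2.0 * np.pi * kB * T)
f = 4.0 * np.pi * v**2 * a**1.5 * np.exp(-m * v**2 / (2.0 * kB * T))

assert abs((f * dv).sum() - 1.0) < 5e-3    # normalized over speeds

# Most probable speed: v_p = sqrt(2 kB T / m), where f(v) peaks.
v_p = np.sqrt(2.0 * kB * T / m)
assert abs(v[f.argmax()] - v_p) < 5.0      # grid peak matches v_p (m/s)
```

The peak near 420 m/s for these assumed parameters is the "most probable distribution" made concrete: the speed at which the phase-space counting argument places the greatest weight.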

Consequently R(t) is positive definite. It follows from Bochner's theorem that the normalized memory function can be regarded as the characteristic function of the probability distribution, P(ω), such that... [Pg.56]

Taking the squared absolute value of Eq. (6.12) and using the definition of the parabolic coordinates given in Eq. (6.4), we can write an expression for the electron probability distribution in spherical coordinates,1... [Pg.73]

Use Equation 6.11 to show that changing the definition of the zero point of energy (which is arbitrary, because potential energy is included) by an amount Δ changes ψ(t) by a factor e^{−iΔt/ℏ}. Also show that this arbitrary choice has no effect on the probability distribution, or on the expectation values of position, momentum, or kinetic energy. [Pg.146]
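A sketch of the requested argument, keeping ℏ explicit (the book's Eq. 6.11 may absorb it into the units); the shifted potential V + Δ turns the Hamiltonian into H + Δ:

```latex
% Shifting the zero of energy, V \to V + \Delta, gives H \to H + \Delta, so
\psi'(t) = e^{-i\Delta t/\hbar}\,\psi(t).
% The global phase cancels in the probability distribution,
|\psi'(x,t)|^{2} = \bigl|e^{-i\Delta t/\hbar}\bigr|^{2}\,|\psi(x,t)|^{2}
                 = |\psi(x,t)|^{2},
% and in every expectation value of an operator A built from x and p
% (position, momentum, kinetic energy):
\langle A \rangle' = \langle \psi'|A|\psi'\rangle
 = e^{+i\Delta t/\hbar}\,e^{-i\Delta t/\hbar}\,\langle \psi|A|\psi\rangle
 = \langle A \rangle .
```

Only operators that contain H itself (such as the total energy) pick up the shift Δ, which is exactly what "arbitrary zero of energy" means.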

There are several technical details in a rigorous definition of the autocorrelation function for velocity. First, one has to remember the vectorial character of velocity, because clearly the direction in which the particle is knocked is important to its subsequent dynamic history. Then, according to the way it is defined, one has to take the product of the velocity at t = 0, v₀, and that at the later chosen time, v_t. However, it is not as simple as just multiplying together the two vectors, v₀ and v_t. One has to allow for the distribution of positions and momenta of the particle in the system at the beginning, that is, at t = 0. To allow for this, one can introduce symbolically a probability distribution coefficient, g. Therefore, the expression for the autocorrelation function will involve the product g v₀ v_t. [Pg.416]
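In practice the weighted average over initial conditions is usually realized as a time average over many time origins along a trajectory. A sketch assuming NumPy, with a synthetic AR(1)-type velocity series (memory factor 0.95, chosen arbitrarily) standing in for real dynamics output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3-D velocity trajectory standing in for simulation output.
n = 20000
v = np.empty((n, 3))
v[0] = rng.standard_normal(3)
for i in range(1, n):                  # simple correlated (AR(1)-like) process
    v[i] = 0.95 * v[i - 1] + rng.standard_normal(3)

def vacf(v, max_lag):
    """C(t) = <v(0) . v(t)>: dot products of the velocity vectors, averaged
    over time origins (this average plays the role of the weight g)."""
    n = len(v)
    return np.array([np.mean(np.sum(v[:n - k] * v[k:], axis=1))
                     for k in range(max_lag)])

C = vacf(v, 50)
assert C[0] > 0                        # C(0) = <|v|^2> is positive
assert abs(C[1] / C[0] - 0.95) < 0.05  # decay set by the assumed memory factor
```

Note that the full vector dot product v₀·v_t is averaged, not the product of speeds, which is the "vectorial character" the excerpt insists on.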

Observe Figure 1.2. This figure definitely shows some form of pattern, but not of such a character that meaningful values can be obtained directly for design purposes. If enough data of this pattern are available, however, they may be subjected to a statistical analysis, a probability distribution analysis, to predict design values using the tools of probability. Only two rules of probability apply to our present problem: the addition rule and the multiplication rule. [Pg.95]
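The two rules can be stated in one line each; a minimal sketch using a fair die as an assumed example (not the figure's actual data):

```python
# Addition rule, for mutually exclusive events: P(A or B) = P(A) + P(B).
# Example: rolling a one OR a two on a fair die.
p_one, p_two = 1 / 6, 1 / 6
p_one_or_two = p_one + p_two
assert abs(p_one_or_two - 1 / 3) < 1e-12

# Multiplication rule, for independent events: P(A and B) = P(A) * P(B).
# Example: rolling a six on each of two independent throws.
p_two_sixes = (1 / 6) * (1 / 6)
assert abs(p_two_sixes - 1 / 36) < 1e-12
```

The addition rule requires mutually exclusive events and the multiplication rule requires independence; both conditions must be checked before applying them to design data.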

A very useful criterion in this respect is given by the maximum entropy principle in the sense of Jaynes. The ingredients of the maximum entropy principle are (i) some reference probability distribution on the pure states and (ii) a way to estimate the quality of some given probability distribution μ on the pure states with respect to the reference distribution. As our reference probability distribution, we shall take the equidistribution defined in Eq. (30) for a two-level system (this definition of equipartition can be generalized to arbitrary d×d matrices, being the canonical measure on the d-dimensional complex projective plane). The relative entropy of some probability distribution μ [see Eq. (35)] with respect to the equidistribution is defined as... [Pg.125]
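For discrete distributions the relative entropy takes the familiar Kullback-Leibler form; a sketch assuming NumPy, with the two-level equidistribution (1/2, 1/2) as the reference (the discrete form is an illustration, not the text's Eq. (35)):

```python
import numpy as np

def relative_entropy(p, q):
    """Relative entropy (Kullback-Leibler divergence)
    D(p || q) = sum_i p_i ln(p_i / q_i)
    of a distribution p with respect to a reference distribution q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                       # 0 ln 0 contributes nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Reference: the equidistribution for a two-level system.
q = np.array([0.5, 0.5])

assert relative_entropy(q, q) == 0.0         # vanishes only at the reference
assert relative_entropy([0.9, 0.1], q) > 0.0 # strictly positive elsewhere
```

It is this non-negativity, zero exactly at the reference distribution, that makes the relative entropy a quality measure in the maximum entropy principle.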

In the work of Zachmann et al. new approaches to the quantification of surface flexibility have been suggested. The basis data for these approaches are supplied by molecular dynamics (MD) simulations. The methods have been applied to two proteins (PTI and ubiquitin). The calculation and visualization of the local flexibility of molecular surfaces is based on the notion of the solvent accessible surface (SAS), which was introduced by Connolly. For every point on this surface a probability distribution p(r) is calculated in the direction of the surface normal, i.e., the rigid surface is replaced by a soft surface. These probability distributions are well suited for the interactive treatment of molecular entities because the former can be visualized as color coded on the molecular surface although they cannot be directly used for quantitative shape comparisons. In Section IV we show that the p values can form the basis for a fuzzy definition of vaguely defined surfaces and their quantitative comparison. [Pg.234]

Figure 5.8. Stokes shift a) definition and b) dependence on the difference in equilibrium geometries of ground and excited states. Shown is the probability distribution in various vibrational levels, which is proportional to the square of the vibrational wave function (adapted from Philips and Salisbury, 1976).


© 2024 chempedia.info