Big Chemical Encyclopedia


Distributions Entropy

Uncertainty estimates are made for the total CDF by assigning probability distributions to basic events and propagating the distributions through a simplified model. Uncertainties are assumed to follow either log-normal or "maximum entropy" distributions. Chi-squared confidence interval tests are applied at the 50% and 95% levels of these distributions. The simplified CDF model includes the dominant cutsets from all five contributing classes of accidents and is within 97% of the CDF calculated with the full Level 1 model. [Pg.418]
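As a sketch of this kind of uncertainty propagation (not the study's actual model: the basic events, medians, and error factors below are invented for illustration), one can draw log-normal samples for each basic event, sum their contributions, and read the 50% and 95% points off the resulting total-frequency distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basic events: (median frequency per year, 95th-percentile error factor).
# These values are illustrative only, not taken from the study.
events = {
    "pump_failure": (1e-3, 3.0),
    "valve_failure": (5e-4, 5.0),
    "operator_error": (2e-3, 10.0),
}

n = 100_000
total = np.zeros(n)
for median, ef in events.values():
    # A log-normal with median m and error factor EF has sigma = ln(EF) / 1.645,
    # since its 95th percentile is m * EF.
    sigma = np.log(ef) / 1.645
    total += rng.lognormal(mean=np.log(median), sigma=sigma, size=n)

p50, p95 = np.percentile(total, [50, 95])
print(f"median = {p50:.2e}, 95th percentile = {p95:.2e}")
```

The 50% and 95% points of the propagated distribution are exactly the levels at which the excerpt says confidence tests are applied.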

It should be noted that a probability density function has been derived on the basis of the maximum entropy formalism for the prediction of the droplet size distribution in a spray resulting from the breakup of a liquid sheet.[432] The physics of the breakup process is described by simple conservation constraints for mass, momentum, surface energy, and kinetic energy. The predicted most probable distribution, i.e., the maximum entropy distribution, agrees very well with corresponding empirical distributions, particularly the Rosin-Rammler distribution. Although the maximum entropy distribution is considered an ideal case, the approach used to derive it provides a framework for studying more complex distributions. [Pg.252]
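For reference, the empirical Rosin-Rammler form mentioned here has the cumulative distribution F(d) = 1 − exp[−(d/d̄)^q]. A short sketch with invented parameters (the characteristic size and spread exponent are illustrative, not fitted values):

```python
import numpy as np

d_bar, q = 50.0, 2.5   # characteristic size (e.g. in um) and spread exponent -- illustrative
d = np.linspace(0.0, 200.0, 401)

# Rosin-Rammler cumulative distribution: fraction of spray volume in drops smaller than d
F = 1.0 - np.exp(-(d / d_bar) ** q)
```

At d = d̄ the undersize fraction is 1 − e⁻¹ ≈ 0.632, which is how the characteristic size is usually defined for this distribution.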

There are two ways to implement this program. One is to directly discuss the distribution of final states. This is known as "surprisal analysis". In this simpler procedure one does not ask how the distribution came about. Instead, one seeks the coarsest or "most statistical" (i.e., of maximal entropy) distribution of final states, subject to constraints. The last proviso is, of course, essential: if no constraints are imposed, the answer is simply the prior distribution. It is the constraints that generate a distribution containing just the minimal dynamical detail necessary to produce the answers of interest. A few simple and physically obvious constraints are often sufficient [1, 3, 23] to account for even extreme deviations from the prior distribution. [Pg.215]
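A minimal numerical sketch of this idea (with an invented prior and constraint, not data from the cited work): maximizing entropy relative to a uniform prior over ten final states, subject to a single mean-value constraint, yields the exponential (linear-surprisal) form P(i) ∝ P0(i) exp(−λ f_i), with the multiplier λ found by bisection.

```python
import numpy as np

f = np.arange(10, dtype=float)   # value of the constrained property in each final state
prior = np.full(10, 0.1)         # prior distribution (here uniform) -- illustrative
target = 2.5                     # assumed constraint: <f> = 2.5

def mean_for(lam):
    w = prior * np.exp(-lam * f)   # maximum entropy form with one constraint
    p = w / w.sum()
    return p @ f

# <f> decreases monotonically with lambda, so bisection locates the multiplier.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_for(mid) > target else (lo, mid)

lam = 0.5 * (lo + hi)
p = prior * np.exp(-lam * f)
p /= p.sum()
```

With no constraint (λ = 0) the result is just the prior, matching the proviso in the text; the single constraint is what pulls the distribution away from it.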

Only for t > τ do we have the right to expect that the shock wave amplitude will approach the self-similar value we have found. Owing to the adiabatic character of the motion, the entropy of the mass m0 remains constant throughout and differs from the value obtained by extrapolating the self-similar entropy distribution to this mass. [Pg.111]

Fig. 13. The predicted entropy distribution for subtilisin E as determined by a mean-field treatment of the structural model. When all amino acids are equally allowed at a position, the site entropy equals ln 20 ≈ 3.0. The red lines mark positions at which mutations discovered by directed evolution improved the thermostability; the blue lines mark mutations that improved the activity (hydrolysis of a peptide substrate) in aqueous dimethylformamide. The bars indicate the average and standard deviation of the structural entropies.
A very interesting result of the maximum entropy distribution is that at higher impact velocities there is a steep onset of copious production of electronically excited and charged species, as shown in Fig. 37. This behavior is also seen experimentally and is the reason why an MD simulation involving only ground-state species may be realistic only at impact... [Pg.69]

Figure 3. Statistical analysis of the intensity distribution of the high-resolution spectrum of C2H2 at about 26,500 cm⁻¹, including nearly 4000 lines. (Adapted from Ref. 55.) The solid line is the maximum entropy distribution (cf. Ref. 56) given by Eq. (3) with ν = 3.2.
Finite element modelling of the effects of a 14% volume fraction of glass fibre with an aspect ratio of 30 (Fig. 4.29b) used fibre orientations fitted to a maximum entropy distribution. It predicted that the longitudinal Young's modulus increases non-linearly with cos θ, and that constant-strain conditions apply when averaging the properties of the unidirectional composite. [Pg.130]

Another useful special case is that in which the random variable lies within a finite interval [a, b] and the mean, variance, and other moments are unknown. The above method can be applied, with the Lagrange function in Equation (2.26) modified to exclude the terms associated with the moment constraints. It turns out that the maximum entropy distribution is then the uniform distribution on [a, b]. [Pg.24]
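This claim is easy to check numerically. The sketch below (interval endpoints chosen arbitrarily for illustration) compares the differential entropy of the uniform density on [a, b] with that of a symmetric triangular density on the same interval; the uniform density comes out higher, consistent with it being the maximum entropy distribution under only the support constraint.

```python
import numpy as np

a, b = 0.0, 2.0
x = np.linspace(a, b, 200_001)
dx = x[1] - x[0]

def diff_entropy(p):
    # h = -integral of p ln p, by a simple Riemann sum; 0*ln(0) is treated as 0
    safe = np.where(p > 0, p, 1.0)
    return (np.where(p > 0, -p * np.log(safe), 0.0)).sum() * dx

uniform = np.full_like(x, 1.0 / (b - a))       # analytic entropy: ln(b - a)
triangle = np.where(x <= 1.0, x, 2.0 - x)      # peak at the midpoint, unit area

h_u = diff_entropy(uniform)
h_t = diff_entropy(triangle)
```

Here h_u ≈ ln 2 ≈ 0.693 while the triangular density gives h_t ≈ 0.5, strictly lower.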

Entropy distributes itself more or less quickly over the entire body from the point where it is created. This process is itself connected with the generation of entropy, even if this is not directly obvious (see Sect. 3.14). All of these entropy-generating processes are irreversible: once entropy has been created in this way, we cannot get rid of it again unless we can transfer it to the surroundings. But this is prevented by the thermal insulation. [Pg.58]

Estimating uncertainties in maximum entropy distribution parameters from small-sample observations... [Pg.1651]

ABSTRACT: In this paper we consider uncertainties in the distribution of random variables due to small-sample observations. Based on the maximum entropy distribution, we treat the first four stochastic moments of a random variable as uncertain stochastic parameters. Their uncertainty is estimated by the bootstrap approach from the initial sample set and is later taken into account when estimating the variation of probabilistic measures. [Pg.1651]
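A sketch of the bootstrap step described here (the underlying distribution, sample size, and number of resamples are illustrative assumptions, not the paper's settings): resample the small sample with replacement, record the first four standardized moments each time, and summarize their spread.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=0.4, size=30)   # synthetic small-sample data

def first_four(x):
    m = x.mean()
    s = x.std()
    z = (x - m) / s
    return m, s, (z**3).mean(), (z**4).mean()          # mean, std, skewness, kurtosis

B = 2000
boot = np.array([
    first_four(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

# 95% bootstrap interval for each of the four moments
ci = np.percentile(boot, [2.5, 97.5], axis=0)
```

The resulting intervals quantify how uncertain each moment is given only 30 observations; these are the "uncertain stochastic parameters" that the paper then propagates into the probabilistic measures.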

Based on the entropy principle proposed by Shannon (1948), maximum entropy distributions are defined as those which maximize the information entropy measure... [Pg.1651]

From these standardized constraints the distribution parameters can be obtained very efficiently, as shown in Rockinger and Jondeau (2002) and van Erp and van Gelder (2008). The final maximum entropy distribution is then obtained for the standardized random variable Y... [Pg.1652]

Figure 1. Log-normal and corresponding maximum entropy distributions for different standard deviations.
Special types of the maximum entropy distribution are the uniform distribution (all Lagrange multipliers zero, λ1 = λ2 = λ3 = λ4 = 0), ... [Pg.1652]

For the presented maximum entropy distribution the cumulative distribution function is given as... [Pg.1653]
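For instance (a sketch, not the paper's implementation): for a density of the exponential-polynomial form p(y) = exp(−Σ λk yᵏ), the CDF has no closed form and can be accumulated numerically. The multipliers below are chosen so that the density reduces to the standard normal, a known special case of this family.

```python
import numpy as np

# p(y) = exp(-(lam0 + lam1*y + lam2*y^2 + ...)); these multipliers reproduce the
# standard normal: lam0 = 0.5*ln(2*pi), lam2 = 0.5, all others zero (illustrative).
lam = [0.5 * np.log(2.0 * np.pi), 0.0, 0.5, 0.0, 0.0]

y = np.linspace(-8.0, 8.0, 20_001)
exponent = sum(l * y**k for k, l in enumerate(lam))
pdf = np.exp(-exponent)

cdf = np.cumsum(pdf) * (y[1] - y[0])   # running Riemann sum approximates the CDF
```

Because the chosen multipliers give the standard normal, the accumulated CDF should reach 1 at the right edge and pass through 0.5 at y = 0, which is a convenient sanity check for the integration.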

For a single random variable the maximum entropy distribution is obtained by considering only the moment constraints. For the multivariate distribution the correlations between each pair of random variables have to be taken into account as well. This leads to an optimization problem with 4n + n(n−1)/2 optimization parameters, subject to 4n constraints from the marginal moment conditions and n(n−1)/2 constraints from the correlation conditions, where n is the number of random variables. This concept was recently applied in Soize (2008) to determine the joint density function. For a... [Pg.1653]
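A quick count of the problem size described above (the exact counts are reconstructed here and should be treated as an assumption: one Lagrange multiplier per constraint, four moment constraints per variable, one correlation constraint per pair):

```python
def maxent_problem_size(n):
    """Optimization problem size for the multivariate maximum entropy fit
    of n random variables (reconstructed counts -- an assumption)."""
    moment_constraints = 4 * n
    correlation_constraints = n * (n - 1) // 2
    parameters = moment_constraints + correlation_constraints
    return parameters, moment_constraints, correlation_constraints

print(maxent_problem_size(3))   # -> (15, 12, 3)
```

Even for modest n the pair term n(n−1)/2 grows quadratically, which is why the joint fit is much harder than n independent marginal fits.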

Figure 5. Maximum entropy distributions of the modified soil parameter samples and obtained distributions of the uncertain stochastic parameters from the bootstrap approach. [Pg.1655]

The required derivatives can be obtained very efficiently for the FORM approach, as reported in Most and Knabe (2009). Based on this Taylor series approximation, the failure probability can be calculated quite accurately for each specific sample of the stochastic parameters close to the mean value vector p0. This calculation is performed here for all 10000 bootstrap samples. The resulting histogram of the reliability index is shown in Figure 6, using log-normally distributed maximum entropy distributions. [Pg.1656]
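The mapping between failure probability and reliability index used in such histograms is β = −Φ⁻¹(P_f), with Φ the standard normal CDF. A quick standard-library check (the probabilities are illustrative only):

```python
from statistics import NormalDist

phi_inv = NormalDist().inv_cdf

# beta = -Phi^{-1}(P_f): smaller failure probabilities map to larger reliability indices
for pf in (1e-2, 1e-3, 1e-4):
    print(f"P_f = {pf:g}  ->  beta = {-phi_inv(pf):.3f}")
```

This monotone mapping is why the variation of the failure probability across bootstrap samples can equivalently be reported as a variation of the reliability index.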

Figure 7. Variation of the reliability index based on maximum entropy distributions.
Table (excerpt), maximum entropy distributions with uncorrelated Gaussian estimate: reliability index mean 3.840, standard deviation 0.336, range 3.287-4.393. [Pg.1657]

Basu, P. and A. Templeman (1984). An efficient algorithm to generate maximum entropy distributions. International Journal for Numerical Methods in Engineering, 1039-... [Pg.1658]

Zellner, A. and R. Highfield (1988). Calculation of maximum entropy distributions and approximation of marginal posterior distributions. Journal of Econometrics 37, 195-209. [Pg.1658]

What is transported? One of the peculiarities of the thermal energy variety is that thermal conduction is thought of in terms of transported energy, whereas other domains consider entities as the transported quantity (charges, momenta, etc.). Since, by definition, entities bear energy, there is no physical consequence of this disparity. There are naturally historical reasons for this, but also a conceptual difficulty in our modern minds in viewing entropy as a quantity that can be transported. This is certainly due to the influence of the statistical definition of entropy as a measure of order/disorder in the system, considered as a whole, with an implicitly uniform entropy distribution. [Pg.442]

Assumes a probabilistic relationship between intensities, defined in terms of the entropies of the intensity distribution, H, where p denotes the probability of an intensity. Used for image-based registration across different modalities where there is a nonlinear intensity relationship between the images (e.g., CT to MRI) [26,27]. [Pg.40]
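One common entropy-based similarity measure of this kind is mutual information, I = H(A) + H(B) − H(A, B), computed from a joint intensity histogram. A toy sketch (the synthetic arrays and the invented nonlinear mapping merely stand in for two modalities): the measure stays high even when the intensity relationship between the images is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(2)
img_a = rng.integers(0, 8, size=(64, 64))                       # stand-in for one modality
img_b = (7 - img_a + rng.integers(0, 2, size=img_a.shape)) % 8  # nonlinearly related image

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint intensity histogram, normalized to a joint probability table
joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=8)
pj = joint / joint.sum()

mi = entropy(pj.sum(axis=1)) + entropy(pj.sum(axis=0)) - entropy(pj.ravel())
```

A simple intensity-difference metric would score this pair poorly, while the mutual information remains large, which is the motivation for entropy-based measures in CT-to-MRI registration.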

