Maximum entropy distribution

Uncertainty estimates are made for the total CDF by assigning probability distributions to basic events and propagating the distributions through a simplified model. Uncertainties are assumed to be either log-normal or "maximum entropy" distributions. Chi-squared confidence interval tests are used at 50% and 95% of these distributions. The simplified CDF model includes the dominant cutsets from all five contributing classes of accidents, and is within 97% of the CDF calculated with the full Level 1 model. [Pg.418]
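A minimal sketch of this kind of uncertainty propagation, assuming log-normal basic-event uncertainties combined through a single illustrative cutset by Monte Carlo sampling. The event names, medians, error factors, and cutset structure are invented for illustration and are not from the referenced study, which additionally applies chi-squared interval tests.

```python
# Minimal Monte Carlo sketch of log-normal uncertainty propagation through a
# simplified CDF (core damage frequency) model. Event names, medians, error
# factors, and the single-cutset structure are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Illustrative basic events: (median value, error factor = 95th percentile / median)
basic_events = {
    "initiating_event":   (1e-2, 3.0),
    "safety_system_fail": (3e-2, 5.0),
    "operator_error":     (1e-1, 10.0),
}

def sample_lognormal(median, error_factor, size):
    # For a log-normal, the error factor satisfies EF = exp(1.645 * sigma).
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

samples = {name: sample_lognormal(m, ef, n_samples)
           for name, (m, ef) in basic_events.items()}

# One illustrative dominant cutset: the product of all three events.
cdf_samples = (samples["initiating_event"]
               * samples["safety_system_fail"]
               * samples["operator_error"])

print("50th percentile of CDF:", np.percentile(cdf_samples, 50))
print("95th percentile of CDF:", np.percentile(cdf_samples, 95))
```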

It should be noted that a probability density function has been derived on the basis of the maximum entropy formalism for the prediction of the droplet size distribution in a spray resulting from the breakup of a liquid sheet.[432] The physics of the breakup process is described by simple conservation constraints for mass, momentum, surface energy, and kinetic energy. The predicted most probable distribution, i.e., the maximum entropy distribution, agrees very well with corresponding empirical distributions, particularly the Rosin-Rammler distribution. Although the maximum entropy distribution is considered an ideal case, the approach used to derive it provides a framework for studying more complex distributions. [Pg.252]
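As a toy illustration of the idea, the sketch below maximizes entropy over a discretized droplet-diameter grid subject only to normalization and a prescribed mean droplet volume, which yields an exponential-form size distribution that can be plotted against a Rosin-Rammler curve. The grid, the single D³ constraint, and the Rosin-Rammler parameters are assumptions for illustration; the cited work imposes the full set of mass, momentum, surface-energy, and kinetic-energy constraints.

```python
# Toy sketch: discrete maximum entropy droplet-size distribution constrained by
# normalization and a prescribed mean droplet volume (proportional to D**3) only.
# Grid, constraint value, and Rosin-Rammler parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

D = np.linspace(1.0, 200.0, 400)       # droplet diameters (e.g. micrometres), illustrative grid
target_mean_D3 = 50.0**3               # prescribed mean of D**3, illustrative value

def mean_D3(lam):
    w = np.exp(-lam * D**3)            # max-ent solution has the form p_i ∝ exp(-lam * D_i**3)
    p = w / w.sum()
    return (p * D**3).sum()

# The multiplier lam is fixed by matching the constraint (mean_D3 decreases with lam).
lam = brentq(lambda l: mean_D3(l) - target_mean_D3, 1e-9, 1e-3)
p = np.exp(-lam * D**3)
p /= p.sum()

# Rosin-Rammler CDF for visual comparison: F(D) = 1 - exp(-(D/X)**q), parameters chosen by eye.
X, q = 60.0, 2.0
F_rr = 1.0 - np.exp(-(D / X)**q)

print("multiplier lam      :", lam)
print("max-ent mean of D**3:", (p * D**3).sum())
```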

A very interesting result of the maximum entropy distribution is that at higher impact velocities there is a steep onset of copious production of electronically excited and charged species, as shown in Fig. 37. This behavior is also seen experimentally and is the reason why an MD simulation involving only ground-state species may be realistic only at impact... [Pg.69]

Figure 3. Statistical analysis of the intensity distribution of the high-resolution spectrum of C2H2 at about 26,500 cm⁻¹, including nearly 4000 lines. (Adapted from Ref. 55.) The solid line is the maximum entropy distribution (cf. Ref. 56) given by Eq. (3) with ν = 3.2.
Finite element modelling of the effects of a 14% volume fraction of glass fibre with an aspect ratio of 30 (Fig. 4.29b) used fibre orientations that fitted a maximum entropy distribution. It predicted that the longitudinal Young's modulus increased non-linearly with cos θ, and that constant-strain conditions applied for averaging the properties of the unidirectional composite. [Pg.130]

Another useful special case is that in which the random variable lies within a finite interval [a, b] but the mean, variance and other moments are unknown. The above method can be applied and the Lagrange function in Equation (2.26) will be modified to exclude the terms with λ2 and λ3. It turns out that the maximum entropy distribution is the uniform distribution on [a, b]. [Pg.24]
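A quick numerical check of this statement, as a sketch: maximizing the discretized entropy subject only to normalization on an arbitrary interval [a, b] recovers the constant density 1/(b − a). The interval and grid size below are arbitrary choices.

```python
# Quick numerical check: maximizing the (discretized) entropy on [a, b] with only
# the normalization constraint returns the uniform density 1/(b - a).
import numpy as np
from scipy.optimize import minimize

a, b, n = 2.0, 5.0, 60
x = np.linspace(a, b, n)
dx = x[1] - x[0]

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)        # avoid log(0)
    return np.sum(p * np.log(p)) * dx  # negative differential entropy (discretized)

constraints = ({"type": "eq", "fun": lambda p: np.sum(p) * dx - 1.0},)
p0 = np.random.default_rng(1).random(n)
p0 /= p0.sum() * dx                    # feasible random start

res = minimize(neg_entropy, p0, bounds=[(0.0, None)] * n, constraints=constraints)

print("first few optimized density values:", res.x[:4])
print("uniform density 1/(b - a)         :", 1.0 / (b - a))
```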

Estimating uncertainties in maximum entropy distribution parameters from small-sample observations... [Pg.1651]

ABSTRACT In this paper we consider uncertainties in the distribution of random variables due to small-sample observations. Based on the maximum entropy distribution we assume the first four stochastic moments of a random variable as uncertain stochastic parameters. Their uncertainty is estimated by the bootstrap approach from the initial sample set and later considered in estimating the variation of probabilistic measures. [Pg.1651]
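A sketch of the bootstrap step described in the abstract, using a synthetic small sample: each resample (drawn with replacement) yields its own first four moments, and their spread quantifies the uncertainty of the moment estimates. The sample size, underlying distribution, and confidence level are illustrative assumptions.

```python
# Sketch of the bootstrap estimation of moment uncertainty from a small sample.
# The synthetic sample, its size, and the confidence level are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=0.4, size=30)   # small synthetic data set

def first_four_moments(x):
    return np.array([np.mean(x),
                     np.std(x, ddof=1),
                     stats.skew(x),
                     stats.kurtosis(x, fisher=False)])

n_boot = 10_000
boot = np.array([first_four_moments(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(n_boot)])

point = first_four_moments(sample)
for j, name in enumerate(["mean", "std dev", "skewness", "kurtosis"]):
    lo, hi = np.percentile(boot[:, j], [2.5, 97.5])
    print(f"{name:9s}: point estimate {point[j]:.3f}, "
          f"95% bootstrap interval [{lo:.3f}, {hi:.3f}]")
```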

From these standardized constraints the distribution parameters can be obtained very efficiently, as shown in Rockinger and Jondeau (2002) and van Erp and van Gelder (2008). The final maximum entropy distribution is then obtained for the standardized random variable Y... [Pg.1652]
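A hedged sketch of what such a fit can look like (not the specific algorithm of the cited references): assume the standardized density has the exponential form p(y) ∝ exp(λ1·y + λ2·y² + λ3·y³ + λ4·y⁴) on a truncated grid and obtain the multipliers by minimizing the convex dual of the entropy problem. The grid, truncation, and target moments of Y are illustrative assumptions.

```python
# Hedged sketch: fit the standardized maximum entropy density
#   p(y) ∝ exp(l1*y + l2*y**2 + l3*y**3 + l4*y**4)
# on a truncated grid by minimizing the convex dual of the entropy problem.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

y = np.linspace(-6.0, 6.0, 4001)
dy = y[1] - y[0]
powers = np.vstack([y, y**2, y**3, y**4])            # shape (4, n_grid)
target = np.array([0.0, 1.0, 0.4, 2.9])              # raw moments m1..m4 of standardized Y (illustrative)

def dual_and_grad(lmbda):
    s = lmbda @ powers                                # log of the unnormalized density
    psi = logsumexp(s) + np.log(dy)                   # log of the normalizing constant
    p = np.exp(s - psi)                               # normalized density on the grid
    moments = powers @ p * dy                         # current E[Y**k], k = 1..4
    return psi - lmbda @ target, moments - target     # dual value and its gradient

res = minimize(dual_and_grad, x0=np.array([0.0, -0.5, 0.0, 0.0]), jac=True, method="BFGS")
lmbda = res.x
s = lmbda @ powers
p = np.exp(s - (logsumexp(s) + np.log(dy)))

print("Lagrange multipliers:", lmbda)
print("achieved moments    :", powers @ p * dy)       # should reproduce `target`
```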

Figure 1. Log-normal and corresponding maximum entropy distributions for different standard deviations.
Special types of the maximum entropy distribution are the uniform distribution (ν1 = ν2 = ν3 = ν4 = 0), ... [Pg.1652]

For the presented maximum entropy distribution the cumulative distribution function is given as... [Pg.1653]
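The equation itself is not reproduced in this excerpt. In practice the CDF of such a density is usually evaluated numerically; a minimal sketch follows, with placeholder multipliers corresponding to the standard normal special case rather than values from the paper.

```python
# Minimal sketch: the CDF of a maximum entropy density is evaluated by numerical
# integration of the density on a grid. The multipliers below are placeholders.
import numpy as np
from scipy.integrate import cumulative_trapezoid

y = np.linspace(-6.0, 6.0, 4001)
lmbda = np.array([0.0, -0.5, 0.0, 0.0])               # illustrative multipliers
s = lmbda[0]*y + lmbda[1]*y**2 + lmbda[2]*y**3 + lmbda[3]*y**4
p = np.exp(s)
p /= np.trapz(p, y)                                   # normalize the density

F = cumulative_trapezoid(p, y, initial=0.0)           # cumulative distribution function
print("F(0)     ≈", np.interp(0.0, y, F))             # ≈ 0.50 for this symmetric case
print("F(1.645) ≈", np.interp(1.645, y, F))           # ≈ 0.95 for the normal special case
```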

For a single random variable the maximum entropy distribution is obtained by considering only the moment constraints. For the multivariate distribution the correlations between each pair of random variables have to be taken into account as well. This would lead to an optimization problem with 4n optimization parameters, with 4n constraints from the marginal moment conditions and n(n-1)/2 constraints from the correlation conditions, where n is the number of random variables. This concept was recently applied in Soize (2008) to determine the joint density function. For a... [Pg.1653]
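Assuming the counts read 4n and n(n−1)/2 as reconstructed above, a trivial helper makes the growth of the problem size with the number of random variables explicit.

```python
# Problem-size helper, assuming 4n marginal moment constraints and
# n(n-1)/2 correlation constraints for n random variables.
def maxent_problem_size(n):
    moment_constraints = 4 * n
    correlation_constraints = n * (n - 1) // 2
    return moment_constraints, correlation_constraints

for n in (2, 5, 10):
    m, c = maxent_problem_size(n)
    print(f"n = {n:2d}: {m} moment constraints + {c} correlation constraints")
```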

Figure 5. Maximum entropy distributions of the modified soil parameter samples and obtained distributions of the uncertain stochastic parameters from the bootstrap approach. [Pg.1655]

The required derivatives can be obtained for the FORM approach very efficiently, as reported in Most and Knabe (2009). Based on this Taylor series approximation the failure probability can be calculated quite accurately for each specific sample of the stochastic parameters close to the mean value vector p0. This calculation is performed here for all 10000 bootstrap samples. The resulting histogram of the reliability index is shown in Figure 6 using log-normally distributed maximum entropy distributions. [Pg.1656]
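A simplified stand-in for this kind of study, as a sketch: instead of FORM with a Taylor series, it uses the closed-form reliability index of a linear limit state with normal variables and re-evaluates it for bootstrap resamples of small synthetic data sets, producing the analogue of the histogram mentioned above. All names and numbers are invented for illustration.

```python
# Simplified stand-in (not the FORM/Taylor scheme of the paper): for the linear
# limit state g = R - S with normally distributed R and S, the reliability index
# is beta = (mean_R - mean_S) / sqrt(std_R**2 + std_S**2). Its variation due to
# small-sample parameter uncertainty is explored by bootstrap resampling.
import numpy as np

rng = np.random.default_rng(7)
resistance = rng.normal(10.0, 1.0, size=25)    # small synthetic sample of resistances
load = rng.normal(5.0, 1.2, size=25)           # small synthetic sample of loads

def beta(r, s):
    return (r.mean() - s.mean()) / np.hypot(r.std(ddof=1), s.std(ddof=1))

betas = np.array([beta(rng.choice(resistance, resistance.size, replace=True),
                       rng.choice(load, load.size, replace=True))
                  for _ in range(10_000)])

print("beta at the point estimates:", beta(resistance, load))
print("bootstrap mean / std dev   :", betas.mean(), betas.std())
print("central 90% interval       :", np.percentile(betas, [5, 95]))
```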

Figure 7. Variation of the reliability index based on maximum entropy distributions.
Table fragment (Gaussian estimate, uncorrelated, maximum entropy distributions): reliability index 3.840, standard deviation 0.336, interval 3.287-4.393. [Pg.1657]

Basu, P. and A. Templeman (1984). An efficient algorithm to generate maximum entropy distributions. International Journal for Numerical Methods in Engineering, 1039-. [Pg.1658]

Zellner, A. and R. Highfield (1988). Calculation of maximum entropy distributions and approximation of marginal posterior distributions. Journal of Econometrics 37, 195-209. [Pg.1658]

Here we illustrate how to predict the maximum entropy distribution when an average score is known. Suppose a die has t = 6 faces and the scores equal the face indices, f(i) = i. Let x = ... Then Equation (6.17) gives q = x + ... and Equation (6.16) gives... [Pg.87]
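A numerical version of the kind of calculation this example describes, as a sketch: the maximum entropy distribution with a fixed average score has the form p_i ∝ x^i, and x is obtained by matching the prescribed average. The target average of 4.5 is an assumption for illustration, not the value used in the text.

```python
# Numerical version of the die example: with scores f(i) = i and a prescribed
# average score, the maximum entropy distribution is p_i = x**i / (x + x**2 + ... + x**6),
# and x is fixed by matching the average (an average of 3.5 gives the uniform case, x = 1).
import numpy as np
from scipy.optimize import brentq

scores = np.arange(1, 7)          # faces i = 1..6, scores f(i) = i
target_avg = 4.5                  # illustrative target average score

def avg_score(x):
    w = x ** scores
    return (scores * w).sum() / w.sum()

x = brentq(lambda v: avg_score(v) - target_avg, 1e-6, 1e6)
p = x ** scores
p = p / p.sum()

print("x =", x)
print("p =", p)                                  # biased toward high faces
print("achieved average score:", (scores * p).sum())
```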

The maximum entropy distribution is Gaussian when the second moment is given. Prove that the probability distribution p_i that maximizes the entropy for die rolls, subject to a constant value of the second moment ⟨i²⟩, is a Gaussian function. Use f(i) = i. [Pg.103]
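One way to sketch the requested proof (our notation; α and λ are the Lagrange multipliers for normalization and for the second-moment constraint):

```latex
% Maximize S = -\sum_i p_i \ln p_i subject to \sum_i p_i = 1 and \sum_i p_i\, i^2 = \langle i^2 \rangle.
\mathcal{L} = -\sum_i p_i \ln p_i
              - \alpha\Big(\sum_i p_i - 1\Big)
              - \lambda\Big(\sum_i p_i\, i^2 - \langle i^2 \rangle\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 - \alpha - \lambda i^2 = 0
\;\;\Rightarrow\;\;
p_i = \frac{e^{-\lambda i^2}}{\sum_j e^{-\lambda j^2}},
```

which is a Gaussian function of the score i, with λ fixed by the prescribed value of ⟨i²⟩.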

Information-theoretic distributions constructed in the above-described manner are identical in form to the distributions for the various quantum and classical Gibbsian ensembles discussed earlier. This can be traced to the formal equivalence of the missing information I and the Gibbs entropy S. As with information-theoretic distributions, the Gibbsian distributions are maximum entropy distributions consistent with the information defining the macroscopic state of a system. [Pg.247]
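The general form implied by this equivalence can be written as follows (notation introduced here, not necessarily that of the source: O_j are the constrained observables and λ_j their Lagrange multipliers); taking the energy as the single observable recovers the canonical ensemble:

```latex
p_i \;=\; \frac{\exp\!\Big(-\sum_{j=1}^{n} \lambda_j\, O_j(i)\Big)}
               {\sum_k \exp\!\Big(-\sum_{j=1}^{n} \lambda_j\, O_j(k)\Big)},
\qquad
\sum_i p_i\, O_j(i) = \langle O_j \rangle .
```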

One can readily verify that p does indeed represent the maximum entropy distribution consistent with the available information (⟨O_y⟩, y = 0, ..., n). [Pg.248]

