Cross-entropy

The Gini index and the cross-entropy measure are differentiable, which is an advantage for optimization. Moreover, the Gini index and the deviance are more sensitive to changes in the relative class frequencies than the misclassification error. Which criterion is better depends on the data set; some authors prefer the Gini index because it favors a split into a small pure region and a large impure one. [Pg.232]
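As a concrete illustration (our own sketch, not taken from the cited text; the split counts follow a standard textbook example), the following compares the three criteria on two candidate splits of an evenly mixed parent node. Both splits have the same misclassification error, yet Gini and cross-entropy prefer the split that isolates a pure node:

```python
import numpy as np

def node_impurity(counts, criterion):
    """Impurity of one node from its per-class counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    if criterion == "misclassification":
        return 1.0 - p.max()
    if criterion == "gini":
        return float(np.sum(p * (1.0 - p)))
    if criterion == "cross_entropy":
        nz = p[p > 0]                       # skip 0*log(0) terms
        return float(-np.sum(nz * np.log(nz)))
    raise ValueError(criterion)

def split_impurity(children, criterion):
    """Sample-weighted average impurity over a split's child nodes."""
    n_total = sum(sum(c) for c in children)
    return sum(sum(c) / n_total * node_impurity(c, criterion)
               for c in children)

# Two candidate splits of a parent node with 400 + 400 observations.
# Both give misclassification error 0.25, but split B creates a pure
# node, which the Gini index and cross-entropy reward.
split_A = [(300, 100), (100, 300)]
split_B = [(200, 400), (200, 0)]
for crit in ("misclassification", "gini", "cross_entropy"):
    print(crit, split_impurity(split_A, crit), split_impurity(split_B, crit))
```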

While in the regression case the optimization criterion was based on the residual sum of squares, this would not be meaningful in the classification case. A usual error function in the context of neural networks is the cross entropy or deviance, defined as... [Pg.236]
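The excerpt stops before the formula. As a reminder of the usual form (our notation, not necessarily that of the cited text), the cross-entropy/deviance for one-hot targets y_ik and predicted class probabilities p_ik is -∑_i ∑_k y_ik log p_ik, e.g.:

```python
import numpy as np

def cross_entropy_loss(y_true, p_pred, eps=1e-12):
    """Multinomial cross-entropy (deviance).

    y_true : (n, K) one-hot (or soft) targets
    p_pred : (n, K) predicted class probabilities, rows summing to 1
    """
    p = np.clip(p_pred, eps, 1.0)          # guard against log(0)
    return float(-np.sum(y_true * np.log(p)))

# Two samples, three classes
y = np.array([[1, 0, 0],
              [0, 1, 0]])
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
print(cross_entropy_loss(y, p))            # -(log 0.7 + log 0.8)
```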

By the form of the first sum, the object is being estimated by a principle of maximum cross entropy. By the second sum, the noise is being estimated by a principle of least squares (notice the minus sign). The df have dropped out because of the sparsity constraint (11a). This is convenient, because it avoids the need for their detailed calculation. [Pg.252]

Shore, J. E. and Johnson, R. W., Axiomatic derivation of the principle of maximum-entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 26, 26-37 (1980). [Pg.225]

The necessity of the prior distribution pj in Eq. (72) has been rationalized by minimizing the so-called entropy deficiency (or cross entropy [45])... [Pg.73]
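For reference, the entropy deficiency relative to a prior distribution is conventionally written as below (the standard Kullback-Leibler form; the notation may differ from that of Eq. (72) in the cited work):

```latex
\Delta S[p \mid p^{0}] \;=\; \sum_{j} p_{j} \ln\!\frac{p_{j}}{p_{j}^{0}} ,
\qquad \sum_{j} p_{j} \;=\; \sum_{j} p_{j}^{0} \;=\; 1 .
```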

Saito and Coifman [14] discuss a cross-entropy measure which can be used to quantify how differently two vectors are distributed. Let the vectors drawn from classes 1 and 2 have nonnegative elements that sum to unity; the cross-entropy between them is then defined by... [Pg.192]
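A minimal sketch of such a measure for two nonnegative vectors normalized to unit sum (the symmetrized directed divergence shown here is one common choice and is not necessarily the exact expression used by Saito and Coifman; all names are ours):

```python
import numpy as np

def cross_entropy_divergence(p, q, eps=1e-12):
    """Directed divergence D(p||q) = sum_i p_i log(p_i / q_i)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def symmetric_cross_entropy(p, q):
    """Symmetrized form D(p||q) + D(q||p); one common choice (assumption)."""
    return cross_entropy_divergence(p, q) + cross_entropy_divergence(q, p)

# Two normalized, nonnegative vectors standing in for classes 1 and 2
v1 = np.array([0.5, 0.3, 0.2])
v2 = np.array([0.2, 0.3, 0.5])
print(symmetric_cross_entropy(v1, v2))
```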

Other authors have recently dealt with some particular factors of the complexity measures. In particular, Shannon entropy has been extensively used in the study of many important properties of multielectronic systems, such as rigorous bounds [38], electronic correlation [39], effective potentials [40], similarity [41], and minimum cross-entropy approximations [42]. [Pg.420]

Measure for comparing approximations. In order to conveniently compare the approximations, let f be a target FCD and let g be an approximate function to f. The cross-entropy distance between f and g with respect to f, also known as the Kullback-Leibler (KL) information of g at f, has been considered to assess the statistical performance of g when approximating f. In fact, a number of authors have adopted the KL distance for measuring the quality of proposal functions in inference over their target densities; Neil et al. (2007) and Keith et al. (2008) are some recent examples. The KL distance is defined as the expected value... [Pg.62]
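The excerpt ends where the definition begins; in standard notation (ours), the KL distance of g from f is the expected value, under f, of the log ratio of the two densities:

```latex
D_{\mathrm{KL}}(f \,\|\, g)
  \;=\; \mathbb{E}_{f}\!\left[\ln \frac{f(X)}{g(X)}\right]
  \;=\; \int f(x)\, \ln\!\frac{f(x)}{g(x)}\, \mathrm{d}x .
```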

Leaving the illustrated analysis aside, we might make use of the cross-entropy measure (see Kullback & Leibler (1951) for more details) for analyzing how the discrepancy between the 2N- and N-methods and the MC-method varies as a function of the number M of steps. Basically, the cross-entropy is given by... [Pg.1417]

Fig. 6 illustrates the cross-entropy measured for the availability metric by both the 2N- and N-approaches. It indicates that the findings of Moura & Droguett (2009) on the accuracy of the 2N-method are conservative. For the availability measure, the 2N-method presents an error estimate smaller than that of the N-method over the number of steps, and that tends... [Pg.1417]

Figure 6. Cross-entropy over the number of steps (2N- vs. N-method). [Pg.1418]

An important generalization of Shannon entropy, called relative (cross) entropy (also known as entropy deficiency, missing information, or directed divergence), has been proposed by Kullback and Leibler [5] and Kullback [6]. It measures the information distance between two (normalized) probability distributions for the same set of events ... [Pg.145]
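The formula itself is cut off in the excerpt; the standard discrete form of this directed divergence between a distribution p and a reference distribution p0 over the same set of events is:

```latex
\Delta S[p \mid p^{0}] \;=\; \sum_{i} p_{i} \log\!\frac{p_{i}}{p_{i}^{0}} \;\geq\; 0 ,
```

with equality only when the two distributions coincide; the measure is not symmetric in p and p0, hence the name "directed divergence".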

To the knowledge of the author, no analytical solution (either exact or approximate) exists for finding the design point of the problem posed in Equation 14. Hence, it is necessary to use an optimization technique for determining x. Numerical validation has shown that a technique which is convenient for such a purpose is the so-called Cross-Entropy (CE) optimization method (see, e.g., Rubinstein 1999). This method is a gradient-free algorithm and, hence, is suitable for dealing with structural nonlinearities. It has been applied to a number of optimization problems in different fields and can be used for rare-event simulation as well. For specific details about its theoretical basis and features, the reader is referred to the literature (see, e.g., de Boer et al. 2005 for a comprehensive tutorial). [Pg.12]
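A minimal sketch of the CE optimization idea, assuming a Gaussian sampling family updated from elite samples (the objective below is an arbitrary placeholder, not Equation 14 of the cited work, and all names are ours):

```python
import numpy as np

def cross_entropy_minimize(objective, dim, n_samples=100, n_elite=10,
                           n_iter=50, seed=0):
    """Gradient-free Cross-Entropy (CE) minimization sketch.

    Repeatedly samples candidates from a Gaussian, keeps the best
    ('elite') fraction, and refits the Gaussian to those elites.
    """
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.ones(dim) * 5.0          # broad initial search distribution
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.array([objective(x) for x in samples])
        elite = samples[np.argsort(scores)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Example: minimize a simple quadratic; the minimizer is (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
print(cross_entropy_minimize(f, dim=2))
```

Because the update uses only ranked objective values, no gradients are required, which is why the method tolerates non-smooth or strongly nonlinear performance functions.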

Rubinstein, R. 1999. The cross-entropy method for combinatorial and continuous optimization. Methodology and Computing in Applied Probability 1, 127-190. [Pg.20]

There are other metrics of information content, and several of them are based on the Shannon entropy. About 10 years after the introduction of the Shannon entropy concept, Jaynes formulated the maximum-entropy approach, which is often referred to as Jaynes entropy and is closely related to Shannon's work. Jaynes' introduction of the notion of maximum entropy has become an important approach to any study of statistical inference where all or part of a model system's probability distribution remains unknown. The Jaynes entropy relations, which guide the parameterization to achieve a model of minimum bias, are built on the Kullback-Leibler (KL) function, sometimes referred to as the cross-entropy or relative-entropy function, which is often written (with p and q representing two probability distributions indexed by k) as... [Pg.269]
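In the notation indicated (our reconstruction of the standard form, not a quotation of the cited text), the KL/cross-entropy function reads:

```latex
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{k} p_{k} \ln\!\frac{p_{k}}{q_{k}} .
```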


