
Proportional Shannon Entropy

Lloyd and Pagels show that these three requirements lead uniquely to an average complexity of a state proportional to the Shannon entropy of the set of (experimentally determined) trajectories leading to the given state $(= -\sum_i p_i \log_2 p_i)$. The thermodynamic depth of a state S to which the system $\Sigma$ has evolved via the possible trajectory is equal to the amount of information required to specify that trajectory, or $D_\Sigma(S)$. Hamiltonian systems, Lloyd and... [Pg.627]
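
As a rough illustration of the quantity involved (a minimal sketch with hypothetical names and example probabilities, not Lloyd and Pagels' own construction): given estimated probabilities of the trajectories observed to end in a given state, the quantity to which the average complexity is proportional is their Shannon entropy.

```python
import math

def trajectory_entropy(trajectory_probs, base=2.0):
    """Shannon entropy of the probabilities of the (experimentally
    determined) trajectories leading to a given state: -sum_i p_i log_b p_i."""
    if not math.isclose(sum(trajectory_probs), 1.0, rel_tol=1e-9):
        raise ValueError("trajectory probabilities must sum to 1")
    return -sum(p * math.log(p, base) for p in trajectory_probs if p > 0.0)

# Four observed trajectories ending in the same state:
print(trajectory_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```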

Let $p_i$ be the proportion of references in discipline i, and $s_{ij}$ the Salton's cosine similarity of disciplines i and j according to their citing patterns. Then Shannon entropy (or diversity) is defined as $H = -\sum_i p_i \ln p_i$ and Rao-Stirling diversity is defined as $\Delta = 1 - \sum_{ij} s_{ij} p_i p_j$. See Rafols and Meyer [25] for details. Porter et al. [19] also introduced a formulation equivalent to Rao-Stirling diversity. [Pg.677]
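
A minimal numerical sketch of these two measures (function names and example data are illustrative, not taken from Rafols and Meyer): Shannon diversity uses the proportions $p_i$ alone, while Rao-Stirling diversity also uses the pairwise cosine similarities $s_{ij}$.

```python
import numpy as np

def shannon_diversity(p):
    """H = -sum_i p_i ln p_i over the discipline proportions p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def rao_stirling_diversity(p, s):
    """Delta = 1 - sum_ij s_ij p_i p_j, with s_ij the cosine similarity
    between the citing patterns of disciplines i and j."""
    p = np.asarray(p, dtype=float)
    s = np.asarray(s, dtype=float)
    return float(1.0 - p @ s @ p)

# Three disciplines cited in proportions 0.5, 0.3, 0.2, with an
# illustrative similarity matrix (s_ii = 1):
p = [0.5, 0.3, 0.2]
s = [[1.0, 0.8, 0.1],
     [0.8, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
print(shannon_diversity(p))        # ~1.03
print(rao_stirling_diversity(p, s))  # ~0.34
```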

The Fisher information, reminiscent of von Weizsäcker's [70] inhomogeneity correction to the electronic kinetic energy in Thomas-Fermi theory, characterizes the compactness of the probability density. For example, the Fisher information of the normal distribution measures the inverse of its variance, called the invariance, while the complementary Shannon entropy is proportional to the logarithm of the variance, thus increasing monotonically with the spread of the Gaussian distribution. Therefore, Shannon entropy and intrinsic accuracy describe complementary facets of the probability density: the former reflects the distribution's "spread" ("disorder", a measure of uncertainty), while the latter measures its narrowness ("order"). [Pg.152]
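
A small numerical sketch of this complementarity for a one-dimensional Gaussian, using the standard textbook expressions rather than anything specific to the excerpt: the Fisher information with respect to the location parameter is $I = 1/\sigma^2$, while the differential Shannon entropy is $S = \tfrac{1}{2}\ln(2\pi e\,\sigma^2)$, so widening the distribution lowers I and raises S.

```python
import math

def gaussian_fisher_information(sigma):
    """Fisher information of N(mu, sigma^2) w.r.t. the location parameter: 1/sigma^2."""
    return 1.0 / sigma**2

def gaussian_shannon_entropy(sigma):
    """Differential Shannon entropy of N(mu, sigma^2): (1/2) ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)

for sigma in (0.5, 1.0, 2.0):
    print(sigma, gaussian_fisher_information(sigma), gaussian_shannon_entropy(sigma))
# As sigma grows, the Fisher information (narrowness, "order") falls
# while the Shannon entropy (spread, "disorder") rises.
```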

Binary descriptors should be used when the considered characteristic really is a dual characteristic of the molecule, or when the considered quantity cannot be represented in a more informative numerical form. In any case, the mean information content of a binary descriptor, $I_{char}$, is low (the maximum value is 1 when the proportions of 0 and 1 are equal); thus the standardized Shannon's entropy, $I_{char}/\log_2 n$, where n is the number of elements, gives a measure of the efficiency of the collected information. [Pg.234]
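
An illustrative sketch of these two quantities (hypothetical names; "number of elements" is assumed here to mean the number of molecules over which the descriptor is evaluated, which the excerpt does not spell out): the mean information content is the Shannon entropy of the observed value distribution, and dividing by $\log_2 n$ gives the standardized value.

```python
import math
from collections import Counter

def mean_information_content(values):
    """Shannon entropy (bits) of the value distribution of a descriptor over
    a set of molecules; for a binary descriptor the maximum is 1 bit,
    reached when the proportions of 0 and 1 are equal."""
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def standardized_shannon_entropy(values):
    """I_char / log2(n), with n the number of elements, as a measure of the
    efficiency of the collected information."""
    return mean_information_content(values) / math.log2(len(values))

# Eight molecules, binary descriptor with equal proportions of 0 and 1:
descriptor = [0, 1, 0, 1, 1, 0, 1, 0]
print(mean_information_content(descriptor))      # 1.0 bit (the maximum)
print(standardized_shannon_entropy(descriptor))  # 1.0 / log2(8) ~ 0.33
```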

Methods based on traditional competitive learning focus on representing the data density so as to be optimal from the point of view of reducing the Shannon information entropy for the use of codewords in a transmission task. However, a codebook representation whose codeword density is directly proportional to the data density is not always desirable. For example, in the human visual system attention is attracted to visually salient stimuli, and therefore only scene locations sufficiently different from their surroundings are processed in detail. A simple framework for thinking about how saliency may be computed in biological brains has been developed over the past three decades [10,19]. [Pg.213]
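
A minimal sketch of the kind of competitive learning being contrasted here (not taken from the cited work; all names and parameters are illustrative): a plain winner-take-all update that, given enough codewords, tends to place them in proportion to the data density.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(data, n_codewords=8, lr=0.05, epochs=20):
    """Winner-take-all competitive learning: the codeword closest to each
    sample is pulled toward it, so codewords accumulate where data are dense."""
    codebook = data[rng.choice(len(data), n_codewords, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

# Two clusters of unequal weight: most codewords end up near the dense one.
data = np.vstack([rng.normal(0.0, 0.3, size=(900, 2)),
                  rng.normal(4.0, 0.3, size=(100, 2))])
print(competitive_learning(data))
```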

