
Entropy Shannon

Nearly ten years ago, Tsallis proposed a possible generalization of Gibbs-Boltzmann statistical mechanics. [1] He built his intriguing theory on a reexpression of the Gibbs-Shannon entropy $S = -k \int p(r) \ln p(r)\, dr$ written... [Pg.197]
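
As a numerical illustration (not from the cited source), the continuous Gibbs-Shannon entropy can be evaluated for a known density; the Python sketch below assumes a Gaussian p(r) and sets k = 1, both arbitrary choices.

    import numpy as np

    # Hypothetical illustration of S = -k * Int p(r) ln p(r) dr for a Gaussian density.
    k = 1.0                         # Boltzmann-like constant, set to 1 here
    sigma = 2.0                     # assumed width of the Gaussian p(r)
    r = np.linspace(-50.0, 50.0, 200001)
    p = np.exp(-r**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

    # Numerical quadrature; zero-probability points are excluded to avoid log(0).
    integrand = np.where(p > 0, p * np.log(p), 0.0)
    S = -k * np.trapz(integrand, r)

    # Analytic differential entropy of a Gaussian: 0.5 * ln(2 * pi * e * sigma^2)
    S_exact = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)
    print(S, S_exact)               # the two values agree closely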

The Shannon equation (Eq. (1)) [4] enables one to evaluate the information content, I (also known as the Shannon entropy), of the system. [Pg.208]
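
The excerpt does not reproduce Eq. (1); in its standard discrete form (the logarithm base, and hence the units, vary between sources) the Shannon entropy of a probability distribution $\{p_i\}$ is

    $I = -\sum_i p_i \log_2 p_i$,

with the sum running over all states or descriptor bins with nonzero probability.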

Also called the uncertainty or - because of its formal similarity to the entropy function used in statistical mechanics - Shannon entropy. [Pg.29]

Lloyd and Pagels show that these three requirements lead uniquely to an average complexity of a state proportional to the Shannon entropy of the set of (experimentally determined) trajectories leading to the given state ($= -\sum_i p_i \log_2 p_i$). The thermodynamic depth of a state to which the system S has evolved via a possible trajectory is equal to the amount of information required to specify that trajectory, or D(S). For Hamiltonian systems, Lloyd and... [Pg.627]

In physical chemistry, entropy has been introduced as a measure of disorder or lack of structure. For instance, the entropy of a solid is lower than that of a fluid, because the molecules are more ordered in a solid than in a fluid. In terms of probability, this also means that in solids the probability distribution of finding a molecule at a given position is narrower than in fluids. This illustrates that entropy has to do with probability distributions and thus with uncertainty. One of the earliest definitions of entropy is the Shannon entropy, which is equivalent to the definition of Shannon's uncertainty (see Chapter 18). By way of illustration we... [Pg.558]
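
The link between the sharpness of a probability distribution and its entropy can be made concrete with a minimal sketch (not from the cited chapter); the Python example below uses two arbitrarily chosen discrete distributions.

    import numpy as np

    def shannon_entropy(p, base=2.0):
        """Shannon entropy H = -sum p_i log(p_i), skipping zero-probability bins."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p)) / np.log(base)

    # Narrow, "solid-like" distribution: the molecule is almost always at one position.
    narrow = [0.94, 0.02, 0.02, 0.01, 0.01]
    # Broad, "fluid-like" distribution: all positions are almost equally likely.
    broad = [0.22, 0.20, 0.20, 0.19, 0.19]

    print(shannon_entropy(narrow))   # small: little uncertainty, low entropy
    print(shannon_entropy(broad))    # close to log2(5) = 2.32 bits: high uncertainty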

Jeffrey W. Godden and Jurgen Bajorath, Analysis of Chemical Information Content Using Shannon Entropy. [Pg.450]

Other entropy types have been developed to determine the mixing level in systems, including Shannon entropy and Rényi entropy. Shannon entropy has been used to quantify distributive mixing levels in color mixing experiments [18,19], while Rényi entropy has been used for a more rigorous analysis of distributive mixing [20, 21]. The reader is referred to the references, as these types of entropies are beyond the scope of this chapter. [Pg.336]
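
For orientation only (the mixing-specific definitions are in the cited references), a generic sketch of both entropies for a discrete distribution; Python, with the order alpha of the Rényi entropy chosen arbitrarily.

    import numpy as np

    def shannon(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def renyi(p, alpha):
        """Renyi entropy of order alpha (alpha > 0, alpha != 1); tends to Shannon as alpha -> 1."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    # Hypothetical fractions of one tracer color over spatial bins of a mixer.
    p = np.array([0.40, 0.25, 0.20, 0.10, 0.05])

    print(shannon(p))
    print(renyi(p, alpha=2.0))       # order-2 Renyi entropy
    print(renyi(p, alpha=1.0001))    # approaches the Shannon value as alpha -> 1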

It is a method that samples the chemical space accessible to a library in a manner that, when combined with assay data, allows the elucidation of a set of pharmacophores consistent with biological activity. The library is selected in a manner that maximizes the Shannon entropy [75] for the possible outcomes being tested by the library [76]. [Pg.179]

Stahura F, Godden JW, Xue L, Bajorath J. (2000) Distinguishing between natural products and synthetic molecules by descriptor Shannon entropy analysis and binary QSAR calculations. J Chem Inf Comput Sci 40, 1245-1252. [Pg.124]

The quantity $-\sum_m n_m \ln n_m$, one term of the expression to be maximized, is the Shannon entropy H as used by Jaynes and familiar to those who have studied thermal physics. For a given set $\{n_m\}$ obeying $\sum_{m=1}^{M} n_m = N$, maximum H is attained when all the $n_m$ have the same value $n_m = N/M$. The requirement that H be maximum, even when other constraints are attached, tends to force the $n_m$ toward this constant value and hence inhibits large excursions. This property of maximum-entropy restorations is certainly desirable. [Pg.117]
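
A small numerical check (not part of the original text) that, for a fixed total N, the term $-\sum_m n_m \ln n_m$ is largest when the counts are spread evenly; the values of N and M in this Python sketch are arbitrary.

    import numpy as np

    def H(n):
        """-sum n_m ln n_m over the positive entries of n."""
        n = np.asarray(n, dtype=float)
        n = n[n > 0]
        return -np.sum(n * np.log(n))

    N, M = 120.0, 6
    uniform = np.full(M, N / M)                              # n_m = N/M for every m
    skewed = np.array([60.0, 30.0, 15.0, 10.0, 4.0, 1.0])    # same total, N = 120

    print(H(uniform))   # the largest value attainable for this N and M
    print(H(skewed))    # smaller: large excursions away from N/M lower H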

Property-optimised ProSAR libraries have the best pharmacophore Shannon entropy of all the libraries, and 99.7% of the compounds have good properties (Table 7.2). In terms of phar-... [Pg.144]

Godden, J. W., Stahura, F. L., Bajorath, J. (2000) Variabilities of molecular descriptors in compound databases revealed by Shannon entropy calculations. J Chem Inf Comput Sci 40, 796-800. [Pg.151]

Alemanskin et al. (79) extended the use of the entropic measure of mixing to both dispersive and distributive mixing, using an essentially modified Shannon entropy for systems with multiple species. These authors have also employed entropy to assess color homogeneity in extruded samples by means of computer image analysis and standard direct red-green-blue (RGB) correlations (80,81). [Pg.391]
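
As a rough, hypothetical sketch of the idea (not the authors' actual procedure), a multi-species mixing entropy can be obtained by computing the Shannon entropy of the species fractions in each spatial bin of an analysed image and averaging; the Python example below uses made-up fractions.

    import numpy as np

    def species_entropy(fractions):
        """Shannon entropy of the species fractions within one spatial bin."""
        f = np.asarray(fractions, dtype=float)
        f = f[f > 0]
        return -np.sum(f * np.log(f))

    # Hypothetical data: rows are spatial bins of an extruded sample, columns are
    # the fractions of three pigment species estimated from RGB image analysis.
    bins = np.array([
        [0.33, 0.34, 0.33],   # well-mixed bin
        [0.80, 0.15, 0.05],   # poorly mixed bin
        [0.50, 0.30, 0.20],
    ])

    per_bin = np.array([species_entropy(row) for row in bins])
    n_species = bins.shape[1]
    relative_mixing = per_bin.mean() / np.log(n_species)   # 1.0 would be perfect mixing

    print(per_bin)
    print(relative_mixing)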

Let $p_i$ be the proportion of references in discipline i, and $s_{ij}$ the Salton cosine similarity of disciplines i and j according to their citing patterns. Then Shannon entropy (or diversity) is defined as $H = -\sum_i p_i \ln p_i$ and Rao-Stirling diversity is defined as $\Delta = 1 - \sum_{ij} s_{ij} p_i p_j$. See Rafols and Meyer [25] for details. Porter et al. [19] also introduced a formulation equivalent to Rao-Stirling diversity. [Pg.677]
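
A minimal sketch of both indicators for a small, invented set of disciplines (the numbers are not from Rafols and Meyer); Python.

    import numpy as np

    # Hypothetical proportions of references falling in three disciplines.
    p = np.array([0.5, 0.3, 0.2])

    # Hypothetical Salton cosine similarities s_ij between the disciplines' citing patterns.
    s = np.array([
        [1.0, 0.6, 0.1],
        [0.6, 1.0, 0.2],
        [0.1, 0.2, 1.0],
    ])

    # Shannon entropy (diversity): H = -sum_i p_i ln p_i
    H = -np.sum(p * np.log(p))

    # Rao-Stirling diversity: Delta = 1 - sum_ij s_ij p_i p_j
    delta = 1.0 - p @ s @ p

    print(H, delta)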

Table column headings: Number of Disciplines | Shannon Entropy | Rao-Stirling Diversity... [Pg.678]

If I is large, then the amino acid found at that position is highly conserved. Conversely, a low I indicates that many amino acids are allowed at that site. This information content is directly related to the Shannon entropy through a Boltzmann weighting of the fitness changes $w_i(a)$, used to calculate the probability $p_i(a)$ that a residue is in amino acid state a. [Pg.129]
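
One way such a Boltzmann weighting can be written is sketched below (a hedged illustration, not the cited work's exact scheme); the fitness values and the temperature-like parameter beta are invented.

    import numpy as np

    def boltzmann_probs(w, beta=1.0):
        """p_i(a) proportional to exp(beta * w_i(a)); shifted by the maximum for stability."""
        w = np.asarray(w, dtype=float)
        x = np.exp(beta * (w - w.max()))
        return x / x.sum()

    def shannon_bits(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical fitness changes w_i(a) for four amino acids tolerated at one position.
    w = np.array([0.0, -1.5, -2.0, -4.0])

    p = boltzmann_probs(w, beta=1.0)
    H = shannon_bits(p)              # positional Shannon entropy in bits
    I = np.log2(p.size) - H          # one common definition of the information content

    print(p, H, I)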

Thus, this average noise descriptor expresses the difference between the Shannon entropies of the molecular one- and two-orbital probabilities,... [Pg.9]

This quantity averages the Shannon entropies conditional on the Gamma and lognormal models, with weights given by their posterior probabilities. In Appendix B, we show that the average entropy is a concave function on the space of probability distributions and is monotone under contractive maps (Sebastiani and... [Pg.127]
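
A toy version of this model averaging (the conditional entropies and posterior weights below are invented, not the authors' values) can be written in a few lines of Python.

    # Hypothetical conditional Shannon entropies (nats) under each candidate model.
    H_gamma = 1.42        # entropy computed under the fitted Gamma model
    H_lognormal = 1.55    # entropy computed under the fitted lognormal model

    # Hypothetical posterior model probabilities (they must sum to 1).
    post_gamma = 0.7
    post_lognormal = 0.3

    # Model-averaged entropy: a convex combination of the conditional entropies.
    H_avg = post_gamma * H_gamma + post_lognormal * H_lognormal
    print(H_avg)          # 1.459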

Proof. The result follows from the fact that Shannon entropy is concave on the space of probability distributions (DeGroot, 1970), and the average entropy is a convex combination of Shannon entropies. [Pg.134]

Proof. The result follows from the monotonicity of Shannon entropy (Sebastiani and Wynn, 2000). [Pg.134]

If we adopt the maximum entropy method to reconstruct a given function [81] g(r), with r being a real variable ranging from $-\infty$ to $\infty$, the physical meaning of g(r) does not matter, and our attention has to focus only on the maximization of the Shannon entropy... [Pg.408]

Notice that this information approach to Lévy statistics is even more direct than the nonextensive thermodynamic approach. As shown in Ref. 52, the adoption of the method of entropy maximization, with the Shannon entropy replaced by the Tsallis entropy [53], does not directly yield the Lévy distribution, but a probability density function n(x) from which repeated application of the convolution generates the stable Lévy distribution. [Pg.409]
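
For reference (not taken from Ref. 52), the Tsallis entropy of a discrete distribution and its q -> 1 limit, which recovers the Shannon form, can be sketched as follows; the distribution and the value of q are arbitrary.

    import numpy as np

    def tsallis(p, q):
        """Tsallis entropy S_q = (1 - sum p_i**q) / (q - 1), with k set to 1."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def shannon(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    p = np.array([0.5, 0.25, 0.15, 0.10])

    print(tsallis(p, q=2.0))      # nonextensive case, q = 2
    print(tsallis(p, q=1.0001))   # approaches the Shannon value as q -> 1
    print(shannon(p))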

By virtue of the similarity between entropy and the measure of information quantity, the information quantity determined by expression (6.2) is often called the Shannon entropy. [Pg.305]

In bioinformatics, the invariance (or conservation) of the position of amino acid j in a polypeptide chain is typically estimated using the Shannon entropy as an integral characteristic of the probabilistic (stochastic) process (see also equation (6.2)) ... [Pg.314]

When the probability of an event tends to unity or zero, the Shannon entropy is close to or equal to zero. Thus, if the probability $p_{ji}$ of localization of amino acid j in position i equals unity (an absolutely certain event), then... [Pg.314]
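
A small sketch of per-position Shannon entropy in a protein alignment (the alignment is made up) shows that a fully conserved column has zero entropy; Python.

    import numpy as np
    from collections import Counter

    def column_entropy(column):
        """Shannon entropy (bits) of the amino acid frequencies in one alignment column."""
        counts = Counter(column)
        total = sum(counts.values())
        p = np.array([c / total for c in counts.values()])
        return -np.sum(p * np.log2(p))

    # Hypothetical alignment of five sequences (rows); columns are positions.
    alignment = [
        "ACDKG",
        "ACEKG",
        "ACDRG",
        "ACNKG",
        "ACDKG",
    ]

    for i, column in enumerate(zip(*alignment)):
        print(i, column_entropy(column))
    # Positions 0, 1 and 4 are fully conserved (A, C, G), so their entropy is zero;
    # the variable positions 2 and 3 have entropy greater than zero.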

