Shannon information

A heuristic derivation of T is as follows ([grass86a], [grass86c]): Since the Shannon information stored in a block of length-5 is given by... [Pg.219]

Another drawback to using Shannon information as a measure of complexity is the fact that it is based on an ensemble of all possible states of a system and therefore cannot describe the information content of a single state. Shannon information thus resembles traditional statistical mechanics - which describes the average or aggregate behavior of, say, a gas, rather than the motion of its constituent molecules - more so than it does a complexity theory that must address the complexity of individual objects. [Pg.616]

Another way of looking at it is that Shannon information is a formal equivalent of thermodynamic entropy, or the degree of disorder in a physical system. As such it essentially measures how much information is missing about the individual constituents of a system. In contrast, a measure of complexity ought to (1) refer to individual states and not ensembles, and (2) reflect how much is known about a system rather than what is not. One approach that satisfies both of these requirements is algorithmic complexity theory. [Pg.616]

We have defined above a way of quantifying the structure of water based on the profile of fx values that encode the number of molecules in each possible joined state. It is now possible to use this profile as a measure of the structure of water at different temperatures. As an application of this metric, it can be related to physical properties. We have shown the results of our earlier work in Table 3.3. The reader is encouraged to repeat these and to explore other structure-property relationships using the fx as single or multiple variables. A unified parameter derived from the five fx values, expressed as fractions of 1.0, might be the Shannon information content. This could be calculated from all the data created in the above studies and used as a single variable in the analysis of water and other liquid properties. [Pg.56]
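As a rough illustration of that unified parameter, the Shannon information content H of a five-value fx profile (expressed as fractions of 1.0) could be computed as sketched below; the profile used here is a placeholder, not a result from the studies cited above.

import math

def shannon_information(fractions, base=2):
    # Shannon information content of a probability profile;
    # the fractions must sum to 1.0, and zero entries contribute nothing.
    return -sum(f * math.log(f, base) for f in fractions if f > 0)

# Placeholder fx profile (f0..f4, the fraction of water molecules in each
# joined state); replace with the averages obtained from the simulations.
fx = [0.05, 0.20, 0.35, 0.30, 0.10]
print(f"H = {shannon_information(fx):.3f} bits")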

Record the average fx values for water, then convert these to fractions of 1.00, then calculate the Shannon information content. [Pg.70]

The rules in Example 4.6 are used to estimate the effective temperature resulting from the presence of a solute. In this study, replace 30 water molecules with 30 solute molecules. Use parameters for these solute molecules reflecting a moderately polar character, such as Pb(SS) = 0.5, J(SS) = 0.7, Pb(WS) = 0.2, and J(WS) = 2.0. Run the dynamics and collect the fx values for the water. Convert these fx values to fractions of 1.00. Compute the Shannon information content, H, for this set of parameters. [Pg.70]

Yet another indirect connection between momentum and coordinate space charge densities is derived via a quantity called the Shannon information entropy... [Pg.67]

The Shannon information content (in thermodynamics, the entropy) can be calculated from the probability distribution of allowed amino acid substitutions (Fontana and Schuster, 1987; Saven and Wolynes, 1997; Dewey and Donne, 1998). Counting the number of sequences at a given fitness is mathematically isomorphic to calculating the entropy at a given fitness, S(F) = ln Ω, where the number of states Ω is the number of sequences at fitness F. The entropy, s, for a given site i can also be calculated from Eq. (25)... [Pg.128]
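A minimal sketch of the per-site entropy calculation, assuming a hypothetical substitution-probability vector for one position (the twenty probabilities below are illustrative, not data from the cited works):

import math

def site_entropy(probs):
    # Entropy of one sequence position from its amino-acid substitution
    # probabilities, in natural-log units to match S = ln(Omega).
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical distribution over the 20 amino acids at site i:
# four residues are well tolerated, the remaining sixteen are rare.
p_i = [0.30, 0.25, 0.20, 0.15] + [0.10 / 16] * 16
print(f"s_i = {site_entropy(p_i):.3f} nats")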

The MEP is now well known to be an axiom of modern statistical mechanics [12-14]. The Shannon information entropy... [Pg.72]

Another consequence of Lemma 8.3 is that the result z gives no (Shannon) information about the first parameter, x1, of exp(x1, x2), if x2 is chosen uniformly at random. This is called a hiding property, because the result z hides x1 perfectly. [Pg.220]

The one-electron Shannon information entropy [108], Sr, of the electron density, ρ(r), in coordinate space is defined as... [Pg.69]
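The excerpt breaks off before the definition itself; the standard form (with the normalization convention for ρ depending on the author) is

S_r = -\int \rho(\mathbf{r}) \, \ln \rho(\mathbf{r}) \, d\mathbf{r}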

K.D. Sen, Characteristic features of Shannon information entropy of confined atoms, J. Chem. Phys. 123 (2005) 074110. [Pg.76]

All possible contacts have been taken into account, and the normalized Boltzmann probability distributions, as well as the Shannon information entropies, have been calculated for each pair of the molecules (i.e., AE, BE, CE, DE) as functions of temperature T within the range from 100 K to 400 K. The results (Fig. 15.9) show that the molecule A (with the simplest structure) does not recognize molecule E at any temperature T under study. The ignorance (Shannon's entropy) of A with respect to E is of the order of 5 bits. Transforming molecule A into B improves the... [Pg.993]

Fig. 15.9. A model of temperature-dependent molecular teaching: the left-side object ("student molecule") is to recognize the shape of the right-side molecule ("teacher molecule") in three steps ("lessons"): A → B → C → D. Each of the complexes (AE, BE, CE, DE) is characterized by its state of ignorance (Shannon information entropy), while the knowledge learned in each lesson (A → B, B → C, C → D) is a measure of the diminishing ignorance of the student molecule (in bits).
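A minimal sketch of how such temperature-dependent entropies could be computed, assuming an invented set of contact energies for one molecule pair (the real study enumerates all possible contacts for each of AE, BE, CE, and DE; the energies and units below are assumptions for illustration only):

import math

K_B = 0.0019872  # Boltzmann constant, kcal/(mol K)

def boltzmann_probs(energies, T):
    # Normalized Boltzmann probabilities for a set of contact energies.
    weights = [math.exp(-e / (K_B * T)) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

def shannon_entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical contact energies (kcal/mol) for one student-teacher pair.
energies = [-2.0, -1.5, -1.2, -0.8, -0.5, 0.0]
for T in (100, 200, 300, 400):
    S = shannon_entropy_bits(boltzmann_probs(energies, T))
    print(f"T = {T:3d} K   S = {S:.2f} bits")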
Analogously to the case of messages of equal probability, one defines the expected amount of information as the entropy of information, or the Shannon information entropy... [Pg.992]
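The excerpt trails off before the formula; for a discrete distribution {p_i} the expected amount of information is the familiar expression (in bits when the logarithm is taken to base 2)

H = -\sum_{i} p_i \log_2 p_i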

Here kB is Boltzmann's constant and prob(j) is the probability of observing the j-th state of a system. The similarities of Equations (2.17) through (2.19) are not coincidental. It is apparent that information and entropy are related, if not alternate sides of the same coin. The inaugural properties and applications of Equation (2.17) were the brainchild of Claude Shannon and thus I is commonly referred to as the Shannon information [4]. The term Shannon entropy is written almost as often on account of the ties to Equations (2.18) and (2.19). The mixing entropy of Equation (2.18) is visited several times in subsequent chapters. [Pg.22]

Then the probability of observing a peptide lacking in V is simply 1 - 0.7000 = 0.3000. Information-wise, an experiment aimed at the question "Does the randomly selected peptide contain at least 1 V?" offers Shannon information... [Pg.26]

There are numerous questions in the same vein. What is the probability that the peptide contains exactly one V? Pursuit of the answer is guided by the fraction set {0.1400, 0.8600} and Shannon information 0.584 bits based on the question "Does the peptide contain exactly one V?" In these simple examples, the probability set of Table 2.5 is used to generate others. Most important, they show the... [Pg.26]
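The yes/no-question values quoted here follow directly from the stated probabilities; a quick check, using only the probabilities given in the text, is sketched below.

import math

def binary_question_information(p_yes):
    # Shannon information (bits) carried by a yes/no question whose
    # answer is "yes" with probability p_yes.
    p_no = 1.0 - p_yes
    return -sum(p * math.log2(p) for p in (p_yes, p_no) if p > 0)

# "Does the randomly selected peptide contain at least 1 V?" (p = 0.7000)
print(f"{binary_question_information(0.7000):.3f} bits")
# "Does the peptide contain exactly one V?" (p = 0.1400) -> ~0.584 bits
print(f"{binary_question_information(0.1400):.3f} bits")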

That the widths are nonzero at all means that uncertainty will precede measurements endeavored by the chemist. As in Figure 3.10, a thermodynamic measurement attaches to Shannon information in a nontrivial way. Figure 3.12 then shows the contrasts in the Figure 3.11 examples. If the chemist queries the left-side pressure at a resolution of 50 kBT/V, there are 20 or so states that will manifest frequently, and which he or she can discriminate. The plot shows the sum of weighted surprisals as a function of state index i; the lower p values correspond to lower i. There is about 30% more information, approximately 3.8 bits, trapped via a system 1 measurement compared with system 2. Clearly, when an equation of state is used to anticipate a quantity such as pressure, there are more issues at play than a correction term to add or subtract. To be precise, an equation of state furnishes an estimate of an average of a physical quantity subject to fluctuations. [Pg.81]

The sum of weighted surprisals equates with the Shannon information for the variable X in question, in this case pressure ... [Pg.110]
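As a sketch of that identity, the weighted-surprisal sum over a queried state distribution reduces to the usual entropy expression; the probabilities below are placeholders standing in for the pressure-state populations obtained at a fixed resolution, not values from the source.

import math

def weighted_surprisal_sum(probs):
    # Sum of probability-weighted surprisals, -sum p_i * log2(p_i),
    # i.e. the Shannon information I_X for the queried variable X.
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Placeholder populations of the discriminable pressure states.
p_states = [0.30, 0.25, 0.20, 0.15, 0.07, 0.03]
print(f"I_X = {weighted_surprisal_sum(p_states):.3f} bits")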

More than one type of information is expressed by a program. A pathway can be assessed not only for the Shannon information, but also for the Kullback and mutual information. [Pg.119]

FIGURE 5.2 Pressure state distributions. A, B, and C refer to pathways of Figure 5.1. The Shannon information follows from pressure queries and measurements at resolution equal to... [Pg.128]

A is the reduced (i.e., dimensionless) length of the pathway under consideration (cf. Equations 4.28 and 4.29), while its counterpart is the reduced length of the straight-line path connecting the initial and final states. Likewise, one quantity is the Shannon information concerning state variable X for the pathway, and its counterpart is the information expressed by X for the straight-line path queried at the same resolution. [Pg.140]

This is important because it shows that for straight-line pathways, the mutual information equates with the Shannon information for each control variable. It answers the question "For what thermodynamic programs are the Shannon and mutual information equal?" [Pg.148]

Ethane and propane are restricted to C-H and C-C units. The same units are carried by ethanethiol, in addition to C-S and S-H. To quantify information in the first order (the bits per single ABA encounter), one tabulates the occurrence frequencies (fi) for each unit recorded on the tape. A given molecule poses N different units. The first-order Shannon information (I1) follows from the (now) familiar formula... [Pg.171]
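As an illustrative calculation of the first-order information (not the tape statistics of the source), one might tally the covalent bond units of ethanethiol (CH3CH2SH) as five C-H, one C-C, one C-S, and one S-H, and apply the formula:

import math
from collections import Counter

def first_order_information(units):
    # First-order Shannon information I1 (bits per unit) from the
    # occurrence frequencies of the units recorded on the tape.
    counts = Counter(units)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative bond-unit tally for ethanethiol (CH3CH2SH).
ethanethiol = ["C-H"] * 5 + ["C-C", "C-S", "S-H"]
print(f"I1 = {first_order_information(ethanethiol):.3f} bits")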

