Entropy information theory

On the other hand, we also have a set of desired probabilities P that we want the Boltzmann machine to learn. From elementary information theory, we know that the relative entropy... [Pg.535]
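As a minimal sketch (the array values and variable names are illustrative, not from the cited text), the relative entropy in question is the Kullback-Leibler divergence between the desired distribution P and the distribution P' that the Boltzmann machine currently realizes:

```python
import numpy as np

def relative_entropy(p_desired, p_model, eps=1e-12):
    """Kullback-Leibler divergence  G = sum_s P(s) ln( P(s) / P'(s) ).

    Boltzmann-machine learning drives the model distribution P' toward the
    desired distribution P by minimizing G, which is nonnegative and zero
    only when the two distributions coincide.
    """
    p = np.asarray(p_desired, dtype=float)
    q = np.asarray(p_model, dtype=float)
    mask = p > 0                      # terms with P(s) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

# Desired statistics over three visible states vs. the model's current estimate
P_desired = np.array([0.5, 0.3, 0.2])
P_model = np.array([0.4, 0.4, 0.2])
print(relative_entropy(P_desired, P_model))   # > 0; shrinks as learning proceeds
```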

The most important characteristic of self-information is that it is a discrete random variable; that is, it is a real-valued function of a symbol in a discrete ensemble. As a result, it has a distribution function, an average, a variance, and in fact moments of all orders. The average value of self-information has such a fundamental importance in information theory that it is given a special symbol, H, and the name entropy. Thus... [Pg.196]
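A short sketch of these statements, assuming base-2 logarithms (bits): self-information as a function of a symbol's probability, and the entropy H as its average, which, being the mean of a random variable, also comes with a variance:

```python
import numpy as np

def self_information(p):
    """Self-information I(x) = -log2 P(x), in bits, of a symbol of probability p."""
    return -np.log2(p)

def entropy(probs):
    """Entropy H = E[I] = -sum_i p_i log2 p_i: the average self-information."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                                # the 0*log(0) terms are taken as 0
    return float(-np.sum(p * np.log2(p)))

# A four-symbol ensemble: self-information is itself a random variable ...
probs = np.array([0.5, 0.25, 0.125, 0.125])
I = self_information(probs)                     # values 1, 2, 3, 3 bits
H = entropy(probs)                              # their average: 1.75 bits
var_I = float(np.sum(probs * (I - H) ** 2))     # ... so it also has a variance
print(H, var_I)
```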

Coifman, R. R., and Wickerhauser, M. V., Entropy-based algorithms for best basis selection, IEEE Trans. Inform. Theory 38(2), 713-718 (1992). [Pg.98]

The first term in this expression is an entropy-of-mixing term related to electron transfer; the second term is the information loss due to polarization of the AIM. Minimizing the information loss per atom results in the Hirshfeld population analysis [64,65] and many other results in the broad field of chemical information theory [26,66-75]. Zeroing the entropy-of-mixing term by choosing a reference ion that has the same number of electrons as the AIM, one obtains the Hirshfeld-I population analysis [76,77]... [Pg.277]
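A minimal sketch of the stockholder (Hirshfeld) partitioning that results from this minimization, assuming a numerical integration grid with precomputed molecular and free-atom densities; all inputs and names below are placeholders for illustration, not quantities from the cited work:

```python
import numpy as np

def hirshfeld_populations(rho_mol, rho_free_atoms, weights):
    """Stockholder (Hirshfeld) populations on a numerical integration grid.

    rho_mol        : molecular electron density at each grid point, shape (npts,)
    rho_free_atoms : free-atom (promolecule reference) densities, shape (natoms, npts)
    weights        : quadrature weights of the grid points, shape (npts,)

    Each atom takes the share w_A(r) = rho_A^0(r) / sum_B rho_B^0(r) of the
    molecular density; integrating rho_mol(r) * w_A(r) gives the AIM population.
    """
    rho_pro = rho_free_atoms.sum(axis=0)                 # promolecule density
    share = rho_free_atoms / np.where(rho_pro > 0.0, rho_pro, 1.0)
    return (share * rho_mol * weights).sum(axis=1)       # one population per atom

# Usage (inputs supplied by the user's own grid/density code):
# populations = hirshfeld_populations(rho_mol, rho_free_atoms, grid_weights)
# charges = nuclear_charges - populations
```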

The entropies per unit time, as well as the thermodynamic entropy production entering into formula (101), can be interpreted in terms of the numbers of paths satisfying different conditions. In this regard, important connections exist between information theory and the second law of thermodynamics. [Pg.121]

It is most remarkable that the entropy production in a nonequilibrium steady state is directly related to the time asymmetry in the dynamical randomness of nonequilibrium fluctuations. The entropy production turns out to be the difference in the amounts of temporal disorder between the backward and forward paths or histories. In nonequilibrium steady states, the temporal disorder of the time reversals is larger than the temporal disorder h of the paths themselves. This is expressed by the principle of temporal ordering, according to which the typical paths are more ordered than their corresponding time reversals in nonequilibrium steady states. This principle is proved with nonequilibrium statistical mechanics and is a corollary of the second law of thermodynamics. Temporal ordering is possible out of equilibrium because of the increase of spatial disorder. There is thus no contradiction with Boltzmann's interpretation of the second law. Whereas Boltzmann's interpretation deals with disorder in space at a fixed time, the principle of temporal ordering is concerned with order or disorder along the time axis, in the sequence of pictures of the nonequilibrium process filmed as a movie. The emphasis on dynamical aspects is a recent trend that finds its roots in Shannon's information theory and modern dynamical systems theory. This can explain why we had to wait until the last decade for these dynamical aspects of the second law to be discovered. [Pg.129]

The recent theoretical approach based on information theory (IT) in studying aqueous solutions and hydration phenomena [62-66] shows such a direction. IT is part of a system based on a probabilistic way of thinking about communication, introduced in 1948 by Shannon and subsequently developed [114]. It consists of the quantitative description of information by defining entropy as a function of probability... [Pg.707]

The maximum entropy method (MEM) is an information-theory-based technique that was first developed in the field of radioastronomy to enhance the information obtained from noisy data (Gull and Daniell 1978). The theory is based on the same equations that are the foundation of statistical thermodynamics. Both the statistical entropy and the information entropy deal with the most probable distribution. In the case of statistical thermodynamics, this is the distribution of the particles over position and momentum space ("phase space"), while in the case of information theory, the distribution of numerical quantities over the ensemble of pixels is considered. [Pg.115]
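A minimal sketch, under the simplifying assumption that the map is non-negative and can be normalized directly, of the information entropy of a density map treated as a distribution over pixels, the quantity MEM maximizes subject to the data constraints:

```python
import numpy as np

def map_entropy(density_map):
    """Information entropy of a non-negative map over its ensemble of pixels.

    The map is normalized to a probability distribution p_i, and
    S = -sum_i p_i ln p_i is returned; MEM seeks the map of maximum S
    that still reproduces the measured data.
    """
    rho = np.clip(np.asarray(density_map, dtype=float).ravel(), 0.0, None)
    p = rho / rho.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

flat = np.ones((64, 64))                           # featureless map
peaked = np.zeros((64, 64)); peaked[32, 32] = 1.0  # all density in one pixel
print(map_entropy(flat), map_entropy(peaked))      # ~8.32 vs 0.0
```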

Information theory [71,72] is a convenient basis for the quantitative characterization of structures. It introduces simple structural indices called information content (total or mean) of any structured system. For such a system having N elements distributed into classes of equivalence N1, N2, ..., Nk, a probability distribution P = {p1, p2, ..., pk} is constructed (pi = Ni/N). The entropy of this distribution, calculated [71] by the Shannon formula... [Pg.42]
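A short sketch of these indices, using an illustrative grouping of atoms by element (the example molecule is not from the cited text):

```python
import math
from collections import Counter

def information_content(class_sizes):
    """Total and mean information content of N elements partitioned into
    equivalence classes of sizes N1, ..., Nk (Shannon formula, pi = Ni/N):
        I_mean  = -sum_i pi * log2(pi)
        I_total = N * I_mean
    """
    N = sum(class_sizes)
    i_mean = -sum((n / N) * math.log2(n / N) for n in class_sizes)
    return N * i_mean, i_mean

# Example: the nine atoms of ethanol, C2H5OH, grouped by element
sizes = list(Counter("CCHHHHHHO").values())      # [2, 6, 1]
total, mean = information_content(sizes)
print(total, mean)                               # bits and bits per element
```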

In this form the 125 letters contain little or no information, but they are very rich in entropy. Such considerations have led to the conclusion that information is a form of energy information has been called negative entropy. In fact, the branch of mathematics called information theory, which is basic to the programming logic of computers, is closely related to thermodynamic theory. Living organisms are highly ordered, nonrandom structures, immensely rich in information and thus entropy-poor. [Pg.25]

The measure S[P(ω)] is called the entropy corresponding to the distribution P(ω). According to information theory, if a certain set of moments of P(ω) is known, that P(ω) is optimum which maximizes S[P(ω)] subject to the moment constraints. Suppose we know only... [Pg.58]
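A hedged numerical sketch of this prescription: for known first and second moments the maximizing distribution has the exponential form P(ω) ∝ exp(−λ1·ω − λ2·ω²), and the multipliers can be found by minimizing the convex dual. The grid, moment values, and optimizer choice below are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Frequency grid standing in for omega, and assumed known first and second moments
omega = np.linspace(-5.0, 5.0, 401)
domega = omega[1] - omega[0]
mu = np.array([0.0, 1.0])              # <omega> = 0, <omega^2> = 1

def dual(lam):
    """Convex dual  log Z(lam) + lam . mu  of the maximum-entropy problem;
    its minimizer gives the multipliers of P(omega) ~ exp(-lam1*w - lam2*w^2)."""
    Z = np.sum(np.exp(-(lam[0] * omega + lam[1] * omega**2))) * domega
    return np.log(Z) + lam @ mu

lam = minimize(dual, x0=np.array([0.0, 0.2]), method="Nelder-Mead").x
p = np.exp(-(lam[0] * omega + lam[1] * omega**2))
p /= np.sum(p) * domega                # maximum-entropy estimate of P(omega)
# With these two moments the optimum is (numerically) the unit Gaussian, lam ~ (0, 0.5).
```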

Maximizing the information entropy of a distribution gives, in some sense, the smoothest distribution consistent with our available information65 on this distribution. We have tested the information theory prediction for two different systems: the Stockmayer and modified Stockmayer simulations of CO. We have already seen that these two systems represent two extreme forms of molecular rotational motion. In the Stockmayer simulation the molecules rotate essentially freely, whereas in the modified Stockmayer simulation there is evidence for strongly hindered rotational motion... [Pg.101]

Unfortunately, there is great scope for confusion, as two distinct techniques include the phrase "maximum entropy" in their names. The first technique, due to Burg,135 uses the autocorrelation coefficients of the time-series signal, and is effectively an alternative means of calculating linear prediction coefficients. It has become known as the maximum-entropy method (MEM). The second technique, which is more directly rooted in information theory, estimates a spectrum with the maximum entropy (i.e. assumes the least about its form) consistent with the measured FID. This second technique has become known as maximum-entropy reconstruction (MaxEnt). The two methods will be discussed only briefly here. Further details can be found in references 24, 99, 136 and 137. Note that Laue et al.136 describe the MaxEnt technique although they refer to it as MEM. [Pg.109]

In this technique, a spectrum is generated to have maximum entropy (in the information theory sense)138 subject to being consistent with the observed FID.136-139 This inverse problem is solved iteratively. At each stage, the inverse Fourier transform of the spectrum is taken as an estimate of the FID and... [Pg.110]
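A toy sketch of this idea (not the algorithm of the cited references): gradient ascent on a penalized objective that trades the entropy of a positive trial spectrum against the misfit between its inverse transform and the measured FID. An orthonormal DCT stands in for the Fourier pair, the entropy form and regularization weight are simplifying assumptions, and the data are synthetic:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

# Synthetic "measured" FID: two decaying cosines plus noise (purely illustrative)
n = 256
t = np.arange(n)
fid_obs = (np.exp(-t / 60) * np.cos(2 * np.pi * 0.11 * t)
           + 0.6 * np.exp(-t / 90) * np.cos(2 * np.pi * 0.21 * t)
           + 0.05 * rng.standard_normal(n))

def maxent_spectrum(fid, lam=20.0, lr=1e-3, n_iter=5000, base=1e-3):
    """Toy MaxEnt reconstruction by gradient ascent on  S(f) - lam * chi2(f).

    f       : positive trial spectrum
    S(f)    : Skilling-type entropy  -sum_i f_i (ln(f_i/base) - 1)
    chi2(f) : squared misfit between the inverse transform of f and the
              observed FID (an orthonormal DCT stands in for the Fourier pair)
    """
    f = np.full(len(fid), base)
    for _ in range(n_iter):
        fid_est = idct(f, norm="ortho")                     # current FID estimate
        grad_chi2 = 2.0 * dct(fid_est - fid, norm="ortho")  # adjoint of the transform
        grad_S = -np.log(f / base)
        f = np.clip(f + lr * (grad_S - lam * grad_chi2), 1e-12, None)
    return f

spectrum = maxent_spectrum(fid_obs)   # smooth, positive, consistent with the FID
```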

Scheme 1.8 The probability scattering in benzene (Hückel theory) for a representative input 2pz orbital and the associated OCT entropy/information descriptors.
R.F. Nalewajski, Entropy descriptors of the chemical bond in information theory. I. Basic concepts and relations, Mol. Phys. 102 (2004) 531. [Pg.46]

R.F. Nalewajski, Many-orbital probabilities and their entropy/information descriptors in orbital communication theory of the chemical bond, J. Math. Chem. 47 (2010) 692. [Pg.48]

Incidentally, Equation (1.15) is also called the Shannon formula for entropy. Claude Shannon was an engineer who developed his definition of entropy, sometimes called information entropy, as a measure of the level of uncertainty of a random variable. Shannon's formula is central to the discipline of information theory. [Pg.13]
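Equation (1.15) itself is not reproduced in the excerpt; the standard Shannon formula it refers to is

```latex
H = -\sum_{i} p_i \log p_i
```

where p_i is the probability of the i-th outcome and the base of the logarithm fixes the unit (base 2 for bits, base e for nats).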

The maximum entropy method (MEM) is based on the philosophy of using a number of trial spectra generated by the computer to fit the observed FID by a least squares criterion. Because noise is present, there may be a number of spectra that provide a reasonably good fit, and the distinction is made within the computer program by looking for the one with the maximum entropy as defined in information theory, which means the one with the minimum information content. This criterion ensures that no extraneous information (e.g., additional spectral... [Pg.74]

P(ak | θl), and to extract the extreme member p(Q | θi). We model the probabilities P(ak | θi) on an information theory basis. We consider a relative or cross-information entropy (Shore and Johnson, 1980),... [Pg.74]

The success of the maximum entropy procedure to predict the shattering of clusters encourages us to use it in more complicated systems, where very little is known about the potential energy surface. In the next section the results from both molecular dynamics simulations and information theory analysis for clusters made up of N2 and O2 molecules are presented. [Pg.67]

The linear response function [3], R(r, r′) = (δρ(r)/δv(r′))_N, is used to study the effect of varying v(r) at constant N. If the system is acted upon by a weak electric field, polarizability (α) may be used as a measure of the corresponding response. A minimum polarizability principle [17] may be stated as: the natural direction of evolution of any system is towards a state of minimum polarizability. Another important principle is that of maximum entropy [18], which states that the most probable distribution is the one associated with the maximum value of the Shannon entropy of information theory. Attempts have been made to provide formal proofs of these principles [19-21]. The application of these concepts and related principles vis-a-vis their validity has been studied in the contexts of molecular vibrations and internal rotations [22], chemical reactions [23], hydrogen bonded complexes [24], electronic excitations [25], ion-atom collision [26], atom-field interaction [27], chaotic ionization [28], conservation of orbital symmetry [29], atomic shell structure [30], solvent effects [31], confined systems [32], electric field effects [33], and toxicity [34]. In the present chapter, we restrict ourselves mostly to the work done by us. For an elegant review which showcases the contributions from active researchers in the field, see [4]. Atomic units are used throughout this chapter unless otherwise specified. [Pg.270]

For the formal theorems and proofs, it is useful if the reader is familiar with elementary information theory (see [Shan48] and [Gall68, Sections 2.2 and 2.3]). The most important notions are briefly repeated in the notation of [Gall68]. It is assumed that a common probability space is given where all the random variables are defined. Capital letters denote random variables and small letters the corresponding values, and terms like P(x) are abbreviations for probabilities like P(X = x). The joint random variable of X and Y is written as X, Y. The entropy of a random variable X is... [Pg.346]

The solution of a protein crystal structure can still be a lengthy process, even when crystals are available, because of the phase problem. In contrast, small molecule (< 100 atoms) structures can be solved routinely by direct methods. In the early fifties it was shown that certain mathematical relationships exist between the phases and the amplitudes of the structure factors if it is assumed that the electron density is positive and atoms are resolved [255]. These mathematical methods have been developed [256,257] so that it is possible to solve a small molecule structure directly from the intensity data [258]. For example, the crystal structure of gramicidin S [259] (a cyclic polypeptide of 10 amino acids, 92 atoms) has been solved using the computer programme MULTAN. Traditional direct methods are not applicable to protein structures, partly because the diffraction data seldom extend to atomic resolution. Recently, a new method derived from information theory and based on the maximum entropy (minimum information) principle has been developed. In the immediate future the application will require an approximate starting phase set. However, the method has the potential for an ab initio structure determination from the measured intensities and a very small sub-set of starting phases, once the formidable problems in providing numerical methods for the solution of the fundamental equations have been solved. [Pg.406]

Since this expression is similar to that found for entropy in statistical mechanics, it is called the entropy of the probability distribution pi. Jaynes [260] shows that the thermodynamic entropy is identical to the information theory entropy except for the presence of the Boltzmann constant in the former, made necessary by our arbitrary temperature scale. [Pg.407]
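In standard notation, the relation described here is

```latex
S_{\text{thermo}} = k_{\mathrm{B}}\, S_{\text{info}} = -\,k_{\mathrm{B}} \sum_i p_i \ln p_i
```

with the Boltzmann constant k_B supplying the conversion from dimensionless information entropy to thermodynamic units.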

The maximum entropy method achieves a remarkable universality and unification based on "common sense reduced to calculation". It has been applied to information theory, statistical mechanics, image processing in radio astronomy, and now to X-ray crystallography. The prospects for a computational solution to the phase problem in protein crystallography appear promising, and developments in the field are awaited eagerly. [Pg.408]


Entropy theory

Entropy, informational

Entropy-based information theory

Information entropy
