
Distributional entropy


The intensive property temperature is supplemented by a complementary extensive property, entropy. In the case of energy in the form of heat, entropy gives the number of degrees of freedom among which the average energy of motion of the material particles involved, characterized by the temperature, is distributed. Entropy and temperature are complementary state variables. [Pg.1944]

One optimum requires a uniformly distributed entropy production rate in a heat exchanger, mixer, or separator. Consider the example of countercurrent and cocurrent heat exchangers shown in Figure 4.4. Temperature profiles show that the driving force ΔT (or 1/ΔT) is more uniformly distributed in the countercurrent than in the cocurrent flow operation. This is the basic thermodynamic reason why a countercurrent operation is better than a cocurrent one. The duty of the exchangers depends on the flow rate and the inlet and outlet temperatures T1 and T2 of the cold streams. The duty is the amount of heat transferred from the hot fluid to the cold... [Pg.192]
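As a rough numerical illustration of this point (not from the source; the linear temperature profiles and all numbers below are assumed for the sketch), the local entropy production rate σ = q(1/Tc − 1/Th) can be computed along both flow arrangements and its uniformity compared:

```python
import numpy as np

# All numbers and the linear temperature profiles below are assumed for
# illustration; real exchanger profiles follow from the energy balance.
n = 50
Th = np.linspace(400.0, 340.0, n)        # hot stream, inlet at x = 0 (K)

Tc_co = np.linspace(300.0, 330.0, n)     # cocurrent: cold inlet also at x = 0
Tc_cc = np.linspace(330.0, 300.0, n)     # countercurrent: cold inlet at x = 1

def sigma(Th, Tc, UA=1.0):
    """Local entropy production rate for a heat flux q = UA * (Th - Tc):
    sigma = q * (1/Tc - 1/Th)."""
    q = UA * (Th - Tc)
    return q * (1.0 / Tc - 1.0 / Th)

for label, Tc in [("cocurrent", Tc_co), ("countercurrent", Tc_cc)]:
    s = sigma(Th, Tc)
    # Coefficient of variation as a crude measure of (non)uniformity.
    print(f"{label:15s} CV of sigma = {s.std() / s.mean():.2f}")
```

The countercurrent arrangement shows a markedly smaller spread in σ along the exchanger, consistent with the text's argument.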

Entropy, enthalpy, and free energy of reversible polymerization; Arrhenius relationship for rate constants; subcritical damped oscillations during thermal polymerization; polyrate of terpolymerization of AMS-AN-Sty; enthalpy of random copolymers; effect of chain sequence distribution; entropy and free energy of copolymerization; copolymer composition with and without ceiling temperature effect. [Pg.285]

Equation (27.4) has appeared before as the translational entropy of a lattice gas (Equation (7.9)) and as the entropy of mixing in a three-dimensional system (Equation (15.2)). Remarkably, this distributional entropy does not depend on whether a system is one-, two-, or three-dimensional. Nor does it depend on the arrangement of the binding sites. They could form a contiguous two-dimensional plane, or each binding site could be found on a different protein or polymer molecule. [Pg.516]
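The snippet does not reproduce Equation (27.4) itself; in the standard lattice treatment it refers to, the distributional entropy of n ligands bound on N equivalent sites has the familiar mixing form (a sketch of the standard result, with x = n/N):

$$\frac{S}{k} = \ln\frac{N!}{n!\,(N-n)!} \;\approx\; -N\left[x\ln x + (1-x)\ln(1-x)\right],$$

which depends only on the occupation probabilities, not on the dimensionality or spatial arrangement of the sites.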

In statistical terms, a perceptual improvement is therefore obtained if the amplitude distribution in the filtered signal (image) is more concentrated around zero than in the raw data (contrast enhancement). A more concentrated amplitude distribution generally means smaller entropy. Thus, from an operator perception point of view, interesting results should be obtained if the raw data can be filtered to yield low entropy amplitude distributions. However, one should note that the entropy can be minimized by means of a (pathological) filter which always outputs zero or another constant value. Thus, appropriate restrictions must be imposed on the filter construction process. [Pg.89]

In the experiments, the probabilities were estimated from the processed signal by means of a histogram. It is well known that the entropy is large for nearly uniform distributions and small for distributions with few peaks. Thus it is an interesting candidate as a performance measure when the goal is to process a signal to become more easily interpreted. [Pg.91]
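A minimal sketch of this histogram-based entropy measure (the moving-average filter and all parameters below are illustrative assumptions, not the filters used in the source):

```python
import numpy as np

def histogram_entropy(signal, edges):
    """Shannon entropy (bits) of a signal's amplitude histogram,
    using a fixed set of bin edges so signals are comparable."""
    counts, _ = np.histogram(signal, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]                     # empty bins contribute 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, 10_000)   # stand-in for noisy raw data

# Illustrative 9-point moving-average filter (not the source's filter):
filtered = np.convolve(raw, np.ones(9) / 9.0, mode="same")

# Common bin edges taken from the raw data, so both histograms are
# measured on the same amplitude scale.
edges = np.histogram_bin_edges(raw, bins=64)

# Filtering concentrates amplitudes around zero, so the entropy drops.
print(histogram_entropy(raw, edges), histogram_entropy(filtered, edges))
```

Note the fixed bin edges: comparing entropies is only meaningful when both amplitude distributions are binned on the same scale, and, as the text warns, a degenerate filter that outputs a constant would trivially minimize this measure.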

For equilibrium systems, thermodynamic entropy is related to the ensemble density distribution ρ as... [Pg.388]
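The relation is truncated in the snippet; it is presumably the standard Gibbs form, sketched here for completeness:

$$S = -k_B \int \rho(\Gamma)\,\ln\rho(\Gamma)\,\mathrm{d}\Gamma,$$

with ρ normalized over the phase space Γ.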

When H has reached its minimum value this is the well-known Maxwell-Boltzmann distribution for a gas in thermal equilibrium with a uniform motion u. So, argues Boltzmann, solutions of his equation for an isolated system approach an equilibrium state, just as real gases seem to do. Up to a negative factor (-kB, in fact), differences in H are the same as differences in the thermodynamic entropy between initial and final equilibrium states. Boltzmann thought that his H-theorem gave a foundation of the increase in entropy as a result of the collision integral, whose derivation was based on the Stosszahlansatz. [Pg.685]
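For reference, the H-functional in its standard textbook form (not reproduced in the snippet) is

$$H(t) = \int f(\mathbf{v},t)\,\ln f(\mathbf{v},t)\,\mathrm{d}^3v, \qquad \frac{\mathrm{d}H}{\mathrm{d}t} \le 0, \qquad S = -k_B H + \text{const},$$

so a monotonically decreasing H corresponds to a monotonically increasing entropy.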

Clearly, G = A + S in this example. The entropy matrix can be obtained from the Maxwell-Boltzmann distribution... [Pg.700]

This can be inserted in equation (C2.2.3) to give the orientational distribution function, and thus into equation (C2.2.6) to determine the orientational order parameters. These are determined self-consistently by variation of the interaction strength in equation (C2.2.7). As pointed out by de Gennes and Prost [20], it is possible to obtain the Maier-Saupe potential from a simple variational, maximum-entropy method based on the lowest-order anisotropic distribution function consistent with a nematic phase. [Pg.2556]
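In outline (a standard sketch, not the source's equations (C2.2.3)-(C2.2.7)), the maximum-entropy orientational distribution constrained only by nematic symmetry and a prescribed order parameter is

$$f(\theta) \propto \exp\!\left[a\,P_2(\cos\theta)\right], \qquad S_2 = \langle P_2(\cos\theta)\rangle,$$

and identifying the Lagrange multiplier a with a mean field proportional to S_2 recovers the Maier-Saupe self-consistency condition.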

Amadei, A., Apol, M. E. F., Di Nola, A., Berendsen, H. J. C.: The quasi-Gaussian entropy theory: free energy calculations based on the potential energy distribution function. J. Chem. Phys. 104 (1996) 1560-1574. [Pg.162]

When q = 1 the extensivity of the entropy can be used to derive the Boltzmann entropy equation S = k ln W in the microcanonical ensemble. When q ≠ 1, it is the odd property that the generalization of the entropy Sq is not extensive that leads to the peculiar form of the probability distribution. The non-extensivity of Sq has led to speculation that Tsallis statistics may be applicable to gravitational systems, where interaction length scales comparable to the system size violate the assumptions underlying Gibbs-Boltzmann statistics. [4]... [Pg.199]

This probability distribution can be found by extremizing the generalization of the entropy Eq. (1) subject to the constraints... [Pg.206]
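The snippet does not reproduce Eq. (1); the Tsallis entropy it refers to is commonly written as

$$S_q = k\,\frac{1-\sum_i p_i^{\,q}}{q-1} \;\xrightarrow{\;q\to 1\;}\; -k\sum_i p_i\ln p_i,$$

and extremizing it subject to normalization and an energy constraint yields a q-exponential distribution, p_i ∝ [1 − (1 − q)βE_i]^{1/(1−q)}, which reduces to the Boltzmann factor as q → 1. (Conventions for the energy constraint, e.g. the use of escort distributions, vary between formulations.)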

The Boltzmann distribution is fundamental to statistical mechanics. The Boltzmann distribution is derived by maximising the entropy of the system (in accordance with the second law of thermodynamics) subject to the constraints on the system. Let us consider a system containing N particles (atoms or molecules) such that the energy levels of the... [Pg.361]
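A minimal numerical sketch of this result (the energy levels and temperature are arbitrary illustrative values): the maximum-entropy distribution subject to fixed mean energy is the familiar exponential form, computed here with a numerically stable shift.

```python
import numpy as np

def boltzmann(E, kT):
    """Boltzmann populations p_i ∝ exp(-E_i / kT), with an energy
    shift for numerical stability (shifts cancel on normalization)."""
    w = np.exp(-(E - E.min()) / kT)
    return w / w.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

E = np.array([0.0, 1.0, 2.0, 4.0])   # hypothetical energy levels
p = boltzmann(E, kT=1.5)

# Any other normalized distribution with the same mean energy p @ E
# has a strictly lower entropy than the Boltzmann distribution.
print("populations:", np.round(p, 4))
print("mean energy:", p @ E, " entropy:", entropy(p))
```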

More fundamental treatments of polymer solubility go back to the lattice theory developed independently and almost simultaneously by Flory (13) and Huggins (14) in 1942. By imagining the solvent molecules and polymer chain segments to be distributed on a lattice, they statistically evaluated the entropy of solution. The enthalpy of solution was characterized by the Flory-Huggins interaction parameter, which is related to solubility parameters by equation 5. For high molecular weight polymers in monomeric solvents, the Flory-Huggins solubility criterion is χ ≤ 0.5. [Pg.435]
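For reference, the lattice-counting result (a standard sketch; the source's own equation numbering is not reproduced here) gives the combinatorial entropy of mixing in terms of the volume fractions φ1, φ2:

$$\Delta S_{\mathrm{mix}} = -k\,(n_1\ln\phi_1 + n_2\ln\phi_2), \qquad \Delta G_{\mathrm{mix}} = kT\,(n_1\ln\phi_1 + n_2\ln\phi_2 + \chi\,n_1\phi_2),$$

where n1 and n2 are the numbers of solvent molecules and polymer chains.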

Viewed over-simplistically, equation (1) might suggest that it would be relatively easy to calculate the retention volume of a solute from the distribution coefficient, which, in turn, could be calculated from a knowledge of the standard enthalpy and standard entropy of distribution. Unfortunately, these properties of a distribution system are bulk properties. They represent, in a single measurement, the net effect of a large number of different types of molecular interactions which, individually, are almost impossible to identify and assess quantitatively. [Pg.49]

It is clear that a graph of ln(V'r) or ln(k') against 1/T will give a straight line. This line provides values for the standard enthalpy (ΔH°), which can be calculated from the slope of the graph, and the standard entropy (ΔS°), which can be calculated from the intercept. These plots are called van't Hoff curves, and their important characteristic is that they always give a linear relationship between ln(V'r) and 1/T. However, it is crucial to understand that the distribution... [Pg.49]
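A brief sketch of extracting ΔH° and ΔS° from such a plot (the retention factors below are invented for illustration, and the phase-ratio term is folded into the intercept, a convention that varies between treatments):

```python
import numpy as np

R = 8.314  # J / (mol K)

# Invented retention factors k' at several column temperatures (K):
T = np.array([298.0, 308.0, 318.0, 328.0])
k_prime = np.array([4.1, 3.2, 2.6, 2.1])

# van't Hoff form: ln k' = -dH/(R T) + dS/R  (phase-ratio term folded
# into the intercept here; conventions differ between treatments).
slope, intercept = np.polyfit(1.0 / T, np.log(k_prime), 1)

dH = -slope * R       # J/mol, from the slope
dS = intercept * R    # J/(mol K), apparent value, from the intercept
print(f"dH ~ {dH / 1000:.1f} kJ/mol, apparent dS ~ {dS:.1f} J/(mol K)")
```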

Summarizing, the greater the forces between the molecules, the greater the energy (enthalpy) contribution, the larger the distribution coefficient, and the greater the retention. Conversely, any reduction in the random nature of the molecules or any increase in the amount of order in the system reduces the distribution coefficient and attenuates the retention. In chromatography, the standard enthalpy and standard entropy oppose one another in their effects on solute retention. Experimentally it has... [Pg.53]

Different portions of the standard free energy of distribution can be allotted to different parts of a molecule and, thus, their contribution to solute retention can be disclosed. In addition, from the relative values of the standard enthalpy and standard entropy of each portion or group, the manner in which the different groups interact with the stationary phase may also be revealed. [Pg.61]

Uncertainty estimates are made for the total CDF by assigning probability distributions to basic events and propagating the distributions through a simplified model. Uncertainties are assumed to be either log-normal or "maximum entropy" distributions. Chi-squared confidence interval tests are applied at the 50% and 95% levels of these distributions. The simplified CDF model includes the dominant cutsets from all five contributing classes of accidents, and is within 97% of the CDF calculated with the full Level 1 model. [Pg.418]


See other pages where Distributional entropy is mentioned: [Pg.248]    [Pg.164]    [Pg.163]    [Pg.658]    [Pg.660]    [Pg.389]    [Pg.392]    [Pg.533]    [Pg.465]    [Pg.649]    [Pg.400]    [Pg.435]    [Pg.102]    [Pg.324]    [Pg.7]    [Pg.48]    [Pg.51]    [Pg.53]    [Pg.54]    [Pg.83]    [Pg.84]    [Pg.133]    [Pg.141]







Distribution of entropy production

Distribution of maximal entropy

Entropy Effects in Phase Distribution: Porous Media

Entropy and distribution of probability

Entropy distribution


Entropy driven distribution

Entropy theory distribution computation

Maximal entropy distribution

Maximal entropy distribution calculation

Maximal entropy distribution determination

Maximum entropy distribution

Probability density distribution function for the maximum information entropy

Probability distribution, entropy-sampling

Spectrum distribution entropy
