Thomas-Fermi model averages


One of the major uses of molecular simulation is to provide useful theoretical interpretation of experimental data. Before the advent of simulation this had to be done by directly comparing experiment with analytical (mathematical) models. The analytical approach has the advantage of simplicity, in that the models are derived from first principles with only a few, if any, adjustable parameters. However, the chemical complexity of biological systems often precludes the direct application of meaningful analytical models or leads to the situation where more than one model can be invoked to explain the same experimental data.  [c.237]

The remaining problem analysis technique can be applied to any feature of the network that can be targeted, such as minimum area. In Chap. 7 the approach to targeting for heat transfer area [Eq. (7.6)] was based on vertical heat transfer from the hot composite curve to the cold composite curve. If heat transfer coefficients do not vary significantly, this model predicts the minimum area requirements adequately for most purposes, and the matches created in the design should then come as close as possible to the conditions that would correspond with vertical transfer between the composite curves. Remaining problem analysis can be used to approach the area target, as closely as a practical design permits, using a minimum (or near-minimum) number of units. Suppose a match is placed; its area requirement can then be calculated. A remaining problem analysis can be carried out by calculating the area target for the stream data, leaving out those parts of the data satisfied by the match. The area of the match is now added to the area target for the remaining problem. Subtraction of the original area target for the whole-stream data, A_network, gives the area penalty incurred.  [c.387]
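The bookkeeping described above is simple enough to sketch in a few lines. The function name and the numbers below are illustrative assumptions, not values from the text:

```python
# Hypothetical illustration of remaining problem analysis for heat exchanger
# network area targeting. Names and figures are assumptions for illustration.

def area_penalty(match_area, remaining_target, original_target):
    """Area penalty incurred by a placed match.

    match_area       -- area of the exchanger placed for the match
    remaining_target -- area target recomputed for the stream data with the
                        duty satisfied by the match left out
    original_target  -- area target for the whole stream data (A_network)
    """
    return (match_area + remaining_target) - original_target

# Example with made-up figures (m^2): a zero penalty would mean the match
# is fully compatible with the vertical-transfer area target.
penalty = area_penalty(match_area=120.0,
                       remaining_target=850.0,
                       original_target=940.0)
print(penalty)  # 30.0
```

A designer would repeat this check for each candidate match and prefer matches with small penalties.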

This paper is structured as follows. In section 2, we recall the statement of the forward problem and review the numerical model which relates the contrast function to the observed data. We then compare measurements performed with the experimental probe against predictions from the model; this comparison serves, firstly, to validate the forward problem. In section 4, the solution of the associated inverse problem is described through a Bayesian approach. We derive, in particular, an appropriate criterion which must be optimized in order to reconstruct simulated flaws. Some results of flaw reconstructions from simulated data are presented; these results confirm the capability of the inversion method. Section 5 concludes with some tasks we plan to address in future work.  [c.327]

Industrial catalysts usually consist of one or more metals supported on a metal oxide. The supported metal can be viewed as discrete single crystals on the support surface. Changes in the catalyst structure can be achieved by varying the amount, or loading, of the metal. An increased loading should result in a particle size increase, and so the relative population of a particular crystal face with respect to other crystal faces may change. If a reaction rate on a per active site basis changes as the metal loading changes, then the reaction is deemed to be structure sensitive. The surface science approach to studying structure-sensitive reactions has been to examine the chemistry that occurs over different crystal orientations. In general, these studies have shown that close-packed, atomically smooth metal surfaces such as fcc(111), fcc(100) and bcc(110) are less reactive than more open, rough surfaces such as fcc(110) and bcc(111). The remaining task is then to relate the structure sensitivity results from single-crystal studies to the activity results over real-world catalysts.  [c.938]

Another example of the difficulty is offered in figure B3.1.5. Here we display on the ordinate, for helium's (1s²) state, the probability of finding an electron whose distance from the He nucleus is 0.13 Å (the peak of the 1s orbital's density) and whose angular coordinate relative to that of the other electron is plotted on the abscissa. The He nucleus is at the origin and the second electron also has a radial coordinate of 0.13 Å. As the relative angular coordinate varies away from 0°, the electrons move apart; near 0°, the electrons approach one another. Since the two electrons have opposite spin in this state, their mutual Coulomb repulsion alone acts to keep them apart.  [c.2160]

In the light of the previously mentioned difficulties, various simplified models of proteins have been suggested [14]. The main rationale for using such drastic simplifications is that a detailed study of such models can enable us to decipher certain general principles that govern the folding of proteins [5, 6 and 7]. For this class of models, detailed computations are possible without sacrificing accuracy. Such an approach has yielded considerable insight into the mechanisms, time scales and pathways in the folding of polypeptide chains. In this section we will outline some of the results that have been obtained (largely from our group) with the aid of simple lattice models of proteins.  [c.2645]

A decomposition of the binding free energy into component terms is useful as a means of gaining insight into the factors that contribute to the stability of the complex, or lack thereof [3]. Two important terms, the protein-ligand energy and the solvation free energy, can be calculated rather easily: the former can be computed with the molecular mechanics force field and averaged over a molecular dynamics simulation, and the latter can be estimated in terms of solvent polarization computed on the basis of a continuum dielectric model plus an empirical hydrophobic free energy which is proportional to the molecular surface. However, some other component terms are very difficult to estimate. Consequently, this approach is most useful when the latter terms can be assumed to be constant, as for a series of related ligands, for which differences in binding free energy can then be estimated from the remaining components. It therefore lends itself to high-volume tasks, such as screening a library of small molecules to find likely inhibitors of an enzyme when the structure of the binding site is known, e.g. [20].  [c.131]
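A minimal sketch of how such a score might be assembled from the easily computed terms, in the spirit of the decomposition above. All energies, the surface-tension coefficient, and the function names are illustrative placeholders, not values or methods from the text:

```python
# Assumed hydrophobic surface-tension coefficient, kcal/(mol*Angstrom^2);
# a typical order of magnitude, chosen here purely for illustration.
GAMMA = 0.005

def binding_score(e_mm_inter, dg_polar_solv, sasa_buried):
    """Approximate binding score from the easily computed components:
    force-field protein-ligand interaction energy (averaged over MD),
    continuum-electrostatics polar solvation term, and a hydrophobic term
    proportional to the buried molecular surface area."""
    return e_mm_inter + dg_polar_solv - GAMMA * sasa_buried

# Rank a hypothetical series of related ligands; the hard-to-estimate
# component terms are assumed constant across the series and so cancel.
ligands = {"A": (-45.0, 12.0, 600.0), "B": (-40.0, 9.0, 550.0)}
ranked = sorted(ligands, key=lambda k: binding_score(*ligands[k]))
print(ranked)  # ['A', 'B'] -- most favourable (lowest score) first
```

For a screening task, only the relative ordering of the scores matters, which is why the constant terms can be dropped.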

We have chosen to study the extraction of the xenon atom from its binding site inside the hydrophobic cavity in mutant T4 lysozyme as a simple system in which to model the ligand extraction process. The internal binding site in this mutant is hydrophobic and excludes water; as a result, an important source of friction in the extraction of a ligand (the simultaneous entry of water molecules) is absent. On the other hand, this system shares with the avidin-biotin system the requirement for a distortion of the geometry at the exit point in order to permit the ligand to escape. With long, but feasible, simulations it may therefore be possible to approach conditions of very slow extraction and hence small friction, in which the extraction force is dominated by the change in free energy (cf. Eq. 4). We describe first the interactive simulations in which we located an exit path for the xenon atom, and then the results of a series of extractions performed at different rates.  [c.141]

The idea of LN is to eliminate LIN's expensive implicit integration component and, concomitantly, reduce the interval length over which the harmonic model is followed. This view is reasonable given that the anharmonic corrections are small when Δt < 5 fs (Fig. 10). Our first implementation of LN was without force splitting [73]. The method was verified in this form for model proteins and shown to yield a speedup factor of around 5 with respect to reference (BBK) explicit trajectories (Δt = 0.5 fs). This is possible because the linearization component is relatively cheap. As found by many others who introduced MTS schemes, the expensive component is the work required for the gradient evaluation of large systems (e.g., more than 80% of the total CPU for systems of 2000 atoms and more). In the context of the LN method, we found that an extrapolative force-splitting approach, together with the Langevin formulation, can alleviate severe resonances [88, 89].  [c.251]

The idea behind this approach is simple. First, we compose the characteristic vector from all the descriptors we can compute. Then, we define the maximum length of the optimal subset, i.e., the input vector we shall actually use during modeling. As mentioned in Section 9.7, there is always some threshold beyond which an increase in the dimensionality of the input vector decreases the predictive power of the model. Note that the correlation coefficient will always improve with an increase in the input vector dimensionality.  [c.218]
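Because the raw correlation coefficient on the fitting data can only improve as descriptors are added, subset selection is usually judged with a penalized criterion instead. A minimal sketch using the standard adjusted R² (our choice of criterion, not necessarily the book's):

```python
def adjusted_r2(r2, n_samples, n_descriptors):
    """Adjusted R^2: penalizes the dimensionality of the input vector,
    so adding a weak descriptor can lower the score even if raw R^2 rises."""
    return 1.0 - (1.0 - r2) * (n_samples - 1) / (n_samples - n_descriptors - 1)

# Adding ten weak descriptors raises raw R^2 slightly (0.80 -> 0.81)
# but lowers the adjusted value:
print(adjusted_r2(0.80, n_samples=50, n_descriptors=5))   # ~0.777
print(adjusted_r2(0.81, n_samples=50, n_descriptors=15))  # ~0.726
```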

An alternative to the point charge model presented above is the so-called multipole expansion, which is based on electric moments or multipoles: charges, dipoles, quadrupoles, octopoles, etc. Based on this principle, MM2, MM3 and MM4 avoid the very time-consuming Coulomb double sum over interacting charges by a dipole-dipole formulation of the electrostatic energy. Here, dipoles oriented along the polar bonds in a neutral molecule are used to calculate the interaction energy. Despite the fact that many charge-charge interactions are replaced by only a few dipole-dipole interactions (which also saves computational time), the latter approach compares well with the Coulomb form. However, if a molecule contains charged groups, these non-zero monopoles must be included in the electrostatics calculation via additional charge-charge and charge-dipole interaction terms.  [c.345]
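The dipole-dipole term that replaces the charge-charge double sum has the classical point-dipole form. A minimal sketch (prefactors, units, and the dielectric constant omitted for clarity; function name is ours):

```python
import numpy as np

def dipole_dipole_energy(mu1, mu2, r1, r2):
    """Interaction energy of two point dipoles mu1, mu2 (vectors) located
    at r1, r2: (mu1.mu2 - 3 (mu1.rhat)(mu2.rhat)) / r^3, prefactors omitted."""
    r = np.asarray(r2, float) - np.asarray(r1, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return (np.dot(mu1, mu2)
            - 3.0 * np.dot(mu1, rhat) * np.dot(mu2, rhat)) / d**3

# Head-to-tail alignment is attractive, side-by-side parallel is repulsive:
head_to_tail = dipole_dipole_energy([1, 0, 0], [1, 0, 0], [0, 0, 0], [2, 0, 0])
side_by_side = dipole_dipole_energy([0, 0, 1], [0, 0, 1], [0, 0, 0], [2, 0, 0])
print(head_to_tail, side_by_side)  # -0.25 0.125
```

In a force field, one such term per pair of polar bonds replaces the four charge-charge terms between the bonds' partial charges.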

The spectra from strong oscillators have special features which differ from those of metallic and dielectric substrates. Different structures in tan Ψ and Δ are observed on a metallic substrate, depending on the thickness of the film (Fig. 4.65). For very thin films up to approximately 100 nm, the Berreman effect is found near the position where n = k and n < 1, with a shift to higher wavenumbers relative to the oscillator frequency. This effect decreases with increasing thickness (d > approx. 100 nm) and is replaced by excitation of a surface wave at the boundary of the dielectric film and the metal. The oscillator frequency (TO mode) can now also be observed. On metallic substrates, for thin films (d < approx. 2 µm), only the z-component of the electric field is relevant. With thin films on a dielectric substrate, the oscillator frequency and the Berreman effect are always observed simultaneously, because in these circumstances all three components of the electric field are possible (Fig. 4.66).  [c.272]

In pattern recognition analysis, each AE hit is represented by a pattern vector composed of representative AE features. The technique quantifies the similarity or dissimilarity of pattern vectors/AE hits and assists the AE analyst in finding statistical correlations between signal groups in a multi-dimensional feature space and establishing the signature of AE, i.e. in discovering a characteristic set of reproducible attributes of AE signals capable of describing the evolution of damage stages and/or correlating with the various failure mechanisms. Such an approach enables the AE analyst to overcome difficulties arising from the human inability to represent and visualise data in more than three dimensions. Furthermore, it does not require previously known signals from each failure mechanism, which for composite materials it is doubtful can be simulated reliably by model specimens.  [c.39]

Various aspects of the experimental approach to the chemisorption bond are illustrated in the preceding sections. Modern spectroscopic and surface diffraction techniques provide a wealth of information about the chemisorbed state. Analysis of LEED intensity data permits the estimation of adsorbate-adsorbent bond lengths [147], usually 5-10% longer than in molecules having a similar bond. Bond lengths may also be obtained from XPD and SEXAFS data (see Table VIII-1) [148]. A bond energy can be obtained from temperature-programmed desorption data if coupled with knowledge of the activation energy for adsorption (Eq. XVIII-21); see Ref. 149 for the case of a heterogeneous surface. The traditional approach to obtaining bond energies is, of course, through isosteric heats of adsorption, although complications are that equilibrium may be difficult to reach and/or the surface may be heterogeneous. Some literature data compiled by Shustorovich, Baetzold, and Muetterties [150] are shown in Table XVIII-2. For hydrogen atom-metal bonds Q averages about 62 kcal/mol, corresponding to about 20 kcal/mol for desorption as H2. Bond energies for CO and NO run somewhat higher. Values can vary depending on the surface preparation and, of course, on the crystal plane involved if the surface is a well-defined one. Older compilations may be found in Refs. 81 and 84, and more recent ones in Somorjai [13].  [c.712]

An increasingly popular approach, because of its computational efficiency, is known as density functional theory (DFT). The method was originally developed for an interacting gas in a weak external field and calculates the effect of the field using the one-particle density, thus separating the effect from that of particle-particle interactions [161]. The expression for the total energy further separates out an electron exchange-correlation functional. The application to chemical bonding, with its high electron density and rapidly varying local fields, would seem to involve irresponsible approximations, yet actually works quite well (see Refs. 148, 162 for examples). One example is the calculation of the interaction of carbon monoxide with functional groups in zeolites [163], and of H2 on various metal surfaces. See Refs. 163a-d and 164 for recent examples of the results of DFT calculations. Generally speaking, the method does better for bond energies and vibrational frequencies than for structure determinations. DFT, somewhat related to the Xa method, currently seems to be more in the  [c.714]

The question stated above was formulated in two ways, each using an exact result from classical mechanics. One way, associated with the physicist Loschmidt, is fairly obvious. If classical mechanics provides a correct description of the gas, then associated with any physical motion of a gas there is a time-reversed motion, which is also a solution of Newton's equations. Therefore, if H decreases in one of these motions, there ought to be a physical motion of the gas where H increases. This is contrary to the H-theorem. The other objection is based on the recurrence theorem of Poincaré [H], and is associated with the mathematician Zermelo. Poincaré's theorem states that in a bounded mechanical system with finite energy, any initial state of the gas will eventually recur as a state of the gas, to within any preassigned accuracy. Thus, if H decreases during part of the motion, it must eventually increase so as to approach, arbitrarily closely, its initial value.  [c.686]

The previous sections indicate that the full quantum dynamical treatment of IVR in an intermediate size molecule, even under conditions of coherent excitation, shows phenomena reminiscent of relaxation and equilibration. This suggests that, in general, at very high excitations in large polyatomic molecules with densities of states easily exceeding the order of 10 (or about 10 molecular states in an energy interval corresponding to 1 J mol⁻¹), a statistical master equation treatment may be possible [38, 122]. Such an approach has been justified by quantum simulations in model systems as well as analytical considerations [38], following early ideas in the derivation of the statistical mechanical Pauli equation [176]. Figure A3.13.14 shows the kinetic behaviour in such model systems. The coarse-grained populations of groups of quantum states (levels with less than 100 states, indexed by capital letters I and J) at the same total energy show very similar behaviour whether calculated from the Schrödinger equation, e.g. equation (A3.13.43), or the Pauli equation (A3.13.71).  [c.1079]

The modelling of the multiple scattering requires input of all atomic positions, so that the trial-and-error approach must be followed: one guesses reasonable models for the surface structure and tests them one by one until satisfactory agreement with experiment is obtained. For simple structures, and in cases where structural information is already known from other sources, this process is usually quite quick: only a few basic models may have to be checked, e.g. adsorption of an atomic layer in hollow, bridge or top sites at positions consistent with reasonable bond lengths. It is then relatively easy to refine the atomic positions within the best-fit model, resulting in a complete structural determination. The refinement is normally performed by some form of automated steepest-descent optimization, which allows many atomic positions to be adjusted simultaneously [H]. Computer codes are also available to accomplish this part of the analysis [25]. The trial-and-error search with refinement may take minutes to hours on current workstations or personal computers.  [c.1770]

Finally, endohedral fullerenes are discussed. They have attracted considerable attention for their potential use as superconductors, organic ferromagnets and magnetic resonance imaging (MRI) agents. The enthusiasm that has arisen is based, in part, on the fact that the carbon network of each fullerene surrounds a large empty space which, in turn, renders it capable of encapsulating atomic particles. Furthermore, these novel materials created the stimulating possibility of fine-tuning the fullerene's physical and chemical properties via systematic substitution of the embedded metal species. In general, two approaches are pursued to incorporate the metal into the fullerene's interior. The first implies the synergetic utilization of the arc discharge method of carbon rods in the presence of metal carbides [198, 199]. Thus, the metal is present during the genesis of the fullerene network and can be scavenged by the closing sphere. In contrast to this approach, the second alternative involves the chemically induced opening of the carbon network, stuffing of the vacant interior with metals and, in the last step, the subsequent re-closing of the open sphere [200]. It should be emphasized that the latter concept is a very challenging endeavour from the standpoint of synthesis. In fact, so far only the first route has led to isolable yields of endohedral fullerenes.  [c.2422]

A different approach comes from the idea, first suggested by Helgaker et al. [77], of approximating the PES at each point by a harmonic model. Integration within an area where this model is appropriate, termed the trust radius, is then trivial. Normal coordinates, Q, are defined by diagonalization of the mass-weighted Hessian (second-derivative) matrix, so if  [c.266]
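The diagonalization step just described can be sketched with NumPy. This is a toy sketch; the function name and the one-dimensional two-atom example are ours, not from the text:

```python
import numpy as np

def normal_coordinates(hessian, masses):
    """Diagonalize the mass-weighted Hessian F = M^(-1/2) H M^(-1/2).

    masses: one mass per Cartesian coordinate (each atom's mass repeated
    three times for x, y, z in a real 3-D system).
    Returns the eigenvalues (squared frequencies) and the eigenvector
    matrix whose columns define the normal coordinates Q."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    f = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)   # mass weighting
    evals, evecs = np.linalg.eigh(f)                  # symmetric eigenproblem
    return evals, evecs

# Toy example: two unit masses coupled by a unit spring in one dimension.
h = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
evals, _ = normal_coordinates(h, np.array([1.0, 1.0]))
print(evals)  # one zero mode (translation) and one vibration with omega^2 = 2
```

In the trust-radius scheme, the dynamics would then be propagated analytically along each Q within the region where the harmonic model holds.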

The first work on generalizing the Gaussian wavepacket methods to account for non-adiabatic effects was made by Sawada and Metiu [33]. They used a wave function described by a single Gaussian function for the nuclear wavepacket in each electronic state, and derived equations of motion for the Gaussian parameters that are similar to the Heller equations, Eqs. (42)-(45), but include terms with the non-adiabatic coupling. This direct Gaussian wavepacket approach has been applied to model systems [216], but the inflexibility of the wave function form makes it unable to obtain more than qualitative information. Recently, the method has been extended to use a harmonic oscillator (Gauss-Hermite) basis set representing the packets on each surface [217], which may add enough flexibility for reasonable results.  [c.294]

Irreversible work might also be discounted by forcing a conformational change in the system followed by the reverse conformational change, i.e., inducing hysteresis. Such an approach may yield a model-free estimate of the irreversible work component from the hysteresis (Baljon and Robbins, 1996; Xu et al., 1996). Finally, lengthening the simulation time decreases the amount of irreversible work, and the simulated process could, ideally, reach quasi-equilibrium in the limit of very long simulation times.  [c.59]

We now wish to establish the general functional form of possible wavefunctions for the two electrons in this pseudo-helium atom. We will do so by considering first the spatial part of the wavefunction. We will show how to derive functional forms for the wavefunction in which the exchange of electrons is independent of the electron labels and does not affect the electron density. The simplest approach is to assume that each wavefunction for the helium atom is the product of the individual one-electron solutions. As we have just seen, this implies that the total energy is equal to the sum of the one-electron orbital energies, which is not correct as it ignores electron-electron repulsion. Nevertheless, it is a useful illustrative model. The wavefunction of the lowest energy state then has each of the two electrons in a 1s orbital  [c.57]
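For concreteness, the product ansatz this paragraph describes has the standard form (our notation, sketched here; not necessarily the book's own equation):

```latex
\Psi(\mathbf{r}_1, \mathbf{r}_2) \approx 1s(\mathbf{r}_1)\, 1s(\mathbf{r}_2),
\qquad
E \approx \varepsilon_{1s} + \varepsilon_{1s} = 2\varepsilon_{1s}
```

with the neglected electron-electron repulsion, $e^2 / (4\pi\varepsilon_0 r_{12})$, responsible for the error in the total energy noted above.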

The main reason for the failure of pairwise potentials is that they are unable to deal simultaneously with both surface and bulk environments. Thus on the surface there are generally fewer bonds, but these tend to be stronger than in the bulk, where there are more, but weaker, bonds. Several many-body potentials have been devised to try to address this problem. Many of these potentials have a similar, sometimes mathematically equivalent, functional form. This reflects their common origins in some form of quantum mechanical description of bonding. However, they differ in their underlying approach, the degree to which they conform to these quantum mechanical origins and the way in which they are parametrised. Here we will outline various models: the Finnis-Sinclair model (and the Sutton-Chen extension), the embedded-atom model, the Stillinger-Weber model and the Tersoff model.  [c.259]
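The common structure of the Finnis-Sinclair and embedded-atom families can be sketched schematically: a pairwise repulsion plus a many-body embedding term that depends on the local "density" each atom feels. The functional forms and parameters below are illustrative assumptions, not a published parametrisation:

```python
import math

def fs_energy(positions, a=2.0, c=1.0):
    """Schematic Finnis-Sinclair / embedded-atom style energy:
    E = sum_pairs (a/r)^12  -  c * sum_i sqrt(rho_i),
    where rho_i = sum_j exp(-r_ij) is atom i's local density."""
    n = len(positions)
    pair, embed = 0.0, 0.0
    for i in range(n):
        rho = 0.0
        for j in range(n):
            if i == j:
                continue
            r = math.dist(positions[i], positions[j])
            if j > i:
                pair += (a / r) ** 12      # pairwise repulsion, counted once
            rho += math.exp(-r)            # density contribution from j
        embed -= c * math.sqrt(rho)        # many-body embedding term
    return pair + embed

# A dimer at separation 2.0: E = 1.0 - 2*exp(-1)
print(fs_energy([(0, 0, 0), (2.0, 0, 0)]))
```

Because of the square root, the cohesive energy per neighbour falls as coordination rises, so the fewer bonds of an undercoordinated surface atom are individually stronger; this is exactly the surface-versus-bulk behaviour that a purely pairwise sum cannot reproduce.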

The RIS model can be combined with the Monte Carlo simulation approach to calculate a wider range of properties than is available from the simple matrix multiplication method. In the RIS Monte Carlo method the statistical weight matrices are used to generate chain conformations with a probability distribution that is implied in their statistical weights.  [c.446]
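A much-simplified sketch of the idea, assuming a made-up three-state (trans/gauche+/gauche-) statistical weight matrix and a purely local conditional probability. (Rigorous RIS sampling also weights each choice by the partition function of the remaining chain; that refinement is omitted here for brevity.)

```python
import random

# Illustrative 3x3 statistical weight matrix U: rows index the state of
# bond i-1, columns the state of bond i (0=trans, 1=gauche+, 2=gauche-).
# The numbers are made up for this sketch.
U = [[1.0, 0.5, 0.5],
     [0.5, 1.0, 0.1],
     [0.5, 0.1, 1.0]]

def sample_chain(n_bonds, rng=random.random):
    """Draw a sequence of rotational states, each conditioned on its
    predecessor with probabilities proportional to the row of U."""
    states, prev = [], 0                    # start from the trans state
    for _ in range(n_bonds):
        weights = U[prev]
        x, cum = rng() * sum(weights), 0.0
        for s, w in enumerate(weights):     # roulette-wheel selection
            cum += w
            if x <= cum:
                break
        states.append(s)
        prev = s
    return states

chain = sample_chain(10)
print(chain)  # e.g. a list of ten states drawn from {0, 1, 2}
```

Each sampled state sequence would then be converted to Cartesian coordinates via the corresponding torsion angles, giving an ensemble over which conformation-dependent properties can be averaged.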

Once a protein model has been constructed, it is important to examine it for flaws. Much of this analysis can be performed automatically using computer programs that examine the structure and report any significant deviations from the norm. A simple test is to generate a Ramachandran map, in order to determine whether the amino acid residues occupy the energetically favourable regions. The conformations of side chains can also be examined to identify any significant deviations from the structures commonly observed in X-ray structures. More sophisticated tests can also be performed. One popular approach is Eisenberg's 3D profiles method [Bowie et al. 1991; Lüthy et al. 1992]. This calculates three properties for each amino acid in the proposed structure: the total surface area of the residue that is buried in the protein, the fraction of the side-chain area that is covered by polar atoms, and the local secondary structure. These three parameters are then used to allocate the residue to one of eighteen environment classes. The buried surface area and fraction covered by polar atoms give six classes (Figure 10.25) for each of the three types of secondary structure (α-helix, β-sheet or coil). Each amino acid is given a score that reflects the compatibility of that amino acid for that environment, based upon a statistical analysis of known protein structures. Specifically, the score for a residue i in an environment is calculated using  [c.559]

Liquid hydrogen fluoride is another fluid of interest due to its strong hydrogen-bonding potential. Experimental data suggest the existence of chain-like structures, each containing between six and eight HF molecules held together by hydrogen bonds. In the liquid these chains adopt a zig-zag conformation and are significantly entangled. In addition, there is the possibility of branched structures forming, but the relative importance of these is a matter of debate. The structure of the liquid is very sensitive to the nature of the potential model. The ab initio molecular dynamics simulations used a density functional approach, and it was necessary to use a gradient-corrected functional in order to describe the system correctly. The simulation contained 54 molecules, with the production phase lasting 0.8 ps [Rothlisberger and Parrinello 1997]. Although the data from the simulation were rather noisy due to the short simulation time, a number of features were apparent. For example, a small degree of branching was observed, with a difference between the likelihood of branching at the hydrogen (1%) and fluorine atoms (6%).  [c.636]

Genetic algorithms can also be used to derive QSAR equations [Rogers and Hopfinger 1994]. The genetic algorithm is supplied with the compounds, their activities and information about their properties and other relevant descriptors. From these data, the genetic algorithm generates a population of linear regression models, each of which is then evaluated to give a fitness score. A new population of models is then derived using the usual genetic algorithm operators (see Section 9.9.1), with the parameters in the models being selected on the basis of the fitness. Unlike other methods, the genetic algorithm approach provides a family of models, from which one can either select the model with the best score or generate an average model.  [c.717]
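A toy sketch of the scheme: each chromosome is a bit mask selecting a descriptor subset, fitness is the (negative) residual error of a least-squares fit on that subset, and standard crossover/mutation operators breed new models. The data sizes, operator choices and parameters are our assumptions, not those of Rogers and Hopfinger:

```python
import random
import numpy as np

def fitness(mask, X, y):
    """Negative residual sum of squares of an OLS fit restricted to the
    descriptor columns selected by the 0/1 mask (higher is better)."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return -np.inf
    A = X[:, cols]
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    return -float(np.sum((y - A @ coef) ** 2))

def evolve(X, y, pop_size=20, n_gen=30, p_mut=0.1, rng=random):
    """Evolve a population of descriptor masks; returns the whole family,
    best model first."""
    n = X.shape[1]
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda m: fitness(m, X, y), reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:             # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = parents + children
    pop.sort(key=lambda m: fitness(m, X, y), reverse=True)
    return pop
```

Returning the final population rather than a single winner mirrors the point made above: the GA yields a family of models, from which one can pick the best or form an average.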


See pages that mention the term Thomas-Fermi model averages : [c.44]    [c.53]    [c.36]    [c.528]    [c.282]    [c.1012]    [c.726]    [c.1021]    [c.2223]    [c.388]    [c.389]    [c.444]    [c.498]    [c.529]    [c.249]    [c.251]    [c.256]    [c.460]    [c.469]    [c.556]    [c.561]    [c.625]    [c.727]   
Molecular modelling Principles and applications (2001) -- [ c.0 ]