Aaberg performance


The critical factor for any method involving an approximation or an extrapolation is its range of application. Liu et al. [15] demonstrated that the approach performed well for mutations involving the creation or deletion of single atoms. The method has also been successfully applied to the prediction of the relative binding affinities of benzene, toluene and o-, p-, and m-xylene to a mutant of T4-lysozyme [16]. In both cases, however, the perturbation to the system was small. To investigate the range over which the extrapolation may  [c.159]

In order to adapt an engine to a fuel of a given octane number, the automobile manufacturer must consider the design and control parameters in order to prevent knocking in all possible operating conditions. The variables at hand are essentially the compression ratio and ignition advance, which in turn determine the motor performance (thermal efficiency and specific horsepower). Horsepower can always be maintained by technological devices such as cylinder displacement and transmission ratios, but the thermal efficiency always remains closely tied to the octane number. This is illustrated by the following example: a 6-point increase in octane number (RON or MON), corresponding to an average difference between a premium gasoline and a regular gasoline, enables a one-point gain in compression ratio (from 9 to 10, for example), which results in an efficiency improvement of 6%. An average 1% efficiency gain per point of octane number increase is thereby obtained. This approach has led to the concept of the Car Efficiency Parameter (CEP). For an engine with a compression ratio exactly adapted to the fuel used, the CEP represents the weight per cent change in consumption resulting from a one-point change in octane number. In the preceding example, the CEP equals 1. That is the value most often used in economic evaluation of the technology. If the manufacturer instead adapts the engine by acting not on the compression ratio but on the ignition advance, the preceding tendency still applies but with a lower CEP, between 0.5 and 1. As a  [c.198]
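The arithmetic behind the CEP can be made explicit with a short sketch; the function name and the linear relation between octane points and consumption are illustrative simplifications of the discussion above:

```python
def consumption_change_pct(delta_octane, cep=1.0):
    """Per cent change in fuel consumption for a given change in octane
    number, using the Car Efficiency Parameter (CEP): the weight per cent
    change in consumption per one-point change in octane number."""
    return cep * delta_octane

# The 6-point example above, with the engine fully adapted (CEP = 1):
print(consumption_change_pct(6))        # 6.0  (the 6% efficiency gain)
# Acting only on the ignition advance (CEP between 0.5 and 1):
print(consumption_change_pct(6, 0.5))   # 3.0
```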

In the future it will be difficult to avoid deterioration of certain characteristics such as viscosity, asphaltene and sediment contents, and cetane number. The users must employ more sophisticated technological means to obtain acceptable performance. Another approach could be to diversify the formulation of heavy fuel according to end use. Certain consuming plants require very high quality fuels while others can accept a lower quality.  [c.241]

A novel approach for suppression of grain noise in ultrasonic signals, based on noncoherent detector statistics and signal entropy, is presented. The performance of the technique is demonstrated using ultrasonic B-scans from samples with coarse material structure.  [c.89]

One approach to a mathematically well-defined performance measure is to interpret the amplitude values of a processed signal as realizations of a stochastic variable x which can take a discrete number of values with probabilities Pn, n = 1, 2, ..., N. As briefly motivated in the introduction, an interesting quality measure is then the entropy H(x) of the amplitude distribution  [c.90]
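This entropy measure can be estimated from a histogram of the signal amplitudes; the sketch below assumes a simple binned estimator (the bin count and normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

def amplitude_entropy(signal, n_bins=64):
    """Shannon entropy H(x) of the amplitude distribution of a signal.
    The amplitudes are histogrammed into n_bins discrete values with
    estimated probabilities P_n, and H = -sum_n P_n log P_n is returned.
    (Bin count and histogram estimator are illustrative assumptions.)"""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

# A noisy signal spreads its amplitudes over many bins (high entropy);
# a well-processed signal concentrates them (low entropy).
rng = np.random.default_rng(0)
noisy = rng.normal(size=10_000)
clean = np.zeros(10_000); clean[5000] = 1.0   # single echo
print(amplitude_entropy(noisy) > amplitude_entropy(clean))  # True
```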

In the systematic approach, the contaminated signal was processed using transients with parameters selected from a uniformly sampled grid in the parameter space. For each parameter value, the quality of the processed signal was computed. An example result is presented in Figure 2, which shows the performance as a function of the two transient parameters. The parameter values which yielded the lowest entropy were selected for processing.  [c.91]
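The grid-based parameter selection can be sketched as follows; the processing step, the parameter names and the quality measure are placeholders, since the paper's transient parameters are not recoverable from this excerpt:

```python
import itertools
import numpy as np

def grid_search(process, signal, f_grid, p_grid, quality):
    """Process the signal for every (f, p) pair on a uniformly sampled
    grid and keep the pair that minimizes the quality measure (in the
    paper, the entropy of the processed signal)."""
    best = None
    for f, p in itertools.product(f_grid, p_grid):
        q = quality(process(signal, f, p))
        if best is None or q < best[0]:
            best = (q, f, p)
    return best[1], best[2]

# Toy example: "processing" scales the signal; the quality measure is
# its variance, minimized at the smallest overall gain.
sig = np.random.default_rng(1).normal(size=100)
f, p = grid_search(lambda s, f, p: (f + p) * s, sig,
                   f_grid=[0.5, 1.0, 2.0], p_grid=[0.0, 1.0],
                   quality=np.var)
print(f, p)  # 0.5 0.0
```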

A novel approach for suppression of material noise in ultrasonic signals, based on noncoherent detector statistics and signal entropy, has been presented. Experimental evaluation of the technique, using ultrasonic images from samples with coarse material structure, has proven its high performance.  [c.95]

This paper is structured as follows. In section 2, we recall the statement of the forward problem and review the numerical model which relates the contrast function to the observed data. We then compare the measurements performed with the experimental probe against predictive data obtained from the model. This comparison is used, firstly, to validate the forward problem. In section 4, the solution of the associated inverse problem is described through a Bayesian approach. We derive, in particular, an appropriate criterion which must be optimized in order to reconstruct simulated flaws. Some results of flaw reconstructions from simulated data are presented. These results confirm the capability of the inversion method. Section 5 concludes with some tasks planned for future work.  [c.327]

The results in Figure 3 illustrate how, by using the regression approach, we can find relevant features in an automatic manner. We can also interpret the extracted features physically. One might expect the log amplitude of the frequencies around 18 MHz to be the most relevant feature, but the results in this example indicate that more weight should instead be given to the frequencies around 10 MHz, at least when the PE measurements and the preprocessing are performed as described above. One interpretation of this result is that since the higher frequencies are attenuated more heavily than the lower ones, much energy (or, loosely speaking, information) is lost in these frequencies, and the lower frequencies therefore bear comparatively more information. However, it is important to note that the extracted weights are closely related to the preprocessing method used and, equally important, to how the measurements are performed.  [c.891]

Within all industries there is a need for a continuous review of the financial performance of the assets a company manages, particularly in relation to cost of operation, and the North Sea is no exception. Oil and gas companies have assessed their methods of operation and business strategies with one broadly common conclusion: to move from operating as all-encompassing organisations with multi-disciplinary services in-house to concentrating on an increasingly focussed core business. Fundamental to this approach was the recognition that many of these non-core activities would be better carried out by specialist external companies.  [c.1011]

We present state-to-state transition probabilities on the ground adiabatic state, where calculations were performed by using the extended BO equation for the N = 3 case and a time-dependent wave-packet approach. We have already discussed this approach for the N = 2 case. Here, we show results at four energies, all of them far below the energy of the conical intersection (CI), that is, E = 3.0 eV.  [c.71]

The classical microscopic description of molecular processes leads to a mathematical model in terms of Hamiltonian differential equations. In principle, the discretization of such systems permits a simulation of the dynamics. However, as will be worked out below in Section 2, both forward and backward numerical analysis restrict such simulations to only short time spans and to comparatively small discretization steps. Fortunately, most questions of chemical relevance just require the computation of averages of physical observables, of stable conformations or of conformational changes. The computation of averages is usually performed on a statistical physics basis. In the subsequent Section 3 we advocate a new computational approach based on the mathematical theory of dynamical systems: we directly solve a  [c.98]

Potentials of Mean Force. The conceptually simplest way to force the formation of a complex is to constrain the distance between the ligand and the binding site, while gradually reducing this distance. Then, in order for the ligand to approach the binding site along a path, s, an external force, Fs, must be applied that balances the net internal force on the ligand. The work, Ws, performed by the external force includes the free energy change and a contribution, Wp, required to overcome friction  [c.134]
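The work performed by the external force is the line integral of Fs along the path; a minimal numeric sketch, assuming sampled forces and positions and a trapezoidal rule (an illustrative discretization, not the authors' protocol):

```python
import numpy as np

def external_work(forces, positions):
    """Work W_s performed by the external force along the path, from
    force samples F_s at path positions s, by the trapezoidal rule.
    In the slow-extraction limit this approaches the free energy change;
    at finite rates it also contains the frictional contribution W_p."""
    forces = np.asarray(forces, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # mean force on each segment times segment length, summed
    return float(np.sum(0.5 * (forces[1:] + forces[:-1]) * np.diff(positions)))

# Constant 2 pN force over 3 nm of path gives ~6 pN*nm of work.
s = np.linspace(0.0, 3.0, 301)
F = np.full_like(s, 2.0)
print(external_work(F, s))  # approximately 6.0
```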

We have chosen to study the extraction of the xenon atom from its binding site inside the hydrophobic cavity in mutant T4 lysozyme as a simple system in which to model the ligand extraction process. The internal binding site in this mutant is hydrophobic and excludes water; as a result, an important source of friction in the extraction of a ligand (the simultaneous entry of water molecules) is absent. On the other hand, this system shares with the avidin-biotin system the requirement for a distortion of the geometry at the exit point in order to permit the ligand to escape. With long, but feasible, simulations it may therefore be possible to approach conditions of very slow extraction and hence small friction, in which the extraction force is dominated by the change in free energy (cf. eq. 4). We describe first the interactive simulations in which we located an exit path for the xenon atom, and then the results of a series of extractions performed at different rates.  [c.141]

The fact that metallocenes employed as homogeneous catalysts are well-defined organometallic species makes them ideally suited for a theoretical molecular modelling study. In the past, studies on both classical Ziegler-Natta catalysts and on metallocenes have generally been performed with semi-empirical quantum mechanical methods or low-level ab initio methods. In more recent years, several groups have reported high-level ab initio calculations on metallocene complexes; see, e.g., [3-5]. In particular, the paper by Ahlrichs et al. [4] showed that this high level was necessary to retrieve qualitatively correct energetic data. All these calculations concerned static molecular structures, i.e. calculation of structures and energies of reactant states, transition states and products. Moreover, only the energetics were usually considered and entropic contributions were not taken into account. These limitations may be overcome by applying a quantum molecular dynamics approach. We will summarize results from quantum molecular dynamics simulations we have performed on ethylene insertion in various metallocenes and related species, which have produced a full record of the chemical reaction including dynamics and bond formation as well as bond breaking phenomena.  [c.434]

From among the many reaction classification schemes, only a few are mentioned here. The first model concentrates initially on the atoms of the reaction center and the next approach looks first at the bonds involved in the reaction center. These are followed by systems that have actually been implemented, and whose performance is demonstrated.  [c.183]

Although the optimized backtracking algorithm offers considerable improvement over the brute-force approach, it still remains a heavy task to search a structural database with more than 50 000 compounds on conventional computers. Today, substructure searching is quite often performed on databases that contain a few hundred thousand or even a million structures. The third strategy (pre-processing the computationally expensive parts of the algorithm) for optimisation of substructure searching allows this algorithm to be applied effectively on very large structural databases. This is done by a process termed "screening".  [c.302]

Identification of mass spectra is typically performed by searching for similarities of the measured spectrum to spectra stored in a library. Several MS databases and corresponding software products are used routinely [81, 82]. A more challenging problem is the interpretation of mass spectra by means of computational chemistry, and different strategies have been applied to the problem of substructure recognition. The use of correlation tables containing characteristic spectral data together with corresponding substructures has been successfully applied to spectroscopic methods. However, because of the complexity of chemical reactions that may occur in mass spectrometers and because of the lack of generally applicable rules, this approach has been less successful with MS.  [c.534]

Gasteiger and co-workers followed an approach based on models of the reactions taking place in the spectrometer [94, 95]. Automatic knowledge extraction is performed from a database of spectra and the corresponding structures, and rules are saved. The rules concern possible elementary reactions, and models relating the probability of these reactions to physicochemical parameters calculated for the structures. The knowledge can then be applied to chemical structures in order to predict a) the reactions that occur in the spectrometer, b) the resulting products, and c) the corresponding peaks in the mass spectrum.  [c.535]

In some cases the atomic charges are chosen to reproduce thermodynamic properties calculated using a molecular dynamics or Monte Carlo simulation. A series of simulations is performed and the charge model is modified until satisfactory agreement with experiment is obtained. This approach can be quite powerful despite its apparent simplicity, but it is only really practical for small molecules or simple models.  [c.207]
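The iterate-and-adjust procedure described above can be sketched as follows; the scalar charge parameter, the damped update rule, and the toy "simulation" are illustrative stand-ins for a full MD or Monte Carlo run against a per-atom charge model:

```python
def fit_charges(simulate, target, q0, step=0.05, tol=1e-3, max_iter=100):
    """Iteratively refine a charge parameter: run a simulation with the
    current charges, compare a computed thermodynamic property with the
    experimental target, and adjust until satisfactory agreement."""
    q = q0
    for _ in range(max_iter):
        err = simulate(q) - target   # deviation from experiment
        if abs(err) < tol:
            break
        q -= step * err              # simple damped correction
    return q

# Toy "simulation": the property depends linearly on the charge scale,
# so the loop converges to the value where simulate(q) == target.
q = fit_charges(lambda q: 2.0 * q, target=1.0, q0=0.0, step=0.3)
print(round(q, 3))  # 0.5
```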

Fixed functional dependence. An alternative to a series expansion is to assume a particular functional dependence of the free energy on λ. Jayaram and Beveridge [8], for example, derived an expression for the free energy of the system assuming that the fluctuations of the potential energy obeyed a Gaussian distribution. This approach performed well when used to estimate the excess free energy of water but performed less well for hydration free energies of simple compounds. Similar expressions have recently been proposed by Amadei et al. [9]. A special case of this class of method is Linear Response theory. The basic premise in Linear Response theory is that the response of the environment to any given perturbation is linear, the classic example being the response of a system of constant dielectric to the introduction of a charge. The Linear Response assumption is equivalent to assuming that the fluctuations in the energy of interaction between a molecule and its surroundings are Gaussian distributed. If true, the difference in free energy is determined by the first and second derivatives of the free energy with respect to the change. The approach is, therefore, the same as a Taylor expansion truncated after the second term assuming linear coupling between the initial and final states [10, 4, 11]. In the Linear Response limit the difference in free energy between two states may also be expressed as  [c.152]
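The expression cut off at the end of the excerpt is not recoverable here, but the standard Linear Response result it leads up to is usually written as

```latex
\Delta A \;\approx\; \tfrac{1}{2}\left( \langle \Delta U \rangle_0 + \langle \Delta U \rangle_1 \right)
```

where the two averages are of the energy difference between the states, sampled in the initial and final ensembles respectively; this is the textbook form, supplied here for context rather than recovered from the truncated source.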

Under normal conditions only combinations of dienes and dienophiles that have FMOs of similar energy can be transformed into a Diels-Alder adduct. When the gap between the FMOs is large, forcing conditions are required, and undesired side reactions and retro-Diels-Alder reactions can easily take over. These cases challenge the creativity of the organic chemist and have led to the invention of a number of methods for promoting reluctant Diels-Alder reactions under mild conditions. One very general approach, performing Diels-Alder reactions under high pressure, makes use of the large negative volume of activation (about -25 to -45 cm³ per mole) characteristic of this reaction. The rate enhancements are modest, typically on the order of a factor of 10 at a pressure of 1500 atm. Selectivities also benefit from an increase in pressure. Another physical method uses ultrasound irradiation. However, the observed accelerations are invariably a result of indirect effects such as the development of low concentrations of catalytically active species and more efficient mixing of the heterogeneous reaction mixtures under ultrasound conditions. Catalysis of Diels-Alder reactions through formation of supramolecular assemblies is becoming increasingly popular. Large molecules containing a cavity (e.g. cyclodextrins or related  [c.11]

Direct three-dimensional (or volumetric) imaging has been performed e.g. by Sire et al. [7]. In their work the whole specimen is insonified by a cone beam and reconstructed directly. In the present work the three-dimensional information was obtained by constructing two-dimensional reflection tomograms and stacking these in multiple contiguous planes in the third dimension, as indicated in Fig. 4. This approach needs less processing time and data storage than direct reconstruction. Therefore, the stacking technique has been adopted for this NDE study. Once the data are mapped into a volumetric matrix composed of cubic voxels, the volume can be numerically dissected in any plane. Fig. 4 also shows the six discontinuity types, i.e., (a)-(f), in increasing axial distance from the edge of the 50 mm long cylinder.  [c.204]
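The stacking step can be sketched with a few lines of array code; the axis convention and the toy slices are illustrative (the study's tomograms are, of course, measured data):

```python
import numpy as np

def stack_tomograms(slices):
    """Build a volumetric matrix of cubic voxels by stacking
    two-dimensional reflection tomograms in contiguous planes along the
    third (axial) dimension. Once stacked, the volume can be numerically
    dissected in any plane by ordinary array slicing."""
    return np.stack(slices, axis=0)   # axis 0 = axial stacking direction

# Three 4x4 tomogram slices give a 3x4x4 voxel volume; an axial cut is
# just a different slice of the same array.
slices = [np.full((4, 4), i, dtype=float) for i in range(3)]
vol = stack_tomograms(slices)
print(vol.shape)          # (3, 4, 4)
print(vol[:, 2, 2])       # [0. 1. 2.]  -- a line along the axial direction
```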

An experimental activity on the stress measurement of a pressure vessel using the SPATE technique was carried out. It was demonstrated that this approach makes it possible to define the distribution of stress level on the vessel surface with quite good accuracy. The most significant advantage of using this technique rather than others is that it provides a true fine map of stresses in a short time, even if a preliminary meticulous calibration of the equipment has to be performed.  [c.413]

Abstract. At the Institute fuer Theoretische Nachrichtentechnik und Informationsverarbeitung at the University of Hannover, investigations were carried out in cooperation with the Institute of Nuclear Engineering and Non-Destructive Testing concerning 3D analysis of internal defects using stereoradioscopy based on camera modelling. A camera calibration approach is used to determine the 3D position and volume of internal defects using only two different X-ray images recorded from arbitrary directions. The volume of defects is calculated using intensity evaluation considering polychromatic radiation of microfocus X-ray tubes. The system performance was determined using test samples with different types of internal defects. Using magnifications between 1.1 and 1.4, the system achieves an accuracy of 0.5 mm in calculating the 3D positions of defects when samples are rotated only 10° between the two views, and an accuracy of 0.3 mm using 25° rotation. During calibration the distortion inherent in the image detector system is reduced from a maximum of 3.8 mm to less than 0.1 mm (0.3 pixel). The defect volumes are calculated with an overall accuracy of 10%. Additional results will be presented using the system to analyse casting defects.  [c.484]

The modelling of the multiple scattering requires input of all atomic positions, so the trial-and-error approach must be followed: one guesses reasonable models for the surface structure and tests them one by one until satisfactory agreement with experiment is obtained. For simple structures, and in cases where structural information is already known from other sources, this process is usually quite quick: only a few basic models may have to be checked, e.g. adsorption of an atomic layer in hollow, bridge or top sites at positions consistent with reasonable bond lengths. It is then relatively easy to refine the atomic positions within the best-fit model, resulting in a complete structural determination. The refinement is normally performed by some form of automated steepest-descent optimization, which allows many atomic positions to be adjusted simultaneously [H]. Computer codes are also available to accomplish this part of the analysis [25]. The trial-and-error search with refinement may take minutes to hours on current workstations or personal computers.  [c.1770]

Using a similar approach, but with some changes in the details of producing the continuum seed light that were intended to produce as broad a signal bandwidth as possible, De Silvestri and co-workers [34] subsequently showed that signal pulses as short as 7.2 fs in the visible regime could be produced. In contemporaneous work, Kobayashi and co-workers [33] obtained sub-10 fs pulses from the visible signal and near-IR idler of a comparable apparatus. These latter two results have nearly matched the legendary performance of the pulse-compressed CPM dye laser of Shank and co-workers [15].  [c.1972]

By using this approach, it is possible to calculate vibrational state-selected cross-sections from minimal END trajectories obtained with a classical description of the nuclei. We have studied vibrationally excited H2(v) molecules produced in collisions with 30-eV protons [42,43]. The relevant experiments were performed by Toennies et al. [46], with comparisons to theoretical studies using the trajectory surface hopping model (TSHM) [11,47]. This system has also stimulated a quantum mechanical study [48] using diatomics-in-molecule (DIM) surfaces [49] and invoking the infinite-order sudden approximation (IOSA).  [c.241]

A major motivation for the study of conical intersections is their assumed importance in the dynamics of photoexcited molecules. Molecular dynamics methods are often used for this purpose, based on available potential energy surfaces [118-121]. We briefly survey some methods designed to deal with relatively large molecules (>5 atoms). Several authors combine the potential energy surface calculations with dynamic simulations. A relatively straightforward approach is illustrated by the work of Ohmine and co-workers [6,122]. Ab initio calculations of the ground and excited potential surfaces of polyatomic molecules (ethylene and butadiene) were performed. Several specific nuclear motions were chosen to inspect their importance in inducing curve crossing. These included torsion around C=C and C-C bonds, bending, stretching and hydrogen-atom migration. The ab initio potentials were parametrized into an analytic form in order to solve the dynamic equations of motion. In this way, Ohmine was able to show that hydrogen migration is important in the radiationless decay of ethylene.  [c.385]

This paper presents the theoretical background and some practical applications of a new conformational free energy simulation approach, aimed at correcting the above shortcomings. The new method, called Conformational Free energy Thermodynamic Integration (CFTI), is based on the observation that it is possible to calculate the conformational free energy gradient with respect to an arbitrary number of conformational coordinates from a single simulation with all coordinates in the set kept fixed [2, 8]. The availability of the conformational gradient makes possible novel techniques of multidimensional conformational free energy surface exploration, including locating free energy minima by free energy optimization and analysis of structural stability based on second derivatives of the free energy. Additionally, by performing simulations with all "soft" degrees of freedom of the system kept fixed, free energy averages converge very quickly, effectively overcoming the conformational sampling problem.  [c.164]

The CFTI method is highly efficient, has improved convergence properties and enables new ways of exploring energy landscapes of flexible molecules. The efficiency is due to the fact that calculation of the free energy gradient with respect to an arbitrary number of coordinates may be performed at essentially the same cost as a standard one-dimensional TI simulation under the same conditions [2]. This is because the most expensive terms to evaluate, ∂U/∂λk, may be expressed in terms of simple algebraic transformations of the Cartesian gradient ∂U/∂qj, which is known at each step of a simulation [2, 8]. A single simulation yields derivatives of free energy with respect to all conformational degrees of interest, yielding a complete local characterization of conformational space, not just the derivative along a one-dimensional reaction path [2, 8]. This enables the determination of stability of structures with respect to perturbations, location of minima on the free energy surface, and finding minimum free energy paths connecting different states. The accelerated convergence may be achieved by selecting all soft degrees of freedom as the fixed coordinates. In the case of peptides these would be the backbone φ, ψ, and some of the sidechain dihedrals [8, 9, 10]. The sampling of the restricted conformational space of remaining hard degrees of freedom and solvent is very fast: simulations of 20-50 ps were sufficient to obtain precise gradient values in the studied cases. Simulations of similar length are sometimes used in the standard approach to free energy profiles, where only the reaction coordinate is constrained. However, in these methods, because of the size of the available conformational space, the convergence of thermodynamic averages is often assumed rather than actually achieved.  [c.166]

Initial results of the CFTI method, a new conformational free energy simulation approach, are presented. The main idea of the method is the generalization of standard thermodynamic integration from one to many dimensions. By performing a single simulation with a set of conformational coordinates held fixed, free energy derivatives with respect to all coordinates in the set may be obtained. The availability of the conformational free energy gradient opens the door to new ways of exploring free energy surfaces of flexible molecules.  [c.173]

Screening systems normally use a predefined set of structural fragments called keys. For each key a preliminary substructure search (in a pre-processing phase) is performed across the whole structural database. For each database compound a string of bits is constructed. Each bit of this string denotes the presence or absence of a key in the corresponding database compound. The kth bit in the bit-string is set to 1 if the kth key fragment is a substructure of the current database compound; otherwise the kth bit is set to 0. In the same way, during the substructure search a bit-string of the query structure is constructed. This step is usually very fast since only a few hundred isomorphism checks are performed for a set of quite simple structural fragments. Further, the query bit-string is compared with each of the bit-strings from the database. The target compounds are screened as follows: each key which is present in the query structure must be present in the target structure (the corresponding bits are compared by using the fast bitwise operations AND, OR, and XOR). If at least one key that is present in the query graph is not present in the target graph, then this compound is pruned from further processing. In this way a great many structures which are not likely to survive the isomorphism check are pruned early in the screening stage, thus escaping the much more time-consuming backtracking algorithm. This results in a much smaller set of structures being checked for structure isomorphism by means of the backtracking algorithm. For example, in a typical screening session more than 90% of the database compounds which do not contain the query substructure are removed. Inasmuch as the time dependence of the screening procedure is linear, it decreases the time needed for substructure searching considerably (for example, this approach is 10 to 20 times faster than without the use of screening). 
Additionally, the screening procedure itself is very fast since it involves only a few isomorphism checks and is performed with the very fast bitwise operations.  [c.302]
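Per compound, the key-based screen described above reduces to a single bitwise test; the sketch below assumes the bit-strings fit in machine integers and uses invented key assignments for illustration:

```python
def passes_screen(query_bits: int, target_bits: int) -> bool:
    """Every key present in the query must also be present in the target:
    with the bit-strings held as integers, the compound survives the
    screen only if no query bit is set where the target bit is clear."""
    return (query_bits & ~target_bits) == 0

# Illustrative keys: bit 0 = benzene ring, bit 1 = carbonyl, bit 2 = nitro.
query    = 0b011   # query contains ring + carbonyl
target_a = 0b111   # database compound with all three keys
target_b = 0b101   # ring + nitro, but no carbonyl
print(passes_screen(query, target_a))  # True  -> goes on to isomorphism check
print(passes_screen(query, target_b))  # False -> pruned before backtracking
```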

Gasteiger and co-authors [71] implemented the following approach for full spectra simulation. The previously mentioned 3D MoRSE descriptor (Section 8.4.3), derived from an equation used in electron diffraction studies, allows the representation of the 3D structure of a molecule by a fixed (constant) number of variables. By using a fast 3D structure generator they were able to study the correlation between any three-dimensional structure and IR spectra using ANN. Steinhauer et al. [72] used RDF codes as structure descriptors. Together with the IR spectrum, a counterpropagation (CPG) neural network was trained to establish the complex relationship between an RDF descriptor and an IR spectrum. After training, the simulation of an IR spectrum is performed using the RDF descriptor of the query compound as the information vector of the Kohonen layer, which determines the central neuron. On input of this query RDF descriptor, the central neuron is selected and the corresponding IR spectrum in the output layer is presented as the simulated spectrum. Selzer et al. [73] described an application of this spectrum simulation method that provides rapid access to arbitrary reference spectra. Kostka et al. described a combined application of spectrum prediction and reaction prediction expert systems [74]. The combination of the reaction prediction system EROS and IR spectrum simulation proved to be a powerful tool for computer-assisted substance identification. More details and the description of a web tool implementing this method can be found in the "Tools" Section 10.2.5.3.  [c.530]

Neural networks have been applied to IR spectrum interpretation systems in many variations and applications. Anand [108] introduced a neural network approach to analyze the presence of amino acids in protein molecules with a reliability of nearly 90%. Robb and Munk [109] used a linear neural network model for interpreting IR spectra for routine analysis purposes, with similar performance. Ehrentreich et al. [110] used a counterpropagation network based on a strategy of Novic and Zupan [111] to model the correlation of structures and IR spectra. Penchev and co-workers [112] compared three types of spectral features derived from IR peak tables for their ability to be used in automatic classification of IR spectra.  [c.536]

Then, in 1960, Corey introduced a general methodology for planning organic syntheses. Corey's synthon concept [2.1-25] was a downright change in the perception of an organic synthesis. The synthesis plan for a target molecule is developed by starting with the target structure (the product of the synthesis) and working backwards to available starting materials. The retrosynthetic analysis or disconnection of the target molecule in the reverse direction is performed by the systematic use of analytical rules which have been formulated by Corey. For the example of tropinone, this is shown in Figure 10.3-29. Corey's approach is nowadays widely accepted as the disconnection approach and is taught in a number of textbooks (e.g., Ref. [26]).  [c.569]

The Japanese program system AlPHOS is developed by Funatsu's group at Toyohashi Institute of Technology [40]. AlPHOS is an interactive system which performs the retrosynthetic analysis in a stepwise manner, determining at each step the synthesis precursors from the molecules of the preceding step. AlPHOS tries to combine the merits of a knowledge-based approach with those of a logic-centered approach.  [c.576]

In our treatment of molecular systems we first show how to determine the energy for a given wavefunction, and then demonstrate how to calculate the wavefunction for a specific nuclear geometry. In the most popular kind of quantum mechanical calculations performed on molecules, each molecular spin orbital is expressed as a linear combination of atomic orbitals (the LCAO approach). Thus each molecular orbital can be written as a summation of the following form  [c.61]
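The summation cut off at the end of the excerpt is the standard LCAO expansion, which in its conventional textbook form reads

```latex
\psi_i \;=\; \sum_{\mu=1}^{K} c_{\mu i}\, \phi_\mu
```

where the φμ are the K basis (atomic) orbitals and the cμi are the expansion coefficients; the form is supplied here for context rather than recovered from the truncated source.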

This procedure requires an initial guess of the density matrix, P. The simplest approach is to use the null matrix, which corresponds to ignoring all the electron-electron terms so that the electrons just experience the bare nuclei. This can sometimes lead to convergence problems, which may be prevented if a lower level of theory (such as semi-empirical or extended Hückel) is used to provide the initial guess. Moreover, a better guess may enable the calculation to be performed more quickly. A variety of criteria can be used to establish whether the calculation has converged or not. For example, the density matrix can be compared with that from the previous iteration, and/or the change in energy can be monitored together with the basis set coefficients.  [c.81]
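The convergence criteria mentioned above can be sketched as a simple test on successive SCF cycles; the tolerances and the matrix norm below are illustrative choices, not taken from the text:

```python
import numpy as np

def scf_converged(P_new, P_old, E_new, E_old, p_tol=1e-6, e_tol=1e-8):
    """Compare the density matrix with that of the previous iteration
    and monitor the change in energy; declare convergence when both
    changes fall below their tolerances."""
    dP = np.max(np.abs(P_new - P_old))   # largest density-matrix change
    dE = abs(E_new - E_old)              # energy change between cycles
    return bool(dP < p_tol and dE < e_tol)

P0 = np.zeros((2, 2))                    # null-matrix initial guess
P1 = np.array([[1.0, 0.1], [0.1, 1.0]])
print(scf_converged(P1, P0, -1.0, 0.0))   # False: first cycle, large change
print(scf_converged(P1, P1, -1.0, -1.0))  # True: self-consistency reached
```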

We have now considered the key features of the ab initio approach to quantum mechanical calculations and so, as an antidote to the rather theoretical nature of the chapter so far, it is appropriate to consider how the method might be used in practice. Quantum mechanics can be used to calculate a wide range of properties. In addition to thermodynamic and structural values, quantum mechanics can be used to derive properties dependent upon the electronic distribution. Such properties often cannot be determined by any other method. In this section we shall provide a flavour of the ways in which quantum mechanics is used in molecular modelling. Other applications, such as the location of transition structures and the use of quantum mechanics in deriving force field parameters, will be discussed in later chapters. Many different computer programs are now available for performing ab initio calculations; probably the best known of these is the Gaussian series of programs, which originated in the laboratory of John Pople, who has made numerous contributions to the field, recognised by the award of the Nobel Prize in 1998.  [c.94]

In Chapter 2 we worked through the two most commonly used quantum mechanical models for performing calculations on ground-state organic-like molecules, the ab initio and semi-empirical approaches. We also considered some of the properties that can be calculated using these techniques. In this chapter we will consider various advanced features of the ab initio approach and also examine the use of density functional methods. Finally, we will examine the important topic of how quantum mechanics can be used to study the solid state.  [c.128]

An alternative approach is exemplified by the MM2/MM3/MM4 family of programs. First, a molecular orbital calculation is performed on the π system. If the initial conformation of the system is non-planar, the calculation is performed on the equivalent planar system. The force field parameters are then modified according to the quantum mechanical bond orders. In MMP2 (the name given to the special version of MM2 which incorporated these features) these parameters are the force constants for the bonds in the π system, the reference bond lengths and the torsional barriers [Sprague et al. 1987; Allinger and Sprague 1973]. The system is then subjected to the usual molecular mechanics treatment using the new force field parameters. A linear relationship between the stretching constants and the bond orders, and between the reference bond lengths and the bond orders, was found to give good results. Initially, the torsional barriers were assumed to be proportional to the square of the bond orders, but this relationship was modified slightly in subsequent versions  [c.251]


See pages that mention the term Aaberg performance : [c.166]    [c.53]    [c.160]    [c.170]    [c.298]    [c.530]    [c.131]    [c.138]    [c.251]   
Industrial ventilation design guidebook (2001) -- [ c.816 , c.817 ]