Aaberg defined


The unrestricted approach defines two different sets of spatial molecular orbitals—those that hold electrons with spin up and those that hold electrons with spin down.  [c.228]

The Newton-Raphson approach, being essentially a point-slope method, converges most rapidly for near-linear objective functions. Thus it is helpful to note that the equilibrium ratio K tends to vary as 1/P and as exp(1/T). For bubble-point-temperature calculation, we can define an objective function  [c.118]
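
As an illustration of this idea, here is a minimal Newton-Raphson sketch for a bubble-point temperature, assuming a simple objective function F(T) = sum_i x_i*K_i(T) - 1 and a user-supplied K-value model; both are placeholders, not the particular objective function of the text.

```python
import numpy as np

def bubble_point_T(x, K_of_T, T0, tol=1e-8, max_iter=50):
    """Newton-Raphson solution of F(T) = sum_i x_i*K_i(T) - 1 = 0.

    x      : liquid mole fractions (array)
    K_of_T : callable returning the vector of K-values at temperature T
             (any thermodynamic model; a hypothetical placeholder here)
    T0     : initial temperature guess
    """
    T = T0
    for _ in range(max_iter):
        F = np.dot(x, K_of_T(T)) - 1.0
        h = 1e-4 * T                                   # finite-difference slope
        dF = (np.dot(x, K_of_T(T + h)) - 1.0 - F) / h
        T_new = T - F / dF                             # Newton-Raphson step
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T
```

Because K varies roughly as exp(1/T), iterating on 1/T (or on a logarithmic form of the objective) makes F closer to linear and speeds convergence; the sketch keeps T as the variable only for simplicity.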

The wave function Ψ100 (n = 1, l = 0, m = 0) corresponds to a spherical electronic distribution around the nucleus and is an example of an s orbital. Solutions of other wave functions may be described in terms of p and d orbitals. atomic radii Half the closest distance of approach of atoms in the structure of the elements. This is easily defined for regular structures, e.g. close-packed metals, but is less easy to define in elements with irregular structures, e.g. As. The values may differ between allotropes (e.g. C-C 1.54 Å in diamond and 1.42 Å in the planes of graphite). Atomic radii are very different from ionic and covalent radii.  [c.45]

The type of development, type and number of development wells, recovery factor and production profile are all inter-linked. Their dependency may be estimated using the above approach, but the problem lends itself to the techniques of reservoir simulation introduced in Section 8.4. There is never an obvious single development plan for a field, and the optimum plan also involves the cost of the surface facilities required. The decision as to which development plan is the best is usually based on the economic criterion of profitability. Figure 9.1 represents a series of calculations aimed at determining the optimum development plan (the one with the highest net present value, as defined in Section 13).  [c.214]
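
Ranking candidate plans by net present value can be sketched as follows; the discount rate and the cash-flow profiles are hypothetical placeholders, not data from the text.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical profiles (negative capex up front, production revenue later) for two plans.
plans = {
    "few_large_wells":  [-500, 120, 180, 170, 150, 120, 90],
    "many_small_wells": [-650, 200, 220, 190, 140, 90, 50],
}
values = {name: npv(cf, 0.10) for name, cf in plans.items()}
best = max(values, key=values.get)     # the "optimum development plan" in this toy sense
print(best, values)
```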

Firm items such as pipelines are often estimated using charts of cost versus size and length. The total of such items and allowances may form a preliminary project estimate. In addition to allowances, some contingency is often made for expected but undefined changes, for example to cover design and construction changes within the project scope. The objective of such an approach is to define an estimate that has as much chance of under-running as over-running (sometimes termed a 50/50 estimate).  [c.299]
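
The 50/50 idea can be illustrated with a simple Monte Carlo sum of item costs: the estimate is the 50th percentile of the simulated total, so under-runs and over-runs are equally likely. The cost items and the uncertainty model below are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost items: (most likely cost, relative uncertainty), arbitrary money units.
items = {"pipeline": (40.0, 0.15), "platform": (120.0, 0.25), "wells": (80.0, 0.30)}

n = 100_000
total = np.zeros(n)
for likely, rel in items.values():
    # crude lognormal spread around the most likely value
    total += rng.lognormal(mean=np.log(likely), sigma=rel, size=n)

p50 = np.percentile(total, 50)            # the 50/50 estimate
p10, p90 = np.percentile(total, [10, 90])
print(f"P50 = {p50:.0f}, P10-P90 = {p10:.0f}-{p90:.0f}")
```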

One approach to a mathematically well defined performance measure is to interpret the amplitude values of a processed signal as realizations of a stochastic variable x which can take a discrete number of values with probabilities Pn, n = 1, 2, ..., N. As briefly motivated in the introduction, an interesting quality measure is then the entropy H(x) of the amplitude distribution.  [c.90]
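
A minimal sketch of such an entropy measure, assuming the probabilities P_n are estimated by binning the signal amplitudes into discrete levels (the bin count and the base of the logarithm are arbitrary choices here):

```python
import numpy as np

def amplitude_entropy(signal, n_bins=64):
    """Shannon entropy of the amplitude distribution of a processed signal.

    The amplitudes are binned into n_bins levels; the normalized bin counts
    play the role of the probabilities P_n."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                        # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))      # entropy in bits

# Example: broadband noise has a higher amplitude entropy than a sparse, well-focused signal.
rng = np.random.default_rng(1)
print(amplitude_entropy(rng.normal(size=4096)))
```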

The input signal of the sensor-material system in materials testing is given by the surroundings of the sensor, i.e. the air, the material and its flaws. A material edge of sufficient thickness can be considered as a step function s(x), defined to a first approximation as a step from 0 (material) to 1 (air). The sensor then scans the material edge and measures the step response h(x) point by point. The derivative of h(x) yields the impulse response g(x). Chapters 4.1 and 4.2 show measured step responses of different magnetic field sensors, the calculated impulse responses, and their use for comparisons.  [c.366]
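
A short sketch of that last step, differentiating a sampled step response to obtain the impulse response; the tanh edge below is only a stand-in for a measured h(x).

```python
import numpy as np

def impulse_from_step(h, dx):
    """Impulse response g(x) estimated as the numerical derivative (central
    differences) of a step response h(x) sampled at spacing dx."""
    return np.gradient(h, dx)

# Hypothetical smooth edge standing in for a measured sensor step response.
x = np.linspace(-5.0, 5.0, 501)
h = 0.5 * (1.0 + np.tanh(x / 1.4))
g = impulse_from_step(h, x[1] - x[0])   # peaks at the edge; its width reflects the sensor resolution
```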

With the New Approach the number of directives needed to achieve the internal market target significantly decreased. In addition the European Council and the European Parliament were no longer required to deal with detailed technical requirements. Instead they were called upon only to define essential requirements needed to protect the public interest. The main elements of the New Approach can be summarised as follows  [c.938]

Using this scan sampling approach, inspectors can concentrate on one task at a time. The first task is the scan itself: defining its scope and parameters. The manual sampling method allows the inspector to concentrate more on probe positioning and placement because he will not be constantly looking away to see a display. The grid and enforced scanning pattern help to prevent missed areas. Once the scan is completed, the inspector can then move to the second task of analyzing the data.  [c.1017]

The film pressure is defined as the difference between the surface tension of the pure fluid and that of the film-covered surface. While any method of surface tension measurement can be used, most of the methods of capillarity are, for one reason or another, ill-suited for work with film-covered surfaces, with the principal exceptions of the Wilhelmy slide method (Section II-6) and the pendant drop experiment (Section II-7). Both approaches work very well with fluid films and are capable of measuring low values of pressure with a similar precision of 0.01 dyn/cm. In addition, the film balance, considerably updated since Langmuir's design (see Section III-7), is a popular approach to the measurement of π.  [c.114]

The treatment of physical adsorption has so far been based on more or less plausible physical models leading to expressions for an adsorption isotherm. Historically, this has been the productive approach, focused on surface area determination. The multilayer region was the one of interest, with submonolayer adsorption viewed mainly as a means of exploring adsorbent heterogeneity (see Section XVII-14). We return to this phenomenological approach in following sections, but recognize here the important developments in recent years in which the methods of chemisorption (note Section XVIII-2) have been applied to physisorption systems. Clean, well-defined surfaces are used, and the adsorption is studied at sufficiently low temperatures that the ambient vapor pressure is low enough to permit the use of the various diffraction and spectroscopic techniques, as well as of the various microscopies such as scanning tunneling (STM) and atomic force (AFM) microscopy.  [c.634]

If we define ..., we follow the same approach as in the one-photon case. We now take the  [c.249]

At the limit of extremely low particle densities, for example under the conditions prevalent in interstellar space, ion-molecule reactions become important (see chapter A3.5). At very high pressures gas-phase kinetics approach the limit of condensed-phase kinetics, where elementary reactions are less clearly defined due to the large number of particles involved (see chapter A3.6).  [c.759]

Another approach involves starting with an initial wavefunction ψ0, represented on a grid, then generating Hψ0, and considering that this, after orthogonalization to ψ0, defines a new state vector. Successive applications of H can now be used to define an orthogonal set of vectors, which defines a Krylov space via the iteration (n = 0, ..., N)  [c.984]
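
A minimal sketch of that iteration, assuming H is available as a Hermitian matrix acting on the grid representation (a plain Gram-Schmidt version rather than the more economical Lanczos recursion):

```python
import numpy as np

def krylov_basis(H, psi0, n_vec):
    """Orthogonal Krylov basis built by repeated application of H.

    H     : Hermitian matrix (the Hamiltonian on the grid)
    psi0  : initial wavefunction as a vector
    n_vec : number of basis vectors to generate
    """
    vecs = [psi0 / np.linalg.norm(psi0)]
    for _ in range(n_vec - 1):
        w = H @ vecs[-1]
        for v in vecs:                       # orthogonalize against all previous vectors
            w = w - np.vdot(v, w) * v
        norm = np.linalg.norm(w)
        if norm < 1e-12:                     # Krylov space exhausted
            break
        vecs.append(w / norm)
    return np.array(vecs)

# Example with a small random Hermitian matrix standing in for H.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = 0.5 * (A + A.T)
basis = krylov_basis(H, rng.normal(size=50), n_vec=10)
```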

With this convention, we can now classify energy transfer processes either as resonant, if |Δ| defined in equation (A3.13.8) is small, or non-resonant, if it is large. Quite generally the rate of resonant processes can approach or even exceed the Lennard-Jones collision frequency (the latter is possible if other long-range potentials are actually applicable, such as a permanent dipole-dipole interaction).  [c.1054]

The fitting parameters in the transform method are properties related to the two potential energy surfaces that define the electronic resonance. These curves are obtained when the two hypersurfaces are cut along the jth normal mode coordinate. In order of increasing theoretical sophistication these properties are (i) the relative position of their minima (often called the displacement parameters), (ii) the force constant of the vibration (its frequency), (iii) the nuclear coordinate dependence of the electronic transition moment and (iv) the issue of mode mixing upon excitation, known as the Duschinsky effect, which requires a multidimensional approach.  [c.1201]

The above approximation, however, is valid only for dilute solutions and with assemblies of molecules of similar structure. In the event that the concentration is high, where intermolecular interactions are very strong, or the system contains a less defined morphology, a different data analysis approach must be taken. One such approach was derived by Debye et al [21]. They have shown that for a random two-phase system with sharp boundaries, the correlation function may take an exponential form.  [c.1396]

In more detail, in time-dependent approaches an initial wavepacket associated with the separate parts of the colliding system is prepared. The most efficient approach for a grid construction is to use two grids. Thus, the total wavefunction is divided into two parts [30, ], a simple one which defines the initial wavefunction and a more complicated part, represented (for collinear scattering) on a two-dimensional grid of R, r values, which is used to carry most of the wavefunction (essentially the scattered part) and which is padded with absorbing potentials (see figure B3.4.5). With this approach and with methods which reduce the number of grid points (in a given region) to at most two per oscillation [, ], the total number of grid points can be reduced to  [c.2300]

A commonly used approach is coordinate driving. Here an appropriate internal coordinate, or a linear combination of coordinates, is chosen as a reaction coordinate. At various intervals along this coordinate, between its value in the reactants and in the products, all the other variables are optimized. This then defines a minimum energy path. The energy maximum on this path can be shown to be the transition state geometry. Usually, however, the maximum on the path is located only approximately. Coordinate driving involves several minimizations in (n - 1) variables; consequently it is quite expensive. Moreover, its success depends on a good definition of the reaction coordinate: it should be roughly parallel with the true reaction path. If, at any point along the path, the reaction coordinate becomes nearly perpendicular to the reaction path, the latter may become discontinuous. The minimum energy path defined in this way has little physical significance, as different choices of reaction coordinate can produce different pathways.  [c.2350]
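
The procedure is easy to sketch on a toy two-dimensional surface: step the chosen reaction coordinate, minimize the remaining variable at each step, and take the highest point of the resulting path as the approximate transition state. The model surface below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def energy(x, y):
    """Hypothetical 2D model surface with two minima (x = -1, +1) separated by a barrier."""
    return (x**2 - 1.0)**2 + 2.0 * (y - 0.3 * x)**2

# Coordinate driving: drive x from the reactant (-1) to the product (+1);
# at each fixed x, optimize the remaining variable y.
path = []
y_guess = -0.3
for x in np.linspace(-1.0, 1.0, 41):
    res = minimize(lambda y: energy(x, y[0]), [y_guess])
    y_guess = float(res.x[0])                # follow the path continuously
    path.append((x, y_guess, float(res.fun)))

# The energy maximum along the minimum-energy path approximates the transition state.
x_ts, y_ts, e_ts = max(path, key=lambda p: p[2])
print(f"approximate TS: x = {x_ts:.2f}, y = {y_ts:.2f}, E = {e_ts:.3f}")
```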

Pye C C and Poirier R A 1998 Graphical approach for defining natural internal coordinates J. Comput. Chem. 19 504  [c.2357]

Further details can be found elsewhere [20, 78, 82 and 84]. An approach to the dynamics of nematics based on analysis of microscopic correlation functions has also been presented [85]. Various combinations of elements of the viscosity tensor of a nematic define the so-called Leslie coefficients [20, 84].  [c.2558]

An experimental activity on the stress measurement of a pressure vessel using the SPATE technique was carried out. It was demonstrated that this approach allows the distribution of stress levels on the vessel surface to be defined with quite good accuracy. The most significant advantage of this technique over others is that it provides a true, fine map of stresses in a short time, even though a preliminary, meticulous calibration of the equipment has to be performed.  [c.413]

A common approach to treating retardation in dispersion forces is to define an effective Hamaker constant that is not constant but depends on separation distance. Looking back at Eq. VI-22, this defines the effective Hamaker constant  [c.235]

Various aspects of the experimental approach to the chemisorption bond are illustrated in the preceding sections. Modern spectroscopic and surface diffraction techniques provide a wealth of information about the chemisorbed state. Analysis of LEED intensity data permits the estimation of adsorbate-adsorbent bond lengths [147], usually 5-10% longer than in molecules having a similar bond. Bond lengths may also be obtained from XPD and SEXAFS data (see Table VIII-1) [148]. A bond energy can be obtained from temperature-programmed desorption data if coupled with knowledge of the activation energy for adsorption (Eq. XVIII-21); see Ref. 149 for the case of a heterogeneous surface. The traditional approach to obtaining bond energies is, of course, through isosteric heats of adsorption, although complications are that equilibrium may be difficult to reach and/or the surface may be heterogeneous. Some literature data compiled by Shustorovich, Baetzold, and Muetterties [150] are shown in Table XVIII-2. For hydrogen atom-metal bonds Q averages about 62 kcal/mol, corresponding to about 20 kcal/mol for desorption as H2. Bond energies for CO and NO run somewhat higher. Values can vary depending on the surface preparation and, of course, on the crystal plane involved if the surface is a well-defined one. Older compilations may be found in Refs. 81 and 84, and more recent ones in Somorjai [13].  [c.712]

A concrete example of the variational principle is provided by the Hartree-Fock approximation. This method asserts that the electrons can be treated independently, and that the n-electron wavefunction of the atom or molecule can be written as a Slater determinant made up of orbitals. These orbitals are defined to be those which minimize the expectation value of the energy. Since the general mathematical form of these orbitals is not known (especially in molecules), the resulting problem is highly nonlinear and formidably difficult to solve. However, as mentioned in subsection (A1.1.3.2), a common approach is to assume that the orbitals can be written as linear combinations of one-electron basis functions. If the basis functions are fixed, then the optimization problem reduces to that of finding the best set of coefficients for each orbital. This tremendous simplification provided a revolutionary advance for the application of the Hartree-Fock method to molecules, and was originally proposed by Roothaan in 1951. A similar form of the trial function occurs when it is assumed that the exact (as opposed to Hartree-Fock) wavefunction can be written as a linear combination of Slater determinants (see equation (A1.1.104)). In the conceptually simpler latter case, the objective is to minimize an expression of the form  [c.37]
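
The reduction to a coefficient-optimization problem can be illustrated schematically: with the basis fixed, one ends up solving a Roothaan-type generalized eigenvalue problem FC = SCε. The small Fock and overlap matrices below are placeholders; in a real calculation F depends on the coefficients and is iterated to self-consistency.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder Fock (F) and overlap (S) matrices in a tiny fixed basis.
F = np.array([[-1.50, -0.40],
              [-0.40, -1.20]])
S = np.array([[ 1.00,  0.25],
              [ 0.25,  1.00]])

# Generalized eigenvalue problem F C = S C eps: eigenvalues are orbital energies,
# columns of C are the best LCAO expansion coefficients for this F.
eps, C = eigh(F, S)
print("orbital energies:", eps)
print("coefficients:\n", C)
```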

H1, H2 and H3. The permutation (12) (where S denotes space-fixed position labels) is defined in this approach as permuting the nuclei that are in positions 1 and 2, and the permutation (123) as replacing the proton in position 1 by the proton in position 2, etc. With this definition the effect of first doing (12) and then doing (123) can be drawn as  [c.144]

Redlich [3] has criticized the so-called zeroth law on the grounds that the argument applies equally well for the introduction of any generalized force, mechanical (pressure), electrical (voltage), or otherwise. The difference seems to be that the physical nature of these other forces has already been clearly defined or postulated (at least in the conventional development of physics), while in classical thermodynamics, especially in the Born-Caratheodory approach, the existence of temperature has to be inferred from experiment.  [c.325]

The previous calculations, while not altogether trivial, are among the simplest uses one can make of kinetic theory arguments. Next we turn to a somewhat more sophisticated calculation, that for the mean free path of a particle between collisions with other particles in the gas. We will use the general form of the distribution function at first, before restricting ourselves to the equilibrium case, so as to set the stage for discussions in later sections where we describe the formal kinetic theory. Our approach will be first to compute the average frequency with which a particle collides with other particles. The inverse of this frequency is the mean time between collisions. If we then multiply the mean time between collisions by the mean speed, given by equation (A3.1.8), we will obtain the desired result for the mean free path between collisions. It is important to point out that one might choose to define the mean free path somewhat differently, by using the root mean square velocity instead of v, for example. The only change will be in a numerical coefficient. The important issue will be to obtain the dependence of the mean free path upon the density and temperature of the gas and on the size of the particles. The numerical factors are not that important.  [c.669]
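
The route described (collision frequency, then mean time between collisions, then multiplication by the mean speed) is easy to follow numerically for hard spheres. The gas parameters below are only illustrative, of the order of nitrogen at ambient conditions.

```python
import numpy as np

k_B = 1.380649e-23   # J/K

def mean_free_path(n, d, T, m):
    """Hard-sphere mean free path following the recipe in the text.

    n : number density (m^-3), d : molecular diameter (m),
    T : temperature (K),       m : molecular mass (kg).
    """
    v_mean = np.sqrt(8.0 * k_B * T / (np.pi * m))   # mean speed
    sigma = np.pi * d**2                            # collision cross-section
    nu = np.sqrt(2.0) * n * sigma * v_mean          # collision frequency
    tau = 1.0 / nu                                  # mean time between collisions
    return v_mean * tau                             # mean free path

# Illustrative numbers of the order of N2 at room temperature and 1 atm (~6e-8 m expected).
print(mean_free_path(n=2.5e25, d=3.7e-10, T=300.0, m=4.65e-26))
```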

Since taking simply ionic or van der Waals radii is too crude an approximation, one often uses basis-set-dependent ab initio atomic radii and constructs the cavity from a set of intersecting spheres centred on the atoms [18, 19]. An alternative approach, which is comparatively easy to implement, consists of using an electrical equipotential surface to define the solute-solvent interface shape [20].  [c.838]

It should be noted that in the cases where y"j[,q ) > 0, the centroid variable becomes irrelevant to the quantum activated dynamics as defined by (A3.8.1d), and the instanton approach [37] to evaluate based on the steepest descent approximation to the path integral becomes the approach one may take. Alternatively, one may seek a more generalized saddle point coordinate about which to evaluate (A3.8.14). This approach has also been used to provide a unified solution for the thermal rate constant in systems influenced by non-adiabatic effects, i.e. to bridge the adiabatic and non-adiabatic (Golden Rule) limits of such reactions.  [c.893]

How are fundamental aspects of surface reactions studied? The surface science approach uses a simplified system to model the more complicated real-world systems. At the heart of this simplified system is the use of well defined surfaces, typically in the form of oriented single crystals. A thorough description of these surfaces should include composition, electronic structure and geometric structure measurements, as well as an evaluation of reactivity towards different adsorbates. Furthermore, the system should be constructed such that it can be made increasingly more complex to more closely mimic macroscopic systems. However, relating surface science results to the corresponding real-world problems often proves to be a stumbling block because of the sheer complexity of these real-world systems.  [c.921]

Diffraction measurements offer a complementary approach to the real-space imaging described earlier. In such schemes, periodically modulated surfaces are utilized to produce well-defined SH (or SF) radiation at discrete angles, as dictated by the conservation of the in-plane component of the wavevector. As an example of this approach, a grating in the surface adsorbate density may be produced through laser-induced desorption in the field of two interfering beams. This monolayer grating will readily produce diffracted SH beams in addition to the usual reflected beam. In addition to their intrinsic interest, such structures have permitted precise measurements of surface diffusion. One may accomplish this by observing the temporal evolution of SH diffraction efficiency, which falls as surface diffusion causes the modulation depth of the adsorbate grating to decrease. This technique has been applied to examine diffusion of adsorbates on the surface of metals [100] and semiconductors [101].  [c.1298]

A related technique that also relies on the interference of x-rays for solid characterization is extended x-ray absorption fine structure (EXAFS) [65, 66]. Because the basis for EXAFS is the interference of outgoing photoelectrons with their scattered waves from nearby atoms, it does not require long-range order to work (as opposed to diffraction techniques), and provides information about the local geometry around specific atomic centres. Unfortunately, EXAFS requires the high-intensity and tunable photon sources typically available only at synchrotron facilities. Further limitations to the development of surface-sensitive EXAFS (SEXAFS) have come from the fact that it requires technology entirely different from that of regular EXAFS, involving in many cases ultrahigh-vacuum environments and/or photoelectron detection. One interesting advance in SEXAFS came with the design by Stohr et al of fluorescence detectors for the x-rays absorbed by the surface species of small samples, which allows for the characterization of well defined systems such as single crystals under non-vacuum conditions [67]. Figure B1.22.9 shows the S K-edge x-ray absorption data obtained for a c(2 × 2)S-Ni(100) overlayer using their original experimental set-up. This approach has since been extended to the analysis of lighter atoms (C, O, F) on many different substrates and under atmospheric pressures [68].  [c.1791]

Head and Silva used occupation numbers obtained from a periodic HF density matrix for the substrate to define localized orbitals in the chemisorption region, which then defines a cluster subspace on which to carry out HF calculations [181]. Contributions from the surroundings also only come from the bare slab, as in the Green's matrix approach. Increases in computational power and improvements in minimization techniques have made it easier to obtain the electronic properties of adsorbates by supercell slab techniques, leading to the Green's function methods becoming less popular [182].  [c.2226]

Simulation of both bulk phases in a single box, separated by an interface, is closest to what we do in real life. It is necessary to establish a well defined interface, most often a planar one between two phases in slab geometry. A large system is required, so that one can characterize the two phases far from the interface, and read off the corresponding bulk properties. Naturally, this is the approach of choice if the interfacial properties (for example, the surface tension) are themselves of interest. The first stage in such a simulation is to prepare bulk samples of each phase, as close to the coexisting densities as possible, in cuboidal periodic boundaries, using boxes whose cross sections match. The two boxes are brought together, to make a single longer box, giving the desired slab arrangement with two planar interfaces. There must then follow a period of equilibration, with mass transfer between the phases if the initial densities were not quite right.  [c.2271]
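
The box-joining step can be sketched as a simple coordinate manipulation; the function below is a schematic illustration, assuming each phase has already been equilibrated in its own periodic box with matching x,y cross sections.

```python
import numpy as np

def join_slabs(coords_a, box_a, coords_b, box_b):
    """Stack two pre-equilibrated bulk boxes along z to form a slab configuration.

    coords_* : (N, 3) arrays of particle coordinates inside their boxes
    box_*    : (Lx, Ly, Lz) box dimensions
    Returns the combined coordinates and the longer combined box; with periodic
    boundaries this produces the two planar interfaces described in the text."""
    box_a, box_b = np.asarray(box_a, float), np.asarray(box_b, float)
    assert np.allclose(box_a[:2], box_b[:2]), "x,y cross sections must match"
    shifted_b = np.asarray(coords_b) + np.array([0.0, 0.0, box_a[2]])  # phase B on top of phase A
    coords = np.vstack([np.asarray(coords_a), shifted_b])
    box = np.array([box_a[0], box_a[1], box_a[2] + box_b[2]])
    return coords, box
```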

The atomic force microscope (AFM) provides one approach to the measurement of friction in well defined systems. The AFM allows measurement of friction between a surface and a tip with a radius of the order of 5-10 nm (figure C2.9.3(a)). It is the true realization of a single asperity contact with a flat surface which, in its ultimate form, would measure friction between a single atom and a surface. The AFM allows friction measurements on surfaces that are well defined in terms of both composition and structure. It is limited by the fact that the characteristics of the tip itself are often poorly understood. It is very difficult to determine the radius, structure and composition of the tip; however, these limitations are being resolved. The AFM has already allowed the spatial resolution of friction forces that exhibit atomic periodicity and chemical specificity [3, 10, 13].  [c.2745]

A different approach comes from the idea, first suggested by Helgaker et al. [77], of approximating the PES at each point by a harmonic model. Integration within an area where this model is appropriate, termed the trust radius, is then trivial. Normal coordinates, Q, are defined by diagonalization of the mass-weighted Hessian (second-derivative) matrix, so if  [c.266]
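
The diagonalization step can be sketched as follows; the function assumes a Hessian with one mass per coordinate (for 3N Cartesians each atomic mass is repeated three times), and the two-mass spring at the end is only a hypothetical check case.

```python
import numpy as np

def normal_modes(hessian, coord_masses):
    """Normal coordinates from the mass-weighted Hessian.

    hessian      : (M, M) second-derivative matrix
    coord_masses : mass associated with each of the M coordinates
    Returns eigenvalues of the mass-weighted Hessian (squared frequencies,
    negative values indicating imaginary modes) and the eigenvectors Q."""
    inv_sqrt_m = 1.0 / np.sqrt(np.asarray(coord_masses, float))
    H_mw = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)   # H_ij / sqrt(m_i m_j)
    omega2, Q = np.linalg.eigh(H_mw)                    # symmetric, so eigh is appropriate
    return omega2, Q

# Check case: two masses joined by a spring of force constant k (1D).
k, m1, m2 = 1.0, 1.0, 2.0
H = np.array([[ k, -k],
              [-k,  k]])
omega2, Q = normal_modes(H, [m1, m2])
print(omega2)   # ~[0, k*(1/m1 + 1/m2)]: a zero translational mode and the vibration
```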

A different approach that also leads to a representation of the nuclear wave function suitable for direct dynamics is to follow the work of Heller on the time evolution of Gaussian wavepackets. The nuclear wave function in Eq. (7) is represented by one or more Gaussian functions. Equations of motion for the parameters defining these functions are then determined, which are found to have properties that can be related to classical mechanics. The underlying idea is the observation that a wavepacket with a Gaussian form retains this form when moving in a harmonic potential, and under these circumstances the method can be equivalent to full quantum mechanical wavepacket propagation [147]. In more complicated cases, a harmonic approximation to the true potential is used, and the method becomes a semiclassical one. The dynamics shown in Figures 3 and 4 support the idea, as the wavepacket retains a form that is approximately a distorted Gaussian at all times.  [c.272]
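
A minimal one-dimensional sketch of the idea, using the standard thawed-Gaussian parameter equations with a locally harmonic expansion of the potential (units with hbar = m = 1, plain Euler integration, and a harmonic test potential, so the packet should remain an exact Gaussian):

```python
import numpy as np

hbar, m = 1.0, 1.0
V   = lambda x: 0.5 * x**2      # hypothetical test potential (harmonic)
dV  = lambda x: x               # gradient
d2V = lambda x: 1.0             # local second derivative (harmonic model)

# Gaussian wavepacket psi ~ exp[(i/hbar) * (a*(x-q)**2 + p*(x-q) + g)]
q, p = -1.0, 0.0                # centre position and momentum
a = 0.5j * hbar                 # complex width parameter (initial width 1)
g = 0.0 + 0.0j                  # phase / normalization parameter

dt, n_steps = 0.001, 5000       # propagate to t = 5
for _ in range(n_steps):
    q += dt * p / m                                   # classical motion of the centre
    p += -dt * dV(q)
    a += dt * (-2.0 * a**2 / m - 0.5 * d2V(q))        # width equation (Riccati form)
    g += dt * (1j * hbar * a / m + p**2 / (2*m) - V(q))

print(q, p, a)   # for the harmonic case a stays at 0.5j: the Gaussian keeps its shape
```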

Already in 1938, Evans and Warhurst [17] suggested that the Diels-Alder addition reaction of a diene with an olefin proceeds via a concerted mechanism. They pointed out the analogy between the delocalized electrons in the transition states for the reaction between butadiene and ethylene and the π electron system of benzene. They calculated the resonance stabilization of this transition state by the VB method earlier used by Pauling to calculate the resonance energy of benzene. They concluded that the extra aromatic stabilization of this transition state made the concerted route more favorable than a two-step process. In a subsequent paper [18], Evans used the Hückel MO theory to calculate the transition state energy of the same reaction and some others. These ideas essentially introduce a chemical reacting complex (reactants and products) as a two-state system. Dewar [42] later formulated a general principle for all pericyclic reactions (Evans' principle): Thermal pericyclic reactions take place preferentially via aromatic transition states. Aromaticity was defined by the amount of resonance stabilization. Evans' principle connects the problem of thermal pericyclic reactions with that of aromaticity: Any theory of aromaticity is also a theory of pericyclic reactions [43]. Evans' approach was more recently used to aid in finding conical intersections [44] (cf. Section VIII).  [c.341]

Our qualitative approach [40,41], which is based on the phase change theorem of Longuet-Higgins, considers spin pairing as the principal factor for locating conical intersections. We consider transitions from the first excited state to the ground state, and form the loop on the ground-state surface. A given spin-paired system (anchor) may support many nuclear configurations on the ground-state surface, but only one of them is usually at an energy minimum. As in all other methods mentioned above, the task is to find the two coordinates defining the loop that surrounds the conical intersection. We use for this purpose the reaction coordinates connecting the chosen anchors. A pair of reaction coordinates is sought, of which one is phase preserving (p) and the other phase inverting (i). There are many such pairs in a polyatomic molecule, which may be sorted out systematically. For any specific product, the reaction coordinate leading from the starting material is a natural choice. The phase change associated with this coordinate is well defined (either phase preserving or inverting). The second coordinate may be chosen from among all other reactions of the reactant, which may be found by considering all possible electron re-pairings. In practice, experiment and chemical intuition are used to facilitate and shorten the search. The three anchors of the loop, which are A (the reactant), B (the desired product), and C (another product), must form a phase-inverting loop. This means that either all three reactions (A to B, B to C, and C to A) are phase inverting, or that only one of them is. The loops formed are designated as i and ip, respectively. As shown in Section VI, the method is readily combined with high level quantum calculations for polyatomic systems.  [c.386]


See pages that mention the term Aaberg defined : [c.248]    [c.727]    [c.813]    [c.887]    [c.1297]    [c.2207]    [c.44]    [c.74]    [c.385]   
Industrial ventilation design guidebook (2001) -- [ c.1448 ]