# A General Approach

The process by which we determine the probability that there is a significant difference between two samples is called significance testing or hypothesis testing. Before turning to a discussion of specific examples, however, we will first establish a general approach to conducting and interpreting significance tests. [c.83]
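
As an illustration of comparing two samples, the significance test described above can be sketched in a few lines. The sketch below computes Welch's t statistic and its approximate degrees of freedom for two hypothetical sets of replicate analyses; the data values and the tabulated critical value mentioned in the comment are illustrative assumptions, not results from the text.

```python
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for comparing the means of two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se = (va / na + vb / nb) ** 0.5          # standard error of the difference
    t = (mean(sample_a) - mean(sample_b)) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    dof = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

# Hypothetical replicate analyses of the same material by two methods
method_1 = [3.080, 3.094, 3.107, 3.056, 3.112]
method_2 = [3.052, 3.141, 3.083, 3.083, 3.048]

t, dof = welch_t(method_1, method_2)
# |t| is compared against the tabulated critical value t(alpha, dof);
# if |t| exceeds it, the difference between the means is significant.
print(f"t = {t:+.3f}, dof = {dof:.1f}")
```

The null hypothesis (no difference between the means) is retained or rejected by comparing |t| with the critical value for the chosen confidence level.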

A general review of the UNIFAC and ASOG (32) methods, through the 1985 updates for the American Institute of Chemical Engineers (AIChE) under the Design Institute for Physical Property Data (DIPPR Project 802), is available (179). For temperatures in the range 300-425 K and pressures below 500 kPa (5 atm), UNIFAC is recommended, as is the ASOG method (163). Investigating the availability of interaction parameters is suggested, as is comparing errors for the two methods to make a selection for a given problem. A general approach to method selection for specific problems and a comparison of binary and ternary solubility predictions to experimental results for a wide range of functional groups are available (162). In making a method selection, it is important to confirm that the method version matches the selection procedure. This confirmation is not always easy to make based on commercial software documentation. [c.252]

One can outline a general approach for medium selection along with a test sequence applicable to a large group of filter media of the same type. There are three methods of filter media testing: laboratory- or bench-scale, pilot-unit, and plant tests. The laboratory-scale test is especially rapid and economical, but the results obtained are often not entirely reliable and should only be considered preliminary. Pilot-unit tests provide results that approach plant data. The most reliable results are often obtained from plant trials. [c.149]

The aim is to give a general approach which can be followed when total energies are calculated by both semi-empirical and ab initio methods. [c.339]

Soil has been defined in many ways, often depending upon the particular interests of the person proposing the definition. In discussing the soil as an environmental factor in corrosion, no strict definitions or limitations will be applied; rather, the complex interaction of all earthen materials will come within the scope of the discussion. It is obvious that only a general approach to the topic can be given, and no attempt will be made to give full and detailed information on any single facet of the topic. [c.377]

Sections 13.1 to 13.8 will deal mainly with the economics of a field development. Exploration economics is introduced in Section 13.8. The general approach of this section will be to look at an investment proposal from an operator's point of view. [c.303]

The goal of this presentation is to propose a more or less general approach, based on Gibbs statistics, to the basic problem of introducing a priori knowledge into reconstruction algorithms. The application to three-level structure reconstruction, which is highly relevant for NDT, is chosen as the object to illustrate the proposed approach. It will be shown that the Bayesian reconstruction technique using Gibbs priors gives a straightforward way to implement fast algorithms for image restoration from an extremely limited number of projections and views of multi-level structures. [c.114]

The general approach goes back to Frenkel [63] and has been elaborated on by Halsey [64], Hill [65], and McMillan and Teller [66]. A form of Eq. XVII-78, with a = 0, [c.628]

Contrary to what appears at first sight, the integral relations in Eqs. (9) and (10) are not based on causality. However, they can be related to another principle [39]. This approach of expressing a general principle by mathematical formulas can be traced to von Neumann [242] and leads in the present instance to an equation of restriction, to be derived below. According to von Neumann, a complete description of physical systems must contain [c.111]

The general procedure in a QSPR approach consists of three steps: structure representation, descriptor analysis, and model building (see also Chapter X, Section 1.2 of the Handbook). [c.489]
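
The three steps above can be sketched as a minimal workflow. Everything below is hypothetical for illustration: the compounds, descriptor values, and boiling points are invented, and the "model building" step is reduced to a one-descriptor least-squares fit.

```python
from statistics import mean

# Step 1 - structure representation: each compound as a descriptor record
compounds = {
    "A": {"n_carbons": 4, "n_OH": 1},
    "B": {"n_carbons": 5, "n_OH": 1},
    "C": {"n_carbons": 6, "n_OH": 1},
    "D": {"n_carbons": 7, "n_OH": 1},
}
boiling_points = {"A": 390.0, "B": 411.0, "C": 430.0, "D": 449.0}  # K, hypothetical

# Step 2 - descriptor analysis: select the descriptor that varies across the set
x = [compounds[c]["n_carbons"] for c in compounds]
y = [boiling_points[c] for c in compounds]

# Step 3 - model building: one-descriptor least-squares fit, y = a*x + b
xm, ym = mean(x), mean(y)
a = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
b = ym - a * xm
print(f"predicted Tb = {a:.1f} * n_carbons + {b:.1f}")
```

Real QSPR work uses many descriptors and multivariate model building, but the three-stage structure of the workflow is the same.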

Rovibrational final-state analysis can also be achieved even for the case of classical nuclei. A product fragment with classical nuclei rotates and vibrates as a classical object. A classical-quantum correspondence is adopted, such that this classical object is described by an evolving coherent state. For the case of a diatomic fragment, when rotational excitations can be neglected or decoupled, the dynamics can be resolved into quantum states [42]. For low excitations with near-equidistant splittings between consecutive vibrational energy levels, the harmonic oscillator coherent state provides an excellent basis for obtaining vibrationally resolved cross-sections [43]. As a general approach valid for polyatomic molecular product fragments, a multidimensional Prony method [44] has been developed [45], which can produce rovibrationally resolved cross-sections for the case of weak coupling between rotational and vibrational modes. [c.240]

Flexibility to cope with irregular domain geometry in a straightforward and systematic manner is one of the most important characteristics of the finite element method. Irregular domains that do not include any curved boundary sections can be accurately discretized using triangular elements. In most engineering processes, however, the elimination of discretization error requires the use of finite elements which themselves have curved sides. It is obvious that randomly shaped curved elements cannot be developed in an ad hoc manner, and a general approach that is applicable in all situations must be sought. The required generalization is obtained using a two-step procedure, as follows [c.34]

The probabilistic nature of a confidence interval provides an opportunity to ask and answer questions comparing a sample's mean or variance to either the accepted values for its population or similar values obtained for other samples. For example, confidence intervals can be used to answer questions such as "Does a newly developed method for the analysis of cholesterol in blood give results that are significantly different from those obtained when using a standard method?" or "Is there a significant variation in the chemical composition of rainwater collected at different sites downwind from a coal-burning utility plant?" In this section we introduce a general approach to the statistical analysis of data. Specific statistical methods of analysis are covered in Section 4F. [c.82]
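
The cholesterol comparison above can be sketched as a small confidence-interval calculation. The replicate results below are hypothetical, and the critical t value is the standard tabulated two-tailed value for 95% confidence and 6 degrees of freedom.

```python
from statistics import mean, stdev

def confidence_interval(data, t_crit):
    """Confidence interval for a sample mean: mean +/- t*s/sqrt(n).
    t_crit is the tabulated t value for the chosen confidence level
    and n-1 degrees of freedom."""
    n = len(data)
    half_width = t_crit * stdev(data) / n ** 0.5
    m = mean(data)
    return m - half_width, m + half_width

# Hypothetical replicate cholesterol results (mg/100 mL); t(0.05, 6) = 2.447
results = [245.0, 248.0, 241.0, 251.0, 244.0, 247.0, 243.0]
lo, hi = confidence_interval(results, t_crit=2.447)
# If the accepted (or standard-method) value lies outside (lo, hi),
# the difference is significant at the chosen confidence level.
print(f"95% CI: ({lo:.1f}, {hi:.1f})")
```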

A general approach to these compounds is based on the reaction of dichloromethyllithium with boronic esters. Rearrangement of the complex followed by reduction with potassium triisopropoxyborohydride provides the homologated boronic ester, which can be oxidized to the corresponding alcohol or transformed into the homologated aldehyde by reaction with methoxy(phenylthio)methyllithium (522,523). β-Chiral alcohols not available in high optical purity by asymmetric hydroboration of terminal alkenes are readily prepared by this method. [c.324]

The scheme in Figure 10.1 illustrates a general approach for devising a monitoring strategy. Where doubt exists about the level of exposure, a crude assessment can be made by determining levels under expected worst-case situations, paying attention to variations and possible errors. More detailed assessment may be required, depending upon the outcome. Sampling times should be long enough to overcome fluctuations but short enough for results to be meaningfully associated with specific activities and for corrective actions to be identified. For monitoring particulates, sampling times may be determined from the following equation [c.363]

The [CBT]I open circuit plant: a general approach [c.39]

A general approach for obtaining the phonon dispersion relations of CNTs is given by tight binding molecular dynamics (TBMD) adopted for the CNT geometry, in which the atomic force potential for general carbon materials is used [5,10]. Here we use the scaled force constants from those of two-dimensional (2D) graphite [2,11], and we construct a force constant tensor for a [c.52]

As a general rule all products covered by the New Approach directives must bear the CE-marking which symbolises conformity of the products to the requirements of the directive including the relevant certification procedures. The main principles which are basic to the application of the CE-marking can be summarised as follows [c.940]

There are a number of other ways of obtaining an estimate of surface area, including such obvious ones as direct microscopic or electron-microscopic examination. The rate of charging of a polarized electrode surface can give relative areas. Bowden and Rideal [46] found, by this method, that the area of a platinized platinum electrode was some 1800 times the geometric or apparent area. Joncich and Hackerman [47] obtained areas for platinized platinum very close to those given by the BET gas adsorption method (see Section XVII-5). The diffuseness of x-ray diffraction patterns can be used to estimate the degree of crystallinity and hence particle size [48,49]. One important general approach, useful for porous media, is that of permeability determination; although somewhat beyond the scope of this book, it deserves at least a brief mention. [c.580]

A fundamental approach by Steele [8] treats monolayer adsorption in terms of interatomic potential functions, and includes pair and higher-order interactions. Young and Crowell [11] and Honig [20] give additional details on the general subject; a recent treatment is by Rybolt [21]. [c.615]

Chemisorption bonding to metal and metal oxide surfaces has been treated extensively by quantum-mechanical methods. Somorjai and Bent [153] give a general discussion of the surface chemical bond, and some specific theoretical treatments are found in Refs. 154-157; see also a review by Hoffman [158]. One approach uses the variation method (see physical chemistry textbooks) [c.714]

A concrete example of the variational principle is provided by the Hartree-Fock approximation. This method asserts that the electrons can be treated independently, and that the n-electron wavefunction of the atom or molecule can be written as a Slater determinant made up of orbitals. These orbitals are defined to be those which minimize the expectation value of the energy. Since the general mathematical form of these orbitals is not known (especially in molecules), the resulting problem is highly nonlinear and formidably difficult to solve. However, as mentioned in subsection (A1.1.3.2), a common approach is to assume that the orbitals can be written as linear combinations of one-electron basis functions. If the basis functions are fixed, then the optimization problem reduces to that of finding the best set of coefficients for each orbital. This tremendous simplification provided a revolutionary advance for the application of the Hartree-Fock method to molecules, and was originally proposed by Roothaan in 1951. A similar form of the trial function occurs when it is assumed that the exact (as opposed to Hartree-Fock) wavefunction can be written as a linear combination of Slater determinants (see equation (A1.1.104)). In the conceptually simpler latter case, the objective is to minimize an expression of the form [c.37]
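
The idea of optimizing linear-combination coefficients can be shown in its smallest possible form: a trial function built from two orthonormal basis functions, where minimizing the energy with respect to the coefficients reduces to a 2x2 matrix eigenvalue problem. The matrix elements below are hypothetical numbers chosen for illustration, not values for any real system.

```python
import math

# Hypothetical matrix elements H_ij = <phi_i|H|phi_j> in an orthonormal
# two-function basis (illustrative numbers only)
H11, H22, H12 = -1.00, -0.50, -0.20

# Minimizing E(c1, c2) = (c.H.c)/(c.c) over the coefficients leads to the
# 2x2 eigenvalue problem; its lowest root is the variational energy.
avg, diff = (H11 + H22) / 2.0, (H11 - H22) / 2.0
E_ground = avg - math.sqrt(diff ** 2 + H12 ** 2)

# The variational theorem guarantees that mixing the basis functions gives
# an energy at or below that of either basis function used on its own.
assert E_ground <= min(H11, H22)
print(f"E_ground = {E_ground:.4f}")
```

In a real Roothaan calculation the basis is non-orthogonal and the matrix is much larger, but the structure of the problem, a (generalized) matrix eigenvalue equation for the coefficients, is the same.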

A more intuitive, and more general, approach to the study of two-level systems is provided by the Feynman-Vernon-Hellwarth geometrical picture. To understand this approach we need to first introduce the density matrix. [c.229]

Scherer et al [205, 206] showed how to prepare, using interferometric methods, pairs of laser pulses with known relative phasing. These pulses were employed in experiments on vapour-phase I2, in which wavepacket motion was detected in terms of fluorescence emission. A more general approach, which can be used in principle to generate pulse sequences of any type, is to transform a single input pulse into a shaped output profile, with the intensity and phase of the output under control throughout. The idea being exploited by a number of investigators, notably Warren and Nelson, is to use a programmable dispersive delay line constructed from a pair of diffraction gratings spaced by an active device that is used either to absorb or phase shift selectively the frequency-dispersed wavefront. The approach favoured by Warren and co-workers exploits a Bragg cell driven by a radio-frequency signal obtained from a frequency synthesizer and a computer-controlled arbitrary waveform generator [207]. Nelson and co-workers use a computer-controlled liquid-crystal pixel array as a mask [208]. In the future, it is likely that one or both of these approaches will allow execution of currently impossible nonlinear spectroscopies with highly selective information content. One can take inspiration from the complex pulse sequences used in modern multiple-dimension NMR spectroscopy to suppress unwanted interfering resonances and to enhance selectively the resonances from targeted nuclei. [c.1990]

Product angular and velocity distributions can be measured with REMPI detection, similar to Doppler probing in a laser-induced fluorescence experiment discussed in section B2.3.3.5. With appropriate time- and space-resolved ion detection, it is possible, in principle, to determine the three-dimensional velocity distribution of a product (see equation (B2.3.1)). The time-of-arrival of a particular mass in the TOFMS will be broadened by the velocity of the neutral molecule being detected. In some modes of operation of a TOFMS, e.g. space-focusing conditions [M], the shift of the arrival time from the centre of a mass peak is proportional to the projection of the molecular velocity along the TOFMS axis. In addition, Doppler tuning of the probe laser allows one component of the velocity perpendicular to the TOFMS axis to be determined. A more general approach for the two-dimensional velocity distribution in the plane perpendicular to the TOFMS direction involves the use of imaging detectors [66]. [c.2083]

As a side note, filter diagonalization is also useful in a more general context. It can be shown that it is an efficient approach for extracting frequencies from a short-time segment of a general signal [112, 113, 114, 118, 119], so that it is not even necessary to use a wavepacket: all one needs is a signal. This feature is very important in the semi-classical and path integral simulations discussed below, where all the information is extracted from a time-dependent correlation function. Because the quality of the simulations degrades as a function of time (the number of trajectories is typically increased exponentially as the time is increased), information must be extracted from the shortest time possible. [c.2310]
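
The idea of pulling a frequency out of a short signal segment can be illustrated with the simplest linear-prediction (Prony-type) relation; this is a toy stand-in for filter diagonalization, not the method itself. For a single-frequency signal s[n] = cos(w*n*dt + phi), the identity s[n+1] + s[n-1] = 2 cos(w*dt) s[n] holds exactly, so a least-squares fit over even a dozen points recovers w. The signal below is synthetic.

```python
import math

def extract_frequency(signal, dt):
    """Estimate a single frequency from a short signal segment via the
    linear-prediction relation s[n+1] + s[n-1] = 2 cos(w*dt) * s[n],
    solved for cos(w*dt) by least squares over the segment."""
    num = sum(signal[n] * (signal[n + 1] + signal[n - 1])
              for n in range(1, len(signal) - 1))
    den = sum(2.0 * signal[n] ** 2 for n in range(1, len(signal) - 1))
    return math.acos(num / den) / dt

# Short synthetic "correlation function" segment: only 12 samples
omega_true, dt = 2.4, 0.1
segment = [math.cos(omega_true * n * dt + 0.3) for n in range(12)]

omega_est = extract_frequency(segment, dt)
print(f"true = {omega_true}, estimated = {omega_est:.6f}")
```

Filter diagonalization generalizes this to many frequencies at once by diagonalizing a small matrix built from the signal, but the moral is the same: the frequency content is available from a very short time segment.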

Already in 1938, Evans and Warhurst [17] suggested that the Diels-Alder addition reaction of a diene with an olefin proceeds via a concerted mechanism. They pointed out the analogy between the delocalized electrons in the transition state for the reaction between butadiene and ethylene and the π electron system of benzene. They calculated the resonance stabilization of this transition state by the VB method earlier used by Pauling to calculate the resonance energy of benzene. They concluded that the extra aromatic stabilization of this transition state made the concerted route more favorable than a two-step process. In a subsequent paper [18], Evans used the Hückel MO theory to calculate the transition state energy of the same reaction and some others. These ideas essentially introduce a chemical reacting complex (reactants and products) as a two-state system. Dewar [42] later formulated a general principle for all pericyclic reactions (Evans' principle): thermal pericyclic reactions take place preferentially via aromatic transition states. Aromaticity was defined by the amount of resonance stabilization. Evans' principle connects the problem of thermal pericyclic reactions with that of aromaticity: any theory of aromaticity is also a theory of pericyclic reactions [43]. Evans' approach was more recently used to aid in finding conical intersections [44] (cf. Section VIII). [c.341]

For modelling conformational transitions and nonlinear dynamics of NA, a phenomenological approach is often used. This allows one not just to describe a phenomenon but also to understand the relationships between the basic physical properties of the system. There is a general algorithm for modelling in the frame of the phenomenological approach: determine the dominant motions of the system in the time interval of the process treated and then write [c.116]

Almost from the introduction of the CSP and CI-CSP methods, applications to large systems with realistic potential functions became possible. A general CSP code was written and is available. The applications of CSP so far include a study of electron photodetachment dynamics for I−(Ar)n, n = 2,..., 12, a study of the dynamics following electronic excitation of Ba(Ar)n for n = 10, n = 20, and a study of electronic excitation, cage effects, and vibrational dephasing and relaxation dynamics for I2 in I2(Rg)n, for Rg = Ar, Xe and n = 17, 47 (corresponding respectively to complete first and complete first two solvation layers around the iodine). The dynamics of Ba(Ar)n following the excitation of the Ba in this cluster involves nonadiabatic transitions, since there are three quasidegenerate p-states of the Ba atom, and the system is then governed by three potential energy surfaces corresponding to these states, with possible non-adiabatic transitions between them. The simulations of Ref. 45 thus went beyond simple CSP, and were in fact a three-configuration CI-CSP, with a separable term in the CI wavefunction for each adiabatic electronic state. This approach, while not a converged quantum treatment, is expected already to be more reliable than semiclassical surface hopping methods that treat nuclear motions classically. Comparison of the CI-CSP with semiclassical surface [c.372]

This basic LFER approach has later been extended to the more general concept of fragmentation. Molecules are dissected into substructures, and each substructure is seen to contribute a constant increment to the free-energy-based property. The promise of strict linearity does not hold true in most cases, so corrections have to be applied in the majority of methods based on a fragmentation approach. Correction terms are often related to long-range interactions such as resonance or steric effects. [c.489]
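
The additive scheme above can be sketched directly: a property is the sum of fragment increments plus any applicable correction terms. The increments, the correction value, and the fragmentation of the example molecule below are all invented for illustration; they are not values from any published scheme.

```python
# Hypothetical fragment increments for a free-energy-based property (logP-like);
# values are illustrative only, not a published parameterization.
fragment_increments = {"CH3": 0.55, "CH2": 0.49, "OH": -1.12, "C6H5": 1.90}
corrections = {"intramolecular_H_bond": 0.60}   # long-range interaction correction

def predict(fragments, applied_corrections=()):
    """Property = sum of (increment * count) over fragments
    plus any applicable correction terms."""
    value = sum(fragment_increments[f] * count for f, count in fragments.items())
    value += sum(corrections[c] for c in applied_corrections)
    return value

# 1-propanol dissected as CH3 + 2*CH2 + OH (purely illustrative fragmentation)
logP = predict({"CH3": 1, "CH2": 2, "OH": 1})
print(f"predicted logP = {logP:.2f}")
```

The correction terms are exactly where the "strict linearity does not hold" caveat enters: without them, a purely additive sum misses resonance and steric interactions between fragments.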

Commercial implementations of this general approach are ACD/I-Lab [36], SpecInfo (Chemical Concepts) [37], WINNMR (Bruker), and KnowItAll (Bio-Rad) [38]. Figure 10.2-3 shows the workspace generated by ACD/I-Lab after predicting a ¹H NMR spectrum. ACD calculations are currently based on over 1 200 000 experimental chemical shifts and 320 000 experimental coupling constants [36]. [c.522]

Then, in 1960, Corey introduced a general methodology for planning organic syntheses. Corey's synthon concept [21-25] was a downright change in the perception of an organic synthesis. The synthesis plan for a target molecule is developed by starting with the target structure (the product of the synthesis) and working backwards to available starting materials. The retrosynthetic analysis or disconnection of the target molecule in the reverse direction is performed by the systematic use of analytical rules which have been formulated by Corey. For the example of tropinone, this is shown in Figure 10.3-29. Corey's approach is nowadays widely accepted as the disconnection approach and is taught in a number of textbooks (e.g., Ref. [26]). [c.569]

A somewhat related method is the charge equilibration method of Rappe and Goddard [Rappe and Goddard 1991]. This is employed in the Universal Force Field (UFF) [Rappe et al. 1992] as a general method for calculating charge distributions over a very wide range of molecules (in principle, the entire periodic table). An additional feature of the method is that the charges are dependent upon the molecular geometry and so can change during the course of a calculation such as a molecular dynamics simulation. The starting point for this approach is a series expansion of the energy of an isolated atom in terms of the charge [c.211]
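
The smallest instance of charge equilibration is a two-atom neutral molecule: expand the energy to second order in the charges and minimize subject to charge conservation. The electronegativity and hardness numbers below are hypothetical (not UFF's published parameters), and the geometry dependence of the off-diagonal Coulomb term is frozen for simplicity.

```python
# Two-atom charge equilibration sketch: minimize
#   E(q1, q2) = chi1*q1 + chi2*q2 + 0.5*J11*q1**2 + 0.5*J22*q2**2 + J12*q1*q2
# subject to q1 + q2 = 0 (neutral molecule).
chi1, chi2 = 5.0, 8.0            # atomic electronegativities (eV), hypothetical
J11, J22, J12 = 10.0, 14.0, 4.0  # hardness / Coulomb terms (eV), hypothetical

# Substituting q2 = -q1 makes E quadratic in q1; dE/dq1 = 0 gives:
q1 = (chi2 - chi1) / (J11 + J22 - 2.0 * J12)
q2 = -q1
# Atom 2, with the larger electronegativity, acquires the negative charge.
print(f"q1 = {q1:+.4f}, q2 = {q2:+.4f}")
```

For N atoms the same minimization becomes an (N+1)-dimensional linear system (charges plus a Lagrange multiplier for total charge), and because J12-type terms depend on interatomic distance, the solved charges change with geometry, which is the feature noted in the text.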
