Recent approaches


From the viewpoint of AI and cognitive science, the theoretical question underlying knowledge-based technology is how to model expert problem-solving behavior. The predominant models of representation and reasoning in AI are logic, rules, and objects. More recent approaches to knowledge-based systems include task-specific models. The emphasis here is on applying the theory, i.e., how do the problem-solving models and the representation concepts apply to real-world problems? The representation and reasoning techniques are described independently of any specific development tool, and illustrated using examples from chemical engineering wherever possible.  [c.532]

Significant recent approaches to chemical reactor network synthesis can be classified into two categories, viz. superstructure optimization and network targeting. In the former, a superstructure is postulated and an optimal sub-network within it is then identified so as to maximize a performance index (Kokossis and Floudas, 1990).  [c.281]

The steric constant Es and related quantities do not constitute the only approach to the study of steric effects on reactivity. Steric strain energy calculations and topological indices are more recent approaches. Qualitative concepts have been  [c.343]

One of the more recent approaches for designing xc functionals is based on inverting eq. (6.7). An accurate electron density may be calculated by advanced wave  [c.181]

By choosing the appropriate n value, the amount of adsorption predicted at large P/P⁰ values is reduced, and a better fit to data can usually be obtained. Kiselev and coworkers have proposed an equation paralleling Eq. XVII-SS, but one that reduces to the BET equation if the lateral interaction parameter is zero [18]. There are other modifications that also give better fits to data but introduce one or more additional parameters. Young and Crowell [47a] and Gregg and Sing [48] may be referred to for a more detailed summary of such modifications; see Sircar [49] for a recent one. One interesting example is the following. If the adsorbed film is not quite liquidlike so that Oi/bi bQ, its effective vapor pressure will be P rather than P (note Section X-6C), and the effect is to multiply x by a factor k [50]. Redhead [51] discusses a composite model that approaches the BET equation at one limit and the Frenkel-Halsey-Hill one at another (see Section XVII-7B).  [c.622]
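The modified equation cited in the excerpt is not reproduced here, so the sketch below uses the widely quoted finite-layer form of the BET isotherm (assumed, not taken from the text) simply to illustrate how capping the number of layers at n lowers the predicted uptake at high relative pressure x = P/P0:

```python
import numpy as np

def bet(x, c):
    """Unrestricted BET isotherm, v/v_m as a function of x = P/P0."""
    return c * x / ((1.0 - x) * (1.0 - x + c * x))

def bet_n_layers(x, c, n):
    """Finite-layer BET form: limiting the stack to n layers reduces the
    predicted adsorption at high relative pressure; it tends to bet() as
    n grows large and to the Langmuir isotherm for n = 1."""
    num = c * x * (1.0 - (n + 1) * x**n + n * x**(n + 1))
    den = (1.0 - x) * (1.0 + (c - 1.0) * x - c * x**(n + 1))
    return num / den

x = np.linspace(0.05, 0.9, 5)          # relative pressures (illustrative)
print(bet(x, c=100.0))
print(bet_n_layers(x, c=100.0, n=3))   # noticeably smaller near x = 0.9
```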

Not only has there been great progress in making femtosecond pulses in recent years, but progress has also been made in the shaping of these pulses, that is, giving each component frequency any desired amplitude and phase. Given the great experimental progress in shaping and sequencing femtosecond pulses, the inexorable question is: how is it possible to take advantage of this wide range of coherent excitations to bring about selective and energetically efficient photochemical reactions? Many intuitive approaches to laser selective chemistry have been tried since 1980. Most of these approaches have focused on depositing energy in a sustained manner, using monochromatic radiation, into a particular state or mode of the molecule. Virtually all such schemes have failed, due to rapid intramolecular energy redistribution.  [c.268]

The various approaches to laser control of chemical reactions have been discussed in detail in several recent reviews [64, 65].  [c.269]

For a recent critical evaluation of situations where current DFT approaches experience difficulties, see Davidson E R 1998 How robust is present-day DFT?  [c.2199]

Abstract. Geometric integrators are numerical timestepping schemes which preserve invariant structures associated to physical dynamical systems. For example, a symplectic integrator is one which preserves a strong differential invariant of the flows of Hamiltonian systems (the 2-form dq ∧ dp associated with the canonical variables (q, p)). For constrained systems such as the rigid body, preservation of geometric phase-flow structure is complicated by the choice of coordinates and the need for efficiency. Nowhere are these issues more critical than in the simulation of rigid body systems. In recent work, several alternative geometric approaches to rigid-body integrators have been proposed and applied in molecular simulation. In this article, these methods are introduced and compared with a simple model problem.  [c.349]
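As a minimal illustration of the symplectic integrators mentioned above (a generic Störmer-Verlet sketch, not the rigid-body schemes of the article), the following step preserves the 2-form dq ∧ dp exactly for a separable Hamiltonian; the pendulum used here is a hypothetical stand-in for the model problem:

```python
import numpy as np

def verlet_step(q, p, force, dt, m=1.0):
    """One Stormer-Verlet (leapfrog) step for H(q, p) = p^2/2m + V(q).
    The map (q, p) -> (q_new, p_new) is symplectic, so it preserves the
    2-form dq ^ dp and keeps the energy error bounded over long times."""
    p_half = p + 0.5 * dt * force(q)            # half kick
    q_new = q + dt * p_half / m                 # drift
    p_new = p_half + 0.5 * dt * force(q_new)    # half kick
    return q_new, p_new

# Toy Hamiltonian system: a pendulum with V(q) = -cos(q), F(q) = -sin(q)
force = lambda q: -np.sin(q)
q, p = 1.0, 0.0
for _ in range(10_000):
    q, p = verlet_step(q, p, force, dt=0.05)
print(0.5 * p**2 - np.cos(q))   # stays close to the initial energy -cos(1.0)
```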

The first chapter, on Conformational Dynamics, includes discussion of several rather recent computational approaches to treat the dominant slow modes of molecular dynamical systems. In the first paper, SCHULTEN and his group review the new field of steered molecular dynamics (SMD), in which large external forces are applied in order to study the unbinding of ligands and conformational changes on time scales accessible to MD  [c.497]
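A minimal sketch of the SMD idea described above (not the group's code): a harmonic restraint whose anchor moves at constant velocity adds a time-dependent external force to the dynamics; the double-well "ligand" coordinate, spring constant and pulling speed are all hypothetical, and full MD is replaced here by a crude overdamped step to keep the sketch short.

```python
import numpy as np

def pulling_force(x, t, k_spring=5.0, v_pull=0.2, anchor0=-1.0):
    """External SMD-style force: a harmonic spring to an anchor point that
    moves with constant velocity v_pull."""
    return k_spring * (anchor0 + v_pull * t - x)

def system_force(x):
    """Toy internal force from a double-well potential V(x) = (x^2 - 1)^2."""
    return -4.0 * x * (x**2 - 1.0)

# Crude overdamped dynamics (stands in for full MD in this illustration)
x, dt, mobility = -1.0, 0.005, 1.0
for step in range(4000):
    t = step * dt
    x += mobility * (system_force(x) + pulling_force(x, t)) * dt
print(x)   # the moving restraint has dragged x over the barrier at x = 0
```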

Besides simple data presentation through computer graphics methods, a new research area has been established within recent years which is particularly well suited to handling the requirements of data mining - visual data mining [21]. The functions of visual data mining range from visualization and analysis of results from classical data mining approaches to new methods that allow a complete visual exploration of the raw data and are thus an alternative to classical data mining methods.  [c.475]

The synthesis design program WODCA has been under development by J. Gasteiger's group since 1990. Previously, synthesis planning was an integral part of the program system EROS [49-52], which is now focused on reaction prediction. However, as the advancement and refinement of both approaches became more and more sophisticated, it no longer seemed reasonable to treat synthesis planning and reaction prediction in one system. Therefore, around 1990 the decision was made to handle the two problems in separate systems. Initially, WODCA was developed for planning the syntheses of individual target compounds. Work in recent years now also enables its use in designing entire libraries of compounds [53].  [c.577]

Table 10.4-5. Recent successes of structure-based virtual screening approaches.
As pointed out above, aromatic reactivity depends, at least in part, on the way in which the π-electron energies of the molecules change between the ground state and the transition state. The last equation gives a measure of this change over the early part of the reaction, where the molecule is not too seriously distorted from its ground state. As the electrophile approaches the site of reaction, q_r reflects to a first approximation the change in the π-energy of the aromatic, and because δα_r is negative, reaction is favoured by high values of q_r. On the closer approach of the reagent the term involving π_r,r assumes importance. The effect of these terms together can only be assessed by using arbitrary values of δα_r in the last equation. Higher polarizability terms become more important as the electrophile approaches even more closely, but their calculation is lengthy.  [c.131]
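The "last equation" is not reproduced in this excerpt; assuming it has the usual first- plus second-order perturbation form δE_π ≈ q_r δα_r + ½ π_r,r (δα_r)², the interplay described above can be tabulated for arbitrary δα_r values (all numbers below are illustrative, not computed for any real aromatic):

```python
def delta_E_pi(q_r, pi_rr, d_alpha):
    """Assumed first- plus second-order change in pi-electron energy at
    position r for a perturbation d_alpha of the Coulomb integral."""
    return q_r * d_alpha + 0.5 * pi_rr * d_alpha**2

# Compare two hypothetical ring positions as the electrophile approaches
# (d_alpha grows in magnitude and the polarizability term gains weight).
for q_r, pi_rr in [(1.00, -0.40), (0.95, -0.48)]:
    row = [round(delta_E_pi(q_r, pi_rr, d), 3) for d in (-0.5, -1.0, -2.0)]
    print(q_r, pi_rr, row)
```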

Following the Cahn-Ingold-Prelog priority rules, the prochirality symbols pro-R and pro-S are assigned to the enantiotopic ligands according to the chirality of the considered product. The prochirality symbols Re and Si are assigned to the enantiofaces of trigonal prochiral centres according to the clockwise or anti-clockwise sequence of the three ligands when viewed at the enantioface as though from an approaching reagent (D. Seebach, 1982).  [c.359]

In one study, analytical chemists were asked to evaluate a data set consisting of a normal calibration curve, three samples of different size but drawn from the same source, and an analyte-free sample (Table 5.3). At least four different approaches for correcting the signals were used by the participants: (1) ignore the correction entirely, which clearly is incorrect; (2) use the y-intercept of the calibration curve as a calibration blank, CB; (3) use the analyte-free sample as a reagent blank, RB; and (4) use both the calibration and reagent blanks. Equations for calculating the concentration of analyte using each approach are shown in Table 5.4, along with the resulting concentration for the analyte in each of the three samples.  [c.128]
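Table 5.4 itself is not reproduced here; the sketch below shows one plausible reading of the four correction strategies for a linear calibration S = kC + intercept, with entirely hypothetical numbers:

```python
import numpy as np

# Hypothetical calibration data (signal vs. concentration of standards)
conc_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
sig_std = np.array([0.05, 1.08, 2.02, 4.10, 8.01])
k, intercept = np.polyfit(conc_std, sig_std, 1)   # slope k and y-intercept

S_samp = 3.20   # sample signal (hypothetical)
RB = 0.06       # signal of the analyte-free sample -> reagent blank
CB = intercept  # y-intercept of the calibration curve -> calibration blank

corrections = {
    "(1) no correction": 0.0,
    "(2) calibration blank": CB,
    "(3) reagent blank": RB,
    "(4) both blanks": CB + RB,
}
for label, corr in corrections.items():
    print(label, (S_samp - corr) / k)   # apparent analyte concentration
```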

The strongest evidence that stratospheric depletion is occurring comes from the discovery of the Antarctic ozone hole. In recent years during the spring, depletions of 60% integrated over all altitudes and 95% in some layers have been observed over Antarctica. During winter in the southern hemisphere, a polar vortex develops that prevents the air from outside of the vortex from mixing with air inside the vortex. The depletion begins in August, as the approaching spring sun penetrates into the polar atmosphere, and extends into October. When the hole was first observed, existing chemical models could not account for the rapid loss, but attention was soon focused on stable reservoir species for chlorine. These compounds, namely HCl and ClNO3, are formed in competing reactions involving Cl and ClO that temporarily or permanently remove Cl and ClO from participating in the O3 destruction reactions. For example.  [c.380]

The creation of liquids to be used as fuels from sources other than natural crude petroleum (qv) broadly defines synthetic liquid fuels. Hence, fuel liquids prepared from naturally occurring bitumen deposits qualify as synthetics, even though these sources are natural liquids. Synthetic liquid fuels have characteristics approaching those of the liquid fuels in commerce, specifically gasoline, kerosene, jet fuel, and fuel oil (see Aviation and other gas turbine fuels; Gasoline and other motor fuels). For much of the twentieth century, the synthetic fuels emphasis was on liquid products derived from coal (qv) upgrading or by extraction or hydrogenation of organic matter in coke liquids, coal tars, tar sands (qv), or bitumen deposits. More recently, however, much of the direction involving synthetic fuels technology has changed. There are two reasons.  [c.78]

Oxides of nitrogen, NOx, can also form. These are generally at low levels and at too low an oxidation state to consider water scrubbing. A basic reagent picks up the NO2, but not the lower oxidation states; the principal oxide is usually NO, not NO2. Generally, control of NOx is achieved by control of the combustion process to minimize NOx, i.e., avoidance of high temperatures in combination with high oxidant concentrations; if abatement is required, various approaches specific to NOx have been employed. Examples are NH3 injection and catalytic abatement (43).  [c.58]

Kinetic measurements are studies of the rates at which chemical reactions occur. Generally, these studies involve preparing a chemical system using reagent concentrations different from the equilibrium values and then monitoring the concentration changes as the system approaches equilibrium, although other, less direct strategies are sometimes exploited. Chemical kinetic data are used in materials science, biochemistry and molecular biology, earth and atmospheric science, and many branches of engineering. Related concepts appear in nuclear physics, but the presuppositions and methods there are different.  [c.507]
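For example, for a reaction that relaxes to equilibrium with first-order kinetics, C(t) = C_eq + (C_0 - C_eq) exp(-kt), the rate constant can be recovered from the monitored concentrations by a log-linear fit; a minimal sketch with synthetic data (not from any cited study):

```python
import numpy as np

# Synthetic concentration-time data for a first-order approach to equilibrium
k_true, C0, Ceq = 0.35, 1.00, 0.20
t = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(0)
C = Ceq + (C0 - Ceq) * np.exp(-k_true * t) + rng.normal(0.0, 0.005, t.size)

# ln(C - Ceq) = ln(C0 - Ceq) - k t, so the slope of a straight-line fit is -k
mask = C > Ceq
slope, _ = np.polyfit(t[mask], np.log(C[mask] - Ceq), 1)
print("fitted rate constant:", -slope)   # close to 0.35
```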

Interest in developing imitation synthetic cheese gained prominence in the middle 1960s. One such product is prepared from specific amounts of natural cheese, pregelatinized starch, a high-protein binding agent (preferably soybean), water, and sugar (17). The mixture is heated to 90°C and extruded into the form of small strands. More recent approaches have been to combine caseinate and soy protein (enriched in the glycinin fraction) to make imitation processed cheeses. An imitation cheese spread based on soy protein and caseinate consists of margarine, 37.38 wt %; sodium caseinate, 5.58 wt %; maltrin dextrin, 5.50 wt %; modified food starch, 3.52 wt %; gelatin, 2.46 wt %; soy isolate, 2.28 wt %; whey, sweet, 1.66 wt %; acid blend, 0.86 wt % (lactic acid (50 wt %), 96.33; citric acid, dry (50 wt %), 3.60; acetic acid, conc (50 wt %), 0.07); emulsifier, 0.17 wt %; salt, 0.08 wt %; and water, 40.51 wt % (16).  [c.446]

Coming as it does on the heels of a rather weighty pedagogical tome, this last chapter is intended to be more of a freewheeling mixture of pedagogy - including discussions of various measures of complexity and recent approaches to reformulating conventional field theory using somewhat less-than-conventional cellular-automata-like premises - and outright fanciful speculation - not all of which is relegated to the last section, entitled, appropriately enough, Some Final Musings... And a Glimpse of a New Cosmogony. While we do not expect the reader to necessarily agree with, or even be amused by, all of our musings, there should be little doubt but that cellular automata are powerful catalysts for the synthesis of interesting ideas. For cellular automata are nothing if not powerful catalysts...  [c.605]

Determination of a PES from spectroscopic data generally requires fitting a parameterized surface to the observed energy levels together with theoretical and other experimental data. This is a difficult process because it is not easy to devise realistic functional representations of a PES with parameters that are not strongly correlated, and because calculation of the vibrational and rotational energy levels from a PES is not straightforward and is an area of current research. The former issue will be discussed further in section A1.5.5.3. The approaches available for the latter currently include numerical integration of a truncated set of close-coupled equations, methods based on the discrete variable representation, and diffusion Monte Carlo techniques [28]. Some early and fine examples of potential energy surfaces determined in this manner include the H2-rare gas surfaces of LeRoy and coworkers [97, 98 and 99], and the hydrogen halide-rare gas potential energy surfaces of Hutson [100, 101 and 102]. More recent work is reviewed by van der Avoird et al [103].  [c.201]
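As a toy version of the fitting step only (none of the close-coupling, DVR or diffusion Monte Carlo machinery), one can least-squares fit an assumed functional form to "observed" levels; here a Morse-type level expression, E_v = we(v+1/2) - wexe(v+1/2)^2, which is linear in its two parameters, is fitted to synthetic data:

```python
import numpy as np

# Synthetic "observed" vibrational term values for a hypothetical molecule
we_true, wexe_true = 2200.0, 43.0                 # cm^-1
v = np.arange(6)
x = v + 0.5
rng = np.random.default_rng(1)
E_obs = we_true * x - wexe_true * x**2 + rng.normal(0.0, 0.5, v.size)

# E_v is linear in (we, wexe), so the fit is a linear least-squares problem
A = np.column_stack([x, -x**2])
(we_fit, wexe_fit), *_ = np.linalg.lstsq(A, E_obs, rcond=None)
print(we_fit, wexe_fit)   # recovered parameters, close to 2200 and 43
```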

SEM with low acceleration voltage (1-10 kV) (LVSEM) can be applied without metal coating of the sample, e.g. for quality control purposes in the semiconductor industry, or to image ceramics, polymers or dry biological samples. The energy of the beam electrons (the acceleration voltage) should be selected so that charge neutrality is approached, i.e. the amount of energy that enters the sample also leaves the sample in the form of SE and BSE. Modern SEM instruments equipped with FEGs provide an adequate spot size, although the spot size increases with decreasing acceleration voltage. The recent implementation of a cathode lens system [47] with very low aberration coefficients will allow the surfaces of non-metal-coated samples to be imaged at beam energies of only a few electronvolts without sacrificing spot size. New contrast mechanisms and new experimental possibilities can be expected.  [c.1642]

In this chapter we review some of the most important developments in recent years in connection with the use of optical techniques for the characterization of surfaces. We start with an overview of the different approaches available to the use of IR spectroscopy. Next, we briefly introduce some new optical characterization methods that rely on the use of lasers, including nonlinear spectroscopies. The following section addresses the use of x-rays for diffraction studies aimed at structural determinations. Lastly, passing reference is made to other optical techniques such as ellipsometry and NMR, and to spectroscopies that only partly depend on photons.  [c.1780]

The complications which occur with bifurcation, i.e. when more than one product arrangement is accessible, can be solved by various methods. Historically, the first close-coupling approaches for multiple product channels employed fitting procedures [ ], where the close-coupling equations are simultaneously propagated from each of the asymptotes inwards and then are fitted together at a dividing surface. This approach has been replaced in recent calculations by two methods. One is based on using absorbing potentials to turn the reactive problem into an inelastic one, as explained later. The other is to use hyperspherical coordinates for carrying out the close-coupling propagation [41, 42, 43, 44 and 45]. The hyperspherical coordinates consist of a single radius ρ, which is zero at the origin (when all nuclei are stuck together) and increases outwards, and a set of angles. For the collinear problem as well as the atom-diatom problem (involving three independent distances) the hyperspherical coordinates are typically just the regular spherical coordinates. Close-coupling propagation starts at ρ = 0 and moves outward until a large value of ρ is reached. When the asymptotes are reached, one fits the wavefunction to have the form of equation (B3.4.4) and thus obtains the scattering matrix.  [c.2297]

An alternative benchmark approach for handling the R-T effect in triatomic molecules has been developed by Jensen and Bunker (JB) and coworkers. It is based on the use of the MORBID Hamiltonian [66,67,102], a very sophisticated variant of the above-described approaches that handle the bending motion in a different way from their stretching counterparts. This method is described in great detail in a recent book [2], so we restrict ourselves here to a brief comment. It might look anachronistic (this approach postdates that of HCR) to develop a very ambitious approach not employing probably the most appropriate general Hamiltonian. A justification is given by JB in their book [15]: However, one disadvantage (... of approaches like HC's...) is the fact that in practice, many (if not most) interactions between molecular basis states are weak and could be successfully treated by perturbation theory in the form of a contact transformation. In the variational approaches, these weak interactions are treated by direct matrix diagonalization at a high cost of computer time and memory. We cannot judge whether this sentence is relevant in the case of triatomics, but it certainly gains weight when larger molecules are to be handled.  [c.515]

Abstract. Protein-ligand interactions control a majority of cellular processes and are the basis of many drug therapies. First, this paper summarizes experimental approaches used to characterize the interactions between proteins and small molecules: equilibrium measurement of the binding constant and standard free energy of binding, and the dynamic approach of ligand extraction via atomic force microscopy. Next, the paper reviews ideas about the origin of different component terms that contribute to the stability of protein-ligand complexes. Then, theoretical approaches to studying protein-small molecule interactions are addressed, including forced extraction of the ligand and perturbation methods for calculating potentials of mean force and free energies for molecular transformation. Last, these approaches are illustrated with several recent studies from our laboratory: (1) binding of water in cavities inside proteins, (2) calculation of binding free energy from first principles by a new application of molecular transformation, and (3) extraction of a small ligand (xenon) from a hydrophobic cavity in mutant T4-lysozyme L99A.  [c.129]
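One standard perturbation method of the kind referred to, free energy perturbation, estimates the free energy change between two potentials from samples of the reference state via ΔF = -kT ln <exp(-ΔU/kT)>_0; a minimal one-dimensional sketch (a toy system, not the laboratory's calculations):

```python
import numpy as np

kT = 1.0
U0 = lambda x: 0.5 * x**2               # reference potential
U1 = lambda x: 0.5 * x**2 + 0.3 * x     # "transformed" potential (toy)

# Sample x from the Boltzmann distribution of U0 (a Gaussian of variance kT)
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(kT), 200_000)

dU = U1(x) - U0(x)
dF = -kT * np.log(np.mean(np.exp(-dU / kT)))   # exponential (Zwanzig) average
print(dF)   # analytic result for this toy case: -0.3**2 / (2*kT) = -0.045
```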

T. Schlick. Modeling superhelical DNA: Recent analytical and dynamic approaches. Curr. Opin. Struct. Biol., 5:245-262, 1995.  [c.260]

Abstract. It was revealed that the QCMD model is of canonical Hamiltonian form with symplectic structure, which implies the conservation of energy. An efficient and reliable integrator for transferring these properties to the discrete solution is the symplectic and explicit Pickaback algorithm. The only drawback of this kind of integrator is the small stepsize in time induced by the splitting techniques used to discretize the quantum evolution operator. Recent investigations concerning Krylov iteration techniques result in alternative approaches which overcome this difficulty for a wide range of problems. By using iterative methods in the evaluation of the quantum time propagator, these techniques allow the stepsize to adapt to the classical motion and the coupling between the classical and the quantum mechanical subsystems. This yields a drastic reduction of the numerical effort. The pros and cons of both approaches as well as suitable applications are discussed in the last part.  [c.396]
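A minimal sketch of the Krylov idea mentioned above (not the Pickaback code itself): approximate the action of the quantum propagator exp(-iHΔt) on a state in a small Lanczos subspace, so the stepsize is no longer dictated by the operator splitting. The dense random Hermitian H below is purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_propagate(H, psi, dt, m=12):
    """Approximate exp(-1j*H*dt) @ psi in an m-dimensional Krylov subspace
    built by the Lanczos recurrence (H must be Hermitian)."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    coeff = expm(-1j * dt * T)[:, 0]          # subspace propagator acting on e_1
    return np.linalg.norm(psi) * (V @ coeff)

# Illustrative Hermitian "Hamiltonian" and normalized state
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200))
H = 0.5 * (A + A.conj().T)
psi = rng.normal(size=200).astype(complex)
psi /= np.linalg.norm(psi)

err = np.linalg.norm(lanczos_propagate(H, psi, 0.1) - expm(-1j * 0.1 * H) @ psi)
print(err)   # small for modest dt and subspace size m
```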

Pre-processing concepts are a more recent development in substructure searching systems. These approaches have become popular since the mid-1980s, when the cost of storage devices (hard disks and CD-ROMs) decreased.  [c.298]

The Rekker approach is still used with revised Σf systems, e.g., in the software program Σf-SYBYL [8]. Over recent decades, various other substructure-based approaches have been developed that are mostly implemented and available as computer programs.  [c.493]
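The fragmental idea behind such substructure-based approaches can be sketched in a few lines as an additive sum of substructure contributions, log P ≈ Σ n_i f_i; the fragment values below are placeholders, not Rekker's published constants:

```python
# Placeholder fragment constants (NOT Rekker's published values)
f = {"CH3": 0.70, "CH2": 0.53, "OH": -1.44}

def logp_estimate(fragment_counts):
    """Additive fragment scheme: logP is approximated by sum(n_i * f_i)."""
    return sum(n * f[name] for name, n in fragment_counts.items())

# 1-butanol decomposed as CH3 + 3 x CH2 + OH
print(logp_estimate({"CH3": 1, "CH2": 3, "OH": 1}))
```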

In recent decades, much attention has been paid to the application of artificial neural networks as a tool for spectral interpretation (see, e.g., Refs. [104, 105]). The ANN approach applied to vibrational spectra allows the determination of the functional groups that may be present in the sample, as well as the complete interpretation of spectra. Elyashberg [106] reported an overall prediction accuracy using ANN of about 80% for general-purpose approaches. Klawun and Wilkins managed to increase this value to about 95% [107].  [c.536]

A clear conclusion from such comparative studies is that density functional methods using gradient-corrected functionals can give results for a wide variety of properties that are competitive with, and in some cases superior to, ab initio calculations using correlation (e.g. MP2). Gradient-corrected functionals are required for the calculation of relative conformational energies and the study of intermolecular systems, particularly those involving hydrogen bonding [Sim et al. 1992]. As is the case with the ab initio methods, the choice of basis set is also important in determining the results. By keeping the basis set constant (6-31G being a popular choice) it is possible to make objective comparisons. Four examples of such comparative studies are those of Johnson and colleagues, who considered small neutral molecules [Johnson et al. 1993]; St-Amant et al., who examined small organic molecules [St-Amant et al. 1995]; Stephens et al., who performed a detailed study of the absorption and circular dichroism spectra of 4-methyl-2-oxetanone [Stephens et al. 1994]; and Frisch et al., who compared a variety of density functional methods with one another and with traditional ab initio approaches [Frisch et al. 1996]. The evolution of defined sets of data such as those associated with the Gaussian-n series of models has also acted as a spur to those involved in developing density functional methods. For example, much of Becke's work on gradient corrections and on mixed Hartree-Fock/density functional methods was evaluated using data sets originally collated for the Gaussian-1 and Gaussian-2 methods. A more recent example is a variant of the Gaussian-3 method which uses B3LYP to determine geometries and zero-point energies [Baboul et al. 1999].  [c.157]

Chemometrics: Statistics and Computer Application in Analytical Chemistry. New York, Wiley-VCH. Spellmeyer D C and P D J Grootenhuis 1999. Recent Developments in Molecular Diversity: Computational Approaches to Combinatorial Chemistry. Annual Reports in Medicinal Chemistry 187-296.  [c.736]

However, the electronic theory also lays stress upon substitution being a developing process, and by adding to its description of the polarization of aromatic molecules means for describing their polarizability by an approaching reagent, it moves towards a transition state theory of reactivity. These means are the electromeric and inductomeric effects.  [c.127]

In principle, it should be possible to derive the value of k for any method by considering the chemical and physical processes responsible for the signal. Unfortunately, such calculations are often of limited utility due either to an insufficiently developed theoretical model of the physical processes or to nonideal chemical behavior. In such situations the value of k must be determined experimentally by analyzing one or more standard solutions containing known amounts of analyte. In this section we consider several approaches for determining the value of k. For simplicity we will assume that Sreag has been accounted for by a proper reagent blank, allowing us to replace Smeas in equations 5.1 and 5.2 with the signal for the species being measured.  [c.106]
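For example, with a single external standard (and the reagent blank already subtracted, as assumed above), k follows directly from S_std = k C_std and is then applied to the sample signal; a minimal sketch with hypothetical numbers:

```python
# Single-standard determination of the sensitivity k (hypothetical numbers)
C_std, S_std = 10.0, 24.8      # standard concentration and its corrected signal
k = S_std / C_std              # S = k*C  =>  k = S/C

S_samp = 17.3                  # corrected signal for the sample
print("k =", k, " C_A =", S_samp / k)
```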

Conventional methods of microfabrication of integrated circuits and devices constitute the reductive (top-down) approach to the construction of micron- and submicron-scale structures (16). At the present time, the smallest features in commercial integrated circuits measure 0.35 μm across, and technologies for further reduction in size have succeeded in forming feature sizes as small as 0.18 μm (15,81). Further miniaturization, however, will require major technological breakthroughs in the processes underlying microfabrication, especially photolithography, the heart of microfabrication. The breakthroughs must not only allow further reductions in the size of the smallest features, but also be economically feasible to implement as manufacturing practices. Recent history of the semiconductor industry shows that, as the limits of a particular manufacturing technology are approached and surpassed, the economic consequences have been a spectacular increase in the cost of building new factories. At present, building a new silicon chip fabrication facility needs a capital layout of about $1.5 billion. Although the size of the smallest possible features on the chip has shrunk by 14% every year, the price of  [c.202]

There are distinct performance differences between garments that are resistant to flame alone and those that are resistant to both heat buildup and flammability. Appropriate tests have been devised that measure the thermal protective performance of fabrics when exposed to a radiant heat source. Sophisticated constructions against biohazards need further development to produce materials that are both thermally comfortable and impermeable to bloodborne pathogens and other deleterious microorganisms. The most promising approaches are laminated or coated fabrics that are permeable to vapor but impermeable to liquids. Statistics included in recent OSHA standards for protection against bloodborne pathogens (e.g., hepatitis and AIDS viruses) estimate that currently close to six million persons in the United States (health care workers and many other occupations) require protective clothing and other types of safeguards against these biohazards (43).  [c.73]

A review published in 1984 (79) discusses some of the methods employed for the determination of phenytoin in biological fluids, including thermal methods, spectrophotometry, luminescence techniques, polarography, immunoassay, and chromatographic methods. More recent and sophisticated approaches include positive and negative ion mass spectrometry (80), combined gas chromatography-mass spectrometry (81), and FTIR immunoassay (82).  [c.255]

Fluorescence Immunoassay. Basic FIA follows the same formats and approaches as EIA. The difference lies in the indicator: a fluorophore is used instead of an enzyme. This allows direct quantification of the indicator-antibody-antigen complex, or free indicator-reagent, without the need for a substrate.  [c.26]

