A Typical Implementation


Usually, when an isomorphism exists between Gq and a subgraph of Gp, scanning the entire tree is not necessary and the isomorphism is found at an early stage. Proving that Gq is not a substructure of Gp is a more time-consuming task, since it requires traversing all the mappings of the search tree. The backtracking approach is applied to many other tasks that require searching for a solution in a tree structure. This approach is typically implemented through the popular depth-first search algorithm [11] (Figure 6-4), where each node in the tree is expanded at the deepest level of the tree. Only when the search hits a dead end (no isomorphism is found) does the search go back and expand the nodes at the shallower levels.  [c.299]
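
As an illustration of this backtracking strategy, the short sketch below performs a depth-first search for a mapping of Gq onto a subgraph of Gp. It is a generic, minimal implementation written for this discussion (the graph representation, function names, and example graphs are assumptions), not the algorithm of reference [11] or Figure 6-4.

```python
# Minimal sketch of depth-first backtracking subgraph isomorphism search.
# Graphs are given as adjacency sets; everything here is illustrative only.
def subgraph_isomorphism(gq, gp):
    """Return a mapping {query_node: pattern_node} if Gq is isomorphic to a
    subgraph of Gp, otherwise None."""
    q_nodes = list(gq)

    def consistent(mapping, qn, pn):
        # Every already-mapped neighbour of qn must map onto a neighbour of pn.
        for q_nb in gq[qn]:
            if q_nb in mapping and mapping[q_nb] not in gp[pn]:
                return False
        return True

    def extend(mapping):
        if len(mapping) == len(q_nodes):          # all query nodes mapped: success
            return dict(mapping)
        qn = q_nodes[len(mapping)]                # expand the next node at the deepest level
        for pn in gp:                             # try every candidate node of Gp
            if pn not in mapping.values() and consistent(mapping, qn, pn):
                mapping[qn] = pn
                result = extend(mapping)
                if result:                        # isomorphism found at an early stage: stop
                    return result
                del mapping[qn]                   # dead end: backtrack to a shallower level
        return None                               # this branch is exhausted

    return extend({})

# Example: a triangle query graph inside a square-with-diagonal pattern graph.
gq = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
gp = {"a": {"b", "d"}, "b": {"a", "c", "d"}, "c": {"b", "d"}, "d": {"a", "b", "c"}}
print(subgraph_isomorphism(gq, gp))
```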

This discussion focuses on the individual components of a typical molecular mechanics force field. It illustrates the mathematical functions used, why those functions are chosen, and the circumstances under which the functions become poor approximations. Part 2 of this book, Theory and Methods, includes details on the implementation of the MM+, AMBER, BIO+, and OPLS force fields in HyperChem.  [c.22]

AES is often combined with surface ion sputtering using a high energy ion beam to achieve profiling of the elemental composition as a function of depth into the solid (3,19). This approach is typically implemented using a beam of inert gas ions accelerated to 200-5000 eV to sequentially sputter away the surface layer by layer. Simultaneously with this sputtering process, AES can be used to probe the elemental composition in the sputtering crater that is created.  [c.281]

The VLE for the system at hand may be simple and easily represented by an equation or, in some systems, may be so complex that they cannot be adequately measured or represented. Excellent treatises are available for selection and implementation of vapor-liquid equilibrium studies (11-14). Typical VLE for binary systems are shown graphically in Figure 1. Figure 1a is a representative boiling point diagram showing equilibrium compositions as functions of temperature at a constant pressure. The lower line is the liquid bubble point line, the locus of points at which a liquid on heating forms the first bubble of vapor. The upper line is the vapor dew point line, representing points at which a vapor on cooling forms the first drop of condensed liquid. The liquid and vapor compositions are conventionally plotted in terms of the low boiling (more volatile) substance in the mixture. The system point M has a vapor composition y in equilibrium with a liquid composition x at a temperature T. Figure 1b is a typical isobaric y-x phase diagram. For further discussion, several textbooks are available (15,16).  [c.155]
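
The boiling point diagram of Figure 1a can be illustrated with a minimal calculation. The sketch below finds isobaric bubble point temperatures and equilibrium vapor compositions for an assumed ideal binary mixture obeying Raoult's law; the Antoine constants are hypothetical placeholders, not data for any real system.

```python
# Minimal sketch of an isobaric T-x-y (boiling point) calculation for an ideal
# binary mixture using Raoult's law. The Antoine constants below are hypothetical
# placeholders, not data for any specific system.
P_TOTAL = 760.0  # mmHg, constant system pressure

def psat(T, A, B, C):
    """Antoine equation, log10(Psat/mmHg) = A - B/(T + C), with T in deg C."""
    return 10.0 ** (A - B / (T + C))

ANTOINE_1 = (7.0, 1200.0, 220.0)   # hypothetical, more volatile component
ANTOINE_2 = (7.0, 1450.0, 210.0)   # hypothetical, less volatile component

def bubble_point(x1, t_lo=0.0, t_hi=200.0, tol=1e-6):
    """Bisect for the temperature at which a liquid of composition x1 first boils."""
    def excess_pressure(T):
        return x1 * psat(T, *ANTOINE_1) + (1.0 - x1) * psat(T, *ANTOINE_2) - P_TOTAL
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if excess_pressure(t_mid) > 0.0:
            t_hi = t_mid          # too hot: total vapor pressure exceeds system pressure
        else:
            t_lo = t_mid
    T = 0.5 * (t_lo + t_hi)
    y1 = x1 * psat(T, *ANTOINE_1) / P_TOTAL   # equilibrium vapor composition (dew line point)
    return T, y1

for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    T, y1 = bubble_point(x1)
    print(f"x1 = {x1:4.2f}  T_bubble = {T:6.1f} C  y1 = {y1:5.3f}")
```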

The occurrence of errors gives rise to various consequences. The nature of the underlying causes needs to be fed back to policy makers so that remedial strategies can be implemented. A typical strategy will consist of applying existing resources to make changes that will improve human performance and therefore reduce error. This may involve interventions such as improved job design, procedures, or training, or changes in the organizational culture. These are shown by the arrows to the right of Figure 1.6. An additional (or alternative) strategy is to reduce the level of demands so that the nature of the job does not exceed the human capabilities and resources currently available to do it. An important aspect of optimizing demands is to ensure that appropriate allocation of function takes place, such that functions in which humans excel (e.g., problem solving, diagnosis) are assigned to the human, while those functions which are not performed well by people (e.g., long-term monitoring) are assigned to machines and/or computers.  [c.17]

Microcomputer Based Electronic Systems. The introduction of microelectronics and computers in energy control systems enabled the implementation of complex closed-loop control algorithms. A typical example of this type of control is an oven temperature control. Figure 3 depicts a diagram of temperature control in an electric oven. The oven temperature is measured by a sensor that gives an analog output. The sensor output signal is converted to a digital signal using an analog to digital converter and then transmitted to a computer that has a control program. The input temperature is compared with a  [c.299]
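
A minimal sketch of such a closed-loop temperature controller is given below. The first-order oven model, sampling period, and proportional gain are illustrative assumptions standing in for the real sensor, converter, and control program.

```python
# Minimal sketch of the digital temperature control loop described above, using a toy
# first-order oven model in place of the real sensor/ADC and heater hardware.
# All model parameters and the proportional gain are illustrative assumptions only.
SETPOINT_C = 180.0
AMBIENT_C = 20.0
KP = 0.08            # proportional gain, heater fraction per deg C of error
DT = 1.0             # sampling period, s
TAU = 300.0          # assumed oven time constant, s
HEATER_GAIN = 250.0  # assumed temperature rise above ambient at full power, deg C

def control_step(temperature):
    """Digital controller: compare the measured temperature with the setpoint."""
    error = SETPOINT_C - temperature
    return max(0.0, min(1.0, KP * error))   # clamp the heater command to 0..1

temperature = AMBIENT_C
for step in range(1800):                    # simulate 30 minutes of operation
    power = control_step(temperature)       # the 'computer' side of the loop
    # toy plant: first-order response toward ambient plus the heater contribution
    target = AMBIENT_C + HEATER_GAIN * power
    temperature += (target - temperature) * DT / TAU
    if step % 300 == 0:
        print(f"t = {step*DT:5.0f} s  T = {temperature:6.1f} C  power = {power:4.2f}")
```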

The goal of most DSM programs has been relatively narrow: to reduce energy and power demand to avoid investments in new power plants or transmission and distribution systems. DSM was used within the context of integrated resource planning to yield the lowest cost of energy services by avoiding more costly construction and operation of supply-side power plants. DSM was considered a resource comparable to and substitutable for supply-side resources (hence the name, integrated resource planning). Individual utilities have typically implemented DSM for their own customers, as ordered by public utility commissions or other regulatory bodies. All utility customers generally have shared the costs for DSM programs because regulators mandated such programs and consequently provided cost-recovery mechanisms for utilities.  [c.759]

The chip contains approximately 512 × 512 picture elements (pixels). The tissues being imaged are illuminated with light from a few of the fiber optic strands. Color images are produced by alternately illuminating with red, green, and blue light. Data from the CCD chip are therefore a series of red, green, blue, red, green, blue, etc., images that are processed to produce the color video. Endoscopes of this form typically have a camera system connected to an external TV monitor, a fiber optic light source, a tube for rinsing the camera lens with water, and a small tube for insertion of a needle or forceps device for collecting biopsy samples. This combination of implements fits into a flexible 1-cm tube.  [c.49]

Every developed nation has experienced product tampering incidents. The principal difference between domestic and foreign incidents is the motive of the tamperers. In the United States, tampering is typically random and occurs without prior threat, whereas outside the United States extortion precedes injury, with money appearing to be the primary motive. Most developed nations are either implementing or modifying their rules on the use of tamper-evident packaging. Some features as they are used in the United States would have to be modified, or the use of a secondary feature required, to meet the standards of various other countries.  [c.521]

Method Transfer. Method transfer involves the implementation of a method developed at another laboratory. Typically the method is prepared in an analytical R&D department and then transferred to quality control at the plant. Method transfer demonstrates that the test method, as run at the plant, provides results equivalent to those reported in R&D. A validated method containing documentation eases the transfer process by providing the recipient lab with detailed method instructions, accuracy and precision, limits of detection, quantitation, and linearity.  [c.369]

An audit must be exercised with great care lest it become a policing function. To optimize the effectiveness of the audit, proper techniques include the following (51): (1) initiation: the auditee should be given ample notice of the impending visit; (2) plan: the audit standard, such as GMP or ISO 9000, the scope, the schedule, etc., should be clearly identified; (3) implementation: factual information to document all observations should be carefully collected while objectivity is maintained and casting of blame avoided; (4) wrap-up: the auditee should be informed of any positive, as well as negative, findings immediately, and the audit results presented to senior management representatives at an exit meeting; (5) reporting: a written audit report should be available in a timely fashion, typically within two to four weeks; and (6) conclusion: an appropriate period of time, typically four to six weeks, should be  [c.371]

While process design and equipment specification are usually performed prior to the implementation of the process, optimization of operating conditions is carried out monthly, weekly, daily, hourly, or even every minute. Optimization of plant operations determines the set points for each unit at the temperatures, pressures, and flow rates that are the best in some sense. For example, the selection of the percentage of excess air in a process heater is quite critical and involves a balance on the fuel-air ratio to assure complete combustion and at the same time make the maximum use of the heating potential of the fuel. Typical day-to-day optimization in a plant minimizes steam consumption or cooling water consumption, optimizes the reflux ratio in a distillation column, or allocates raw materials on an economic basis [Latour, Hydro. Proc., 58(6), 73, 1979, and Hydro. Proc., 58(7), 219, 1979].  [c.742]
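
The excess-air balance mentioned above can be caricatured in a few lines. The loss model below is entirely made up for illustration (it is not a real heater correlation); it simply shows the trade-off between incomplete combustion at low excess air and stack losses at high excess air.

```python
# Made-up illustration of the excess-air trade-off: too little excess air leaves fuel
# unburned, too much carries sensible heat out of the stack. A real optimizer would
# use measured heater correlations rather than these invented coefficients.
def fuel_loss_fraction(excess_air_pct):
    unburned = 0.08 * max(0.0, 10.0 - excess_air_pct) ** 2 / 100.0  # incomplete combustion
    stack = 0.002 * excess_air_pct                                   # heat lost in flue gas
    return unburned + stack

best = min(range(0, 51), key=fuel_loss_fraction)   # coarse grid search over 0-50 % excess air
print(f"best excess air ~ {best} %, loss ~ {fuel_loss_fraction(best):.3f}")
```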

Mechanical Design and Implementation Issues. The choice of catalyst has a significant impact on the mechanical design and operation of the reactive column. The catalyst must allow the reaction to occur at reasonable rates at the relatively low temperatures and pressures common in distillation operations (typically less than 10 atmospheres and between 50°C and 250°C). Selection of a homogeneous catalyst, such as a high-boiling mineral acid, allows the use of more traditional tray designs and internals (albeit designed with allowance for high liquid holdups). With a homogeneous catalyst, lifetime is not a problem, as it is added (and withdrawn) continuously. Alternatively, heterogeneous solid catalysts require either complicated mechanical means for continuous replenishment or relatively long lifetimes in order to avoid constant maintenance. As with other multiphase reactors, use of a solid catalyst adds an additional resistance to mass transfer from the bulk liquid (or vapor) to the catalyst surface, which may be the limiting resistance. The catalyst containment system must be designed to ensure adequate liquid-solid contacting and minimize bypassing. A number of specialized column internal designs, catalyst containment methods, and catalyst replenishment systems have been proposed for both homogeneous and heterogeneous catalysts. A partial list of these methods is given in Table 13-24.  [c.1321]

At the conclusion of the audit, the auditor should review findings with the toller. It is not necessarily the role of the auditors to prescribe specific toller activities that will alleviate audit findings. The auditors may discuss potential responses or implementation activity; however, the audit subjects should be given adequate time to determine a complete and integrated response to the audit findings. Typically, the toller responds to the specific findings with action plans that are discussed with their client company and mutually agreed upon at some period after conclusion of the audit. The action plan should include a timeline for completion and a method of tracking individual action item progress. This action plan is a statement of agreement to specific elements to achieve the agreed performance specified in the toll contract. Progress against the elements of this action plan should be reviewed as part of an overall review of the toller contract performance. Audit subjects receiving low performance ratings may need more aggressive review of performance against action plans.  [c.114]

Depending on plant size and expander manufacturer, power levels of between 2,720 and 5,440 hp (2-4 MW) are typically being recovered, but higher power levels are not unusual. New approaches have been adopted in the manufacture of both rotors and stationary components for small radial turboexpanders. Unlike conventional technology employing single blades, both inlet guide vanes and impeller wheels are designed and manufactured as integral, rather than built-up (composite), components. Impellers or wheels are milled out of high-strength forged disks, while the stator components, depending on size, are fabricated by spark erosion from a ring or made as precision castings. These manufacturing methods ensure maximum flexibility in the implementation of specific design requirements.  [c.134]

This book is an invaluable adjunct for those engineers wanting to better understand power supply operation in order to effectively implement the computer-aided design (CAD) tools available. The broad implementation and success of CAD tools, along with the internationalization of the world's design resources, have led to competition that has shortened the typical product design cycle from more than a year to a matter of months. As a result, it is important for design engineers to locate and apply just the right amount of information without a long learning period.  [c.268]

Prior to the implementation of any evaporative emission controls, fuel vapors were freely vented from the fuel tank to the atmosphere. Diurnal, hot soak, running losses, resting losses, and refueling emissions are the typical evaporative contributions from a motor vehicle. Diurnal emissions occur while a vehicle is parked and the fuel tank is heated due to daily temperature changes. Hot soak emissions are the losses that occur due to the heat stored in the fuel tank and engine compartment immediately after a fully warmed up vehicle has been shut down. Running loss emissions are the evaporative emissions that are generated as a result of fuel heating during driving conditions. Resting losses are due to hydrocarbon migration through materials used in fuel system components. Refueling emissions occur due to the fuel vapor that is displaced from the fuel tank as liquid fuel is pumped in.  [c.236]

Metal ion complexes. These classic CSPs were developed independently by Davankov and Bernauer in the late 1960s. In a typical implementation, copper(II) is complexed with L-proline moieties bound to the surface of a porous polymer support such as a Merrifield resin [28-30]. They separate well only a limited number of racemates, such as amino acids, amino alcohols, and hydroxy acids.  [c.59]

As noted above, at THz frequencies the Rayleigh-Jeans approximation is a good one, and it is typical to report line intensities and detector sensitivities in terms of the Rayleigh-Jeans equivalent temperatures. In frequency ranges where the atmospheric transmission is good, or from airborne or space-borne platforms, the effective background temperature is only a few tens of Kelvin. Under such conditions, SIS mixers based on Nb, a particular implementation of which is pictured in figure B1.4.2, can now perform up to 1.0 THz. The earliest SIS microwave and millimetre-wave receivers utilized waveguide  [c.1240]
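
For reference, the Rayleigh-Jeans equivalent temperature mentioned above follows from I_nu = 2 k T nu^2 / c^2 in the RJ limit, so T_RJ = c^2 I_nu / (2 k nu^2). A minimal sketch of the conversion, with an arbitrary illustrative intensity:

```python
# Minimal sketch of the Rayleigh-Jeans equivalent temperature: in the RJ limit
# (h*nu << k*T), I_nu = 2*k*T*nu^2/c^2, so T_RJ = c^2 * I_nu / (2*k*nu^2).
# The example intensity below is an arbitrary illustrative value.
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K

def rayleigh_jeans_temperature(intensity_si, frequency_hz):
    """Brightness temperature (K) for a specific intensity in W m^-2 Hz^-1 sr^-1."""
    return C**2 * intensity_si / (2.0 * K_B * frequency_hz**2)

nu = 1.0e12                                  # 1 THz
i_nu = 2 * K_B * 30.0 * nu**2 / C**2         # intensity of a 30 K RJ background
print(rayleigh_jeans_temperature(i_nu, nu))  # recovers ~30 K
```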

Figure C1.4.8. (a) An energy level diagram showing the shift of Zeeman levels as the atom moves away from the z = 0 axis. The atom encounters a restoring force in either direction from counterpropagating light beams. (b) A typical optical arrangement for implementation of a magneto-optical trap.

The primary focus of this chapter is on the operator, since he or she is often closest to the process and provides the last line of defense during a process upset. In the typical batch-type system the operator is an integral part of the process control. Operators implement the procedural safeguards needed for the safe operation of the process. The reliability of procedural safeguards (standard operating procedures) is dependent on the effectiveness of training, operator experience, the strength of managerial implementation, and process documentation. Not only are these hard to measure, but they can change significantly due to a wide variety of factors, such as personnel turnover or a change in management. Complicating matters is the fact that a typical operator working in a batch processing facility may have to perform a number of diverse functions during a routine shift. Moreover, most of these functions must be performed in a specified sequence and timeframe. Some of these functions are listed below  [c.125]

Corresponding implementations of the velocity Verlet operator can be easily derived for this Liouville propagator [38]. It should also be realized that the decomposition of iL into a sum of iL1 and iL2 is arbitrary. Other decompositions are possible and may lead to algorithms that are more convenient. One example is that in a typical MD simulation, a large portion of the computer processing time is spent in examining the non-bond pair interactions. These non-bond forces, therefore, can be divided into fast and slow parts based on distance by using a continuous switching function [42]. Applications of this MTS method to protein simulations have been shown to reduce the CPU time by a factor of 4-5 without altering dynamical properties [39,40,42]. In addition, this MTS approach shows significantly better performance enhancement in systems where the separation of fast and slow motions is pronounced [43].  [c.65]
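
A generic sketch of the fast/slow splitting idea is given below: a reference-system (RESPA-like) velocity Verlet step in which the slow forces are applied as half-kicks around a loop of fast sub-steps. It is written for illustration only (the toy forces, masses, and time steps are assumptions) and is not the specific implementation of references [38-43].

```python
# Generic sketch of a multiple-time-step (RESPA-like) velocity Verlet step built from
# a fast/slow force split. The force functions are placeholders; a real MD code would
# supply bonded/short-range forces as 'fast' and switched long-range non-bond forces
# as 'slow'.
import numpy as np

def mts_step(x, v, m, fast_force, slow_force, dt_slow, n_inner):
    """One outer step of length dt_slow containing n_inner fast sub-steps."""
    dt_fast = dt_slow / n_inner
    v = v + 0.5 * dt_slow * slow_force(x) / m       # slow-force half kick
    for _ in range(n_inner):                        # inner velocity Verlet with fast forces
        v = v + 0.5 * dt_fast * fast_force(x) / m
        x = x + dt_fast * v
        v = v + 0.5 * dt_fast * fast_force(x) / m
    v = v + 0.5 * dt_slow * slow_force(x) / m       # slow-force half kick
    return x, v

# Toy 1D example: a stiff spring (fast) plus a weak spring (slow).
fast = lambda x: -100.0 * x
slow = lambda x: -1.0 * x
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = mts_step(x, v, 1.0, fast, slow, dt_slow=0.05, n_inner=10)
print(x, v)
```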

Part 1 is a computerized implementation of the Gaussian plume equations for continuous ground-level or elevated releases. The release rate may be time varying within specific prescribed constraints on variability. Reflection of the plume off the mixing layer lower boundary is also modeled. Point, line, area, and volume source geometries may be modeled using the code. Part 2 predicts the dispersion of denser-than-air gases based on empirical data obtained from wind-tunnel studies for puff and continuous releases into boundary layer shear flow. The model is applicable for level, unobstructed dispersion as well as more complex flow and turbulence structure due to the presence of downwind obstacles. Part 1 assumes complete reflection of the plume at ground level and in the mixing layer. Volume sources are initially approximated as right parallelepipeds. The modeling of line, area, and volumetric vapor sources is not an accurate solution of the diffusion equation and may give inaccurate results in downwind concentration estimates when the dimensions of the source are not much less than the downwind distance of interest. Part 2 assumes dense gas releases are ground-level point sources released into an ambient temperature of 20°C. Heat transfer effects typically present in dense gas dispersion are not modeled in the code.  [c.362]
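
For the passive (Part 1) case, the underlying relation is the standard Gaussian plume equation with complete ground reflection. A minimal sketch for a continuous elevated point source is shown below; the sigma_y and sigma_z power laws are illustrative stand-ins, not the code's actual stability-class parameterization.

```python
# Minimal sketch of the standard Gaussian plume equation with complete ground
# reflection for a continuous elevated point source. The dispersion-coefficient
# power laws below are illustrative assumptions only.
import math

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Concentration (kg/m^3) at (x, y, z) downwind of a source of strength Q (kg/s)
    released at height H (m) into a wind of speed u (m/s)."""
    sigma_y = a * x ** 0.9          # assumed horizontal dispersion coefficient, m
    sigma_z = b * x ** 0.85         # assumed vertical dispersion coefficient, m
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))   # image term: ground reflection
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration 500 m downwind of a 10 m stack.
print(plume_concentration(Q=1.0, u=5.0, x=500.0, y=0.0, z=0.0, H=10.0))
```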

Facility-specific implementation requires formation of a local team at each of your company's facilities. As a general rule, these teams comprise resident staff and report to the facility manager. A typical local team would include the facility safety manager and representatives from operations and engineering.  [c.98]

A variety of multidimensional GC systems have been developed for the complete characterization of gasoline and naphtha-type samples. The limit of these multidimensional systems has been the introduction of the PIONA analyser in 1971 (12), with PIONA standing for Paraffins-Isoparaffins-Olefins-Naphthenes-Aromatics. This system has exploited the unique separation of naphthenes and paraffins as a function of carbon number on a column packed with zeolites of a very specific pore size (13X molecular sieves) (13). In later years, the technique has been expanded to samples having boiling points up to 270 °C (14) and implemented in a commercial instrument (15), which is still in use in the majority of refinery laboratories for the compositional analysis of gasolines and naphthas. Other investigators have developed comparable systems with capillary columns (16-22), some of which incorporated a mass spectrometer, but these were never commercialized. More recently, with the introduction of oxygenates in gasolines, all of these analyser systems have experienced the shortcoming that they are not able to separate the oxygenates from the hydrocarbon matrix. A new multi-column system has therefore been developed, i.e. the Reformulyser, which overcomes this shortcoming. Figure 14.12 depicts a schematic diagram of the Reformulyser system, with a typical resulting chromatogram obtained from this set-up being shown in Figure 14.13.  [c.390]

In America and elsewhere, coal powered steam locomotives gave way to diesel electric traction over the period 1925 to 1960. This three-decade-plus process of "dieselization" (from first innovation to universal application) became a textbook case in the study of the diffusion of new industrial technology. Professor Edwin Mansfield has demonstrated that the economics of dieselization were akin to those of stimulus and response in psychology: railroads that could benefit the most from diesel power implemented the new technology most expeditiously. Consequently (and like jet engines later replacing piston aircraft), dieselization followed a typical logistic curve: slow initial acceptance in the face of skepticism and uncertainty, then rapid deployment as benefits were understood, and finally a tapering of demand as opportunities became saturated.  [c.724]

In some sense, density functional theory is an a posteriori theory. Given the transference of the exchange-correlation energies from an electron gas, it is not surprising that errors would arise in its implementation for highly non-uniform electron gas systems as found in realistic systems. However, the degree of error cancellation is rarely known a priori. The reliability of density functional theory has only been established by numerous calculations for a wide variety of condensed matter systems. For example, the cohesive energies, compressibility, structural parameters, and vibrational spectra of elemental solids have been calculated within the density functional theory [24]. The accuracy of the method is best for systems in which the cancellation of errors is expected to be complete. Since cohesive energies involve the difference in energies between atoms in solids and atoms in free space, error cancellations are expected to be significant. This is reflected in the fact that historically cohesive energies have presented greater challenges for density functional theory: the errors between theory and experiment are typically 5-10%, depending on the nature of the density functional. In contrast, vibrational frequencies, which involve small structural changes within a given crystalline environment, are easily reproduced to within 1-2%.  [c.97]

The concept that Binnig and co-workers [73] developed, which they named the atomic force microscope (AFM, also known as the scanning force microscope, SFM), involved mounting a stylus on the end of a cantilever with a spring constant, k, which was lower than that of typical spring constants between atoms. The sample surface was then rastered below the tip, using a piezo system similar to that developed for the STM, and the position of the tip monitored [74]. The sample position (z-axis) was altered in an analogous way to STM, so as to maintain a constant displacement of the tip, and the z-piezo signal was displayed as a function of the x and y coordinates (figure B1.19.16). The result is a force map, or image of the sample's surface [75], since displacements in the tip can be related to force by Hooke's Law, F = -kz, where z is the cantilever displacement. In AFM, the displacement of the cantilever by the sample is very simply considered to be the result of long-range van der Waals forces and Born repulsion between tip and sample. However, in most practical implementations, meniscus forces and contaminants often dominate the interaction, with interaction lengths frequently exceeding those predicted [76]. In addition, an entire family of force microscopies has been developed, where magnetic, electrostatic, and other forces have been measured using essentially the same instrument.  [c.1692]
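
Converting a measured cantilever displacement image into the force map described above is simply Hooke's law applied pixel by pixel; a tiny sketch with made-up numbers:

```python
# Tiny sketch of turning a cantilever-displacement image into a force map via
# Hooke's law, F = -k*z. The spring constant and displacement values are made up.
import numpy as np

K_SPRING = 0.1                                   # assumed cantilever spring constant, N/m
z_nm = np.array([[0.5, 0.8], [1.2, 0.3]])        # measured deflections per pixel, nm (made up)
force_nN = K_SPRING * z_nm                       # with z in nm and k in N/m, k*z comes out in nN
print(force_nN)                                  # magnitude of the restoring force per pixel, nN
```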

The most common commercially prepared amplifier systems are pumped by frequency-doubled Nd-YAG or Nd-YLF lasers at a 1-5 kHz repetition rate; a continuously pumped amplifier that operates typically in the 250 kHz regime has been described and implemented commercially [40]. The average power of all of the commonly used types of Ti-sapphire amplifier systems approaches 1 W, so the energy per pulse required for an experiment effectively determines the repetition rate.  [c.1971]

For example, many manufacturers have explored the use of x-rays (wavelengths less than 1 nm) for the lithographic process for producing structures smaller than 0.1 µm (55,85). Use of x-rays, however, faces a host of obstacles to implementation (55,81,85). For example, commercial patterning tools for x-ray masks constitute a major challenge: they require masks made of materials such as gold or tungsten to absorb the x-rays. The features of gold masks must have very high aspect ratios (e.g., 5) in order to absorb x-rays in opaque regions of the pattern that can have widths smaller than 100 nm. For optical and uv lithography, masks are typically four or five times larger than the actual image on a chip. They are focused onto the chip with demagnification to achieve a greater degree of definition in the lithography process. However, despite numerous efforts (86), no commercially feasible way to focus x-rays exists, resulting in the use of masks that are exactly the same size as the circuit. This means that at each of the 20 or more mask levels, an alignment tolerance of only a few tens of nanometers is required. This is a difficult goal to achieve. One approach to circumvent these problems involves using a high-powered laser to bombard a metal target to generate x-rays, which then illuminate a reflective mask. A series of mirrors then demagnifies the image to its actual size on the silicon wafer (85).  [c.203]

In 1997, UOP announced the PX-Plus process, which also uses a selectivated catalyst to convert toluene to para-rich xylenes. Fina commercialized a TDP process known as the T2PX process in 1984 (70). It uses a proprietary catalyst to react toluene at 42-48% conversion with selectivities to benzene of 42 wt% and to xylenes of 46 wt%. The xylenes produced are at equilibrium. Typical commercial operating conditions are 390-495°C, an H2 partial pressure of 4.1 MPa, an H2/hydrocarbon molar ratio of 4:1, and an LHSV of 1-2/h. Fina's first commercial implementation occurred in 1985 at their Port Arthur refinery.  [c.417]

In principle, masks with complex valued transmittance functions can be prepared using a composite of two masks, with one implementing only the amplitude perturbations and the other having only a phase distribution via surface deformations; usually one method or the other is used. The binary detour-phase method (27) and its variants (28,29) yield masks which have a collection of transparent holes in an opaque background, where the size of the holes encodes the amplitude and their spatial position, within some predefined limits, encodes the phase. Because of the binary nature of the masks thus produced, such holograms typically yield higher order diffraction and diffract only a small percentage of the light into the useful first order.  [c.162]

Development of adequate ion sources is required for large-scale implementation of PSII. Gaseous discharges with either thermionic, radio frequency, or microwave ionization sources have been successfully used to produce ions. The production of large-scale, uniform, mass-analysed plasmas is usually technologically and economically prohibitive. Consequently, PSII often produces a broadened implant profile due to the varied stopping ranges of the assorted ion masses that are usually present. For example, when using typical nitrogen plasmas, the atomic ions (N+) implant deeper than the molecular ions (N2+). Nongaseous sources such as vacuum arc discharges (169) appear promising, although large, high current steady-state sources must still be developed for a practical PSII system.  [c.400]

A newer form of communications LED has been introduced which utilizes the fact that the radiative recombination process in a LED may be significantly altered when the light-emitting region is placed within an optical cavity that is on the dimensions of the wavelength of emitted light (36). These devices, referred to as resonant cavity LEDs (RCLEDs), exhibit unique and advantageous operating characteristics (37). This surface-emitting device structure typically employs a quantum well active region with mirrors on each side. Typically, the bottom and top mirrors consist of distributed Bragg reflectors (DBRs), which, unlike those of vertical cavity surface-emitting lasers, have lower reflectivity products to preclude stimulated emission. A metal (reflective) layer is, however, sometimes employed for one of the mirrors. In order to obtain the desired operating characteristics, the exact placement of the quantum well within the active region is critical. Consequently, these devices require high control of the device layer thicknesses, and thus are grown by MOCVD or MBE. The width of the emission spectrum can be substantially decreased in these structures, such that FWHM <1 nm in some devices. As a result, these devices can be employed as emitters in systems having greatly reduced chromatic dispersion and significantly enhanced communications bandwidth (38). These devices are being considered for implementation in high speed optical data links.  [c.122]

Reactor operation at 80 to 85% butane conversion to produce maximum yields provides an opportunity for recycle processes to recover the unreacted butane in the stream that is sent to the oxidation reactor. Patents have been issued on recycle processes (146,147) both with and without added oxygen. Pantochim has announced the commercialization of a partial recycle process (119). Operation of the butane to maleic anhydride process in a total recycle configuration can produce molar yields that approach the reaction selectivity which is typically 65 to 75%, significantly higher than the 50 to 60% molar yields from a single pass, high conversion process. The Du Pont transport bed process achieves its high reported yields at least partially through implementation of recycle technology. Recovery of the fuel value of the butane in the off-gas from a single pass configuration plant reduces the economic attractiveness of recycle operation.  [c.455]

The above FF controller can be implemented using analog elements or, more commonly, by a digital computer. Figure 8-33 compares typical responses for PID FB control, steady-state FF control (s = 0), dynamic FF control, and combined FF/FB control. In practice, the engineer can tune Kf and τL in the field to improve the performance of the FF controller. The feedforward controller can also be simplified to provide steady-state feedforward control. This is done by setting s = 0 in Gf(s). This might be appropriate if there is uncertainty in the dynamic models for GL and Gp.  [c.732]
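
A minimal sketch of the combined FF/FB scheme is shown below: a PI feedback correction plus a steady-state feedforward term whose gain plays the role of the s = 0 feedforward gain. All gains and the example numbers are illustrative assumptions, not a tuning for any particular loop.

```python
# Minimal sketch of combined feedforward/feedback control: a steady-state feedforward
# term (the s = 0 gain) plus a PI feedback correction. All gains are illustrative.
KF = -1.5               # steady-state feedforward gain (would come from -GL(0)/Gp(0))
KC, TAU_I = 2.0, 10.0   # PI feedback tuning
DT = 1.0                # sampling period

def ff_fb_controller():
    integral = 0.0
    def step(setpoint, measurement, disturbance):
        nonlocal integral
        error = setpoint - measurement
        integral += error * DT
        u_fb = KC * (error + integral / TAU_I)   # feedback part
        u_ff = KF * disturbance                  # steady-state feedforward part
        return u_fb + u_ff
    return step

controller = ff_fb_controller()
print(controller(setpoint=50.0, measurement=48.0, disturbance=2.0))
```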

Implementation Issues. A critical factor in the successful application of any model-based technique is the availability of a suitable dynamic model. In typical MPC applications, an empirical model is identified from data acquired during extensive plant tests. The experiments generally consist of a series of bump tests in the manipulated variables. Typically, the manipulated variables are adjusted one at a time, and the plant tests require a period of one to three weeks. The step or impulse response coefficients are then calculated using linear-regression techniques such as least-squares methods. However, details concerning the procedures utilized in the plant tests and subsequent model identification are considered to be proprietary information. The scaling and conditioning of plant data for use in model identification and control calculations can be key factors in the success of the application.  [c.741]
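
The identification step can be illustrated with a small least-squares fit. The sketch below estimates finite impulse-response coefficients from simulated bump-test data and converts them to step-response coefficients; the "true" plant, noise level, and test sequence are assumptions made purely for illustration, not a description of any proprietary procedure.

```python
# Minimal sketch of identifying finite impulse-response coefficients from bump-test
# data by ordinary least squares. The data are simulated from an assumed first-order
# process purely for illustration.
import numpy as np

N = 30                                   # number of impulse-response coefficients
rng = np.random.default_rng(0)

# Simulated plant test: a few step changes in u, first-order response plus noise.
u = np.zeros(300)
u[20:] = 1.0; u[120:] = -0.5; u[220:] = 0.8
true_h = 0.1 * 0.9 ** np.arange(N)       # assumed "true" impulse response
y = np.convolve(u, true_h)[:len(u)] + 0.01 * rng.standard_normal(len(u))

# Regression: y(k) = sum_i h_i * u(k - i) over the last N input moves.
rows = range(N, len(u))
X = np.array([u[k - np.arange(N)] for k in rows])
h_hat = np.linalg.lstsq(X, y[list(rows)], rcond=None)[0]
step_coeffs = np.cumsum(h_hat)           # step-response coefficients from impulse ones
print(step_coeffs[:5])
```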

The issue of which approach to Ewald sums is most efficient for a given system size has been plagued by controversy. Probably the best comparison is that by Pollock and Glosli [46]. They implement optimized versions of Ewald summation, FMA, and P3M. They conclude that for system sizes of any conceivable interest, the P3M algorithm is most efficient. Interestingly, they also show that P3M can be used to efficiently calculate energies and forces for finite boundary conditions, using a box containing the cluster and a clever filter function in reciprocal space. The particle-mesh-based algorithms are excellent at energy conservation, which is an additional advantage. On the other hand, the FMA may scale better in highly parallel implementations because of the high communication needs of the FFT. In addition, since the expensive part of the FMA is due to long-range interactions, the FMA may be more appropriate for multiple time step implementations [41]. The algorithms for P3M and the force-interpolated PME are essentially identical, differing only in the form of the modification to the reciprocal space weighting factors exp[-π²m²/β²L²]/m². The sampling density for the P3M turns out to be a shifted B-spline, so the weighting factors are very similar. Thus for the same grid density and order of interpolation, the computational costs of the P3M and force-interpolated PME are the same. In the case that contributions to the Ewald sum from high frequency reciprocal vectors m outside the K × K × K array can be neglected, the expressions for P3M and force-interpolated PME become equivalent, and the accuracy and efficiency are thus equivalent [45]. Under all reasonable simulation parameters it was found that the errors due to neglect of high frequency reciprocal vectors were small compared to the remaining errors, so the above two algorithms are equivalent for practical purposes. For typical simulation parameters (9 Å cutoff, RMS force error 10^-4) the smooth PME is more efficient than either P3M or force-interpolated PME, because its accuracy is only marginally less than  [c.111]
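
For concreteness, reciprocal-space Gaussian weighting factors of the form exp[-π²m²/β²L²]/m² can be tabulated on a K × K × K grid as sketched below; the grid size, box edge, and Ewald parameter are arbitrary illustrative values, and any method-specific modification of the weights is omitted.

```python
# Sketch of tabulating reciprocal-space weighting factors exp(-pi^2 m^2 / beta^2 L^2) / m^2
# on a K x K x K grid of integer reciprocal vectors. beta (the Ewald screening parameter),
# L, and K are arbitrary illustrative values.
import numpy as np

K, L, beta = 32, 40.0, 0.3          # grid size, box edge (Angstrom), Ewald parameter (1/Angstrom)

idx = np.fft.fftfreq(K, d=1.0 / K)  # integer indices 0, 1, ..., K/2-1, -K/2, ..., -1
mx, my, mz = np.meshgrid(idx, idx, idx, indexing="ij")
m2 = mx**2 + my**2 + mz**2          # |m|^2 in units of reciprocal-lattice indices

weights = np.zeros_like(m2)
nonzero = m2 > 0                    # the m = 0 term is excluded from the Ewald sum
weights[nonzero] = np.exp(-np.pi**2 * m2[nonzero] / (beta * L) ** 2) / m2[nonzero]
print(weights.shape, weights.max())
```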

We now describe a new loop modeling protocol in the conformational search class [239]. It is implemented in the program Modeller (Table 1). The modeling procedure consists of optimizing the positions of all non-hydrogen atoms of a loop with respect to an objective function that is a sum of many spatial restraints. Many different combinations of various restraints were explored. The best set of restraints includes the bond length, bond angle, and improper dihedral angle terms from the CHARMM22 force field [80,81], statistical preferences for the main chain and side chain dihedral angles [31], and statistical preferences for non-bonded contacts that depend on the two atom types, their distance through space, and separation in sequence [120]. The objective function was optimized with the method of conjugate gradients combined with molecular dynamics and simulated annealing. Typically, the loop prediction corresponds to the lowest energy conformation out of the 500 independent optimizations. The algorithm allows straightforward incorporation of additional spatial restraints, including those provided by template fragments, disulfide bonds, and ligand binding sites. To simulate comparative modeling problems, the loop modeling procedure was evaluated by predicting loops of known structure in only approximately correct environments. Such environments were obtained by distorting the anchor regions, corresponding to the three residues at either end of the loop, and all the atoms within 10 Å of the native loop conformation, by up to 2-3 Å by molecular dynamics simulations. In the case of five-residue loops in the correct environments, the average error was 0.6 Å, as measured by local superposition of the loop main chain atoms alone (C, N, Cα, O). In the case of eight-residue loops in the correct environments, 90% of the loops had less than 2 Å main chain RMS error, with an average of less than 1.2 Å (Fig. 6).  [c.286]
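
The multi-start strategy (keep the lowest-energy result of many independent optimizations) can be sketched generically as below. The toy objective merely stands in for Modeller's restraint-based objective function, and scipy's conjugate-gradient minimizer stands in for its combined conjugate gradients, molecular dynamics, and simulated annealing protocol.

```python
# Generic sketch of the multi-start strategy: run many independent optimizations from
# randomized starting points and keep the lowest-objective result. The toy objective
# below is NOT Modeller's scoring function; it is an arbitrary rugged stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def toy_objective(x):
    """Rugged stand-in for a restraint-based scoring function."""
    return np.sum((x - 1.0) ** 2) + 0.5 * np.sum(np.sin(5.0 * x) ** 2)

best = None
for _ in range(500):                                # 500 independent optimizations
    x0 = rng.uniform(-3.0, 3.0, size=15)            # randomized starting point
    res = minimize(toy_objective, x0, method="CG")  # conjugate-gradient minimization
    if best is None or res.fun < best.fun:
        best = res                                  # keep the lowest-energy solution
print(best.fun)
```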

Typically, successful implementation of PTPM in a large plant takes three years. Implementation calls for  [c.728]

