Targeting approach


The optimization component of process integration drives the iterations between synthesis and analysis toward an optimal closure. In many cases, optimization is also used within the synthesis activities. For instance, in the targeting approach for synthesis, the various objectives are reconciled using optimization. In the structure-based synthesis approach, optimization is typically the main framework for formulating and solving the synthesis task.  [c.6]

The targeting approach is based on the identification of performance targets ahead of design and without prior commitment to the final network configuration. In the context of synthesizing MENs, two useful targets can be established  [c.47]

Let us now address the question of how accurate the capital cost targets are likely to be. It was discussed earlier how the basic area targeting equation [Eq. (7.6) or Eq. (7.19)] represents a true minimum network area if all heat transfer coefficients are equal but is slightly above the true minimum if there are significant differences in heat transfer coefficients. Providing heat transfer coefficients vary by less than one order of magnitude, Eqs. (7.6) and (7.19) predict an area which is usually within 10 percent of the minimum. However, this does not turn into a 10 percent error in capital cost of the final design, since practical designs are almost invariably slightly above the minimum. There are also two errors inherent in the approach to capital cost targets  [c.232]

These small positive and negative errors partially cancel each other. The result is that capital cost targets predicted by the methods described in this chapter are usually within 5 percent of the final design, providing heat transfer coefficients vary by less than one order of magnitude. If heat transfer coefficients vary by more than one order of magnitude, then a more sophisticated approach can sometimes be justified.  [c.232]
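For reference, the vertical-matching area target underlying Eqs. (7.6) and (7.19) is normally written as a sum over the enthalpy intervals of the composite curves. The form below is the standard statement of such a target; the notation is assumed here and may differ in detail from the equations in the text:

```latex
% Vertical heat-transfer area target, summed over enthalpy intervals k of the
% composite curves (standard form; notation assumed, not copied from the source):
A_{\mathrm{network}} \;=\; \sum_{k}\frac{1}{\Delta T_{\mathrm{LM},k}}
  \left[\,\sum_{i\,\in\,\mathrm{hot}}\frac{q_{i,k}}{h_i}
        \;+\; \sum_{j\,\in\,\mathrm{cold}}\frac{q_{j,k}}{h_j}\right]
```

Here ΔT_LM,k is the log-mean temperature difference of interval k, q_i,k is the duty of stream i in that interval, and h_i is its film heat transfer coefficient. Because duties are assigned strictly vertically, the formula is exact only when all the film coefficients are equal, which is the origin of the error discussed above.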

Following this approach, the design is straightforward, and the final design is shown in Fig. 16.176. It achieves the energy targets  [c.382]

The remaining problem analysis technique can be applied to any feature of the network that can be targeted, such as minimum area. In Chap. 7 the approach to targeting for heat transfer area [Eq. (7.6)] was based on vertical heat transfer from the hot composite curve to the cold composite curve. If heat transfer coefficients do not vary significantly, this model predicts the minimum area requirements adequately for most purposes. Thus, if heat transfer coefficients do not vary significantly, the matches created in the design should come as close as possible to the conditions that would correspond to vertical transfer between the composite curves. Remaining problem analysis can be used to approach the area target, as closely as a practical design permits, using a minimum (or near-minimum) number of units. Suppose a match is placed; its area requirement can then be calculated. A remaining problem analysis can be carried out by calculating the area target for the stream data, leaving out those parts of the data satisfied by the match. The area of the match is then added to the area target for the remaining problem. Subtraction of the original area target for the whole stream data, A_network, gives the area penalty incurred.  [c.387]
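The bookkeeping behind this paragraph can be written down in a few lines. The sketch below is a minimal illustration only: the area_target callable and the stream data structures are hypothetical placeholders for whatever targeting routine and data representation are actually used.

```python
# Remaining problem analysis for network area (illustrative sketch).
# `area_target` stands in for the targeting routine, e.g. an Eq. (7.6)-style
# vertical-interval calculation; it maps stream data to a minimum-area target.

def remaining_problem_penalty(area_target, streams, remaining_streams, match_area):
    """Area penalty incurred by committing to one match.

    area_target       : callable mapping stream data -> minimum-area target
    streams           : full stream data for the problem
    remaining_streams : stream data with the duties satisfied by the match removed
    match_area        : area of the proposed exchanger
    """
    a_whole = area_target(streams)                 # target for the original problem
    a_remaining = area_target(remaining_streams)   # target for what is left
    # Area committed so far plus the best achievable remaining area, compared
    # with the original target, gives the penalty attributable to this match.
    return (match_area + a_remaining) - a_whole
```

A small penalty indicates that the match departs only slightly from vertical heat transfer; a large penalty flags a match that will force extra area elsewhere in the network.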

If heat transfer coefficients vary significantly, then the vertical heat transfer model adopted in Eq. (7.6) predicts a network area that is higher than the true minimum, as illustrated in Fig. 7.4. Under these circumstances, a careful pattern of nonvertical matching is required to approach the minimum network area. However, the remaining problem analysis approach can still be used to steer design toward a minimum area under these circumstances. When heat transfer coefficients vary significantly, the minimum network area can be predicted using linear programming. The remaining problem analysis approach can then be applied using these more sophisticated area targeting methods. Under such circumstances, the design is likely to be difficult to steer toward the minimum area, and an automated design method based on the optimization of a reducible structure can be used, as will be discussed later.  [c.387]
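To indicate what such a linear-programming formulation can look like, the sketch below solves a transportation-style problem within a single temperature interval: heat loads between hot and cold streams are chosen to minimise area when film coefficients differ. This is a deliberately reduced illustration with invented data, a fixed LMTD and one interval only, not the full targeting method referred to in the text; scipy is assumed to be available.

```python
# Transportation-style LP sketch for non-vertical area targeting within one
# temperature interval: choose heat loads q[i][j] between hot streams i and
# cold streams j to minimise total area when film coefficients h differ.
import numpy as np
from scipy.optimize import linprog

Q_hot  = np.array([400.0, 600.0])   # kW released by hot streams in the interval
Q_cold = np.array([500.0, 500.0])   # kW required by cold streams in the interval
h_hot  = np.array([0.1, 1.0])       # kW/(m2 K) film coefficients (illustrative)
h_cold = np.array([0.5, 0.2])
dT_lm  = 20.0                       # log-mean temperature difference of the interval

# Area needed per unit heat exchanged between hot stream i and cold stream j
cost = np.array([[(1.0 / h_hot[i] + 1.0 / h_cold[j]) / dT_lm
                  for j in range(2)] for i in range(2)]).ravel()

# Equality constraints: each hot stream fully cooled, each cold stream fully heated
A_eq, b_eq = [], []
for i in range(2):                  # sum_j q[i, j] = Q_hot[i]
    row = np.zeros(4); row[2 * i:2 * i + 2] = 1.0
    A_eq.append(row); b_eq.append(Q_hot[i])
for j in range(2):                  # sum_i q[i, j] = Q_cold[j]
    row = np.zeros(4); row[j::2] = 1.0
    A_eq.append(row); b_eq.append(Q_cold[j])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("minimum interval area (m2):", round(res.fun, 1))
print("heat loads q[i, j] (kW):")
print(res.x.reshape(2, 2))
```

The optimiser deliberately matches the stream with the poorest film coefficient against the cold stream that costs it the least area, which is exactly the kind of nonvertical matching the paragraph describes.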

Economic Aspects Growers regard the use of fungicides as part of their broad crop management strategy, in both planning and implementation. The conventional approach to crop production and pesticide use has been via economically justified maximum yield responses, and has led to applications being either made routinely or targeted to specific risks, with a wide range of application frequencies. Within a particular market segment the pricing of fungicide products from the various manufacturers is extremely uniform and tends to be dictated at least in part by the cost of established products that have stood the test of time, balanced against the needs of the grower to demonstrate a clear cost-benefit advantage from their use. As of 1993, fungicide costs for some of the key market segments ranged from 26 per treatment, equivalent to 76 per season, for European cereals, 25 per treatment or 150 per season for pome fruit, to 18—42 per treatment or 110—250 per season for the prevention of grape downy mildew. These three markets together generated sales of 1.7 billion at the manufacturers' level.  [c.114]

Newer fungicides, in order to retain cost-effectiveness, need to be very highly active, which also serves to achieve efficacy in the field at low dose rates, thus keeping environmental pollution problems as small as possible. More in-depth knowledge of fungal biochemistry and the molecular events involved in host/pathogen interactions should facilitate the identification of novel fungal targets for use in a biorational approach to fungicide discovery, through the application of computer-aided molecular design (CAMD) approaches to the molecular modeling (qv) of the target to design new fungicides (82). Recombinant DNA technologies are expected to play an escalating role in the validation of such biorational targets (see Genetic engineering).  [c.114]

Inertial Confinement. Because the maximum plasma density that can be confined is determined by the field strength of available magnets, MFE plasmas at reactor conditions are very diffuse. Typical plasma densities are on the order of one hundred-thousandth that of air at STP. The Lawson criterion is met by confining the plasma energy for periods of about one second. A totally different approach to controlled fusion attempts to create a much denser reacting plasma which, therefore, needs to be confined for a correspondingly shorter time. This is the basis of inertial fusion energy (IFE). In the IFE approach, small capsules or pellets containing fusion fuel are compressed to extremely high densities by intense, focused beams of photons or energetic charged particles, as shown in Figure 4. Because of the substantially higher densities involved, the confinement times for IFE can be much shorter. In fact, no external means are required to effect the confinement; the inertia of the fuel mass is sufficient for net energy release to occur before the fuel flies apart. Typical burn times and fuel densities are 10 ° s and 10 —10 ions/m, respectively. These densities correspond to a few hundred to a few thousand times that of ordinary condensed solids. IFE fusion produces the equivalent of small thermonuclear explosions in the target chamber. An IFE power plant design, therefore, must deal with very different physics and technology issues than an MFE power plant, although some requirements, such as tritium breeding, are common to both. Some of the challenges facing IFE power plants include the highly pulsed nature of the burn, the high rate at which the targets must be made and transported to the beam focus, and the interface between the driver beams and the reactor chamber (15).  [c.154]
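The density/confinement-time trade-off described here is a direct statement of the Lawson criterion. In round numbers for D-T fuel (values taken from standard fusion references, not from this excerpt):

```latex
% Lawson criterion for D-T fuel, order of magnitude:
n \, \tau_E \;\gtrsim\; 10^{20}\ \mathrm{s\,m^{-3}}
% Magnetic confinement: n of order 10^{20} m^{-3} (very diffuse plasma),
%                        so the energy confinement time must be of order 1 s.
% Inertial confinement: densities many orders of magnitude higher,
%                        so the required confinement time is correspondingly shorter.
```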

Targeting drugs to the colon has followed two basic approaches, i.e., delayed release and exploitation of the colonic flora. Delayed release generally relies upon enteric coating to ensure safe passage through the stomach, and a delay of 4 to 6 h before drug release. Enteric coating is a special polymeric coating, such as cellulose acetate phthalate [9004-38-0], that is resistant to gastric fluids at low (1 to 3) pH but dissolves upon exposure to the higher pH of the intestinal contents (pH 5 to 7). The delay capitalizes on the fairly reproducible small intestinal transit time to release drug in the desired region of the GI tract. The time delay may be modified to release the drug in different regions of the GI tract. The possible disadvantage of this approach is that some patients have transit times outside the normal range.  [c.226]

There are a large number of life-threatening or chronically debilitating human diseases such as solid tumor cancers, AIDS, antibiotic-resistant microbial infections, asthma, and diabetes that urgently require improved medical treatments. Drug therapy represents one well-established and still attractive approach to treating these serious diseases. In order for the chemotherapeutic approach to be more effective, there is a pressing need to discover and develop new drugs that act against cancer cells, viruses, microbial pathogens, and other molecular disease targets by novel biochemical mechanisms and have diminished side effects. Secondary metabolites produced by the plants, animals, and microorganisms living in the world's oceans represent a vast and relatively unexplored resource of structurally diverse low molecular weight organic molecules that are ideal raw materials for the development of new drugs. Exploitation of this extremely valuable molecular resource is a complex and lengthy process that is shaped by many scientific, legal, business, and environmental issues. The goal of this chapter is to highlight the tremendous opportunities and the considerable challenges involved in developing new pharmaceuticals from the sea.  [c.55]

There are two types of experimental combinatorial chemistry and high throughput screening research directions: targeted screening and broad screening [61,62]. The former approach involves the design and synthesis of chemical libraries with compounds that are either analogs of some active compounds or can specifically interact with the biological target under study. This is desired when a lead optimization (or evolution) program is pursued. On the other hand, a broad screening project involves the design and synthesis of a large array of maximally diverse chemical compounds, leading to diverse (or universal)  [c.363]

Chapter 3 reports on a methodology for the allocation of capable component tolerances within assembly stack problems. There is probably no other design effort that can yield greater benefits for less cost than the careful analysis and assignment of tolerances. However, the proper assignment of tolerances is one of the least understood activities in product engineering. The complex nature of the problem is addressed, with background information on the various tolerance models commonly used, optimization routines and capability implications, at both component manufacturing and assembly level. Here we introduce a knowledge-based statistical approach to tolerance allocation, where a systematic analysis for estimating process capability levels at the design stage is used in conjunction with methods for the optimization of tolerances in assembly stacks. The method takes into account failure severity through linkage with FMEA for the setting of realistic capability targets. The application of the method is fully illustrated using a case study from the automotive industry.  [c.416]
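For readers unfamiliar with statistical tolerance stacks, the sketch below shows the root-sum-square form of the calculation together with a simple capability check. This is a generic textbook formulation with made-up numbers, not the knowledge-based method developed in the chapter itself.

```python
# Root-sum-square (statistical) tolerance stack for an assembly dimension,
# with a simple process-capability check on the result (illustrative only).
import math

component_tolerances = [0.05, 0.08, 0.03, 0.06]   # mm, +/- tolerance of each component
assembly_spec = 0.15                              # mm, allowable +/- variation of the stack

# Worst-case stacking adds the tolerances directly; the statistical (RSS) stack
# assumes independent, centred distributions and adds them in quadrature.
worst_case = sum(component_tolerances)
rss_stack = math.sqrt(sum(t ** 2 for t in component_tolerances))

# If each component tolerance corresponds to +/- 3 sigma, the standard deviation
# of the stack follows from the RSS value, and a capability index can be formed.
sigma_stack = rss_stack / 3.0
cp_assembly = assembly_spec / (3.0 * sigma_stack)

print(f"worst-case stack = {worst_case:.3f} mm")
print(f"RSS stack        = {rss_stack:.3f} mm")
print(f"assembly Cp      = {cp_assembly:.2f}")
```

Tolerance allocation then runs this kind of calculation in reverse: component tolerances are adjusted until the predicted capability of the assembly meets the target set, in the chapter's method, from FMEA-derived failure severity.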

The main approach of materials scientists who wished to exploit this approach has been to deposit an array of tiny squares of material of systematically varying compositions, on an inert substrate, originally by sequential sputtering from multiple targets through specially prepared masks which are used repeatedly after 90° rotations. The array is then screened by some technique, as automated as possible to speed things up, to separate the sheep from the goats. Perhaps the first report of such a search was by Xiang et al. (1995), devoted to a search for new superconducting ceramics, with a sample density of as much as 10,000 per square inch. A four-point probe was used to screen the samples. New compositions were found, albeit not with any particularly exciting performance.  [c.444]

Xiang and his many collaborators went on to develop the initial approach in a major way. The stationary masks were abandoned for a technique using precision shutters which could be moved continuously under computer control during deposition; sputtering was replaced by pulsed laser excitation from targets. Figure 11.8 schematically shows the mode of operation. The result is a continuously graded thin film instead of separate samples each of uniform composition; Xiang calls the end-result a continuous phase diagram (CPD). Composition and structure at any point can be checked by Rutherford back-scattering of ions, and by an x-ray microbeam technique using synchrotron radiation, respectively, after annealing at a modest temperature to interdiffuse the distinct, sequentially deposited layers. This approach to making a continuously variable thin film was originally tried by Kennedy et al. (1965), curiously enough in the same laboratory as Xiang's present research. At that time, deposition techniques were too primitive for the approach to be successful. Xiang's group (unpublished research) has tried out the technique by making a CPD of binary Ni-Fe alloys and testing magnetic characteristics for comparison with published data. More recently (Yoo et al. 2000), CPDs were used to locate unusual phase transitions in an extensive series of alloyed perovskite manganites of the kind that show colossal magnetoresistance (Section 11.2.4) this  [c.445]

Specific reduction targets for the different processes are not well established. In the absence of specific pollution reduction targets, new plants should always achieve better than the industry averages and should approach the load-based effluent levels.  [c.71]

Chapter Two has provided an overview of the principles of modeling, designing, and operating individual mass exchange units. However, in many industrial situations the problem of selecting, designing, and operating a mass-exchange system should not be confined to assessing the performance of individual mass exchangers. In such situations, there are several rich streams (sources) from which mass has to be removed, and many mass separating agents can be used for removing the targeted species. Therefore, adopting a systemic network approach can provide significant technical and economic benefits. In this approach, a mass-exchange system is selected and designed by simultaneously screening all candidate mass exchange operations to identify the optimum system. This chapter defines the problem of synthesizing mass-exchange networks (MENs), discusses its challenging aspects, and provides a graphical approach for the synthesis of MENs.  [c.44]

Because of the vast number of process alternatives, it is important that the synthesis techniques be able to extract the optimal solution(s) from among the numerous candidates without the need to enumerate these options. Two main synthesis approaches can be employed to determine solutions while circumventing the dimensionality problem: structure independent and structure based. The structure-independent (or targeting) approach is based on tackling the synthesis task via a sequence of stages. Within each stage, a design target can be identified and employed in subsequent stages. Such targets are determined ahead of detailed design and without commitment to the final system configuration. The targeting approach offers two main advantages. First, within each stage, the problem dimensionality is reduced to a manageable size, avoiding the combinatorial problems. Second, this approach offers valuable insights into the system performance and characteristics.  [c.4]

As has been mentioned in Section 3.4, the targeting approach adopted for synthesizing MENs attempts to first minimize the cost of MSAs by identifying the flowrates and outlet compositions of MSAs which yield the minimum operating cost (MOC). This target has been tackled in Chapter Three as well as in the foregoing sections in this chapter. The second step in the synthesis procedure is to minimize  [c.111]

Seismic surveys involve the generation of artificial shock waves which propagate through the overburden rock to the reservoir targets and beyond, being reflected back to receivers where they register as a pressure pulse (in hydrophones - offshore) or as acceleration (in geophones - onshore). The signals from reflections are digitised and stored for processing and the resulting data reconstructs an acoustic image of the subsurface for later interpretation. The objective of seismic surveying is to produce an acoustic image of the subsurface, with as much resolution as possible, where all the reflections are correctly positioned and focused and the image is as close to a true geological picture as can be. This of course is an ideal, but modern (3D and 4D) techniques allow us to approach this ideal.  [c.17]

Scherer et al [205, 206] showed how to prepare, using interferometric methods, pairs of laser pulses with known relative phasing. These pulses were employed in experiments on vapour phase I2, in which wavepacket motion was detected in terms of fluorescence emission. A more general approach, which can be used in principle to generate pulse sequences of any type, is to transform a single input pulse into a shaped output profile, with the intensity and phase of the output under control throughout. The idea being exploited by a number of investigators, notably Warren and Nelson, is to use a programmable dispersive delay line constructed from a pair of diffraction gratings spaced by an active device that is used either to absorb or phase-shift selectively the frequency-dispersed wavefront. The approach favoured by Warren and co-workers exploits a Bragg cell driven by a radio-frequency signal obtained from a frequency synthesizer and a computer-controlled arbitrary waveform generator [207]. Nelson and co-workers use a computer-controlled liquid-crystal pixel array as a mask [208]. In the future, it is likely that one or both of these approaches will allow execution of currently impossible nonlinear spectroscopies with highly selective information content. One can take inspiration from the complex pulse sequences used in modern multidimensional NMR spectroscopy to suppress unwanted interfering resonances and to selectively enhance the resonances from targeted nuclei.  [c.1990]

[Schlitter et al. 1994] Schlitter, J., Engels, M., Krüger, P. Targeted molecular dynamics: A new approach for searching pathways of conformational transitions. J. Mol. Graph. 12 (1994) 84-89  [c.77]

The Fourier sum, involving the three-dimensional FFT, does not currently run efficiently on more than perhaps eight processors in a network-of-workstations environment. On a more tightly coupled machine such as the Cray T3D/T3E, we obtain reasonable efficiency on 16 processors, as shown in Fig. 5. Our initial production implementation was targeted for a small workstation cluster, so we only parallelized the real-space part, relegating the Fourier component to serial evaluation on the master processor. By Amdahl's principle, the 16% of the work attributable to the serially computed Fourier sum limits our potential speedup on 8 processors to 6.25, a number we are able to approach quite closely.  [c.465]
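For reference, Amdahl's law in its standard form shows where the quoted limit comes from: with a serial fraction s = 0.16, the speedup can never exceed 1/s, whatever the processor count.

```latex
% Amdahl's law: speedup S on N processors when a fraction s of the work is serial
S(N) \;=\; \frac{1}{\,s + (1 - s)/N\,},
\qquad
\lim_{N \to \infty} S(N) \;=\; \frac{1}{s} \;=\; \frac{1}{0.16} \;=\; 6.25
```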

The accuracy of molecular mechanics and that of molecular dynamics simulations share an inexorable dependence on the rigor of the force field used to elaborate the properties of interest. This aspect of molecular modeling can easily fill a volume by itself. The topic of force field development, or force field parameterization, although primarily a mathematical fitting process, represents a rigorous and highly subjective aspect of the discipline (68). A perspective behind this high degree of rigor has been summarized (69). Briefly put, the different schools of thought regarding the development of force fields arose principally from the initial objectives of the developers. For example, in the late 1960s through the 1970s, the Allinger school targeted the computation of the structure and energetics of small organic and medicinal compounds (68,70,71). These efforts involved an incremental development of the force field, building up from hydrocarbons and adding new functional groups after certain performance criteria were met, e.g., reproduction of experimental structures, conformational energies, rotational barriers, and heats of formation. Unlike the consistent force field approach of Lifson and co-workers (59,62—63,65), the early Allinger force fields treated a dozen or more functional groups simultaneously, and were not derived by an analytical least squares fit to all the data (61). However, because the focus of Lifson was the analysis and prediction of the properties of hydrocarbons or peptides, it was not surprising that a consistent force field was possible. The number of variables to be optimized concurrently to permit calculation of all the structure elements, conformational energies, and vibrational spectra was, and still is, a massive quantity. However, the calculation for a limited number of functional groups could be accomplished, albeit slowly. If the goal is to reproduce and predict vibrational spectra, the full second derivative force  [c.164]

The Fusion Policy Advisory Committee of the Department of Energy has identified the heavy-ion accelerator as the leading candidate for an IFE reactor driver (16). The reasons include ruggedness, reliability, high pulse-rate capabilities, and potential for high efficiency. There are two different technologies being developed for heavy-ion accelerators: induction acceleration and radio-frequency (rf) acceleration. The induction accelerator approach is pursued mainly in the United States, at the Lawrence Berkeley Laboratory. The rf accelerator approach is pursued primarily in Europe and Japan (15). The same types of heavy ions can be utilized in both approaches; typically cesium, bismuth, or xenon are chosen. To obtain the required 10 —10 W/m on target in a reactor, using targets of 1 cm size and accelerator energies limited to 5 GeV to provide the requisite stopping distance inside the target fuel, particle beam currents of around 100,000 A are required. These currents are quite large compared to traditional high-energy physics accelerators, and in experiments where high currents have been achieved, the beam divergence has been unsatisfactorily large.  [c.155]
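As a rough consistency check (a back-of-envelope estimate rather than a figure quoted in the source, and assuming singly charged ions), the beam power implied by 100,000 A of 5 GeV ions is

```latex
% Back-of-envelope beam power: current times effective accelerating potential
P \;\approx\; I \times V \;=\; 10^{5}\ \mathrm{A} \times 5 \times 10^{9}\ \mathrm{V}
  \;=\; 5 \times 10^{14}\ \mathrm{W}
```

which, delivered onto a target of roughly 1 cm cross-section, corresponds to power densities of order 10^18 W/m^2 and conveys why such large currents and tight focusing are required.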

The state of California has taken a different conceptual approach to reducing emissions through control of gasoline composition. Instead of defining a performance target, i.e., 25% reduction, the State has defined composition targets which are aimed at achieving emissions reductions. These targets, shown in Table 10, are to take effect in 1996. Gasoline producers must meet these targets on every liter of gasoline. If desired, a company may choose to meet yearly average targets, but these are slightly more severe than the per-liter specifications, and the ranges which may be used for averaging are restricted.  [c.191]

W Zheng, SI Cho, A Tropsha. Rational combinatorial library design. 1. Focus-2D: A new approach to the design of targeted combinatorial chemical libraries. J Chem Inf Comput Sci 38:251-258, 1998.  [c.368]

Reliability targets are typically set based on previous product failures or existing design practice (Ditlevsen, 1997); however, from the above arguments, an approach based on FMEA results would be useful in setting reliability targets early in the design process.  [c.197]

Pest control activities that depend upon the use of pesticides involve the storage, handling, and application of materials that can have serious health effects. Common construction and maintenance practices, and occupant activities, provide pests with air, moisture, food, warmth, and shelter. Caulking or plastering cracks, crevices, or holes to prevent harborage behind walls can often be more effective than pesticide application at reducing pest populations to a practical minimum. Integrated Pest Management (IPM) is a low-cost approach to pest control based upon knowledge of the biology and behavior of pests. Adoption of an IPM program can significantly reduce the need for pesticides by eliminating conditions that provide attractive habitats for pests. If an outside contractor is used for pest control, it is advisable to review the terms of the contract and include IPM principles where possible. The following items deserve particular attention. Schedule pesticide applications for unoccupied periods, if possible, so that the affected area can be flushed with ventilation air before occupants return. Pesticides should only be applied in targeted locations, with minimum treatment of exposed surfaces. They should be used in strict conformance with manufacturers' instructions and EPA labels. General periodic spraying may not be necessary. If occupants are to be present, they should be notified prior to the pesticide application. Particularly susceptible individuals could develop serious illness even though they are only minimally exposed. Select  [c.212]

A chemical process is an integrated system of interconnected units and streams, and it should be treated as such. Process integration is a holistic approach to process design, retrofitting, and operation which emphasizes the unity of the process. In light of the strong interaction among process units, streams, and objectives, process integration offers a unique framework for fundamentally understanding the global insights of the process, methodically determining its attainable performance targets, and systematically making decisions leading to the realization of these targets. There are three key components in any comprehensive process integration methodology: synthesis, analysis, and optimization.  [c.3]

It is instructive to draw some conclusions from this case study and emphasize the design philosophy of mass integration. First, the target for debottlenecking the biotreatment facility was determined ahead of design. Then, systematic tools were used to generate optimal solutions that realize the target. Next, an analysis study is needed to refine the results. This is an efficient approach to understanding the global insights of the process, setting performance targets, realizing these targets and saving time and effort as a result of focusing on the big picture first and then dealing with the details later. This is a fundamentally different approach than using the designer's subjective decisions to alter the process and check the consequences using detailed analysis. It is also different from using simple end-of-pipe treatment solutions. Instead, the various species are optimally allocated throughout the process. Therefore, objectives such as yield enhancement, pollution prevention and cost savings can be simultaneously addressed. Indeed, pollution prevention (when undertaken with the proper techniques) can be a source of profit for the company, not an economic burden.  [c.95]

The graphical pinch analysis presented in Chapter Three provides the designer with a very useful tool that represents the global transfer of mass from the waste streams to the MSAs and determines performance targets such as MOC of the MSAs. Notwithstanding the usefulness of the pinch diagram, it is subject to the accuracy problems associated with any graphical approach. This is particularly true when there is a wide range of operating compositions for the waste and the lean streams. In such cases, an algebraic method is recommended. This chapter presents an algebraic procedure which yields results that are equivalent to those provided by the graphical pinch analysis. In addition, this chapter describes a systematic technique for matching waste-lean pairs and configuring MENs that realize the MOC solutions.  [c.105]
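To give a flavour of the algebraic procedure, the sketch below cascades net exchangeable mass loads across composition intervals in the spirit of the composition-interval table. The interval loads are invented for illustration, and the treatment of the excess capacity is simplified; it is not the worked example or the exact bookkeeping of the chapter.

```python
# Algebraic MEN targeting sketch: cascade net exchangeable loads across
# composition intervals (ordered from the highest compositions down).
# net_loads[k] = (mass lost by rich streams in interval k)
#              - (mass the process MSAs can gain in interval k), kg pollutant/s.
net_loads = [0.6, -0.9, 0.4, -0.3, 0.5]            # illustrative numbers only

residuals, r = [], 0.0
for load in net_loads:
    r += load                                      # residual cascaded to the next interval
    residuals.append(r)

# A negative residual is infeasible (mass cannot be transferred toward higher
# compositions); its largest magnitude is excess capacity of the process MSAs,
# to be removed by cutting their flowrates and/or outlet compositions.
excess_capacity = max(0.0, -min(residuals))

# For simplicity, assume the excess is removed in the topmost interval, which
# shifts every residual up by the same amount.
revised = [x + excess_capacity for x in residuals]
pinch_interval = residuals.index(min(residuals)) + 1   # revised residual vanishes here
external_msa_load = revised[-1]                        # load left for external MSAs

print(f"excess process-MSA capacity : {excess_capacity:.2f} kg/s")
print(f"pinch at bottom of interval : {pinch_interval}")
print(f"load for external MSAs      : {external_msa_load:.2f} kg/s")
```

The most negative cascaded residual marks the excess capacity of the process MSAs, the interval where the revised residual vanishes locates the mass-exchange pinch, and the residual leaving the bottom of the cascade is the load that must be handled by external MSAs; these are the quantities from which the MOC target is assembled.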


See pages that mention the term Targeting approach : [c.47]    [c.47]    [c.47]    [c.281]    [c.441]    [c.683]    [c.726]    [c.324]    [c.260]    [c.489]    [c.444]    [c.141]    [c.149]    [c.351]    [c.292]   
Pollution prevention through process integration (1997) -- [ c.4 , c.47 , c.89 , c.218 , c.252 , c.253 ]