Select an Approach


Select an experimental approach: visualization, full-scale measurements, reduced-scale measurements  [c.1106]

The literature on catalytic hydrogenation is very extensive, and it is tempting to think that after all this effort there must now exist some sort of cosmic concept that would allow one to select an appropriate catalyst from fundamentals or from detailed knowledge of catalyst functioning. For the synthetic chemist, this approach to catalyst selection bears little fruit. A more reliable, quick, and useful approach to catalyst selection is to treat the catalyst simply as if it were an organic reagent showing characteristic properties in its catalytic behavior toward each functionality. For this purpose, the catalyst is considered to be only the primary catalytic metal present. Support and  [c.2]

An important feature of the analytical approach, which we have neglected thus far, is the presence of a "feedback loop" involving steps 2, 3, and 4. As a result, the outcome of one step may lead to a reevaluation of the other two steps. For example, after standardizing a spectrophotometric method for the analysis of iron we may find that its sensitivity does not meet the original design criteria. Considering this information, we might choose to select a different method, to change the original design criteria, or to improve the sensitivity.  [c.705]

In 1975, the first successful production of MAbs was reported (44). By fusing normal antibody-producing cells with a B-cell tumor (myeloma), hybridoma cell lines resulted which produced antibodies having specificity to only one determinant on an antigen, i.e., all the antibodies produced from the cell line are identical. These studies resulted in a standard approach to MAb production. In this approach, the hybridoma cells are produced in large quantities in culture and screened to select specific clones producing the desired MAb using an appropriate assay. The selected clones are then expanded in culture (or in animals), the cells are collected, and the MAbs are extracted and purified.  [c.28]

A chelant/polymer combination is an effective approach to controlling iron oxide. Adequate chelant is fed to complex hardness and soluble iron, with a slight excess to solubilize iron contamination. Polymers are then added to condition and disperse any remaining iron oxide contamination.  [c.263]

The development of potent enzyme inhibitors has led to an increased understanding of enzyme mechanisms and has provided effective therapeutic agents for the treatment of diseases. Enzymes are natural biocatalysts that promote specific reactions essential for the viability of living organisms. They have unique recognition sites that allow them to select their substrates out of the vast pool of biologically important compounds in living cells. The substrate binds to the active site, that portion of the enzyme responsible for promoting the chemistry involved in converting the substrate to product. Inhibitors of enzymes prevent this chemical reaction from occurring by altering the active site and thereby rendering the enzyme at least temporarily inactive. Generally, the more tightly the inhibitor binds to the active site, the better, but it also should be relatively specific for the target enzyme. An increased understanding of the enzyme's specificity for substrate and inhibitor binding enables a more rational design of potent inhibitors, selective for a particular enzyme. There has been movement away from the development of enzyme inhibitors by screening of natural products to the more efficient approach of so-called rational drug discovery. This article presents several different strategies for rational drug discovery based on enzyme inhibition. It focuses on the basic concepts and classifies enzyme inhibitors. Other related in-depth reviews have appeared (1-4).  [c.318]

Perhaps the easiest way to develop risk estimates for several design options is to pick a piece of input data common to all options and scale the input data for the designs relative to one of them. Consider, for example, three systems (A, B, and C) that each have different material handling requirements. System B will require twice as many material transfers as System A; however, the maximum amount of material that could be released from System B as a result of any one accident is one-third as much as could be released from System A. System C will require four times as many material transfers as System A, but the material involved is only half as toxic as the material in System A. Using material transfer frequencies of 1/week, 2/week, and 4/week for Systems A, B, and C, respectively, an analyst can then calculate accident sequence frequencies and consequences in the normal fashion. The result is a directly derived set of relative risk comparisons from which a decision to select the best design can be made. One advantage of this approach of scaling input data is that the analyst does not have to first calculate absolute risk estimates before normalizing them to arrive at the desired relative risk comparisons. A more universal approach is to select a constant divisor (e.g., $10,000/yr) for all calculated risk results. The resulting risk index numbers can then be compared globally to show differences in design alternatives, processes, or facilities.  [c.18]
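
To make the scaling concrete, the sketch below (a minimal illustration, not from the cited source; the frequency and consequence ratios are exactly those quoted above) computes a relative risk index for each system as frequency times consequence, normalized to System A.

```python
# Minimal sketch of the relative-risk scaling described above.
# Consequence factors are the illustrative ratios from the text
# (System B releases one-third as much material as A; System C's
# material is half as toxic); absolute units are deliberately avoided.

systems = {
    # name: (transfer frequency per week, relative consequence per accident)
    "A": (1.0, 1.0),
    "B": (2.0, 1.0 / 3.0),   # twice the transfers, one-third the release
    "C": (4.0, 1.0 / 2.0),   # four times the transfers, half the toxicity
}

# Relative risk = frequency x consequence, then normalize to System A.
risk = {name: f * c for name, (f, c) in systems.items()}
base = risk["A"]
for name in sorted(risk):
    print(f"System {name}: relative risk = {risk[name] / base:.2f} x System A")
```

On these assumed ratios, System B comes out at about two-thirds of System A's relative risk and System C at twice it, which is exactly the kind of comparison the scaling approach is meant to deliver without any absolute risk calculation.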

Clearly, the wide variety of force fields requires the user to carefully consider those that are available and choose the one most appropriate for his or her particular application. Most important in this selection process is a knowledge of the information to be obtained from the computational study. If atomic details of specific interactions are required, then all-atom models with the explicit inclusion of solvent will be necessary. For example, experimental results indicate that a single point mutation in a protein increases its stability. Application of an all-atom model with explicit solvent in MD simulations would allow for atomic details of interactions of the two side chains with the environment to be understood, allowing for more detailed interpretation of the experimental data. Furthermore, the use of free energy perturbation techniques would allow for more quantitative data to be obtained from the calculations, although this approach requires proper treatment of the unfolded states of the proteins, which is difficult (see Chapter 9 for more details). In other cases, a more simplified model, such as an extended-atom force field with the solvent treated implicitly via the use of an r-dependent dielectric constant, may be appropriate. Examples include cases in which sampling of a large number of conformations of a protein or peptide is required [7]. In these cases the use of the free energy force fields may be useful. Another example is a situation in which the interaction of a number of small molecules with a macromolecule is to be investigated. In such a case it may be appropriate to treat both the small molecules and the macromolecule with one of the small-molecule-based force fields, although the quality of the treatment of the macromolecule may be sacrificed. In these cases the reader is advised against using one force field for the macromolecule and a second, unrelated, force field for the small molecules. There are often significant differences in the assumptions made when the parameters were being developed that would lead to a severe imbalance between the energetics and forces dictating the individual macromolecule and small molecule structures and the interactions between those molecules. If possible, the user should select a model system related to the particular  [c.15]
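
As a small illustration of the implicit-solvent shortcut mentioned above, the sketch below evaluates a Coulomb interaction with a distance-dependent dielectric, eps(r) proportional to r, which turns the usual 1/r falloff into 1/r^2. The linear form and the conversion factor are common conventions assumed here for illustration, not taken from the text.

```python
# Sketch of a distance-dependent dielectric, a common implicit-solvent
# shortcut in extended-atom force fields: eps(r) grows with separation,
# damping long-range Coulomb interactions roughly as bulk solvent would.

COULOMB = 332.06  # kcal/mol * Angstrom / e^2, electrostatic conversion factor

def coulomb_energy(q1, q2, r, eps_scale=1.0):
    """Coulomb energy with eps(r) = eps_scale * r, i.e. a 1/r^2 falloff."""
    return COULOMB * q1 * q2 / (eps_scale * r * r)

# Example: two opposite unit charges at 3 A and at 6 A separation.
for r in (3.0, 6.0):
    print(f"r = {r:4.1f} A -> E = {coulomb_energy(+1.0, -1.0, r):8.2f} kcal/mol")
```

Because the effective dielectric grows with separation, distant charge pairs are screened much as bulk solvent would screen them, at a tiny fraction of the cost of simulating explicit water.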

In an experimental analysis it is not feasible to insert into Eq. (9) the atomic positions for all the atoms in the crystal for every instant in the time of the experiment. Rather, the intensity must be evaluated in terms of statistical relationships between the positions. A convenient approach is to consider a real crystal as a superposition of an ideal periodic structure with slight perturbations. When exposed to X-rays the real crystal gives rise to two scattering components: the set of Bragg reflections arising from the periodic structure, and scattering outside the Bragg spots (diffuse scattering) that arises from the structural perturbations.  [c.241]

What similarity between the target and template sequences is needed to have a chance of obtaining a useful comparative model? This depends on the question that is asked of the model (Section VI). When only the lowest resolution model is required, it is tempting to use one of the statistical significance scores for a given match that is reported by virtually any sequence comparison program to select the best template. However, it is better to proceed with modeling even when there is only a remote chance that the best template is suitable for deriving a model with at least a correct fold. The usefulness of the template should be assessed by evaluation of the calculated 3D model. This is the best approach, because the evaluation of a 3D model is generally more sensitive and robust than the evaluation of an alignment (Section V) [9].  [c.279]

Only CSC's DFA/MA method uses an explicit assembly sequencing method to augment the DFA process, and the approach employed here is developed, with slight modifications, from it. Figure 2.16 shows an assembly sequence diagram for a simple castor wheel assembly. The development of assembly sequence diagrams for more complex assemblies does become a prohibitive task, but their generation should be one of the standard engineering design tasks.  [c.63]

The casings on axial compressors are somewhat unusual, because of the disproportionately large inlet and outlet nozzles. This makes the compressors appear to be only nozzles connected by a long tube. Casings can be fabricated or cast, with the fabricated obviously being steel, while the castings can be cast iron or cast steel. In some designs, the casing is an outer shell containing an inner shell, which acts as the stator vane carrier. In other designs, the stators are carried directly on the casing, which is of one-part construction. With this latter design, the casing is made up of three distinct parts, bolted at two vertical joints. The parts are the inlet section, the center body with the stators, and the discharge section. The three sections are also split horizontally for maintenance. With the three-piece bolted construction, a mixture of fabrications and castings may be used. The mounting feet are attached to the outside casing and so located as to provide a more or less centerline support. As mentioned earlier, some designs use a rectangular inlet section to provide more axial clearance. The reason for using an entirely separate casing is that it forms a separate pressure casing that can readily be hydrotested. The disadvantage is that there is more material involved, making the cost higher. The compressors that use the integral stator section or single-case approach have somewhat of a cost advantage and, in general, may have a slight advantage in being able to keep the stator carriers round, because of the end-bolting to the other casing components. The disadvantage is that there are more joints to seal and maintain. Checking out the entire casing for strength and leakage in a hydro and gas test becomes somewhat more complex.  [c.247]

Steric and stereoelectronic effects control the direction of approach of an electrophile to the enolate. Electrophiles approach from the least hindered side of the enolate. Numerous examples of such effects have been observed. In ketone and ester enolates that are exocyclic to a conformationally biased cyclohexane ring there is a slight  [c.438]

The LEP method gives useful estimates of activation energy, but it produces the interesting result (for the H + H2 system) that there is a shallow basin at the transition state on a reaction coordinate diagram; this would be seen as a slight dip at the top of the energy barrier. This basin implies that the triatomic species at the transition state has some stability relative to motion in all directions, that it is in some sense an intermediate. Quantum mechanical variational calculations do not reveal this basin, and it is probably an artifact of the LEP procedure. To overcome this defect of the LEP method, Sato replaced the assumption of a constant ratio of coulombic to total energy with an alternative route to the estimation of the coulombic and exchange integrals. Other changes were introduced, so the LEPS method is itself quite arbitrary, but it eliminated the basin at the top of the energy barrier. However, the profile along the reaction coordinate of an LEPS surface reveals that the barrier is exceptionally thin and, therefore, suggests more quantum mechanical tunneling than seems appropriate. Other modified LEP methods have been devised in order to eliminate the transition state basin. It appears to be necessary that the ratio of coulombic to total energy approach unity as the internuclear distance increases if the basin is to disappear.  [c.196]
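
For reference, the London equation on which the LEP construction is based can be written as below (this standard form is added here for orientation; it is not reproduced from the excerpt). The Q terms are coulombic integrals and the J terms are exchange integrals for the three atom pairs, and the LEP method's characteristic assumption is that the ratio of coulombic to total energy, Q/(Q + J), is the same constant for each pair.

```latex
% London equation for three interacting atoms A, B, C; the ground-state
% surface takes the negative root.
E = Q_{AB} + Q_{BC} + Q_{AC}
  - \sqrt{\tfrac{1}{2}\Big[(J_{AB}-J_{BC})^{2} + (J_{BC}-J_{AC})^{2} + (J_{AC}-J_{AB})^{2}\Big]}
```

Sato's modification replaces the constant-ratio assumption with an alternative estimate of the coulombic and exchange integrals, which is what removes the spurious basin at the saddle point.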

The micron ratings of a cartridge are intended to indicate the smallest particle that will be retained by the pores of the filter element. Often a rough-cut pre-filter is installed ahead of a final or polishing filter in order to increase the life of the final unit. Unfortunately, the method for determining the micron rating is not a universal standard between manufacturers. Thus, one manufacturer's 50 micron filter may not perform the same as another manufacturer's with the same rating number. The only reliable approach is to send the manufacturer an actual sample of the fluid and let the manufacturer test it to select the right filter for the job, or actually test the unit in your plant's field application [37].  [c.277]

When selecting the bandwidth frequency range to use for data collection in a vibration-monitoring system, one might be tempted to select the broadest range available. If enough computing power were available, we could simply gather data over an infinite frequency range, analyze the data, and be assured that no impending failures were missed. However, the practicalities of limited computing power prevent us from taking this approach.  [c.715]

Near infrared reflectance analysis (NIRA) is an example of an approach to automated characterization and intelligent instrumentation (12-15). In NIRA, which is used to examine complex samples, the resulting spectral pattern is heavily overlapped and spectral differences resulting from composition are barely observable. However, instead of resorting to conventional data reduction procedures, a combination of spectral correlation and a "self-teaching" algorithm is used. The procedure involves first measuring a set of preanalyzed samples, cross-correlating the spectra to composition using multilinear regression analysis, and using an optimization algorithm to select a set of measurement wavelengths and calibration coefficients for performing the analysis. NIRA is capable of providing approximately 0.1% reproducible analysis utilizing spectral data in which gross compositional differences are barely observable. The power of such self-optimizing procedures, using the entire data set, goes much further than merely performing difficult measurements. A number of applications of the technique, such as in the analysis of grain and food products (16), have been successful in determining sample composition for which no known spectral feature, or no spectral feature at all, existed.  [c.394]
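
The calibration step can be sketched as follows. This is a schematic stand-in (synthetic spectra, simple forward stepwise selection) for the optimization algorithm the text leaves unspecified; all variable names and the choice of three wavelengths are illustrative assumptions.

```python
import numpy as np

# Schematic of the NIRA calibration step: given spectra of preanalyzed
# samples, pick a small set of measurement wavelengths and regression
# coefficients that best predict composition. Forward stepwise selection
# stands in for the unspecified optimization algorithm.

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 100
spectra = rng.normal(size=(n_samples, n_wavelengths))  # placeholder spectra
true_w = np.zeros(n_wavelengths)
true_w[[12, 47, 80]] = [0.5, -0.3, 0.8]                # "informative" channels
composition = spectra @ true_w + 0.01 * rng.normal(size=n_samples)

def fit_rss(cols):
    """Least-squares fit on chosen wavelengths; return residual sum of squares."""
    X = np.column_stack([np.ones(n_samples), spectra[:, cols]])
    coef, *_ = np.linalg.lstsq(X, composition, rcond=None)
    return np.sum((composition - X @ coef) ** 2), coef

selected = []
for _ in range(3):  # pick three wavelengths, one at a time
    best = min((w for w in range(n_wavelengths) if w not in selected),
               key=lambda w: fit_rss(selected + [w])[0])
    selected.append(best)

rss, coef = fit_rss(selected)
print("selected wavelength indices:", sorted(selected))
print("calibration coefficients:", np.round(coef, 3))
```

In a real NIRA calibration the preanalyzed compositions would come from a reference method, and the selected wavelengths and coefficients would then be applied unchanged to unknown samples.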

Classification and conformity assessment of the equipment (articles 9 and 10). For the purpose of conformity assessment the directive distinguishes between hazard categories I to IV, whereby category I relates to the lowest risk. To each of these categories adequate modules have been assigned. In the lowest risk category I, the module A has been attributed, which foresees no intervention of the notified body, whilst categories II to IV impose an ascending intervention of that body. As referred to earlier, a choice is given to the manufacturer to select either a procedure based on product control or based on quality assurance systems. Finally, in order to add to the flexibility already inherent in the New Approach, the modules attributed to a higher hazard category may be used in lower categories.  [c.942]

Genetic algorithms can also be used to derive QSAR equations [Rogers and Hopfinger 1994]. The genetic algorithm is supplied with the compounds, their activities, and information about their properties and other relevant descriptors. From this data, the genetic algorithm generates a population of linear regression models, each of which is then evaluated to give a fitness score. A new population of models is then derived using the usual genetic algorithm operators (see Section 9.9.1), with the parameters in the models being selected on the basis of the fitness. Unlike other methods, the genetic algorithm approach provides a family of models from which one can either select the model with the best score or generate an average model.  [c.717]
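
A minimal sketch of the idea, under assumptions the text does not specify (synthetic descriptor data, a parsimony penalty in the fitness score, and simple one-point crossover with bit-flip mutation), might look like this:

```python
import numpy as np

# Minimal sketch of a genetic algorithm deriving linear QSAR models.
# Each individual is a bit mask choosing which descriptors enter a
# linear regression; fitness rewards fit quality and (as an assumed
# extra, not from the text) parsimony.

rng = np.random.default_rng(1)
n_cmpds, n_desc = 50, 12
X = rng.normal(size=(n_cmpds, n_desc))          # synthetic descriptor table
activity = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=n_cmpds)

def fitness(mask):
    if not mask.any():
        return -np.inf                          # empty model is useless
    A = np.column_stack([np.ones(n_cmpds), X[:, mask]])
    coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
    rss = np.sum((activity - A @ coef) ** 2)
    return -rss - 0.5 * mask.sum()              # penalize extra descriptors

pop = rng.integers(0, 2, size=(20, n_desc)).astype(bool)
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # selection: keep best half
    children = []
    for _ in range(10):                           # crossover + mutation
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_desc)
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(n_desc) < 0.05))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("descriptors in best model:", np.flatnonzero(best))
```

The final population is itself the "family of models" the text refers to: one can report the single best-scoring mask, as here, or average predictions over the surviving models.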

One-Factor-at-a-Time Optimization One approach to optimizing the quantitative method for vanadium described earlier is to select initial concentrations for H2O2 and H2SO4 and measure the absorbance. We then increase or decrease the concentration of one reagent in steps, while the second reagent's concentration remains constant, until the absorbance decreases in value. The concentration of the second reagent is then adjusted until a decrease in absorbance is again observed. This process can be stopped after one cycle or repeated until the absorbance reaches a maximum value or exceeds an acceptable threshold value.  [c.669]
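
The search loop itself is simple enough to sketch. The response surface below is a made-up placeholder with a single maximum, and the step size, starting concentrations, and stop-on-first-decrease rule are illustrative assumptions (the sketch also only steps upward, whereas the text allows stepping in either direction):

```python
# Schematic of the one-factor-at-a-time search described above. The
# absorbance response is a hypothetical placeholder peaking near
# (0.5, 1.0); concentrations and steps are in arbitrary units.

def absorbance(c_h2o2, c_h2so4):
    """Hypothetical single-peak response surface."""
    return 1.0 - (c_h2o2 - 0.5) ** 2 - 0.5 * (c_h2so4 - 1.0) ** 2

def optimize_factor(fixed, vary, step, f):
    """Step one concentration up while the response improves."""
    best = f(vary, fixed)
    while f(vary + step, fixed) > best:
        vary += step
        best = f(vary, fixed)
    return vary, best

c1, c2 = 0.1, 0.1                       # initial concentrations (arbitrary)
for _ in range(3):                      # a few alternating cycles
    c1, _ = optimize_factor(c2, c1, 0.05, lambda v, fx: absorbance(v, fx))
    c2, _ = optimize_factor(c1, c2, 0.05, lambda v, fx: absorbance(fx, v))
print(f"near-optimal concentrations: H2O2 ~ {c1:.2f}, H2SO4 ~ {c2:.2f}")
```

On a well-behaved single-peak surface like this one the alternating search converges to the maximum; as the surrounding chapter goes on to discuss, it can stall on surfaces where the two factors interact strongly.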

Use of Genetic Markers. Polymorphisms in DNA sequence exist within the prolactin gene locus in cattle (56). These are used as genetic markers for increased milk production (57). The genetic marker is classified as an RFLP and is not associated with any differences in the amino acid sequence of prolactin. The marker can be visualized by digestion of genomic DNA with the M MI enzyme. A 200 base-pair deletion near the prolactin gene can be detected using this approach. Offspring that inherit the favorable allele at the prolactin locus have milk production that averages 283 kg/yr more than offspring that inherit the unfavorable allele. This marker is used to select superior female cattle for milk production or to select superior sires for the artificial insemination industry.  [c.244]

How was this type of alloy discovered in the first place? Well, the fundamental principles of creep-resistant materials design that we talked about help us to select the more promising alloy recipes and discard the less promising ones fairly easily. Thereafter, the approach is an empirical one. Large numbers of alloys having different recipes are made up in the laboratory and tested for creep, oxidation, toughness, thermal fatigue and stability. The choice eventually narrows down to a few alloys and these are subjected to more stringent testing, coupled with judicious tinkering with the alloy recipe. All this is done using a semi-intuitive approach based on previous experience, knowledge of the basic principles of materials design and a certain degree of hunch and luck. Small improvements are continually made in alloy composition and in the manufacture of the finished blades, which evolve by a sort of creepy Darwinism, the fittest (in the sense of Table 20.1) surviving.  [c.202]

Pest control activities that depend upon the use of pesticides involve the storage, handling, and application of materials that can have serious health effects. Common construction, maintenance practices, and occupant activities provide pests with air, moisture, food, warmth, and shelter. Caulking or plastering cracks, crevices, or holes to prevent harborage behind walls can often be more effective than pesticide application at reducing pest populations to a practical minimum. Integrated Pest Management (IPM) is a low-cost approach to pest control based upon knowledge of the biology and behavior of pests. Adoption of an IPM program can significantly reduce the need for pesticides by eliminating conditions that provide attractive habitats for pests. If an outside contractor is used for pest control, it is advisable to review the terms of the contract and include IPM principles where possible. The following items deserve particular attention. Schedule pesticide applications for unoccupied periods, if possible, so that the affected area can be flushed with ventilation air before occupants return. Pesticides should only be applied in targeted locations, with minimum treatment of exposed surfaces. They should be used in strict conformance with manufacturers' instructions and EPA labels. General periodic spraying may not be necessary. If occupants are to be present, they should be notified prior to the pesticide application. Particularly susceptible individuals could develop serious illness even though they are only minimally exposed. Select  [c.212]

It is never appropriate to add any type of anti-freeze solution to an open cooling tower. Closed (fluid cooler) systems, however, can be protected from freeze-up by the addition of ethylene glycol or other fluids. Fluid cooler casing sections can also be insulated to reduce heat loss, thereby protecting the coil from freeze-up. Counterflow, blowthrough towers tend to be more popular as the freeze potential increases. Crossflow towers tend to freeze water on their air inlet louvers under extreme conditions. Fans (propeller type) can be arranged to reverse direction on such towers to melt ice. This process should never be automated. Instead, the operator should weigh the situation and reverse the fan only as long as required. The designer must select components suitable for reverse rotation. Fan discharge dampers are a capacity control accessory item for centrifugal fan cooling towers. They fit in the fan scroll. In the open position, they are much like a thin piece of sheet metal in a moving airstream oriented parallel to airflow. The airstream doesn't know it's there. As the dampers close, the sheet metal becomes less parallel to airflow and turbulence disrupts the air stream. Airfoil dampers essentially ruin fan housing efficiency to achieve a reduction in airflow. Dampers can be set and locked when a manual locking quadrant is specified, but it is more common to use electric or pneumatic actuators that close the dampers as the exiting water temperature becomes too low. While reducing airflow is the correct method of reducing capacity, dampers are not the best approach. They offer the poorest energy savings, and the actuating mechanisms tend to fail long before the average cooling tower life span.  [c.79]

Welding Operations Efforts have been made to use the LVHV design approach for controlling welding fumes. Sometimes, this can be an effective method. Sometimes, however, there can be serious problems with the high-velocity exhaust stripping away shielding gases and causing poor quality welds. It is also difficult for exhaust nozzles to survive without damage in industrial welding environments, where even relatively slight damage can cause significant changes in the high-velocity airflow patterns and adversely affect welding. Most successful point-exhaust applications for welding establish capture velocities lower than for LVHV dust control, but still higher than for conventional exhaust hoods.  [c.854]

As the production methods for MWCNTs are very efficient [8] (see Chaps. 2 and 12), it is an advantage to implement a filling procedure after the synthesis. A promising approach to filling CNT cavities could exploit the capillary properties that were revealed by Ajayan and Iijima [9]. Subsequent studies by Dujardin et al. [10] allowed the estimation of a surface-tension threshold in order to select materials that are good candidates to wet and fill CNTs.  [c.129]

The critical incident technique was first described by Flanagan (1954) and was used during World War II to analyze "near-miss incidents." The wartime studies of "pilot errors" by Fitts and Jones (1947) are the classic studies using this technique. The technique can be applied in different ways. The most common application is to ask individuals to describe situations involving errors made by themselves or their colleagues. Another, more systematic approach is to get them to fill in reports on critical incidents on a weekly basis. One recent development of the technique has been used in the aviation world to solicit reports from aircraft crews, in an anonymous or confidential way, on incidents in aircraft operations. Such data collection systems will be discussed more thoroughly in Chapter 6.  [c.157]

The MO wave function for CH4 may be improved by adding configurations corresponding to excited determinants, i.e. replacing occupied MOs with virtual MOs. Allowing all excitations in the minimal basis valence space and performing the full optimization corresponds to a [8,8]-CASSCF wave function (Section 4.6). Similarly, the SCVB wave function in eq. (7.10) may be improved by adding ionic VB structures like CH3+/H- and CH3-/H+; this corresponds to exciting an electron from one of the singly occupied VB orbitals into another VB orbital, thereby making it doubly occupied. The importance of these excited/ionic terms can again be determined by the variational principle. If all such ionic terms are included, the fully optimized SCVB+CI wave function is for all practical purposes identical to that obtained by the MO-CASSCF approach (the only difference is a possible slight difference in the description of the carbon 1s core orbital). Both types of wave function provide essentially the same total energy, and thus include the same amount of electron correlation. The MO-CASSCF wave function attributes the electron correlation to interaction of 1764 configurations, the HF reference and 1763 excited configurations, with each of the 1763 configurations providing only a small amount of the correlation energy. The SCVB wave function (which includes only one resonance structure), however, contains 90+% of the correlation energy, and only a few % are attributed to excited structures. The ability of  [c.200]
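
The 1764 figure quoted above can be checked with the Weyl-Paldus dimension formula for the number of spin-adapted configurations of n electrons in N orbitals with total spin S. The short sketch below is an independent check added here, not part of the original text.

```python
from math import comb

# Weyl-Paldus dimension formula: the number of spin-adapted
# configurations for n electrons in N orbitals with total spin S is
#   D(N, n, S) = (2S+1)/(N+1) * C(N+1, n/2 - S) * C(N+1, n/2 + S + 1)

def weyl_dimension(n_orbitals, n_electrons, spin):
    N, n, S = n_orbitals, n_electrons, spin
    return round((2 * S + 1) / (N + 1)
                 * comb(N + 1, round(n / 2 - S))
                 * comb(N + 1, round(n / 2 + S + 1)))

# [8,8]-CASSCF singlet (8 electrons in 8 orbitals, S = 0):
print(weyl_dimension(8, 8, 0))  # -> 1764: the HF reference + 1763 excited
```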

A 500-ml three-necked flask equipped with a thermometer, a mechanical stirrer, a condenser, and an addition funnel (openings protected by drying tubes) is charged with 21 g (0.105 mole) of trimethylene dibromide and 16 g (0.1 mole) of diethyl malonate (previously dried over calcium sulfate). A solution of 4.6 g of sodium in 80 ml of absolute alcohol is added through the addition funnel at a rate so as to maintain the reaction temperature at 60-65°. The mixture is then allowed to stand until the temperature falls to 50-55°, then is heated on a steam bath for 2 hours. Sufficient water is added to dissolve the precipitated sodium bromide, and the excess ethanol is removed by distillation on a steam or water bath. A steam delivery tube is inserted into the flask and steam distillation is carried out until all the diethyl 1,1-cyclobutanedicarboxylate and diethyl malonate have come over (400-500 ml of distillate). The distillate is extracted three times with 50-ml portions of ether, and the ether is evaporated on a rotary evaporator (drying is not necessary). The residue is refluxed with a solution of 11.2 g of potassium hydroxide in 50 ml of 95% ethanol for 2 hours, then cooled, and the ethanol is removed (rotary evaporator). The residue is dissolved in hot water (approx. 10 ml) and concentrated hydrochloric acid (approx. 8 ml) is added until the solution is just acid to litmus. The solution is boiled briefly to expel carbon dioxide, made slightly alkaline with dilute ammonium hydroxide, and a slight excess of aqueous barium  [c.96]

The best way to select a consultant is to seek recommendations from other companies who are known to have carried out a similar exercise. Failing this, a useful guide is the Association of Consulting Engineers Yearbook, which lists the consultants classified according to the type of engineering work they engage in and their specialist discipline. An alternative approach is to contact associations set up to serve particular industries (e.g. the Production Engineering Research Association (PERA) in Melton Mowbray for engineering production and the Rubber and Plastics Research Association (RAPRA)).  [c.82]

The basis of frequency-domain vibration analysis assumes that we monitor the rotational frequency components of a machine-train. If a single block of data is acquired, non-repetitive or spurious data can be introduced into the database. The microprocessor should be able to acquire multiple blocks of data, average the total, and store the averaged value. This approach will enable the data acquisition unit to automatically reject any spurious data and provide reliable data for trending and analysis. Systems that rely on a single block of data will severely limit the accuracy and repeatability of acquired data. They will also limit the benefits that can be derived from the program. The microprocessor should also have electronic circuitry that automatically checks each data set and block of data for accuracy and rejects any spurious data that may occur. Auto-rejection circuitry is available in several of the commercially available systems. Coupled with multiple-block averaging, this auto-rejection circuitry assures maximum accuracy and repeatability of acquired data. A few of the microprocessor-based systems require the user to input the maximum scale that is used to acquire data. This will severely limit the accuracy of data. Setting the scale too high will prevent acquisition of factual machine data. A setting that is too low will not capture any high-energy frequency components that may be generated by the machine-train. Therefore, the microprocessor should have auto-scaling capability to ensure accurate data. Vibration data can be distorted by high frequency components that fold over into the lower frequencies of a machine's signature. Even though these aliased frequency components appear real, they do not exist in the machine. Low frequency components can also distort the mid-range signature of a machine in the same manner as high frequency ones. The microprocessor selected for vibration monitoring should include a full range of anti-aliasing filters to prevent the distortion of machine signatures. The features illustrated in the example also apply to non-vibration measurements. For example, pressure readings require the averaging capability to prevent spurious readings. Slight fluctuations in line or vessel pressure are normal in most plant systems. Without the averaging capability, the microprocessor cannot acquire an accurate reading of the true system pressure.  [c.806]
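
The multi-block averaging with auto-rejection described above can be sketched as follows. The 2-sigma test on block RMS is an illustrative rejection rule, not the circuitry any particular vendor uses, and the signal data are synthetic.

```python
import numpy as np

# Sketch of multi-block averaging with auto-rejection: acquire several
# blocks, discard blocks whose overall level deviates strongly from the
# ensemble (spurious data), then average the rest.

def average_blocks(blocks, n_sigma=2.0):
    """Average data blocks, rejecting outliers by block RMS level."""
    blocks = np.asarray(blocks)
    rms = np.sqrt(np.mean(blocks ** 2, axis=1))        # one RMS per block
    keep = np.abs(rms - rms.mean()) <= n_sigma * rms.std()
    return blocks[keep].mean(axis=0), int(keep.sum())

rng = np.random.default_rng(2)
good = [np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.normal(size=1024)
        for _ in range(9)]
spike = good[0] + 50.0                                 # one spurious block
avg, n_used = average_blocks(good + [spike])
print(f"averaged {n_used} of 10 blocks")               # the spike is rejected
```

Averaging the surviving blocks suppresses random noise, while the rejection step keeps a single corrupted acquisition from contaminating the stored trend value.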

To motivate the approach, suppose we had a size N = 4 input vector and the problem, as above, was to teach the net to learn that the first site has value +1 40% of the time. Of the 2^4 = 16 possible input vectors, 8 have their first component equal to +1. The reader should have no difficulty in convincing herself that there are an infinite number of ways of assigning probabilities to these 8 vectors such that the 40% probability of occurrence criterion for value +1 at the first site is satisfied. Why should we select one particular distribution over any other in this infinite set? Since there is no compelling reason to give any higher weight to any one of the 8 vectors with value +1 at the first site, one obvious choice is simply to distribute probabilities equally: P(1,0,0,0) = 5%, P(1,0,0,1) = 5%, ..., P(1,1,1,1) = 5%. Recalling elementary information theory (see section 2.1.2), this choice amounts to maximizing the system entropy.  [c.534]
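
A quick numerical check of this maximum-entropy argument (the skewed alternative below is one arbitrary counterexample, not from the original text):

```python
import numpy as np

# Among distributions over the 2^4 = 16 input vectors that put total
# mass 0.40 on the 8 vectors whose first component is +1, the equal
# split (5% each, with the remaining 60% spread equally over the other
# 8 vectors) has higher entropy H = -sum p*log2(p) than a skewed one.

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

equal  = [0.40 / 8] * 8 + [0.60 / 8] * 8       # 5% and 7.5% weights
skewed = [0.40] + [0.0] * 7 + [0.60 / 8] * 8   # all 40% on one vector

print(f"equal split:  H = {entropy_bits(equal):.3f} bits")   # ~3.971
print(f"skewed split: H = {entropy_bits(skewed):.3f} bits")  # ~2.771
```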

