Case studies approach

Some modifications to the TI method and the introduction of a sound loop-breaking strategy that is algorithmic in nature are shown in Reference 11. Also shown is the concept of level of loop to indicate the number of source and sink streams involved in any heat-load loop. Loops can range from first level (one source and one sink stream) to Nth level, where N is the smaller of the number of source streams and the number of sink streams. The evolutionary procedure searches for loops from the lowest level first and tries to break (eliminate) them to reduce the number of exchangers; the procedure also introduces stream splits (11). Reference 5 includes the concept of using one value of ΔT to find utility rates and thereafter using a different ΔT for network synthesis. This indirectly allows minimization of the number of heat-exchanger shells rather than just the number of exchangers. A case study analysis is necessary to implement this method, which is called the double temperature of approach (DTA).  [c.525]
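The "level of loop" bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the algorithm of Reference 11: the loop representation, stream names, and ordering function are all assumptions made for the example.

```python
# Hedged sketch: a heat-load loop is represented as a list of exchangers,
# each a (source stream, sink stream) pair. All names/data are illustrative.

def loop_level(loop):
    """Level of a heat-load loop = min(#source streams, #sink streams)."""
    sources = {src for src, _ in loop}
    sinks = {snk for _, snk in loop}
    return min(len(sources), len(sinks))

def loops_by_level(loops):
    """Order candidate loops lowest level first, mirroring the evolutionary
    search that tries to break low-level loops before higher-level ones."""
    return sorted(loops, key=loop_level)

# A first-level loop: one hot stream H1 matched twice with cold stream C1.
first = [("H1", "C1"), ("H1", "C1")]
# A second-level loop: two source streams and two sink streams.
second = [("H1", "C1"), ("H1", "C2"), ("H2", "C2"), ("H2", "C1")]

ordered = loops_by_level([second, first])  # first-level loop comes first
```

Breaking the first-level loop (merging its two H1-C1 matches into one exchanger) is the cheapest evolution step, which is why the search visits low levels first.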

Any formal approach to process synthesis of an industrial problem soon grows into a computational problem of developing an astronomical number of alternatives and then finding a way of selecting among them. The two main approaches to these problems are heuristic and mixed-integer programming. The heuristic approach involves making up different case studies and evaluating them. Mixed-integer programming is a mathematical tool wherein all the defined alternate structures are modeled as a mathematical programming problem and special computer codes produce an optimal solution with the structure and the values of all the variables that produce the best value of the objective function.  [c.81]
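The mixed-integer idea, selecting among defined alternate structures by optimizing an objective, can be shown with a deliberately tiny brute-force enumeration. This is a toy stand-in for a real MILP solver; the profits, costs, and the mutual-exclusion constraint are invented for illustration.

```python
# Toy structure selection: binary y_i says whether candidate element i is
# built; the objective is profit minus fixed cost. Numbers are invented.
from itertools import product

profit = [40.0, 25.0, 30.0]   # value contributed by element i if selected
fixed = [15.0, 20.0, 10.0]    # fixed (capital) cost of element i

def objective(y):
    return sum(yi * (p - f) for yi, p, f in zip(y, profit, fixed))

def feasible(y):
    # Elements 0 and 1 are alternate structures: build at most one of them.
    return y[0] + y[1] <= 1

best = max((y for y in product((0, 1), repeat=3) if feasible(y)),
           key=objective)
```

A real synthesis problem replaces the exhaustive `product` loop with branch-and-bound over thousands of binaries, but the modeling pattern (binary structure variables, feasibility constraints, single objective) is the same.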

One obvious way of making better violins would be to dismantle a fine instrument, make accurate thickness measurements over the whole of the soundboard, and mass-produce soundboards to this pattern using computer-controlled machine tools. But there is a problem with this approach: because wood is a natural material, soundboard blanks will differ from one another to begin with, and this variability will be carried through to the finished product. We might, however, be able to make a good violin every time if we could replace wood by a synthetic material having reproducible properties. This case study, then, looks at how we might design an artificial material that will reproduce the acoustically important properties of wood as closely as possible.  [c.314]

This case study discusses the design of a reciprocating mechanical press for the manufacture of can lids drawn from sheet steel material. The authors were involved in the early stages of the product development process to advise the company designing the press in choosing between a number of design alternatives with the goal of ensuring its reliability. The authors used a probabilistic approach to the problem to provide the necessary degree of clarity between the competing solutions.  [c.244]

The press had been designed with a capacity to deliver 280 kN press force and to work at a production rate of 40 lids per minute. Calculations to determine the distribution of forming loads required indicated that the press capacity was adequate to form the family of steel lids to be produced on the machine. One of the major areas of interest in the design was the con-rod and pin (see Figure 4.66). The first option considered was based on a previous design where the con-rod was manufactured from cast iron with phosphor bronze bearings at the big and small ends. However, weaknesses in this approach necessitated the consideration of other options. The case study presents the analysis of the pin and con-rod using simple probabilistic techniques in an attempt to provide in-service reliable press operation. The way a weak link was introduced to ensure ease of maintenance and repair in the event  [c.244]
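The probabilistic comparison of design options hinted at above is usually a stress-strength (load-capacity) interference calculation. The 280 kN capacity comes from the text; the scatter values and the load distribution below are invented purely to show the shape of the calculation.

```python
# Stress-strength interference sketch: reliability = P(capacity > load)
# for independent, normally distributed capacity and load.
import math

def reliability(mu_strength, sd_strength, mu_load, sd_load):
    """Normal-normal interference: R = Phi(z) with the coupling equation
    z = (mu_S - mu_L) / sqrt(sd_S^2 + sd_L^2)."""
    z = (mu_strength - mu_load) / math.hypot(sd_strength, sd_load)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

R = reliability(mu_strength=280.0, sd_strength=14.0,   # kN; 5% scatter assumed
                mu_load=220.0, sd_load=18.0)           # kN; forming load assumed
```

Competing con-rod/pin options can then be ranked by their computed reliabilities rather than by deterministic safety factors, which is the "degree of clarity" the probabilistic approach provides.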

Chapter 3 reports on a methodology for the allocation of capable component tolerances within assembly stack problems. There is probably no other design effort that can yield greater benefits for less cost than the careful analysis and assignment of tolerances. However, the proper assignment of tolerances is one of the least understood activities in product engineering. The complex nature of the problem is addressed, with background information on the various tolerance models commonly used, optimization routines and capability implications, at both component manufacturing and assembly level. Here we introduce a knowledge-based statistical approach to tolerance allocation, where a systematic analysis for estimating process capability levels at the design stage is used in conjunction with methods for the optimization of tolerances in assembly stacks. The method takes into account failure severity through linkage with FMEA for the setting of realistic capability targets. The application of the method is fully illustrated using a case study from the automotive industry.  [c.416]
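The core arithmetic of an assembly stack analysis, before any optimization or capability linkage is layered on, is the contrast between worst-case and statistical (root-sum-square) stacking. The tolerances below are invented for illustration; the actual method in Chapter 3 adds process-capability estimates and FMEA-linked targets on top of this.

```python
# Worst-case vs statistical (RSS) tolerance stack for a linear assembly
# stack of independent components. Tolerance values are invented.
import math

def worst_case(tols):
    """Worst-case stack: tolerances simply add."""
    return sum(tols)

def rss(tols):
    """Statistical stack assuming independent, centred normal variations."""
    return math.sqrt(sum(t * t for t in tols))

component_tols = [0.05, 0.08, 0.03, 0.06]   # mm, bilateral tolerances

wc = worst_case(component_tols)   # 0.22 mm
stat = rss(component_tols)        # ~0.116 mm -- the statistical benefit
```

The gap between the two numbers is what tolerance-allocation optimization exploits: components can be given looser (cheaper) tolerances while the assembly still meets its statistical requirement.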

This case study helps to illustrate an innovative approach to managing air pollution problems. Savings derived from an emissions trading program represent an excellent way of making funds available for reinvestment into a facility operation, and further, encourages wider application of pollution prevention practices. When considering P2 technologies versus straightforward control hardware, try to identify approaches that will enable possible savings to be converted into working capital that improves efficiency, quality, productivity, and more sweeping reductions in emissions.  [c.514]



In the second case study, variation tree analysis and the events and causal factors chart/root cause analysis method are applied to an incident in a resin plant. This case study illustrates the application of retrospective analysis methods to identify the underlying causes of an incident and to prescribe remedial actions. This approach is one of the recommended strategies in the overall error management framework described in Chapter 8.  [c.292]

The key steps to integrate PSM and ESH activities using a Quality Management system are summarized below through a very simplified case study. The approach used in the case study is to indicate the input, activity and output of each stage of the integration effort. (Each chapter in these guidelines represents one stage of the integration effort.) For each stage, the input represents the data available and the starting point; the activity describes the main efforts; and the output means the findings or results.  [c.149]

These capabilities of ANNs make them a unique tool for a large number of industrial applications. In this chapter, the authors demonstrate, with case studies, the advantages of using this approach to physical property predictions in polymer science.  [c.1]

Literature in the area of neural networks has been expanding at an enormous rate with the development of new and efficient algorithms. Neural networks have been shown to have enormous processing capability and the authors have implemented many hybrid approaches based on this technique. The authors have implemented an ANN based approach in several areas of polymer science, and the overall results obtained have been very encouraging. Case studies and the algorithms presented in this chapter were very simple to implement. With the current expansion rate of new approaches in neural networks, the readers may find other paradigms that may provide new opportunities in their area of interest.  [c.31]

The following case study demonstrates a step-by-step approach to performing a comprehensive material and heat balance.  [c.147]
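A material and heat balance reduces, at each unit, to conservation of mass and enthalpy. As a minimal warm-up for the step-by-step case study, here is the balance around an adiabatic mixer of two water streams; the flows, temperatures, and constant heat capacity are invented illustrative values.

```python
# Minimal material-and-heat balance: adiabatic mixing of two streams.
# Assumes a constant heat capacity and no phase change (illustrative only).
cp = 4.18   # kJ/(kg*K), assumed constant for liquid water

m1, T1 = 1000.0, 90.0   # stream 1: kg/h, deg C
m2, T2 = 1500.0, 30.0   # stream 2: kg/h, deg C

m3 = m1 + m2                                      # overall mass balance
T3 = (m1 * cp * T1 + m2 * cp * T2) / (m3 * cp)    # heat balance -> outlet T
```

A comprehensive balance is the same bookkeeping repeated unit by unit, with component balances and latent-heat terms added where reactions or phase changes occur.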

In the case of films on liquids, ΔV was measured directly rather than through determinations of thermionic work functions. However, just as an adsorbed film greatly affects ΔV, so must it affect the work function. In the case of metals, the effect of adsorbed gases on ΔV is easily studied by determining changes in the work function, and this approach is widely used today (see Section VIII-2C). Much of Langmuir's early work on chemisorption on tungsten may have been stimulated by a practical appreciation of the important role of adsorbed gases in the behavior of vacuum tubes.  [c.209]

The general type of approach, that is, the comparison of an experimental heat of immersion with the expected value per square centimeter, has been discussed and implemented by numerous authors [21,22]. It is possible, for example, to estimate E(SV) - E(SL) from adsorption data or from the so-called isosteric heat of adsorption (see Section XVII-12B). In many cases where approximate relative areas only are desired, as with coals or other natural products, the heat of immersion method has much to recommend it. In the case of microporous adsorbents surface areas from heats of immersion can be larger than those from adsorption studies [23], but the former are the more correct [24].  [c.576]
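The comparison described above is a single division: measured heat of immersion over the expected heat per unit area. Both numbers below are invented to show the unit handling only.

```python
# Surface area from heat of immersion: A = (total heat evolved) /
# (expected heat of immersion per unit area). Values are invented.
h_immersion = 21.0     # J, measured heat evolved for the whole sample
h_per_cm2 = 4.2e-5     # J/cm^2, expected value per square centimetre

area_cm2 = h_immersion / h_per_cm2          # estimated surface area, cm^2
sample_mass_g = 10.0                        # assumed sample mass
area_m2_per_g = area_cm2 / 1.0e4 / sample_mass_g   # specific area, m^2/g
```

For microporous solids this estimate can exceed the adsorption-derived area, as the excerpt notes, because immersion wets surface that gas adsorption models undercount.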

One interesting new field in the area of optical spectroscopy is near-field scanning optical microscopy, a technique that allows for the imaging of surfaces down to sub-micron resolution and for the detection and characterization of single molecules. When applied to the study of surfaces, this approach is capable of identifying individual adsorbates, as in the case of oxazine molecules dispersed on a polymer film, illustrated in figure B1.22.11 [82]. Absorption and emission spectra of individual molecules can be obtained with this technique as well, and time-dependent measurements can be used to follow the dynamics of surface processes.  [c.1794]

P. C. Pienaar and W. E. P. Smith, "A Case Study of the Production of High Grade Manganese Sinter from Low-Grade Mamatwan Manganese Ore," Proceedings of the 6th International Ferroalloys Congress (Infacon), Cape Town, South Africa, 1992, p. 149.  [c.499]

Knowledge Acquisition. The process of knowledge acquisition is dependent on the results of problem analysis, ie, the representation and reasoning methods chosen for the problem guide the knowledge engineer through the elicitation (59). For example, knowledge acquisition for a diagnostic application would focus on the various root causes relevant to the domain, associations between root causes and symptoms, and ways to organize the space of root causes to better focus the search. However, there are a few general methods that are useful, independent of the task. These include protocol analysis, repertory grids, and induction from examples (36). Protocol analysis is a technique for obtaining case studies and following the detailed steps in the expert's chain of reasoning. Repertory grids are a way of organizing associations in the form of tables, noting differences between associations, or other attributes of the associations, eg, level of confidence. From an object-oriented point of view, repertory grids are particularly useful for identifying features of interest, and for assigning values to those features. Lastly, induction from examples is a useful approach when the representation is simple and relatively unstructured. Given actual problem-solving examples of data and the conclusions resulting from the data, an induction system can generalize a decision tree or a set of unstructured rules. Knowledge engineers and experts are the primary participants in the knowledge acquisition step.  [c.538]
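Induction from examples can be sketched in its simplest form: generalize a rule by keeping only the attribute values common to all positive examples of a conclusion. This is a toy stand-in for the induction systems mentioned above; the diagnostic attributes and conclusions are invented.

```python
# Minimal "induction from examples": each example is (attributes, conclusion).
# The induced rule keeps attribute/value pairs shared by every positive
# example. Data and attribute names are invented for illustration.
examples = [
    ({"noise": "grinding", "temp": "high"},   "bearing_failure"),
    ({"noise": "grinding", "temp": "normal"}, "bearing_failure"),
    ({"noise": "quiet",    "temp": "high"},   "overload"),
]

def induce_rule(examples, conclusion):
    """Return the attribute/value pairs shared by all examples that lead
    to `conclusion` -- a crude specific-to-general generalization."""
    positives = [attrs for attrs, c in examples if c == conclusion]
    rule = dict(positives[0])
    for attrs in positives[1:]:
        rule = {k: v for k, v in rule.items() if attrs.get(k) == v}
    return rule

rule = induce_rule(examples, "bearing_failure")   # generalizes away "temp"
```

Real induction systems (eg, decision-tree learners) also handle negative examples and noisy data, but the generalization step shown here is the core idea.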

In this, the final case study, we have touched on a probabilistic approach in support of designing against fatigue failure, a topic which is actually outside the scope of the book. A fatigue analysis for the con-rod would need to take into account all factors affecting the fatigue life, such as stress concentrations and surface finish. However, the case study has indicated that a probabilistic design approach has a useful role in such a setting. Readers interested in more on stress concentrations and probabilistic fatigue design are directed to Carter (1986), Haugen (1980) and Mischke (1992).  [c.249]

It is instructive to draw some conclusions from this case study and emphasize the design philosophy of mass integration. First, the target for debottlenecking the biotreatment facility was determined ahead of design. Then, systematic tools were used to generate optimal solutions that realize the target. Next, an analysis study is needed to refine the results. This is an efficient approach to understanding the global insights of the process, setting performance targets, realizing these targets and saving time and effort as a result of focusing on the big picture first and then dealing with the details later. This is a fundamentally different approach from using the designer's subjective decisions to alter the process and check the consequences using detailed analysis. It is also different from using simple end-of-pipe treatment solutions. Instead, the various species are optimally allocated throughout the process. Therefore, objectives such as yield enhancement, pollution prevention and cost savings can be simultaneously addressed. Indeed, pollution prevention (when undertaken with the proper techniques) can be a source of profit for the company, not an economic burden.  [c.95]

Generally speaking, typical major incident conditions correspond to a release of some tens of thousands of kilograms of a hydrocarbon at the site of a chemical plant or refinery that is characterized by the presence of obstructed and partially confined areas in the form of densely spaced equipment. The relative agreement with results derived from the multienergy method indicates that application of this concept is a reasonable approach for this case study.  [c.275]

The next component of the systems approach is the process of learning lessons from operational experience. In Chapter 6, and the case studies in Chapter 7, several techniques are described which can be used to increase the effectiveness of the feedback process. Incident and near-miss reporting systems are designed to extract information on the underlying causes of errors from large numbers of incidents. Chapter 6 provides guidelines for designing such systems. The main requirement is to achieve an acceptable compromise between collecting sufficient information to establish the underlying causes of errors without requiring an excessive expenditure of time and effort.  [c.21]

Another key issue remaining to be resolved is whether a one-dimensional GLE as in A3.8.11 is the optimal choice of a dynamical model in the case of strong damping, or whether a two- or multi-dimensional GLE that explicitly includes coupling to solvation and/or intramolecular modes is more accurate and/or more insightful. Such an approach might, for example, allow better contact with nonlinear optical experiments that could measure the dynamics of such additional modes. It is also entirely possible that the GLE may not even be a good approximation to the true dynamics in many cases because, for example, the friction strongly depends on the position of the reaction coordinate. In fact, a strong solvent modification of the PMF usually ensures that the friction will be spatially dependent [25]. Several analytical studies have dealt with this issue (see, for example, [26, 27 and 28] and literature cited therein). Spatially-dependent friction is found to have an important effect on the dynamical correction in some instances, but in others the Grote-Hynes estimate is predicted to be robust [29]. Nevertheless, the question of the nonlinearity and the accurate modelling of real activated rate processes by the GLE remains an open one.  [c.890]
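For orientation, a one-dimensional GLE of the kind referred to above is conventionally written as follows (a standard textbook form supplied here for reference, not reproduced from equation A3.8.11):

```latex
% One-dimensional generalized Langevin equation (standard form):
% q is the reaction coordinate, W(q) the potential of mean force,
% gamma the friction kernel and R(t) the random force.
\mu \ddot{q}(t) = -\frac{\partial W(q)}{\partial q}
  - \mu \int_{0}^{t} \gamma(t - t')\,\dot{q}(t')\,dt' + R(t),
\qquad
\langle R(t) R(t') \rangle = \mu k_{\mathrm{B}} T\, \gamma(t - t')
```

The spatially dependent friction discussed in the excerpt replaces the kernel γ(t − t′) by a position-dependent γ(q; t − t′), which is precisely what breaks the simple Grote-Hynes picture.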

The great sensitivity and bandwidth of electro-optic approaches to optical-THz conversion also enable a variety of new experiments in condensed matter physics and chemistry to be conducted, as is outlined in figure B1.4.6. The left-hand side of this figure outlines the experimental approach used to generate ultrafast optical and THz pulses with variable time delays between them [M]. A mode-locked Ti:sapphire laser is amplified to provide approximately 1 W of 100 fs near-IR pulses at a repetition rate of 1 kHz. The 850 nm light is divided into three beams, two of which are used to generate and detect the THz pulses, and the third of which is used to optically excite the sample with a suitable temporal delay. The right-hand panel presents the measured relaxation of an optically excited TBNC molecule in liquid toluene. In such molecules, the charge distribution changes markedly in the ground and electronically excited states. In TBNC, for example, the excess negative charge on the central porphyrin ring becomes more delocalized in the excited state. The altered charge distribution must be accommodated by changes in the surrounding solvent. This so-called solvent reorganization could only be indirectly probed by Stokes shifts in previous optical-optical pump-probe experiments, but the optical-THz approach enables the solvent response to be directly investigated. In this case, at least three distinct temporal response patterns of the toluene solvent can be seen that span several temporal decades [M]. For solid-state spectroscopy, ultrafast THz studies have enabled the investigation of coherent oscillation dynamics in the collective (phonon) modes of a wide variety of materials for the first time [49].  [c.1249]

Within this continuum approach Cahn and Hilliard [48] have studied the universal properties of interfaces. While their elegant scheme is applicable to arbitrary free-energy functionals with a square gradient form, we illustrate it here for the important special case of the Ginzburg-Landau form. For an ideally planar interface the profile depends only on the distance z from the interfacial plane. In mean field approximation, the profile m(z) minimizes the free-energy functional (B3.6.11). This yields the Euler-Lagrange equation  [c.2370]
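In the standard square-gradient (Ginzburg-Landau) form, the functional and the Euler-Lagrange equation referred to above read as follows; this is the usual textbook result supplied for reference (the symbols b and f(m) denote the gradient stiffness and local free-energy density, and are not notation taken from (B3.6.11)):

```latex
% Square-gradient free-energy functional for a planar interface profile m(z)
F[m] = \int dz \left[ \frac{b}{2} \left( \frac{dm}{dz} \right)^{2}
       + f(m) \right],
% Minimization delta F / delta m = 0 gives the Euler-Lagrange equation
\qquad
b\, \frac{d^{2}m}{dz^{2}} = \frac{\partial f}{\partial m}
```

For a symmetric double-well f(m) this equation has the familiar solitonic solution m(z) = m_b tanh(z/2ξ), whose width ξ sets the intrinsic interfacial thickness.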

Of the several trapping possibilities described in the last section, by far the most popular choice for collision studies has been the magneto-optical trap (MOT). An MOT uses spatially dependent resonant scattering to cool and confine atoms. If these atoms also absorb the trapping light at the initial stage of a binary collision and approach each other on an excited molecular potential, then during the time of approach the colliding partners can undergo a fine-structure-changing collision (FCC) or relax to the ground state by spontaneously emitting a photon. In either case, electronic energy of the quasimolecule converts to nuclear kinetic energy. If both atoms are in their electronic ground states from the beginning to the end of the collision, only elastic and hyperfine changing (HCC) collisions  [c.2472]

One can study a slow (minutes or longer) chemical reaction by mixing two chemicals in the sample compartment of a standard UV-vis spectrophotometer and measuring the spectrum as a function of time. Though perhaps not often thought of as such, this is a form of transient spectroscopy, albeit a slow one. To carry out such a measurement for a reaction which is complete on a time scale of milliseconds to seconds one needs to mix the chemicals and measure the spectra much more rapidly. For gases, this can be done by releasing reactants into a discharge flow apparatus, where they are mixed by diffusion and turbulence while being carried down a tube in an inert carrier gas such as helium. This is not, strictly speaking, a transient kinetic method, however, as the progress in time of the reaction is measured by the steady state detection of concentration as a function of distance travelled down the tube. For liquids, achieving satisfactory mixing times (not to mention conserving reactant) usually requires the use of a stopped-flow apparatus, as in figure C3.1.2, in which chemicals are rapidly forced into a sample cuvette by syringes whose plungers are quickly actuated at a specific time. A probe detection system is triggered immediately after the sample is mixed in this transient technique. Conductance may be monitored in the case of ionic solutions, whereas spectrophotometry provides a more general method for determining concentrations. Electronic detection methods provide the time resolution needed (oscilloscopes and transient recorders with response frequencies up to a few GHz are available) to monitor the conductance or spectral changes that accompany the reaction taking place. The rapid mixing approach is limited to the study of reactions taking place on time scales of milliseconds or longer simply because it takes this long for mixing to occur (although ultrarapid techniques have been developed to mix reactants on a 100 microsecond time scale [7]).  [c.2949]
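A stopped-flow absorbance trace is typically reduced to a rate constant by assuming (pseudo-)first-order kinetics, A(t) = A∞ + (A₀ − A∞)e^(−kt), and fitting the linearized decay. The trace below is synthetic (the rate constant and absorbances are invented); in practice the points come from the probe detection system.

```python
# Reduce a (noiseless, synthetic) stopped-flow trace to a first-order rate
# constant by linearizing: ln(A - A_inf) = ln(A_0 - A_inf) - k*t.
import math

k_true, A0, Ainf = 120.0, 1.00, 0.20            # 1/s; absorbance units
times = [i * 1.0e-3 for i in range(1, 30)]       # 1-29 ms after mixing
trace = [Ainf + (A0 - Ainf) * math.exp(-k_true * t) for t in times]

x = times
y = [math.log(a - Ainf) for a in trace]          # linearized signal

# Ordinary least-squares slope of y vs x; k is minus the slope.
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
k_fit = -slope
```

With a millisecond mixing dead time, rate constants up to roughly a few hundred per second are accessible this way, consistent with the time-scale limit stated in the excerpt.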

In this chapter, we demonstrate the approach of the CBOA, and show that to carry out different orders of perturbation, the ability to calculate the matrix elements of the derivatives of the Coulomb interaction with respect to nuclear coordinates is essential. Therefore, we studied the case of the diatomic molecule, and here we demonstrate the basic skill of computing the relevant matrix elements in Gaussian basis sets. The formulas for diatomic molecules, up to the second derivatives of the Coulomb interaction, are shown here to demonstrate that some basic techniques can be developed to carry out the calculation of the matrix elements of even higher derivatives. The formulas obtained may be complicated. First, they are shown to be nonsingular. Second, the Gaussian basis set with angular momentum can be dealt with in similar ways. Third, they are expressed as multiple finite sums of certain simple functions, of order up to the angular momentum of the basis functions, and thus they can be computed efficiently and accurately. We show the application of this approach on the H2 molecule. The calculated equilibrium position and force constant seem to be reasonable. To obtain more reliable results, we have to employ a larger basis set to higher orders of perturbation to calculate the equilibrium geometry and wave functions.  [c.401]

The essential equivalence of all three approaches for handling the R-T effect presented in this section has been demonstrated through computations in which the same input data have been used; that is, in [78] BDD and JM are compared, and in [21] and [86] our approach is compared with these two. The result of the latter two studies showed that JM had exaggerated in claiming that for a direct diagonalization of the vibronic matrix it would be necessary to choose an enormous basis in order to avoid truncation errors [24]. We were able to reproduce their results by diagonalizing matrices of dimensions < 100. With this observation we do not want to question the general utility of the Hamiltonian or matrix transformations implemented in the approaches by BDD and JM; in approaches tailored to lean on experimental findings such subtle handling is of much more relevance than in ab initio calculations, where in some steps the brute-force philosophy can be applied without undesirable consequences.  [c.513]

A convenient technique to study the sign flip of the wave function is the line-integral approach suggested by Baer [85,86] (an alternative, though more cumbersome, approach would be to monitor the sign of the wave function along the entire loop [74]). Calculations have been reported [5] using such a line-integral approach for H3, DH2, and HD2 using the 2x2 diabatic DMBE potential energy surface [1]. First, we have shown that the phase obtained by employing the line-integral method is identical (up to a constant) to the mixing angle of the orthogonal transformation that diagonalizes the diabatic potential matrix (see Appendix A). We have also studied this angle numerically along the line formed by fixing the two hyperspherical coordinates p and 0 and letting
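The line-integral idea can be demonstrated on a toy model. For a 2x2 linear Jahn-Teller matrix (V11 = x, V22 = −x, V12 = y; an invented model, not the DMBE surface), the mixing angle that diagonalizes the diabatic matrix accumulates π around any loop enclosing the conical intersection, which is exactly the sign-flip signature described above.

```python
# Accumulate the diabatic-to-adiabatic mixing angle around a closed loop.
# Model: diabatic matrix [[x, y], [y, -x]] (toy linear Jahn-Teller coupling).
import math

def mixing_angle(x, y):
    """Angle diagonalizing [[x, y], [y, -x]]: tan(2*theta) = 2*V12/(V11-V22)."""
    return 0.5 * math.atan2(2.0 * y, 2.0 * x)

def accumulated_phase(n=2000, radius=1.0):
    """Line integral of d(theta) around a circle enclosing the origin."""
    total, prev = 0.0, mixing_angle(radius, 0.0)
    for i in range(1, n + 1):
        phi = 2.0 * math.pi * i / n
        cur = mixing_angle(radius * math.cos(phi), radius * math.sin(phi))
        d = cur - prev
        # Unwrap the pi-jumps that the principal branch of atan2 introduces.
        if d > math.pi / 2.0:
            d -= math.pi
        elif d < -math.pi / 2.0:
            d += math.pi
        total += d
        prev = cur
    return total
```

A loop that does not enclose the intersection accumulates zero phase instead of π, so the integral acts as a topological detector for the sign flip of the wave function.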


See pages that mention the term Case studies approach: [c.36], [c.191], [c.799], [c.143], [c.2277], [c.2953], [c.44], [c.478]
Guidelines for Preventing Human Error in Process Safety (1994) -- [ c.17 , c.22 ]