Thomas-Fermi averaged NMR

If the problem is dominated by equipment with a single specification (i.e., a single material of construction, equipment type, and pressure rating), then the capital cost target can be calculated from Eq. (7.21) with the appropriate cost coefficients. However, if there is a mix of specifications, such as different streams requiring different materials of construction, then the approach must be modified.  [c.229]

The project phasing covered so far is still the most common approach used in industry. However, in recent years new concepts have been tested. Parallel engineering has emerged as a project management style aimed at significantly reducing the time span from discovery to first oil and thus fast-tracking new developments. In the North Sea, conventional developments have on average taken some nine years to first oil. Parallel engineering may help to halve this time frame by carrying out appraisal, conceptual design and construction concurrently. The approach carries a higher risk for the parties involved, and this has to be balanced against the potentially much higher rewards resulting from acceleration of first oil. For example, if conceptual design is carried out before appraisal results are available, considerable uncertainty will have to be managed by the  [c.294]

An interesting side point is that it is possible to recast the time-dependent approach, as described here, in a purely time-independent fashion, since from the equations above it follows that [74]  [c.2301]

One of the complexities that arises in studying the vibrational energy loss from highly excited molecules is the very, very large number of vibrational states (e.g. 10 vibrational states per cm⁻¹) in molecules of even moderate size at chemically significant energies (100-400 kJ mol⁻¹). Because of this, directly probing the vibrational states of S, with even the highest resolution laser devices, is essentially impossible. Nevertheless, such collisions can be monitored in great detail by using a simple trick, provided that the bath molecule B is relatively small. This trick amounts to realizing that the collision of S with B can be viewed through the eyes of the bath molecule B [5, 6]. When B is a small molecule with well resolved and assigned vibrational and rotational spectroscopic transitions, more information about the quenching process (C3.3.3) can be obtained from probing the bath B than from probing the donor S. If we return for a moment to the collision between the bread truck and the milk truck of figure C3.3.1, a typical approach for the police to use in reconstructing such an accident is to take pictures of the post-collision scene to establish the speed and position of the two trucks before the initial scattering event occurred. The more detail available in the post-collision picture, the better the chance of accurately reconstructing the collision event.  [c.2998]

The ordinary way to get acquainted with objects like the non-adiabatic coupling terms is to derive them from first principles, via ab initio calculations [4-6], and study their spatial structure—somewhat reminiscent of the way potential energy surfaces are studied. However, this approach is not satisfactory because the non-adiabatic coupling terms are frequently singular (in addition to being vectors), and therefore theoretical means should be applied in order to understand their role in molecular physics. During the last decade, we followed both courses but our main interest was directed toward studying their physical-mathematical features [7-13]. In this process, we revealed (1) the necessity to form sub-Hilbert spaces [9,10] in the region of interest in configuration space and (2) the fact that the non-adiabatic coupling matrix has to be quantized for this sub-space [7-10].  [c.636]

An important fact to remember about the field of thermodynamics is that it is blind to details concerning the structure of matter. Thermodynamics is concerned with observable, measurable quantities and the relationships between them, although there is a danger of losing sight of this fact in the somewhat abstract mathematical formalism of the subject. In discussing elasticity in Chap. 3, we took the position that entropy is often more intelligible from a statistical, atomistic point of view than from a purely phenomenological perspective. The latter is pure thermodynamics; the former is the approach of statistical thermodynamics. In this chapter, too, we shall make extensive use of the statistical point of view to understand the molecular origin of certain phenomena.  [c.506]

One potential problem with this approach is that heat loss from a small-scale column is much greater than from a larger diameter column. As a result, small columns tend to operate almost isothermally, whereas in a large column the system is almost adiabatic. Since the temperature profile in general affects the concentration profile, the LUB may be underestimated unless great care is taken to ensure adiabatic operation of the experimental column.  [c.263]

A concurrent engineering framework allows a more efficient flow of information from the various tools and techniques used and effectively communicates the design through requirements based performance measures. The primary advantage of employing concurrent principles in terms of the use of tools and techniques is that the overlap of the engineering activities, which is natural in any case, enhances a team-based approach. The application of the tools and techniques in practice has also been discussed together with a review of each, including their effective positioning in the product development process, implementation and management issues and likely benefits from their usage.  [c.276]

Both kinetic and equilibrium experimental methods are used to characterize and compare adsorption of aqueous pollutants in active carbons. In the simplest kinetic method, the uptake of a pollutant from a static, isothermal solution is measured as a function of time. This approach may also yield equilibrium adsorption data, i.e., amounts adsorbed for different solution concentrations in the limit t → ∞. A more practical kinetic method is a continuous flow reactor, as illustrated in Fig. 5.  [c.107]

The term Monte Carlo is used to describe any approach to a problem where a probabilistic analogue to a given mathematical problem is set up and solved by stochastic sampling. This invariably involves the generation of many random numbers, so giving rise to the name. A general simulation for particle processes has been developed and the precise mathematical connection between differential population balances and the Monte Carlo approach has been established (Shah et al., 1977; Ramkrishna, 1981). Sengupta and Dutta (1990) investigated growth rate dispersion in an MSMPR, with crystal shape and growth rate as the internal co-ordinates. In their work, they simply simulated one crystal at a time. This approach allowed them to build up a picture of the PSD of an exit stream from an MSMPR by giving each simulated crystal a residence time, randomly selected from the residence time distribution. Wright and Ramkrishna (1992) examined the self-preserving particle size distribution (PSD) in aggregating batch systems.  [c.248]
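A minimal sketch of this one-crystal-at-a-time Monte Carlo idea can be written in a few lines; the growth rate, mean residence time, and exponential residence-time distribution below are illustrative assumptions, not values from the cited studies:

```python
import random

def simulate_msmpr_psd(n_crystals=10000, growth_rate=1e-8, tau=3600.0, seed=42):
    """Simulate crystals one at a time: each crystal receives a residence
    time drawn from the exponential RTD of an ideal MSMPR, and its final
    size follows from size-independent linear growth, L = G * t."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(n_crystals):
        t = rng.expovariate(1.0 / tau)  # residence time sampled from the RTD
        sizes.append(growth_rate * t)   # crystal size after time t in the vessel
    return sizes

sizes = simulate_msmpr_psd()
mean_size = sum(sizes) / len(sizes)  # expected near G * tau = 3.6e-5 m
```

Accumulating many such sampled crystals builds up the exit-stream PSD, in the spirit of the Sengupta and Dutta simulation described above.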

The key to effective interviewing is to spend more time listening than talking. Approach each interview with an open mind, with as few preconceived ideas as possible about the facility, process, or individual you're reviewing. Remember that your goal in interviewing is not to fill out a form but to elicit essential information that will help guide your PSM system development.  [c.87]

Company Practice. Some companies have clearly defined methods and practices for systems design; these can range from a very team-oriented approach to a highly focused effort directed by a single individual. In others, systems design reflects the task at hand and does not follow a formal protocol.  [c.129]

A key advantage of the business process redesign approach is that while it draws on past experience (in the form of the team's collective expertise) it is not bound by it. This helps minimize the risk that inadequate practices may become institutionalized through habit or neglect, and forces the team to take a fresh look at the critical processes under review. At the same time, this approach requires more concentrated effort than either TQM or model programs and may not be necessary in cases where incremental improvement is all that's required to address PSM gaps.  [c.140]

If the data consist of Cb as a function of time, another approach can be used. As above, the smaller rate constant (say k2) is estimated from a semilogarithmic plot of Cb at later times, when Ca is negligible. This plot is extrapolated back to t = 0. This line is described by the equation [from Eq. (3-27)],  [c.72]
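The procedure can be illustrated numerically; the rate constants and initial concentration below are hypothetical, chosen only so that the two exponentials are well separated:

```python
import math

# Consecutive first-order reactions A -> B -> C, with
# Cb(t) = Ca0*k1/(k2 - k1) * (exp(-k1*t) - exp(-k2*t)).
k1, k2, ca0 = 0.5, 0.05, 1.0   # hypothetical; k2 is the smaller rate constant

def cb(t):
    return ca0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

# At later times the fast exponential has died away, so a semilogarithmic
# plot of Cb is linear with slope -k2; fit that tail by least squares.
ts = list(range(40, 100))
ys = [math.log(cb(t)) for t in ts]
n = len(ts)
sx, sy = sum(ts), sum(ys)
sxx = sum(t * t for t in ts)
sxy = sum(t * y for t, y in zip(ts, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

k_small = -slope                   # estimate of the smaller rate constant
cb0_extrap = math.exp(intercept)   # the extrapolated line evaluated at t = 0
```

In the standard "curve peeling" treatment, subtracting this extrapolated line from the early-time data then yields the faster rate constant.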

Assuming μ < 1/2, this solution implies a monotonic approach to equilibrium with time. From a purely statistical point of view, this is certainly correct: the difference in number between the two different balls decreases exponentially toward a state in which neither color is preferred. In this sense, the solution is consistent with the spirit of Boltzmann's H-theorem, expressing as it does the idea of motion towards disorder. But the equation is also very clearly wrong. It is wrong because it is obviously inconsistent with the fundamental properties of the system: it violates both the system's reversibility and periodicity. While we know that the system eventually returns to its initial state, for example, this possibility is precluded by equation 8.142. As we now show, the problem rests with equation 8.141, which must be given a statistical interpretation.  [c.461]
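The contrast can be seen in a small simulation; the update rule and parameter value below are a generic stand-in for equations 8.141-8.142, not the book's exact model:

```python
import random

mu = 0.1  # mixing probability, assumed < 1/2

# Deterministic recursion d(t+1) = (1 - 2*mu) * d(t): the difference in
# ball numbers decays exponentially and monotonically toward zero.
d = 100.0
history = [d]
for _ in range(50):
    d *= 1.0 - 2.0 * mu
    history.append(d)
monotonic = all(abs(b) <= abs(a) for a, b in zip(history, history[1:]))

# A stochastic realization of the same rule: each ball flips colour with
# probability mu per step.  The imbalance fluctuates, and because this is
# a finite Markov chain it must eventually revisit its initial state:
# the reversibility/periodicity that the averaged equation discards.
rng = random.Random(0)
N = 100
colours = [0] * N
diffs = []
for _ in range(50):
    colours = [c ^ 1 if rng.random() < mu else c for c in colours]
    diffs.append(abs(colours.count(0) - colours.count(1)))
```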

Creating and optimizing a reducible structure. In this approach, a structure known as a superstructure or hyperstructure is first created that has embedded within it all feasible process operations and all feasible interconnections that are candidates for an optimal design. Initially, redundant features are built into the structure. As an example, consider Fig. 1.7. This shows one possible structure of a process for the manufacture of benzene from the reaction between toluene and hydrogen. In Fig. 1.7, the hydrogen enters the process with a small amount of methane as an impurity. Thus in Fig. 1.7 the option is embedded of either purifying the hydrogen feed with a membrane or passing it directly to the process. The hydrogen and toluene are mixed and preheated to reaction temperature. Only a furnace has been considered feasible in this case because of the high temperature required. Then two alternative reactor options, isothermal and adiabatic reactors, are embedded, and so on. Redundant features have been included in an effort to ensure that all features that could be part of an optimal solution have been included.  [c.9]

However, use of total vapor rate is still only a guide and might not give the correct rank order in some cases. In fact, given some computational aids, it is a practical proposition to size and cost all the alternative sequences using a shortcut sizing calculation, such as the Fenske-Gilliland-Underwood approach, together with cost correlations. Even though practical problems might involve a large number of components, it is rare for them to have more than six products, which means 42 possible sequences from Table 5.1. In addition, process constraints often reduce this number.  [c.142]
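The sequence count quoted from Table 5.1 follows the Catalan numbers; a quick check (the closed-form formula is standard, the function name is ours):

```python
from math import comb

def num_sequences(n_products):
    """Number of possible simple-column sequences for separating a mixture
    into n products by sharp splits: the Catalan number C(n-1)."""
    n = n_products - 1
    return comb(2 * n, n) // (n + 1)

counts = {p: num_sequences(p) for p in range(2, 7)}
# {2: 1, 3: 2, 4: 5, 5: 14, 6: 42}; six products give the 42 sequences
```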

The problem with representing a reactor profile is that, unlike utility profiles, the reactor profile might involve several streams. The reactor profile involves not only streams such as those for indirect heat transfer shown in Fig. 13.1 but also the reactor feed and effluent streams, which can be an important feature of the reactor heating and cooling characteristics. The various streams associated with the reactor can be combined to form a grand composite curve for the reactor. This can then be matched against the grand composite curve for the rest of the process. The following example illustrates the approach.  [c.332]

Clearly, in designs different from those in Figs. 16.13 and 16.14 when streams are split to satisfy the CP inequality, this might create a problem with the number of streams at the pinch such that Eqs. (16.3) and (16.4) are no longer satisfied. This would then require further stream splits to satisfy the stream number criterion. Figure 16.15 presents algorithms for the overall approach.  [c.377]

The remaining problem analysis technique can be applied to any feature of the network that can be targeted, such as minimum area. In Chap. 7 the approach to targeting for heat transfer area [Eq. (7.6)] was based on vertical heat transfer from the hot composite curve to the cold composite curve. If heat transfer coefficients do not vary significantly, this model predicts the minimum area requirements adequately for most purposes. Thus, if heat transfer coefficients do not vary significantly, the matches created in the design should come as close as possible to the conditions that would correspond with vertical transfer between the composite curves. Remaining problem analysis can be used to approach the area target, as closely as a practical design permits, using a minimum (or near-minimum) number of units. Suppose a match is placed; then its area requirement can be calculated. A remaining problem analysis can be carried out by calculating the area target for the stream data, leaving out those parts of the data satisfied by the match. The area of the match is now added to the area target for the remaining problem. Subtraction of the original area target for the whole-stream data, A_network, gives the area penalty incurred.  [c.387]
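The bookkeeping in that last step is just an addition and a subtraction; in this sketch the areas are made-up numbers, since the real values come from the Eq. (7.6) targeting calculation:

```python
def area_penalty(a_match, a_remaining_target, a_original_target):
    """Remaining problem analysis for area: the match's own area plus the
    area target of the remaining stream data, minus the original target
    for the whole problem, is the penalty incurred by placing the match."""
    return (a_match + a_remaining_target) - a_original_target

# hypothetical values in m2: a 200 m2 match leaves a remaining problem
# targeted at 1350 m2, against an original whole-problem target of 1500 m2
penalty = area_penalty(200.0, 1350.0, 1500.0)   # 50 m2 above target
```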

Product angular and velocity distributions can be measured with REMPI detection, similar to Doppler probing in a laser-induced fluorescence experiment discussed in section B2.3.3.5. With appropriate time- and space-resolved ion detection, it is possible, in principle, to determine the three-dimensional velocity distribution of a product (see equation (B2.3.1)). The time-of-arrival of a particular mass in the TOFMS will be broadened by the velocity of the neutral molecule being detected. In some modes of operation of a TOFMS, e.g. space-focusing conditions [M], the shift of the arrival time from the centre of a mass peak is proportional to the projection of the molecular velocity along the TOFMS axis. In addition, Doppler tuning of the probe laser allows one component of the velocity perpendicular to the TOFMS axis to be determined. A more general approach for the two-dimensional velocity distribution in the plane perpendicular to the TOFMS direction involves the use of imaging detectors [66].  [c.2083]

The LMTO method [58, 79] can be considered to be the linear version of the KKR technique. According to official LMTO historians, the method has now reached its third generation [79]: the first starting with Andersen in 1975 [58], the second commonly known as TB-LMTO. In the LMTO approach, the wavefunction is expanded in a basis of so-called muffin-tin orbitals. These orbitals are adapted to the potential by constructing them from solutions of the radial Schrödinger equation so as to form a minimal basis set. Interstitial properties are represented by Hankel functions, which means that, in contrast to the LAPW technique, the orbitals are localized in real space. The small basis set makes the method fast computationally, yet at the same time it restricts the accuracy. The localization of the basis functions diminishes the quality of the description of the wavefunction in the interstitial region.  [c.2213]

Of the several trapping possibilities described in the last section, by far the most popular choice for collision studies has been the magneto-optical trap (MOT). An MOT uses spatially dependent resonant scattering to cool and confine atoms. If these atoms also absorb the trapping light at the initial stage of a binary collision and approach each other on an excited molecular potential, then during the time of approach the colliding partners can undergo a fine-structure-changing collision (FCC) or relax to the ground state by spontaneously emitting a photon. In either case, electronic energy of the quasimolecule converts to nuclear kinetic energy. If both atoms are in their electronic ground states from the beginning to the end of the collision, only elastic and hyperfine-changing (HCC) collisions  [c.2472]

Hydrocarbons typically have a specific gravity of less than 1, and refined products usually float on the water table if they penetrate soil that deeply. In the parlance of the remediation industry, such floating spills are often called NAPLs (nonaqueous phase liquids). Indeed they are sometimes known as LNAPLs, for light nonaqueous phase liquids, to distinguish them from more dense materials, such as halogenated compounds, which are more likely to sink in groundwater. Stand-alone bioremediation is an option for these situations, but "pump and treat" is the more usual treatment. Contaminated water is brought to the surface, free product is removed by flotation, and the cleaned water is re-injected into the aquifer or discarded. Adding a bioremediation component to the treatment, typically by adding oxygen and low levels of nutrients, is an appealing and cost-effective way of stimulating the degradation of the residual hydrocarbon not extracted by the pumping. This approach is becoming widely used.  [c.29]

Significant growth in acrylonitrile end use has come from ABS and SAN resins and adiponitrile (see Acrylonitrile polymers). ABS resins are second to acrylic fibers as an outlet for acrylonitrile. These resins normally contain about 25% acrylonitrile and are characterized by their chemical resistance, mechanical strength, and ease of manufacture. Consumption of ABS resins increased significantly in the 1980s with their growing application as specialty performance polymers in construction, automotive, machine, and appliance applications. Opportunities still exist for ABS resins to continue to replace more traditional materials for packaging, building, and automotive components. SAN resins typically contain between 25 and 30% acrylonitrile. Because of their high clarity, they are used primarily as a substitute for glass in drinking cups and tumblers, automobile instrument panels, and instrument lenses. Together, ABS and SAN resins account for about 20% of domestic acrylonitrile consumption. The largest increase among the end uses for acrylonitrile over the past 10 years has come from adiponitrile, which has grown to become the third largest outlet for acrylonitrile. It is used by Monsanto as a precursor for hexamethylenediamine (HMDA, C6H16N2 [124-09-4]) and is made by a proprietary acrylonitrile electrohydrodimerization process (25). HMDA is used exclusively for the manufacture of nylon-6,6. The growth of this acrylonitrile outlet in recent years stems largely from replacement of adipic acid (C6H10O4 [124-04-9]) with acrylonitrile in HMDA production rather than from a significant increase in nylon-6,6 demand. A non-electrochemical catalytic route has also been developed for acrylonitrile dimerization to adiponitrile (26,27,80,81). This technology, if it becomes commercial, can provide additional replacement opportunity for acrylonitrile in nylon manufacture.
The use of acrylonitrile for HMDA production should continue to grow at a faster rate than the other outlets for acrylonitrile, but it will not approach the size of the acrylic fiber market for acrylonitrile consumption.  [c.186]

Bulk-Fused Silica. Bulk-fused silica is commercially produced by a variety of techniques, including vapor deposition. The physical properties of the product depend strongly on the method used. If the target is kept above 1800°C during the SiO2 soot deposition, simultaneous sintering of the soot occurs, yielding in a single step a solid, bubble-free glass. This is achieved if the heat from the soot-generating burners also sinters the soot as it hits a hot fused-sand target. Layer by layer, a boule of solid fused silica is deposited which, because hydrogen-containing fuels are used, contains ca 1200 ppm OH. Very large boules, weighing over 500 kg, can be obtained by using many soot deposition burners and/or running furnaces for many days at a time. This approach is used to manufacture numerous high-silica glasses. Optical blanks, windows, crucibles, tubing, and mirrors for large telescopes are produced by further processing these boules with conventional cutting, grinding, polishing, and flameworking techniques. This material is also used extensively for windows in spacecraft, because of its refractory nature, thermal shock resistance, and optical homogeneity.  [c.314]

Classical Prostaglandins. Prostanoids of the A to F series are known as the classical prostaglandins to distinguish them from those discovered and characterized at later dates. Although the bulk of the synthetic activity occurred in the 1970s, various improvements and new approaches are still appearing in the literature in the 1990s. The numerous syntheses of the classical prostaglandins may be grouped into three basic strategies: cleavage of polycyclic intermediates, the conjugate addition approach, and the cyclization of aliphatic precursors (108). The bulk of the efforts have been devoted to the biologically more important E and F compounds, but direct syntheses of PGAs, -Cs, and -Ds have been reported.  [c.157]

Soybeans, the principal oilseed crop in the United States, are believed to have been domesticated in the eastern half of northern China around the eleventh century BC or earlier. They were later introduced and established in Japan and other parts of Asia, brought to Europe, and introduced to North America in 1765 (1). Soybeans became an established oilseed crop in the late 1920s, attaining commercial importance during World War II. Cotton (qv) has a long history that can be traced back as far as 3000 BC through spun cotton yarn found in the Indus valley. It is indigenous to many parts of the world, and its establishment as an oilseed in the United States is associated with the invention of the cotton gin by Eli Whitney in 1794. Peanuts, or groundnuts, likely originated in South America and were later introduced to Africa and Asia. Subsequent cultivation of peanuts in North America was started with plants imported from Africa (see Nuts). Sunflowers are native to North America and probably originated in the southwestern United States. They were introduced to Spain by the early Spanish explorers and then spread to Russia, where they became established as an oilseed crop. Sunflowers became a significant U.S. oilseed crop as late as 1967, upon the development of varieties having high oil content and improved agronomic characteristics such as increased resistance to diseases and pests.  [c.291]

The length and time scales that are relevant to polymer structure and properties are shown schematically in Figure 12.4. Bearing in mind the spatial and temporal limitations of MD methods, it is clear that a range of approaches is needed, including quantum-mechanical high-resolution methods. In particular, configurations of long-chain molecules and consequences such as rubberlike elasticity depend heavily on MC methods, which can be invoked with algorithms designed to allow a correspondence between number of moves and elapsed time (from a review by Theodorou 1994). A further simplification that allows space and time limitations to weigh less heavily is the use of coarse-graining, in which explicit atoms in one or several monomers are replaced by a single particle or bead. This form of words comes from a further concise overview of the hierarchical simulation approach to  [c.478]

There is another aspect to the question of the reactivity of the carbonyl group in cyclohexanone. This has to do with the preference for approach of reactants from the axial or equatorial direction. The chair conformation of cyclohexanone places the carbonyl group in an unsymmetrical environment. It is observed that small nucleophiles prefer to approach the carbonyl group of cyclohexanone from the axial direction, even though this is a more sterically restricted approach than from the equatorial side. How do the differences in the C—C bonds (on the axial side) as opposed to the C—H bonds (on the equatorial side) influence the reactivity of cyclohexanone?  [c.173]

It may happen that ΔH is not available for the buffer substance used in the kinetic studies; moreover, the thermodynamic quantity ΔH° is not precisely the correct quantity to use in Eq. (6-37), because it does not apply to the experimental solvent composition. Then the experimentalist can determine ΔH. The most direct method is to measure ΔH calorimetrically; however, few laboratories are equipped for this measurement. An alternative approach is to measure Ka under the kinetic conditions of temperature and solvent; this can be done potentiometrically or by potentiometry combined with spectrophotometry. Then, from the slope of the plot of log Ka against 1/T, ΔH is calculated. Although this value is not thermodynamically defined (since it is based on the assumption that ΔH is temperature independent), it will be valid for the present purpose over the temperature range studied.  [c.258]
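The slope-based route can be sketched as follows; the Ka values are synthetic, generated from an assumed ΔH of -30 kJ/mol purely to show that the slope of log Ka versus 1/T returns it:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Synthetic "measurements": log Ka at four temperatures, generated from an
# assumed, temperature-independent delta-H (hypothetical value).
dH_true = -30000.0  # J/mol
temps = [288.15, 298.15, 308.15, 318.15]
log_ka = [(-dH_true / (2.303 * R)) * (1.0 / T) + 2.0 for T in temps]

# Least-squares slope of log Ka against 1/T; the van't Hoff relation gives
# slope = -delta-H / (2.303 * R), hence delta-H = -2.303 * R * slope.
x = [1.0 / T for T in temps]
n = len(x)
sx, sy = sum(x), sum(log_ka)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, log_ka))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
dH_est = -2.303 * R * slope   # recovers the assumed -30 kJ/mol
```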

The global trend in rainfall showed a slight increase (about 1%) during the twentieth century, though the distribution of this change was not uniform either geographically or over time. Rainfall has increased over land in high latitudes of the Northern Hemisphere, most notably in the fall. Rainfall has decreased since the 1960s over the subtropics and tropics from Africa to Indonesia. In addition, some evidence suggests increased rainfall over the Pacific Ocean (near the equator and the international dateline) in recent decades, while rainfall farther from the equator has declined slightly.  [c.245]

If the objective function is considered two-dimensional, consisting of Equations (7-13) and (7-14), and the vector X includes only T and a, then the only change in the iteration is that the derivatives with respect to composition are ignored in establishing the Newton-Raphson corrections to T and a. The new compositions can then be determined from Equations (7-8) and (7-9). Such a simplified procedure sacrifices little in convergence rate for vapor-liquid systems, where the contributions of composition-derivatives to changes in T and a are almost always small. This approach requires only two evaluations per iteration and still avoids creeping, since it is essentially second-order in the limit as convergence is approached.  [c.117]

Whether this approach works in practice is easily tested. We can take a problem and design all possible nonintegrated sequences and then heat integrate those sequences and compare. Freshwater and Ziogou and Stephanopoulos, Linnhoff, and Sophos have carried out extensive numerical studies on sequences of simple distillation columns both with and without heat integration. One interesting result from the study of Freshwater and Ziogou was that the configuration that achieved the greatest energy saving by integration often already had the lowest energy requirement prior to integration. When this was not so, the difference in energy consumption between the integrated configuration with the lowest energy import and the one based on the nonintegrated configuration that required the least energy was usually minimal.  [c.142]

The temperatures or enthalpy change for the streams (and hence their slope) cannot be changed, but the relative position of the two streams can be changed by moving them horizontally relative to each other. This is possible because the reference enthalpy for the hot stream can be changed independently from the reference enthalpy for the cold stream. Figure 6.16 shows the same two streams moved to a different relative position such that ΔTmin is now 20°C. The amount of overlap between the streams is reduced (and hence heat recovery is reduced) to 10 MW. More of the cold stream extends beyond the start of the hot stream, and hence the amount of steam is increased to 4 MW. Also, more of the hot stream extends beyond the start of the cold stream, increasing the cooling water demand to 2 MW. Thus this approach of plotting a hot and a cold stream on the same temperature-enthalpy axes can determine hot and cold utility for a given value of ΔTmin. Let us now extend this approach to many hot  [c.161]
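Extended to many streams, this horizontal-shifting construction becomes the problem-table (heat cascade) calculation; the sketch below is a minimal version with hypothetical stream data, not the streams of Fig. 6.16:

```python
def utility_targets(hot_streams, cold_streams, dtmin):
    """Minimal heat-cascade sketch.  Streams are (T_supply, T_target, CP)
    with hot streams cooling and cold streams heating; CP is assumed
    constant.  Returns (hot utility, cold utility) for a given dtmin."""
    half = dtmin / 2.0
    # shifted temperatures: hot streams down by dtmin/2, cold up by dtmin/2
    temps = set()
    for ts, tt, cp in hot_streams:
        temps.update((ts - half, tt - half))
    for ts, tt, cp in cold_streams:
        temps.update((ts + half, tt + half))
    temps = sorted(temps, reverse=True)

    cascade = [0.0]
    for hi, lo in zip(temps, temps[1:]):
        net_cp = 0.0  # surplus (+) or deficit (-) per degree in this interval
        for ts, tt, cp in hot_streams:
            if ts - half >= hi and tt - half <= lo:
                net_cp += cp
        for ts, tt, cp in cold_streams:
            if tt + half >= hi and ts + half <= lo:
                net_cp -= cp
        cascade.append(cascade[-1] + net_cp * (hi - lo))

    hot_utility = max(0.0, -min(cascade))     # heat added at the top
    cold_utility = cascade[-1] + hot_utility  # heat rejected at the bottom
    return hot_utility, cold_utility

# one hot stream (200 -> 100 C, CP 0.15 MW/C) against one cold stream
# (50 -> 190 C, CP 0.1 MW/C) at dtmin = 20 C
qh, qc = utility_targets([(200.0, 100.0, 0.15)], [(50.0, 190.0, 0.1)], 20.0)
```

With these numbers the targets are 1 MW of hot utility and 2 MW of cold utility, consistent with the overall balance (hot stream duty 15 MW, cold stream duty 14 MW).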

Molecular Modelling: Principles and Applications (2001) -- [c.0]