Aaberg capturing


Sometimes it is easier to remove the analyte and use a change in mass as the analytical signal. Imagine how you would determine a food's moisture content by a direct analysis. One possibility is to heat a sample of the food to a temperature at which the water in the sample vaporizes. If we capture the vapor in a preweighed absorbent trap, then the change in the absorbent's mass provides a direct determination of the amount of water in the sample. An easier approach, however, is to weigh the sample of food before and after heating, using the change in its mass as an indication of the amount of water originally present. We call this an indirect analysis since we determine the analyte by a signal representing its disappearance.  [c.233]
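
The arithmetic behind such an indirect determination is just a mass difference. As a minimal sketch (the sample masses below are illustrative assumptions, not values from the source):

```python
def moisture_percent(mass_before_g: float, mass_after_g: float) -> float:
    """Indirect gravimetric analysis: the mass lost on heating is
    attributed to water and reported against the original sample mass."""
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g

# Illustrative numbers only: a 2.500 g food sample weighing 2.196 g after drying.
print(f"{moisture_percent(2.500, 2.196):.1f}% moisture")  # -> 12.2% moisture
```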

There have also been reports that some plants, including B. juncea, stimulate volatilization of dimethyl selenide, although it seems that this is an indirect effect in which the plant roots stimulate bacterial evolution of the gas (93). Presumably the selenium is eventually oxidized in the atmosphere and returned to the soil in rain. Since much of the world is marginally selenium deficient, such a process might have no deleterious environmental effects. A similar volatilization approach may eventually be used for mercury contamination. Ref. 94 describes the effective transfer of bacterial mercury-reduction genes into a plant. In the future, this approach might offer an option for cleaning mercury-contaminated soils, albeit with some form of mercury capture technology.  [c.37]

The entire domain of "new-lead" discovery has expanded considerably. This development has affected what have traditionally been divergent approaches, namely QSAR and structure-based design, leading them to become integrated so as to provide a more powerful approach (135). Several recently published comprehensive volumes capture the state of the art and can be consulted to determine precedents relevant to any particular study (134-137).  [c.168]

Several lenses are used in a transmission electron microscope. The condenser lenses provide uniform illumination of the sample over the area of interest. The objective lens provides the primary image and therefore determines the lateral resolution of the image. The objective lens aperture is important in controlling the contrast of the image. The final magnification of the image is performed by one or more projector lenses. The final image is typically recorded on a fluorescent or phosphorescent screen where it can be captured by a video camera for viewing. As noted above, all of these lenses are subject to serious aberrations which ultimately limit the resolution of the microscope to greater than the diffraction limit (the theoretical resolution limit for this approach). Moreover, these lens aberrations restrict the angular range of the electron beam, resulting in the need for very tall instruments. Despite these shortcomings, TEM is a very powerful surface imaging tool with atomic resolution in some cases, providing sample magnifications between 100 and 500,000×.  [c.272]
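
For orientation, the diffraction limit mentioned here follows from the de Broglie wavelength of the imaging electrons; this standard relation is supplied here rather than taken from the excerpt. For a non-relativistic beam accelerated through a potential V,

\[ \lambda = \frac{h}{\sqrt{2 m_e e V}} \approx \frac{1.226}{\sqrt{V}}\ \mathrm{nm} \quad (V\ \text{in volts}), \]

so a 100 kV instrument images with electrons of roughly 0.004 nm wavelength (slightly less with the relativistic correction); it is the lens aberrations, not the wavelength, that keep practical resolution well above this figure.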

Experimental tests of this mechanism can determine the reaction order with respect to each component and verify the molecularities assumed, but are unable to separate even the factors k2 and K, let alone measure k1 and k-1, as long as the assumption of pre-equilibrium remains valid. Better time resolution in the experiment captures the approach of [I] toward equilibrium and, consequently, violates that assumption.  [c.514]
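
To make the point concrete, consider the standard textbook construction this passage appears to assume (our notation): for A + B ⇌ I with pre-equilibrium constant K = k1/k-1, followed by the rate-determining step I → P with rate constant k2,

\[ \text{rate} = k_2[\mathrm{I}] = k_2 K [\mathrm{A}][\mathrm{B}], \]

so steady-state measurements yield only the product k2K; resolving k1 and k-1 individually requires time resolution fast enough to watch [I] relax toward its equilibrium value.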

Quantitative arguments for rooftop-mounted arrays deal primarily with the cost savings captured by avoiding the need for an array support structure, since the price of the roof has already been paid. Counter-arguments include the lack of economies of scale (as with central systems), costs associated with individual engineering, permitting, and installation of each system, and the fact that roof-mounted systems can be fixed as opposed to tracking, resulting in a lower capacity factor per installed peak kilowatt. In some areas, such as Japan, population density and the high cost of farmland virtually preclude the use of centralized systems, and strongly favor the distributed rooftop-mounted approach.  [c.475]
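
To illustrate the capacity-factor argument with numbers (the capacity factors below are hypothetical values chosen for illustration, not data from the source):

```python
# Annual energy yield per installed peak kilowatt: E = P_peak * CF * 8760 h.
# The capacity factors are illustrative assumptions, not measured values.
HOURS_PER_YEAR = 8760
for label, capacity_factor in [("fixed rooftop", 0.18), ("tracking central array", 0.24)]:
    kwh_per_kwp = 1.0 * capacity_factor * HOURS_PER_YEAR
    print(f"{label}: {kwh_per_kwp:.0f} kWh per peak kW per year")
```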

The Cleaning Step. Although toner transfer is a highly efficient process, some toner residue remains on the photoreceptor surface and must be removed prior to the next cycle; otherwise successive corona charging and image exposure would be progressively and adversely affected (3). Residual toner can capture further toner, and its absorption of visible light could reduce subsequent exposures of the photoreceptor and degrade achievable image quality. The cleaning process employs direct physical contact with the toner and photoreceptor (2). One approach involves the use of brushes; a second uses a blade. Residual toner is bound to the photoreceptor surface by a combination of dispersive and electrostatic forces, and the physical forces exerted by the brush or blade must overcome these (62). Because dispersion forces increase with the area of contact, careful design of the preceding steps of development and transfer to avoid the deformation of toner is important in solving this cleaning problem.  [c.140]

Bio-based glue resins have been investigated for many years, but a real breakthrough in their industrial utilization, at least in Europe, has not yet occurred. Their successful industrial use has, however, been in full development for many years in several countries of the Southern Hemisphere, such as Australia, South Africa and Zimbabwe, and to a much lesser extent Chile, Argentina and Brazil [16,17]. The use and application of adhesives based on natural and renewable resources by industry and the general public is often thought of as a new approach that requires novel technologies and methods to implement. Despite the increasing trend toward the use of synthetic adhesives, processes based on the chemical modification of natural products offer opportunities for producing a new generation of high-performance, high-quality products. The distinct advantages in the utilization of natural materials, e.g. lower toxicity, biodegradability and availability, need to be paralleled by more efficient and lower-cost methods of production. We have to use factors such as regional and species variation as an aid in selecting the optimum feedstock for a particular process. Additionally, we have to develop cost-effective manufacturing techniques that will enable these materials to capture a wider percentage of the world market.  [c.1069]

In the DOE environment, the term lesson learned is defined as a good work practice or innovative approach that is captured and shared to promote application. It may also be an adverse work practice or experience that is captured and shared to avoid recurrence. This term is used by DOE and other federal and private-sector institutions to describe the following  [c.40]

Plane jets could be used to create a closed volume in which a contaminant source could be placed. In some ways, these systems are similar to Aaberg exhaust hoods (Section 10.4.4). The objective is to use plane jets instead of walls around an exhaust opening to create a vortex which enhances the capture efficiency of the exhaust.  [c.1007]

The nature of the preceding analysis does not permit the application of the technique to the design of local capture hoods, but rather to the design of remote or canopy fume hoods. For this approach to be valid, the hoods must usually be at least two source diameters away from the emission source.  [c.1271]
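
That validity rule can be encoded trivially (a sketch; the function name is ours, and the two-diameter threshold is the one stated above):

```python
def remote_hood_analysis_valid(hood_distance_m: float, source_diameter_m: float) -> bool:
    """The preceding analysis applies to remote or canopy hoods, which should
    normally be at least two source diameters from the emission source."""
    return hood_distance_m >= 2.0 * source_diameter_m
```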

Table 13.17 lists some of the important considerations for the different fume capture techniques. From the point of view of cost effectiveness, the usual preference is source collection or a low-level hood, provided an acceptable scheme can be developed within the process, operating, and layout constraints. The cost of fume control systems is almost a direct function of the gas volume being handled. Hence, the lower volume requirements of the source capture or low-level hood approach often result in significant capital and operating cost savings for the fume control system.  [c.1275]

While steady-state data provide a snapshot of the machine, dynamic or real-time data provide a motion picture. This approach provides a better picture of the dynamics of both the machine-train and its vibration profile. Data acquired using steady-state methods would suggest that vibration profiles and amplitudes are constant. However, this is not true. All dynamic forces, including running speed, vary constantly in all machine-trains. When real-time data acquisition methods are used, these variations are captured and displayed for analysis.  [c.687]

This approach provides two benefits. One is that it simplifies database development since one parameter set is used for multiple machine-trains. Therefore, less time is required to establish them. The other is that this approach permits direct comparison of multiple machine-trains. Since all machine-trains in a class share a common APS, the data can be directly compared. For example, the energy generated by a gear set is captured in a narrowband window established to monitor gear mesh. With the same APS, the gear mesh narrowbands can be used to compare all gear sets within that machine-train classification.  [c.715]
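
As a sketch of how such a narrowband window might be placed (the gear-mesh frequency formula is standard; the +/-10% window width is an assumed example):

```python
def gear_mesh_band(shaft_rpm: float, teeth: int, width_frac: float = 0.10):
    """Gear-mesh frequency is shaft speed times tooth count; the narrowband
    window brackets it by an assumed fraction on each side."""
    gmf_hz = (shaft_rpm / 60.0) * teeth
    return gmf_hz * (1.0 - width_frac), gmf_hz, gmf_hz * (1.0 + width_frac)

lo, gmf, hi = gear_mesh_band(shaft_rpm=1780, teeth=32)
print(f"gear mesh at {gmf:.0f} Hz; monitor {lo:.0f}-{hi:.0f} Hz")
```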

This chapter is organized into two main parts. To give the reader an appreciation of real fluids, and the kinds of behaviors that it is hoped can be captured by CA models, the first part provides a mostly physical discussion of continuum fluid dynamics. The basic equations of fluid dynamics, the so-called Navier-Stokes equations, are derived, the Reynolds number is defined and the different routes to turbulence are described. Part I also includes an important discussion of the role that conservation laws play in the kinetic theory approach to fluid dynamics, a role that will be exploited by the CA models introduced in Part II.  [c.463]
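
For reference, the incompressible form of those equations, in standard notation (supplied here for orientation, not quoted from the chapter), is

\[ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, \]

with the Reynolds number Re = UL/ν (characteristic velocity U, characteristic length L, kinematic viscosity ν) measuring the ratio of inertial to viscous effects.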

In the TST limit, the remaining task strictly speaking does not belong to the field of reaction kinetics; it is a matter of obtaining sufficiently accurate reactant and transition state structures and charge distributions from quantum chemical calculations, constructing sufficiently realistic models of the solvent and the solute-solvent interaction potential, and calculating from these ingredients values of Gibbs free energies of solvation and activity coefficients. In many cases, a microscopic description may prove too complex a task, and one rather has to use simplifying approximations to characterize the influence of different solvents on the kinetics of a reaction in terms of some macroscopic physical or empirical solvent parameters. In many cases, however, this approach is sufficient to capture the kinetically significant contribution of the solvent-solute interactions.  [c.834]
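
The TST working relation implied here is the standard Eyring equation (stated for orientation):

\[ k = \kappa\,\frac{k_{B}T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right), \]

so a solvent effect that shifts the activation free energy by 2.3RT (about 5.7 kJ/mol at 298 K) changes the rate constant by a factor of ten.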

Many of the fundamental physical and chemical processes at surfaces and interfaces occur on extremely fast time scales. For example, atomic and molecular motions take place on time scales as short as 100 fs, while surface electronic states may have lifetimes as short as 10 fs. With the dramatic recent advances in laser technology, however, such time scales have become increasingly accessible. Surface nonlinear optics provides an attractive approach to capture such events directly in the time domain. Some examples of application of the method include probing the dynamics of melting on the time scale of phonon vibrations [82], photoisomerization of molecules [88], molecular dynamics of adsorbates [89, 90], interfacial solvent dynamics [91], transient band-flattening in semiconductors [92] and laser-induced desorption [93]. A review article discussing such time-resolved studies in metals can be found in  [c.1296]

A rather different approach based in part on normal modes is the substructuring of Turner et al. [85]. This technique, originating in aerospace dynamics, partitions a multibody system into a collection of rigid and flexible particles. The motion of the atoms within these bodies is then propagated via selected low-frequency normal-mode components; the dynamic interactions between bodies are modeled rigorously. Large overall computational gains might be possible, but significant work is needed to devise system-dependent substructuring protocols. Although it is difficult to show general agreement with small-timestep dynamic simulations by this approach, slow-scale motions might be more easily captured than with traditional methods.  [c.246]

The commercial viability of silica aerogels as thermal insulators depends on the ability to produce them at a competitive price. After all, in the 1950s, the production of Monsanto's Santocel stopped after a lower-cost process to manufacture fumed silica was developed (59). Recently an initial economic analysis that considers six factors in the manufacturing of aerogels (starting material, solvent, energy, wages, equipment, and facility) was published (60). The results show that the dominant cost is that of the starting material and that aerogels could be competitive with commercial insulating materials on a cost per R value basis. Indeed, BASF has developed a silica aerogel which has the registered trademark Basogel (61). Since supercritical drying, even with carbon dioxide at a lower temperature, is an energy-intensive process, NanoPore, Inc. is developing an ambient approach to make silica aerogels (49,59). Technological progress in the next several years will be critical in determining whether aerogels can capture a significant share of the commercial insulation market, which is probably their largest potential area of application. At least two U.S. companies are currently developing aerogels as insulating materials. Aspen Systems manufactures silica aerogels in the forms of powders, monoliths, and blankets. Their present (1996) price range is from $100 to $2,000 per cubic foot, depending on the size of the order (62). Aerojet Corporation has collaborated with different end-users in evaluating the market potential of organic aerogels (59), which have even lower thermal conductivities than their silica counterparts.  [c.7]

In the late 1980s attempts were made in California to shift fuel use to methanol in order to capture the air quality benefits of the reduced photochemical reactivity of the emissions from methanol-fueled vehicles. Proposed legislation would mandate that some fraction of the sales of each vehicle manufacturer be capable of using methanol, and that fuel suppliers ensure that methanol was used in these vehicles. The legislation became a study of the California Advisory Board on Air Quality and Fuels. The report of the study recommended a broader approach to fuel quality and fuel choice that would define environmental objectives and allow the marketplace to determine which vehicle and fuel technologies were adequate to meet environmental objectives at lowest cost and maximum value to consumers. The report directed the California ARB to develop a regulatory approach that would preserve environmental objectives by using emissions standards that reflected the best potential of the cleanest fuels.  [c.434]

Because this method is important in relation to the preparation method of metal evaporated tape, it will be discussed in more detail. The incidence angle of the atomic flux plays an important role in the nucleation and growth process. Atoms arriving at the substrate under an angle usually show a different behavior from those which approach perpendicularly. First of all, the shadowing effect plays an important role in the film formation. If there is no adatom (physically adsorbed atom) mobility and the sticking probability equals one, an incoming atom is captured as soon as it touches the substrate or a surface atom. Atoms already deposited and surface irregularities cast a shadow; no direct impingement is possible in this shadowed area. When nonzero adatom mobility occurs, which is normally the case due to the kinetic energy of the atoms and the substrate temperature, they can then move to energetically favorable positions, including the shadowed areas. Therefore, an increasing mobility partly annuls the shadowing effect. The degrees of shadowing and mobility are determined by the deposition conditions. The direction of the adatom movement is considered to have two contributors, namely surface diffusion (responsible for movement in all directions) and the angle-of-incidence effect, which causes the atoms to have a momentum component parallel to the incidence plane. This is true for the situation where the substrate is fixed in relation to the incoming flux.  [c.178]

Stopped-flow methods are a simple extension of classical benchtop methods, and have the very important advantage of using the minimum amount of reagents. This is especially desirable for biochemical investigations. The key to their invention was the development of electronic methods for measuring and recording concentrations in real time. In first-generation instruments, the signal proportional to concentration was displayed on an oscilloscope and photographed. The photograph was subsequently analyzed. Relatively few points of limited precision could be extracted, and signal averaging was tedious. Contemporary designs use analogue-to-digital conversion of the electrical signal and direct transfer to a computer. Modern digitizers can capture thousands of sequential data points, easily attaining a dynamic range of 10 in one measurement, taking less than a minute. Instruments usually include a two-way valve for each of the reactant syringes, so that these may be refilled easily to allow signal averaging of five or ten measurements. Stopped-flow instruments are usually oriented toward liquid solution studies, and have become the standard kinetic procedure for biochemistry and bench-scale organic and inorganic solution chemistry, for reactions occurring over times longer than about 1 ms.  [c.510]
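
A minimal sketch of the data treatment such digitized traces feed into, assuming the usual single-exponential (pseudo-first-order) model; the synthetic trace and its parameters are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def trace(t, a_inf, a0, k_obs):
    """Pseudo-first-order stopped-flow signal relaxing from a0 to a_inf."""
    return a_inf + (a0 - a_inf) * np.exp(-k_obs * t)

t = np.linspace(0.0, 0.05, 2000)          # a 50 ms record of 2000 digitized points
rng = np.random.default_rng(0)
y = trace(t, 0.10, 0.80, 150.0) + rng.normal(0.0, 0.005, t.size)  # synthetic data

(a_inf, a0, k_obs), _ = curve_fit(trace, t, y, p0=(0.0, 1.0, 100.0))
print(f"k_obs = {k_obs:.0f} s^-1")        # recovers ~150 s^-1 from the noisy trace
```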

The principal factors affecting coal agglomeration are similar to those affecting flotation, e.g., amount and type of collecting oil, degree and type of agitation, pulp density, size consist, and wetting properties of the coal. The most significant operating parameter, however, is the oil concentration. As progressively larger amounts of bridging liquid are added to a suspension of fine particles, a variety of agglomerated products can result. Economic considerations require, however, that the smallest possible quantity of oil be used, with the consequent production of very small microagglomerates. This approach requires both an efficient mixer to disperse the oil agglomerant, and centrifugal drying to reach product moisture requirements. A skimming or bubble flotation step has also been added to capture very small agglomerates lost in the screening operation. These process steps are shown in the flow sheet in Figure 16, and were used in commercial plants operated in the eastern United States (96) to produce 20-30 t/h of clean coal product agglomerates from 50% minus 325 mesh coal particles contained in a thickener underflow, material which was previously discarded as waste.  [c.122]

The greatest successes with structure-based design to date have been accomplished by utilizing x-ray crystal structures of enzymes complexed with bound inhibitors rather than uncomplexed enzyme structures. The structures of enzyme complexes are thought to capture the enzyme conformations that are especially favorable for ligand binding (37). Data of this nature may either be used to modify the inhibitor bound in the active site to improve its affinity for the enzyme or to search for novel inhibitors after removing the structure of the bound inhibitor from the displayed complex. The approach taken for structure-based design is shown schematically in Figure 6. With many iterations of this cycle, it is hoped that lead compounds can be developed and subsequently improved. It is also important to determine if the lead compound inhibits in the predicted manner or if inhibition occurs serendipitously.  [c.325]

Once an undesirable material is created, the most widely used approach to exhaust emission control is the application of add-on control devices (6). For organic vapors, these devices can be one of two types, combustion or capture. Applicable combustion devices include thermal incinerators (qv), i.e., rotary kilns, liquid injection combustors, fixed hearths, and fluidized-bed combustors; catalytic oxidation devices; flares; or boilers/process heaters. Primary applicable capture devices include condensers, adsorbers, and absorbers, although such techniques as precipitation and membrane filtration are finding increased application. A comparison of the primary control alternatives is shown in Table 1 (see also Absorption; Adsorption; Membrane technology).  [c.500]

The description of the nonclassical norbornyl cation developed by Winstein implies that the nonclassical ion is stabilized, relative to a secondary ion, by C—C σ-bond delocalization. H. C. Brown of Purdue University put forward an alternative interpretation. He argued that all the available data were consistent with describing the intermediate as a rapidly equilibrating classical ion. The 1,2-shift that interconverts the two ions was presumed to be rapid relative to capture of the nucleophile. Such a rapid rearrangement would account for the isolation of racemic product, and Brown proposed that the rapid migration would lead to preferential approach of the nucleophile from the exo direction.  [c.329]

Welding Operations Efforts have been made to use the LVHV design approach for controlling welding fumes. Sometimes, this can be an effective method. Sometimes, however, there can be serious problems with the high-velocity exhaust stripping away shielding gases and causing poor quality welds. It is also difficult for exhaust nozzles to survive without damage in industrial welding environments, where even relatively slight damage can cause significant changes in the high-velocity airflow patterns and adversely affect welding. Most successful point-exhaust applications for welding establish capture velocities lower than for LVHV dust control, but still higher than for conventional exhaust hoods.  [c.854]
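
For orientation, point-exhaust capture velocities are conventionally tied to exhaust flow by the classical Dalla Valle relation for a plain, unflanged round opening (standard industrial ventilation practice, supplied here rather than taken from this passage):

```python
def exhaust_flow_m3s(capture_velocity_ms: float, distance_m: float,
                     face_area_m2: float) -> float:
    """Dalla Valle relation for a plain round hood: Q = V_c * (10 X^2 + A).
    Reasonable only within roughly 1.5 hood diameters of the face."""
    return capture_velocity_ms * (10.0 * distance_m ** 2 + face_area_m2)

# Illustrative: a 0.5 m/s capture velocity at 0.3 m from a 0.05 m^2 opening.
print(f"Q = {exhaust_flow_m3s(0.5, 0.3, 0.05):.2f} m^3/s")  # Q = 0.47 m^3/s
```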

Sizing the enclosure is more important than might first appear. If the enclosure walls are close to the compacting pile of material, material splash effects will cause losses through openings in these walls. Therefore, the use of a larger enclosure allows the velocity of these air streams to decrease before reaching the walls. Since quantitative estimates cannot be made as to the magnitude of the material splash effects, field observations of an existing system and experience are the only guides. Air entrainment becomes a factor when the enclosure has large areas or complete sides that must remain open. Winds or local air currents can then enter and exit the enclosure, thereby removing dust. The flow rate can be calculated in a straightforward manner from the wind velocity, open area, and loss coefficient of the opening. However, the ingress airflow rate is usually found to be quite large, so that it may not be practical to attempt to counteract it by enclosure exhaust alone. Positioning the exhaust off-take close to the active zone of dust generation may capture the most concentrated portion of airborne dust before recirculation and mixing with entrained air can occur. This approach reduces the exhaust volume needed for air induction and control velocity.  [c.905]
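
The straightforward calculation referred to above can be written as (a sketch; the loss-coefficient and wind values are assumed examples):

```python
def wind_induced_flow_m3s(wind_speed_ms: float, open_area_m2: float,
                          loss_coeff: float = 0.6) -> float:
    """Airflow driven through an enclosure opening by wind or local drafts:
    Q = C * A * U, with C the opening's loss (discharge) coefficient."""
    return loss_coeff * open_area_m2 * wind_speed_ms

# Illustrative: a 2 m/s draft through a 1.5 m^2 opening with C = 0.6.
print(f"Q = {wind_induced_flow_m3s(2.0, 1.5):.1f} m^3/s")  # Q = 1.8 m^3/s
```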

For an existing process plant, the designer has the opportunity to take measurements of the fume or plume flow rates in the field. There are two basic approaches which can be adopted. For the first approach, the fume source can be totally enclosed, and a temporary duct and fan system installed to capture the contaminant. For this approach, standard techniques can be used to measure gas flow rates, gas compositions, gas temperatures, and fume loadings. From the collected fume samples, the physical and chemical characteristics can be established using standard techniques. For most applications, this approach is not practical and not very cost-effective. For the second approach, one of three field measurement techniques, described next, can be used to evaluate plume flow rates and source heat fluxes.  [c.1269]

The use of canopy hoods or remote capture of fume is usually considered only after the rejection of source or local hood capture concepts. The common reasons for rejecting source or local hood capture are usually operating interference problems or layout constraints. In almost all cases, a canopy hood system represents an expensive fume collection approach from both capital and operating cost considerations. Remote capture depends on buoyant air currents to carry the contaminated gas to a canopy hood. The rising fume on its way to the hood is often subjected to cross-drafts within the process buildings or deflected away from the hood by objects such as cranes. For many of these canopy systems, the capture efficiency of fume may be as low as 30-50%.  [c.1279]

Advocates of the global approach would argue that human activities are essentially goal-directed (the cognitive view expressed in Chapter 2), and that this cannot be captured by a simple decomposition of a task into its elements. They also state that if an intention is correct (on the basis of an appropriate diagnosis of a situation), then errors of omission in skill-based actions are unlikely, because feedback will constantly provide a comparison between the expected and actual results of the task. From this perspective, the focus would be on the reliability of the cognitive rather than the action elements of the task.  [c.225]

In most of the studies on the kinetics of emulsion polymerization, the role of monomer droplets in the particle formation step is usually neglected. This approach usually arises from the smaller surface area of the monomer droplets relative to that of monomer-swollen micelles and the uniform character of the final particles. After development of the Smith-Ewart model, detailed investigations of the particle formation step indicated that particle formation can occur by mechanisms other than that proposed in the Smith-Ewart model. Based on the results of these investigations, a growing radical within the continuous phase can be adsorbed by the existing forming particles, can enter the monomer droplets, or can precipitate within the continuous phase to form a new stable particle. Fitch and Tsai [13] proposed the homogeneous nucleation mechanism, involving the formation of primary particles from the growing radicals within the continuous medium. According to the proposed model, the hydrophobicity of growing radicals increases with increasing chain length. Therefore, the growing radicals precipitate in the continuous medium when they reach a certain molecular weight, to form primary particles. The interaction of growing radicals with the micellar structure is not taken into account in the homogeneous nucleation mechanism; therefore, in a real system, the formation of primary particles by the interaction between the micellar structure and the growing radicals accompanies this mechanism. However, Ugelstad and coworkers [26,27] clearly demonstrated that even the monomer droplets can compete with the monomer-swollen micelles or with the forming particles for the capture of radicals from the aqueous phase when they can be made sufficiently small in size.  [c.193]
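
The surface-area argument behind that competition is easy to quantify (a sketch using the standard specific-surface relation for monodisperse spheres, A = 6*phi/d; the diameters and volume fraction are illustrative assumptions):

```python
def interfacial_area_per_m3(volume_fraction: float, diameter_m: float) -> float:
    """Total surface area of monodisperse spheres per unit volume of
    dispersion: A = 6 * phi / d."""
    return 6.0 * volume_fraction / diameter_m

# Illustrative: conventional monomer droplets (~10 um) versus miniemulsion
# droplets (~100 nm) at the same 30% volume fraction.
for label, d in [("conventional droplets", 10e-6), ("miniemulsion droplets", 100e-9)]:
    print(f"{label}: {interfacial_area_per_m3(0.30, d):.1e} m^2 per m^3")
```

Shrinking the droplets from about 10 um to 100 nm raises their interfacial area a hundredfold, which is why sufficiently small droplets begin to capture radicals effectively.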

The basis of frequency-domain vibration analysis assumes that we monitor the rotational frequency components of a machine-train. If a single block of data is acquired, non-repetitive or spurious data can be introduced into the database. The microprocessor should be able to acquire multiple blocks of data, average the total, and store the averaged value. This approach enables the data acquisition unit to automatically reject any spurious data and provide reliable data for trending and analysis. Systems that rely on a single block of data severely limit the accuracy and repeatability of acquired data, and also limit the benefits that can be derived from the program.

The microprocessor should also have electronic circuitry that automatically checks each data set and block of data for accuracy and rejects any spurious data that may occur. Auto-rejection circuitry is available in several of the commercially available systems. Coupled with multiple-block averaging, this auto-rejection circuitry assures maximum accuracy and repeatability of acquired data.

A few of the microprocessor-based systems require the user to input the maximum scale that is used to acquire data. This severely limits the accuracy of data. Setting the scale too high will prevent acquisition of factual machine data; a setting that is too low will not capture any high-energy frequency components that may be generated by the machine-train. Therefore, the microprocessor should have auto-scaling capability to ensure accurate data.

Vibration data can be distorted by high-frequency components that fold over into the lower frequencies of a machine's signature. Even though these aliased frequency components appear real, they do not exist in the machine. Low-frequency components can also distort the mid-range signature of a machine in the same manner. The microprocessor selected for vibration should include a full range of anti-aliasing filters to prevent the distortion of machine signatures.

The features illustrated in the example also apply to non-vibration measurements. For example, pressure readings require the averaging capability to prevent spurious readings. Slight fluctuations in line or vessel pressure are normal in most plant systems. Without the averaging capability, the microprocessor cannot acquire an accurate reading of the true system pressure.  [c.806]
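
The block-averaging scheme described above is essentially what Welch's method implements for spectra (a sketch; the sample rate and machine-like test signal are assumptions):

```python
import numpy as np
from scipy.signal import welch

fs = 5120.0                        # sample rate in Hz (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)  # a 4 s record, long enough for many blocks
rng = np.random.default_rng(1)
# Assumed machine-like signal: a running-speed tone at 29.7 Hz plus noise.
x = np.sin(2 * np.pi * 29.7 * t) + 0.5 * rng.standard_normal(t.size)

# welch() splits the record into overlapping blocks, computes a spectrum for
# each, and averages them, suppressing spurious, non-repetitive content.
f, pxx = welch(x, fs=fs, nperseg=4096)
print(f"dominant component near {f[np.argmax(pxx)]:.2f} Hz")  # ~30 Hz (1.25 Hz bins)
```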

AL thus studies life by using artificial components (such as computer programs) to capture the behavioral essence of living systems. The supposition is that if the artificial parts are organized correctly, in a way that respects the organization of the living system, then the artificial system will exhibit the same characteristic dynamical behavior as the natural system on higher levels as well. Notice that this bottom-up, synthesist approach stands in marked contrast to more conventional top-down, analytical approaches. AL-based computer simulations are characterized by these five general properties [lang89]  [c.558]


See pages that mention the term Aaberg capturing: [c.2313]    [c.28]    [c.391]    [c.324]    [c.266]    [c.853]    [c.909]    [c.957]    [c.959]    [c.960]    [c.51]
Industrial ventilation design guidebook (2001) -- [c.1448]