Common methods of mesh generation

In the finite element solution of engineering problems the main tasks of mesh generation, processing (calculations) and graphical representation of results are usually assigned to independent computer programs. These programs can either be embedded under a common shell (or interface) to enable the user to interact with all three parts in a single environment, or they can be implemented as separate sections of a software package. The development and organization of graphics programs require expertise in areas of computer science and software design which are outside the scope of a text dealing with finite element techniques, and hence they are not discussed in the present chapter. A detailed explanation of mesh generation techniques and of the mathematical background of the available methods - although of general importance in numerical computations - is also not related to the main theme of the present book. In the following sections of this chapter, therefore, after a brief description of the main aspects of mesh generation, other topics that are of central importance in the finite element modelling of polymer processes are discussed.  [c.191]

By varying the angle of incidence, the X-ray, electron, or ion beam energy, etc., many techniques are capable of acquiring depth profiles. Those profiles are generated by combining several measurements, each representative of a different integrated depth. The higher energy ion scattering techniques (Medium Energy Ion Scattering, MEIS, and Rutherford Backscattering, RBS), however, are unique in that the natural output of the methods is composition as a function of depth. By far the most common way of depth profiling is the destructive method of removing the surface, layer by layer, while also taking data. For the mass spectrometry-based techniques of Chapter 10, removal of surface material is intrinsic to the sputtering and ionization process. Other methods, such as Auger Electron Spectroscopy, AES, or X-Ray Photoemission, XPS, use an ancillary ion beam to remove material while constantly analyzing the newly exposed surface. Under the most favorable conditions depth resolutions of around 20 Å can be achieved this way, but there are many artifacts to be aware of, and the depth resolution usually degrades rapidly with depth. Some aspects of sputter depth profiling are touched upon in the article Surface Roughness in Chapter 12, but for a more complete discussion of the capabilities and limitations of sputter depth profiling the reader is referred to a paper by D. Marton and J. Fine in Thin Solid Films, 185, 79, 1990 and to other articles cited there.  [c.3]

Whereas cellulose films are biodegradable, that is, readily attacked by bacteria, films and packaging made from synthetic polymers are normally attacked at a very low rate. This has led to work on methods of rendering such polymers accessible to biodegradation. The usual approach is to incorporate into the polymer (either into the polymer chain or as a simple additive) a component which is an ultraviolet light absorber. However, instead of dissipating the absorbed energy as heat, this component uses it to generate highly reactive chemical intermediates which destroy the polymer. Iron dithiocarbamate is one such photo-activator, used by G. Scott in his research at the University of Aston in Birmingham, England. Once the photo-activator has reduced the molecular weight to about 9000, the polymer becomes biodegradable. Some commercial success has been achieved using starch as a biodegradable filler in low-density polyethylene. With the introduction of auto-oxidisable oil additives that make the polymer sensitive to traces of transition metals in soils and garbage, film may be produced which is significantly more biodegradable than that from LDPE itself.  [c.154]

The unique virtue of flash photolysis is that it can be extended another 11 orders of magnitude toward shorter times. Down to times of a few nanoseconds, the most common procedures employ essentially the same principles used for millisecond experiments. The same excitation energy is delivered in a shorter flash, and faster electronics are used to monitor changes in a continuous probe of concentrations. The probe is modified as necessary to permit faster measurements, using a brighter lamp to maintain an adequate signal-to-noise ratio while measuring faster transmittance changes. In the pre-laser era, the flash photolysis technique developed in the direction of generating very energetic excitation flashes capable of making substantial concentration changes throughout a large volume, so that kinetic changes could be monitored in a single flash. Signal averaging was rarely employed, except when using certain luminescence methods. It was difficult to make bright excitation flashes shorter than a microsecond, except for luminescence. Once pulsed lasers became available, pulse durations of 10 ns were easily attained. Several technologies, such as Q-switching, pulsed electrical excitation of gas discharge lasers (including the important uv-emitting excimer lasers), cavity-dumping, and even pulsed excitation of semiconductor lasers, conveniently generate pulses having durations near 10 ns. The first two methods can produce pulses with hundreds of millijoules of energy at rates of 1 to 1000 Hz; the last two generate smaller energy pulses at repetition rates ranging from 1 to 1000 kHz.  [c.512]

Color applications require more complex image detection schemes owing to the need for spectral differentiation. Signal samples from at least three distinct spectral bands are required to accurately reproduce a color image. Typically a monolithic color filter array consisting of alternating windows of appropriately colored filter media is attached to the CCD to accomplish color differentiation. A common approach to obtaining the scene information required for faithful color image reproduction is to increase the horizontal pixel count of the CCD by a factor of three (29). In this method, a filter with alternating red, blue, and green stripes is then attached so that each pixel column aligns with a filter stripe. This color CCD utilizes three adjacent pixels, one red, one green, and one blue, to represent a single spatial sampling of the scene. In effect, the device can be thought of as three integrated monochrome CCDs. The color CCD has three on-chip output channels, one for each spectral component. As each pixel is read, the simultaneously generated signal information from each channel is remixed in a fashion that correctly simulates a color display. More ambitious methods exist for achieving color image reproduction (30,31). These methods utilize the horizontal pixel count found in monochrome CCDs but require intricate clock control methods to appropriately intermix the spectral components for display during the actual integration interval. The intermixing is produced by the use of a repetitive mosaic pattern of colored filters, typically cyan, yellow, and green. The imaging results from these devices rival those obtained from the three-channel method.  [c.429]
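
The triplet readout just described can be illustrated with a minimal sketch; the function name and the sample values below are invented for illustration and are not from reference (29).

    import numpy as np

    # Sketch of the striped-filter scheme: a sensor row with three times
    # the horizontal pixel count, whose photosites alternate through the
    # three stripe colors, is regrouped so that each triplet of adjacent
    # samples forms one color pixel.
    def stripes_to_color(row):
        row = np.asarray(row)
        assert row.size % 3 == 0, "row length must be a multiple of 3"
        return row.reshape(-1, 3)   # one (R, G, B) triplet per color pixel

    raw = np.array([10, 200, 30, 12, 198, 28])  # six photosites -> two color pixels
    print(stripes_to_color(raw))
    # [[ 10 200  30]
    #  [ 12 198  28]]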

A limitation of the linear dimensional size descriptor is that only particles having simple or defined shapes, such as spheres or cubes, can be uniquely defined by a linear dimension. The common solution to this problem is to describe a nonspherical particle as equivalent in diameter to a sphere having the same mass, volume, surface area, or settling speed (uniquely defined parameters) as the particle in question (Fig. 1). Therefore, a particle can be described as behaving as a sphere of diameter d. Although this approach makes unique nonspherical particle size characterization possible, it does not come without important consequences, because the reported size of a particle is dependent on the physical parameter used in the measurement. A flaky particle falling through a liquid under the influence of gravity is expected to behave as a sphere having a somewhat smaller diameter than that of the same particle measured on the basis of volume equivalence. In reporting particle size data it is therefore necessary to specify the method by which the data were generated. Shape is a parameter which usually influences equivalent sizes, but it is not taken into account in most measurement techniques. The variations in diameter equivalence for any specific nonspherical particle can generally be attributed to its shape. Furthermore, it is reasonable to expect variations in particle shape to cause apparent size variations within a particle population, thereby causing artifacts such as widening of the measured size distribution of the population. Because of this shape dependence, a limited amount of shape information can be inferred from ratios of spherical equivalence, referred to as shape factors, as obtained by different methods.  [c.126]
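
The equivalent-sphere definitions underlying this discussion can be made concrete with a short sketch. The formulas are the standard textbook ones (volume, surface, and Stokes settling equivalence); the numerical values are invented.

    import math

    def d_volume(V):
        """Diameter of the sphere with the same volume V as the particle."""
        return (6.0 * V / math.pi) ** (1.0 / 3.0)

    def d_surface(S):
        """Diameter of the sphere with the same surface area S."""
        return math.sqrt(S / math.pi)

    def d_stokes(u, mu, rho_p, rho_f, g=9.81):
        """Diameter of the sphere with the same terminal settling speed u
        (Stokes' law; valid only at low Reynolds number)."""
        return math.sqrt(18.0 * mu * u / ((rho_p - rho_f) * g))

    # Illustrative flake-like particle: the two equivalent diameters
    # differ, and their ratio is one possible shape factor.
    V, S = 1.0e-12, 8.0e-8             # m^3 and m^2 (made-up values)
    dv, ds = d_volume(V), d_surface(S)
    print(dv, ds, dv / ds)             # ratio < 1 for a non-sphere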

A common feature of all mass spectrometers is the need to generate ions. Over the years a variety of ion sources have been developed. The physical chemistry and chemical physics communities have generally worked on gaseous and/or relatively volatile samples and thus have relied extensively on the two traditional ionization methods, electron ionization (EI) and photoionization (PI). Other ionization sources, developed principally for analytical work, have recently started to be used in physical chemistry research. These include fast-atom bombardment (FAB), matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ES).  [c.1329]

In order to create the solid phase, all industrial crystallizers utilize one method or another for generating supersaturation, e.g. by cooling and/or evaporation (see Figure 3.1). The term precipitation is often applied to crystallizing systems and usually refers to supersaturation being generated by the addition of a third component that induces a chemical reaction to produce the solute or lowers its solubility. A common characteristic of such systems is the rapid formation of the solid phase. Such crystallization modes generally (though not always) create supersaturation at much higher levels than simple cooling or evaporation. In this context, therefore, the term precipitation is usually meant to imply "fast crystallization". Just to complicate matters, some precipitates are amorphous. Crystallization, of course, implies a regular internal array of atoms or ions. Furthermore, the meteorologist regards precipitation as the formation of rain or snow, so perhaps "dense phase change" would be a more general definition. Whilst noting that it is not a precise definition, it will generally be assumed here that the term precipitation implies fast crystallization, usually brought about as a consequence of chemical reaction or rapid change in solubility by addition of a third component.  [c.61]
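
For orientation, the standard measures of supersaturation used in crystallization texts (textbook definitions; the symbols c and c* are not defined in this excerpt) are, with c the solute concentration and c* its equilibrium solubility:

    S = \frac{c}{c^{*}}, \qquad
    \sigma = S - 1 = \frac{c - c^{*}}{c^{*}}, \qquad
    \Delta c = c - c^{*}

Precipitation in the sense used here typically operates at S well above unity, whereas cooling or evaporative crystallization usually holds S only slightly above 1.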

There are three basic light sources used in mass spectrometry: the discharge lamp, the laser and the synchrotron light source. Since ionization of an organic molecule typically requires more than 9 or 10 eV, light sources for photoionization must generate photons in the vacuum-ultraviolet region of the electromagnetic spectrum. A common experimental difficulty with any of these methods is that there can be no optical windows or lenses, the light source being directly connected to the vacuum chamber holding the ion source and mass spectrometer. This produces a need for large capacity vacuum pumping to keep the mass spectrometer at operating pressures. Multiphoton ionization with laser light in the visible region of the spectrum overcomes this difficulty.  [c.1330]
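
The vacuum-ultraviolet requirement follows directly from the photon energy-wavelength relation (standard constants; the worked value is added here, not taken from the excerpt):

    \lambda = \frac{hc}{E} \approx \frac{1240\ \text{eV nm}}{E},
    \qquad
    E = 10\ \text{eV} \;\Rightarrow\; \lambda \approx 124\ \text{nm}

A wavelength of 124 nm lies well inside the vacuum ultraviolet, where air absorbs strongly and conventional window and lens materials do not transmit, hence the windowless coupling described above.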

The classic method of quantifying using Beer's law (eq. 10) is still the preferred method where applicable. A number of multivariate analysis methods have been developed to make use of data from many wavenumbers (even the complete spectrum) when a single wavenumber is inadequate. Most of the methods use a set of calibration spectra (a training set) to generate a model of how the spectral data are related to parameters of the calibration set. The training set must span all sample variability the model later encounters while analyzing unknowns; multivariate techniques are excellent at interpolating between standards, but they are poor at extrapolating beyond the limits of the training set. Once the model is generated, an unknown may be analyzed by applying the model to its spectrum. The most common of the multivariate techniques are also factor analysis methods, which are matrix algebra methods in which the starting data set is considered the product of a matrix of factor scores and a matrix of factor loadings (loading vectors) (see Chemometrics) (18,60-62).  [c.201]
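
As an illustration of the factor-analysis approach, here is a minimal principal-component-regression sketch; the synthetic spectra, the variable names, and the choice of three factors are all invented for the example and are not taken from the cited methods (18,60-62).

    import numpy as np

    # Synthetic training set: rows of X are spectra, y holds the known
    # concentrations. The "spectra" are a single Gaussian band scaled by
    # concentration plus noise.
    rng = np.random.default_rng(0)
    n_samples, n_points, n_factors = 20, 100, 3

    y = rng.uniform(0.1, 1.0, n_samples)
    pure = np.exp(-0.5 * ((np.arange(n_points) - 50) / 8.0) ** 2)
    X = np.outer(y, pure) + 0.01 * rng.standard_normal((n_samples, n_points))

    # Decompose the mean-centred spectra into scores and loading vectors.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]     # factor scores
    loadings = Vt[:n_factors]                     # loading vectors

    # Regress concentration on the scores (least squares).
    b, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

    def predict(spectrum):
        """Apply the model to an unknown spectrum (interpolation only!)."""
        t = (spectrum - X.mean(axis=0)) @ loadings.T   # project onto factors
        return float(t @ b + y.mean())

    unknown = 0.5 * pure + 0.01 * rng.standard_normal(n_points)
    print(predict(unknown))   # ~0.5

Note that, as the text warns, such a model only interpolates: an unknown outside the concentration range 0.1-1.0 spanned by this training set would be extrapolated unreliably.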

Color-Proofing Methods. There are three principal analogue color-proofing methods commonly used in the industry. (1) Monochrome proofing. A monochrome proof is a single-color image on paper or polyester film that depicts image placement, imposition, limited registration detail, and production errors. Some commercially available monochrome systems are Dylux (Du Pont), CopiArt CP-3 (Fuji), and Dry Silver (3M). (2) Overlay proofing. The proofs are an assembly of individual monochrome images on individual polyester film sheets, typically yellow, magenta, cyan, and black, which can be overlaid in register to produce a four-color image representing information from each color separation generated from the scanner. The proofs can be used as progressives by the printer in the form of one-, two-, or three-color overlays; as checks on registration by the stripper; and for lower quality color approval work. Distortions in color perception arising from light reflections off the polyester image-bearing sheet detract from overlay proofing as a tool for accurate color prediction. Some commercially available overlay systems include Color Key (3M), Cromacheck (Du Pont), and NAPS/PAPS (Hoechst-Celanese). (3) Surprint proofing. These proofs are the highest quality four-color images available for off-press proofing uses. The proofs are assembled in such a way that the individual colored images lie directly in contact with one another and, unlike overlay proofs, are not capable of being leafed through or pulled apart into individual images after the assembly process. Surprint proofs provide the best print prediction or simulation of color obtainable from the press and are used extensively in the color approval process as the contract proof between the trade shop, printer, and customer. Some representative, commercially available surprint proofing systems are MatchPrint (3M), Cromalin (Du Pont), WaterProof (Du Pont), Color Art (Fuji), and PressMatch (Hoechst-Celanese). A proof can be made in the printing operation anytime a question arises as to the appearance or quality of the image to be produced by the press. The proof approval process can be complicated because of the many iterations for color acceptance by the customer. The separations used to make the proof can be color-corrected many times before the proof is shown to a customer; even then the customer can request more changes, which may necessitate the remake of the separations. In any event, the product of the process is a set of final film separations to make the printing plates and a final proof to be used as the guide and contract for the printing process.  [c.38]

liquid-liquid extraction, and various forms of crystallization (qv), adsorption (qv), and membrane permeation (see Extraction, liquid-liquid; Membrane technology). Simple single-feed distillation is most widely used because of predictable, reliable, flexible, robust, and efficient operation and because of mature equilibrium-based design techniques which do not require extensive piloting. Furthermore, simple distillation is one of the few methods which requires only the input of energy to effect the separation. Other common liquid separation methods, including extraction, azeotropic distillation, extractive distillation, and solution crystallization, require the introduction of an additional mass separating agent (MSA). The MSA must then be recovered and recycled for economical operation, adding further complexity to the separations system design. The generation of separations schemes for liquid mixtures can be thought of as a problem of finding applications for distillation and the identification and resolution of situations where distillation cannot be used.  [c.446]

For systems of dissolved solids, such as inorganic salts in water or essentially nonvolatile or high melting organics, crystallization (qv) is the predominant separation method. VLE-based separation methods such as distillation are largely relegated to an auxiliary role, functioning primarily for solvent removal or solvent recovery from mother liquors. However, as for VLE-based methods, the application of crystallization is constrained by certain critical features characteristic of the solid-liquid equilibrium (SLE) of the system. Three of the more common critical features for dissolved solids systems are (1) simple eutectics, ie, in a system exhibiting a eutectic, pure crystals of one component are deposited as the solution is cooled; at the eutectic point, a solid mixture of fixed composition is formed and no further separation is achieved (a eutectic is somewhat analogous to an azeotrope in that it limits recovery and product purity); (2) solid solutions, ie, upon cooling, a multicomponent solid solution is deposited, but unlike a eutectic system, crystals of one component are never deposited in a pure state; analogously to a VLE pinch, solid solution formation does not preclude the use of crystallization, but high product purity can only be obtained by multistage crystallization; and (3) compound formation, ie, the dissolved solid and solvent form one or more intermolecular compounds, and the solid cannot be obtained in a completely solvent-free form without further treatment (in aqueous solutions these are called hydrates; in nonaqueous systems, solvates). The particular compound formed is often a function of concentration and temperature. Solid-liquid-phase behavior is quite diverse and in many systems can be quite complicated. A review of solid-liquid-phase behavior is available (22). Methodologies for the generation of crystallization-based flow sheets for separation of multicomponent systems exhibiting eutectics, solid solution behavior, and compound formation have been presented (45-48). All methods make extensive use of phase diagrams for problem representation and analysis.  [c.459]

Limits for exhaust emissions from industry, transportation, power generation (qv), and other sources are increasingly legislated (see also Exhaust Control, Automotive) (1,2). One of the principal factors driving research and development in the petroleum (qv) and chemical processing industries in the 1990s is control of industrial exhaust releases. Much of the growth of environmental control technology is expected to come from new or improved products that reduce such air pollutants as carbon monoxide [630-08-0] (qv), CO, volatile organic compounds (VOCs), nitrogen oxides (NOx), or other hazardous air pollutants (see Air pollution). The mandates set forth in the 1990 amendments to the Clean Air Act (CAAA) push pollution control methodology well beyond what, as of this writing, is in general practice, stimulating research in many areas associated with exhaust system control (see Air pollution control methods). In all, these amendments set specific limits for 189 air toxics, as well as control limits for VOCs, nitrogen oxides, and the so-called criteria pollutants. An estimated 40,000 facilities, including establishments as diverse as bakeries and chemical plants, are affected by the CAAA (3).  [c.500]

In the case of gases, flow lines can be revealed through the use of smoke traces or the addition of a lightweight powder such as balsa dust to the stream. One of the best smoke generators is the reaction of titanium tetrachloride with moisture in the air. A woodsmoke-generation system is described by Yu, Sparrow, and Eckert [Int. J. Heat Mass Transfer, 15, 557-558 (1972)]. Tufts of wool or nylon attached at one end to a solid surface can be used to reveal flow phenomena in the vicinity of the surface. Optical methods commonly employed depend upon changes in the refractive index resulting from the presence of heated wires or secondary streams in the flow field, or upon changes in density in the primary gas as a result of compressibility effects. The three common techniques are the shadowgraph, the schlieren, and the interferometer. All three theoretically can give quantitative information on the velocity profiles in a two-dimensional system, but in practice only the interferometer is commonly so used. The optical methods are described by Ladenburg et al. (op. cit., pp. 3-108). For additional information on other methods, see Goldstein, Modern Developments in Fluid Dynamics, vol. 1, London, 1938, pp. 280-296.  [c.889]

Figure 6-1 is an Arrhenius plot for the chair-chair conformational inversion of cyclohexane, determined by NMR methods by Anet and Bourn. (Of course logarithms to the base 10 can also be used, with the advantage of providing rapid order-of-magnitude interpretations.) The equation of the straight line is ln(k/s^-1) = 30.5 - 5600/(T/K), from which are found A = 1.76 x 10^13 s^-1 and E_a = 11.2 kcal mol^-1. Although the general appearance of Fig. 6-1 is common to nearly all Arrhenius plots, this example possesses several unusual features. One of these is the low temperatures of the rate studies (-24.0 to -116.7°C); most kinetic studies are around room temperature, so the abscissa on an Arrhenius plot is typically of the order 1/T = 0.003 K^-1. More importantly, the temperature range in Fig. 6-1 is unusually wide, and the study generated a large number of points. It is much more common to see Arrhenius studies of three to five points covering a range of 20-40°C. Notice the excellent linearity of the plot in Fig. 6-1.  [c.246]
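
The quoted constants can be recomputed directly from the fitted line; the short sketch below adds nothing beyond the equation above and the standard gas constant.

    import math

    # ln(k/s^-1) = 30.5 - 5600/(T/K): the intercept gives the
    # pre-exponential factor A, the slope gives E_a = -slope * R.
    intercept = 30.5      # dimensionless (ln of k in s^-1)
    slope = -5600.0       # kelvin
    R = 1.987e-3          # gas constant, kcal mol^-1 K^-1

    A = math.exp(intercept)     # s^-1
    Ea = -slope * R             # kcal mol^-1

    print(f"A  = {A:.2e} s^-1")        # 1.76e+13 s^-1
    print(f"Ea = {Ea:.1f} kcal/mol")   # 11.1 kcal/mol (11.2 quoted, rounding)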

See pages that mention the term "Common methods of mesh generation": [c.274]    [c.506]    [c.1338]
See chapters in:

Practical aspects of finite element modelling of polymer processing  -> Common methods of mesh generation