The Microscopic Approach


In this section we consider electromagnetic dispersion forces between macroscopic objects. There are two approaches to this problem. In the first, the microscopic model, one assumes pairwise additivity of the dispersion attraction between molecules from Eq. VI-15; this works best for surfaces that are near one another. The macroscopic approach treats the objects as continuous media having a dielectric response to electromagnetic radiation that can be measured through spectroscopic evaluation of the material. In this analysis, the retardation of the electromagnetic response from surfaces that are not in close proximity can be addressed. A more detailed derivation of these expressions is given in references such as the treatise by Russel et al. [3]; here we limit ourselves to a brief physical description of the phenomenon.  [c.232]

The microscopic understanding of the chemical reactivity of surfaces is of fundamental interest in chemical physics and important for heterogeneous catalysis. Cluster science provides a new approach for the study of the microscopic mechanisms of surface chemical reactivity [48]. Surfaces of small clusters possess a very rich variation of chemisorption sites and are ideal models for bulk surfaces. Chemical reactivity of many transition-metal clusters has been investigated [49]. Transition-metal clusters are produced using laser vaporization, and the chemical reactivity studies are carried out typically in a flow tube reactor in which the clusters interact with a reactant gas at a given temperature and pressure for a fixed period of time. Reaction products are measured at various pressures or temperatures and reaction rates are derived. It has been found that the reactivity of small transition-metal clusters with simple molecules such as H2 and NH3 can vary dramatically with cluster size and structure [48, 49, M and 52].  [c.2393]

Several lenses are used in a transmission electron microscope. The condenser lenses provide uniform illumination of the sample over the area of interest. The objective lens provides the primary image and therefore determines the lateral resolution of the image. The objective lens aperture is important in controlling the contrast of the image. The final magnification of the image is performed by one or more projector lenses. The final image is typically recorded on a fluorescent or phosphorescent screen where it can be captured by a video camera for viewing. As noted above, all of these lenses are subject to serious aberrations which ultimately limit the resolution of the microscope to greater than the diffraction limit (the theoretical resolution limit for this approach). Moreover, these lens aberrations restrict the angular range of the electron beam, resulting in the need for very tall instruments. Despite these shortcomings, TEM is a very powerful surface imaging tool with atomic resolution in some cases, providing sample magnifications between 100 and 500,000x.  [c.272]

There is no doubt that molecular dynamics simulations in which a large number of solvent molecules are treated explicitly represent one of the most detailed approaches to the study of the influence of solvation on complex biomolecules [1]. The approach, which is illustrated schematically in Figure 1, consists in constructing detailed atomic models of the solvated macromolecular system and, having described the microscopic forces with a potential function, applying Newton's classical equation F = ma to literally simulate the dynamic motions of all the atoms as a function of time [1,2]. The calculated classical trajectory, though an approximation to the real world, provides extremely detailed information about the time course of the atomic motions, which is difficult to access experimentally. However, statistical convergence is an important issue because the net influence of solvation results from an averaging over a large number of configurations. In addition, a large number of solvent molecules are required to realistically model a dense system. Thus, in practical situations a significant fraction of the computer time is used to calculate the detailed trajectory of the solvent molecules even though it is often the solute that is of interest.  [c.133]
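As a minimal illustration of this scheme, the sketch below integrates F = ma with the velocity-Verlet algorithm; the potential function enters only through a user-supplied force routine (the name compute_forces is a hypothetical placeholder), and units, neighbour lists, thermostats and constraints are all omitted.

```python
import numpy as np

def velocity_verlet(positions, velocities, masses, compute_forces, dt, n_steps):
    """Integrate Newton's equations F = m*a with the velocity-Verlet scheme.

    compute_forces is a user-supplied callable returning the (N, 3) force array
    for a given configuration, e.g. from a molecular mechanics potential.
    """
    forces = compute_forces(positions)
    trajectory = [positions.copy()]
    for _ in range(n_steps):
        # half-kick, drift, recompute forces, half-kick
        velocities += 0.5 * dt * forces / masses[:, None]
        positions += dt * velocities
        forces = compute_forces(positions)
        velocities += 0.5 * dt * forces / masses[:, None]
        trajectory.append(positions.copy())
    return np.array(trajectory)
```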

It is possible to go beyond the SASA/PB approximation and develop better approximations to current implicit solvent representations with sophisticated statistical mechanical models based on distribution functions or integral equations (see Section V.A). An alternative intermediate approach consists in including a small number of explicit solvent molecules near the solute while the influence of the remaining bulk solvent molecules is taken into account implicitly (see Section V.B). On the other hand, in some cases it is necessary to use a treatment that is markedly simpler than SASA/PB to carry out extensive conformational searches. In such situations, it is possible to use empirical models that describe the entire solvation free energy on the basis of the SASA (see Section V.C). An even simpler class of approximations consists in using information-based potentials constructed to mimic and reproduce the statistical trends observed in macromolecular structures (see Section V.D). Although the microscopic basis of these approximations is not yet formally linked to a statistical mechanical formulation of implicit solvent, full SASA models and empirical information-based potentials may be very effective for particular problems.  [c.148]
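A minimal sketch of the SASA-type empirical solvation model mentioned above (Section V.C): the solvation free energy is taken as a sum of atomic surface areas weighted by atomic solvation parameters. The parameter values and the function name below are illustrative assumptions, not those of any particular published model.

```python
def sasa_solvation_energy(areas, atom_types, sigma):
    """Empirical solvation free energy from solvent-accessible surface areas.

    areas      : per-atom SASA values (e.g. in A^2), from any SASA algorithm
    atom_types : per-atom keys into the parameter table
    sigma      : atomic solvation parameters (e.g. kcal/mol/A^2)
    """
    return sum(sigma[t] * a for t, a in zip(atom_types, areas))

# purely illustrative parameters -- real models fit these to experimental data
sigma = {"C": 0.012, "N": -0.060, "O": -0.045, "S": 0.010}
```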

The second approach to fracture is different in that it treats the material as a continuum rather than as an assembly of molecules. In this case it is recognised that failure initiates at microscopic defects and the strength predictions are then made on the basis of the stress system and the energy release processes around developing cracks. From the measured strength values it is possible to estimate the size of the inherent flaws which would have caused failure at this stress level. In some cases the flaw size prediction is unrealistically large but in many cases the predicted value agrees well with the size of the defects observed, or suspected to exist in the material.  [c.120]
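The excerpt does not name the fracture criterion, but a common way to back out an inherent flaw size from a measured strength is a Griffith/LEFM relation; the sketch below assumes that form, with an illustrative fracture toughness and geometry factor.

```python
import math

def griffith_flaw_size(strength_mpa, fracture_toughness_mpa_sqrt_m, Y=1.0):
    """Estimate the critical flaw size a (in metres) from measured strength,
    assuming a Griffith/LEFM criterion  sigma_f = K_Ic / (Y * sqrt(pi * a)).
    """
    return (fracture_toughness_mpa_sqrt_m / (Y * strength_mpa)) ** 2 / math.pi

# e.g. a brittle polymer with K_Ic ~ 1 MPa*m^0.5 failing at 50 MPa
print(griffith_flaw_size(50.0, 1.0))   # ~1.3e-4 m, i.e. ~0.13 mm
```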

Analytically, the computation of ensemble averages along this route is a formidable task, even if microscopically small representations of the system of interest are considered, because f(r^N) is generally a very complicated function of the spatial arrangement of the N molecules. However, with the advent of large-scale computers some forty years ago the key problem in statistical physics became tractable, at least numerically, by means of computer simulations. In a computer simulation the evolution of a microscopically small sample of the macroscopic system is determined by computing trajectories of each molecule for a microscopic period of observation. An advantage of this approach is the treatment of the microscopic sample in an essentially first-principles fashion: the only significant assumption concerns the choice of an interaction potential [25]. Because of the power of modern supercomputers, which can literally handle hundreds of millions of floating point operations per second, computer simulations are nowadays  [c.21]

Research over the past decade has demonstrated that a multidimensional TST approach can also be used to calculate an even more accurate transmission coefficient than for systems that can be described by the full GLE with a non-quadratic PMF. This approach has allowed for variational TST improvements [21] of the Grote-Hynes theory in cases where the nonlinearity of the PMF is important and/or for systems which have general nonlinear couplings between the reaction coordinate and the bath force fluctuations. The Kramers turnover problem has also been successfully treated within the context of the GLE and the multidimensional TST picture [22]. A multidimensional TST approach has even been applied [H] to a realistic model of an SN2 reaction and may prove to be a promising way to elaborate the explicit microscopic origins of solvent friction. While there has been great progress toward an understanding and quantification of the dynamical corrections to the TST rate constant in the condensed phase, there are several quite significant issues that remain largely open at the present time. For example, even if the GLE were a valid model for calculating the dynamical corrections, it remains unclear how an accurate and predictive microscopic theory can be developed for the friction kernel q(t) so that one does not have to resort to a molecular dynamics simulation [17] to calculate this quantity. Indeed, if one could compute the solvent friction along the reaction coordinate in such a manner, one could instead just calculate the exact rate  [c.890]
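For orientation, the friction kernel mentioned here is usually estimated from a molecular dynamics simulation via the autocorrelation of the fluctuating force on the (constrained) reaction coordinate; the sketch below assumes that standard route and a pre-computed force time series.

```python
import numpy as np

def friction_kernel(fluctuating_force, kT):
    """Estimate the GLE friction kernel zeta(t) from the autocorrelation of the
    fluctuating force dF(t) on a constrained reaction coordinate:
        zeta(t) = <dF(0) dF(t)> / (k_B T)
    fluctuating_force is a 1-D time series sampled from an MD trajectory.
    """
    dF = fluctuating_force - fluctuating_force.mean()
    n = len(dF)
    # unbiased autocorrelation: divide each lag by the number of overlapping samples
    acf = np.correlate(dF, dF, mode="full")[n - 1:] / np.arange(n, 0, -1)
    return acf / kT
```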

A consideration of the transition probabilities allows us to prove that microscopic reversibility holds, and that canonical ensemble averages are generated. This approach has greatly extended the range of simulations that can be performed. An early example was the preferential sampling of molecules near solutes [77], but more recently, as we shall see, polymer simulations have been greatly accelerated by this method.  [c.2259]
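As a reminder of how the transition probabilities enter, the sketch below shows the standard Metropolis acceptance rule, which obeys detailed balance (microscopic reversibility) for symmetric trial moves and therefore generates canonical-ensemble averages. Biased schemes such as preferential sampling must fold the ratio of proposal probabilities into the same test.

```python
import numpy as np

def metropolis_accept(delta_energy, beta, rng=None):
    """Metropolis criterion: accept with probability min(1, exp(-beta * dE)).

    For a symmetric trial move this satisfies detailed balance with respect to
    the Boltzmann distribution, so the Markov chain samples the canonical ensemble.
    """
    if rng is None:
        rng = np.random.default_rng()
    return delta_energy <= 0.0 or rng.random() < np.exp(-beta * delta_energy)
```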

A similar approach, in spirit, has been proposed [212] for the study of two-component classical systems, for example polyelectrolytes, which consist of mesoscopic, highly-charged polyions, and microscopic.  [c.2276]

The classical microscopic description of molecular processes leads to a mathematical model in terms of Hamiltonian differential equations. In principle, the discretization of such systems permits a simulation of the dynamics. However, as will be worked out below in Section 2, both forward and backward numerical analysis restrict such simulations to only short time spans and to comparatively small discretization steps. Fortunately, most questions of chemical relevance just require the computation of averages of physical observables, of stable conformations or of conformational changes. The computation of averages is usually performed on a statistical physics basis. In the subsequent Section 3 we advocate a new computational approach on the basis of the mathematical theory of dynamical systems: we directly solve a  [c.98]

Polymers are difficult to model due to the large size of microcrystalline domains and the difficulties of simulating nonequilibrium systems. One approach to handling such systems is the use of mesoscale techniques as described in Chapter 35. This has been a successful approach to predicting the formation and structure of microscopic crystalline and amorphous regions.  [c.307]

The most recent approach to reductive nanofabrication that can indeed construct nanoscale structures and devices uses microscopic tools (local probes) that can build the structures atom by atom, or molecule by molecule. Optical methods using laser cooling (optical molasses) are also being developed to manipulate nanoscale structures.  [c.203]

Video-Enhanced Imaging. The idea of video-enhanced light microscopy as an approach to increased resolution (15,16) is a misconception. Video-enhanced imaging does, however, make otherwise invisible detail visible, thus enhancing the ability to achieve the best resolution that is possible with the light microscope. It also greatly improves the visibility of very fine (down to 10-20 nm) detail. For these reasons it must be mentioned as a means of achieving improved resolution, although it is perhaps more appropriately treated as a contrast-enhancing tool. This topic has been discussed under the methods of increasing contrast, where it is without peer. It is used in particular by biologists who have serious contrast problems that make it impossible for them to see what their microscopes have resolved.  [c.332]

Purity of the Product. If a crystal is produced in a region of the phase diagram where a single-crystal composition precipitates, the crystal itself will normally be pure provided that it is grown at relatively low rates and constant conditions. With many products these purities approach a value of about 99.5 to 99.8 percent. The difference between this and a purity of 100 percent is generally the result of small pockets of mother liquor, called occlusions, trapped within the crystal. Although frequently large enough to be seen with an ordinary microscope, these occlusions can be submicroscopic and represent dislocations within the structure of the crystal. They can be caused by either attrition or breakage during the growth process or by slip planes within the crystal structure caused by interference between screw-type dislocations and the remainder of the crystal faces. To increase the purity of the crystal beyond the point where such occlusions are normally expected (about 0.1 to 0.5 percent by volume), it is generally necessary to reduce the impurities in the mother liquor itself to an acceptably low level so that the mother liquor contained within these occlusions will not contain sufficient impurities to cause an impure product to be formed. It is normally necessary to recrystallize material from a solution which is relatively pure to surmount this type of purity problem.  [c.1656]
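A rough back-of-the-envelope version of this argument: the foreign impurity carried into the crystal scales as the occlusion fraction times the impurity content of the mother liquor, which is why purifying the liquor (or recrystallizing) raises the attainable product purity. The numbers below are illustrative only.

```python
def occluded_impurity(occlusion_frac, liquor_impurity_frac):
    """Foreign impurity carried into the crystal by occluded mother liquor,
    assuming equal densities so volume and mass fractions are interchangeable."""
    return occlusion_frac * liquor_impurity_frac

# 0.1-0.5 % occlusions (as quoted above); liquor carrying 10 wt% impurity
for occ in (0.001, 0.005):
    print(f"occlusion {occ:.1%}: impurity carried in = "
          f"{occluded_impurity(occ, 0.10):.3%}")
```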

Ten years later, the deformation-mechanism map concept led Ashby to a further, crucial development - materials selection charts. Here, Young's modulus is plotted against density, often for room temperature, and domains are mapped for a range of quite different materials... polymers, woods, alloys, foams. The use of the diagrams is combined with a criterion for a minimum-weight design, depending on whether the important thing is resistance to fracture, resistance to strain, resistance to buckling, etc. Such maps can be used by design engineers who are not materials experts. There is no space here to go into details, and the reader is referred to a book (Ashby 1992) and a later paper which covers the principles of material selection maps for high-temperature service (Ashby and Abel 1995). This approach has a partner in what Sigmund (2000) has termed "topology optimization: a tool for the tailoring of structures and materials"; this is a systematic way of designing complex load-bearing structures, for instance for airplanes, in such a way as to minimise their weight. Sigmund remarks in passing that "any material is a structure if you look at it through a microscope with sufficient magnification".  [c.201]

About the same time that thermodynamics was evolving, James Clerk Maxwell (1831-1879) and Ludwig Boltzmann (1844-1906) developed a theory describing the way molecules move - molecular dynamics. The molecules that make up a perfect gas move about, colliding with each other like billiard balls and bouncing off the surface of the container holding the gas. The energy associated with motion is called kinetic energy, and this kinetic approach to the behavior of ideal gases led to an interpretation of the concept of temperature on a microscopic scale.  [c.2]
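In modern terms, that microscopic interpretation of temperature is the equipartition relation <KE> = (3/2) N k_B T for a monatomic ideal gas; the sketch below simply applies it to a set of particle velocities (SI units assumed).

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def kinetic_temperature(masses, velocities):
    """Temperature of an ideal monatomic gas from equipartition:
       KE_total = (3/2) N k_B T  =>  T = 2 KE_total / (3 N k_B)
    masses in kg (shape N), velocities in m/s (shape N x 3)."""
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    return 2.0 * kinetic / (3.0 * len(masses) * K_B)
```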

A microscopic description characterizes the structure of the pores. The objective of a pore-structure analysis is to provide a description that relates to the macroscopic or bulk flow properties. The major bulk properties that need to be correlated with pore description or characterization are the four basic parameters: porosity, permeability, tortuosity and connectivity. In studying different samples of the same medium, it becomes apparent that the number of pore sizes, shapes, orientations and interconnections is enormous. Due to this complexity, pore-structure description is most often a statistical distribution of apparent pore sizes. This distribution is apparent because, to convert measurements to pore sizes, one must resort to models that provide average or model pore sizes. A common approach to defining a characteristic pore size distribution is to model the porous medium as a bundle of straight cylindrical or rectangular capillaries (refer to Figure 2). The diameters of the model capillaries are defined on the basis of a convenient distribution function.  [c.65]
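For the capillary-bundle model, measured capillary (intrusion) pressures are commonly converted to apparent pore radii with the Washburn relation r = -2*gamma*cos(theta)/P_c; the sketch below uses typical mercury-intrusion values for the surface tension and contact angle, which are illustrative defaults rather than values taken from the text.

```python
import numpy as np

def washburn_radius(capillary_pressure_pa, surface_tension=0.485,
                    contact_angle_deg=140.0):
    """Apparent pore radius (m) for a bundle-of-capillaries model via the
    Washburn equation  r = -2 * gamma * cos(theta) / P_c.
    Defaults are typical mercury-intrusion values (gamma in N/m)."""
    return (-2.0 * surface_tension
            * np.cos(np.radians(contact_angle_deg)) / capillary_pressure_pa)

# intrusion pressures (Pa) -> apparent pore radii (m)
pressures = np.array([1e5, 1e6, 1e7])
print(washburn_radius(pressures))   # ~7 um, ~0.7 um, ~70 nm
```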

Linear response theory is an example of a microscopic approach to the foundations of non-equilibrium thermodynamics. It requires knowledge of the Hamiltonian for the underlying microscopic description. In principle, it produces explicit formulae for the relaxation parameters that make up the Onsager coefficients. In reality, these expressions are extremely difficult to evaluate and approximation methods are necessary. Nevertheless, they provide a deeper insight into the physics.  [c.708]
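The best-known explicit formulae of this kind are the Green-Kubo time-correlation expressions; as a concrete (if simple) example, the sketch below estimates a self-diffusion coefficient from the velocity autocorrelation function of an equilibrium trajectory.

```python
import numpy as np

def green_kubo_diffusion(velocities, dt):
    """Self-diffusion coefficient from the Green-Kubo (linear-response) formula
        D = (1/3) * integral_0^inf <v(0) . v(t)> dt
    velocities: (n_steps, n_atoms, 3) array from an equilibrium trajectory;
    the result carries whatever units the velocities and dt imply.
    """
    n_steps = velocities.shape[0]
    vacf = np.zeros(n_steps)
    for lag in range(n_steps):
        # dot products v(t0) . v(t0 + lag), averaged over atoms and time origins
        prod = np.sum(velocities[: n_steps - lag] * velocities[lag:], axis=2)
        vacf[lag] = prod.mean()
    return np.trapz(vacf, dx=dt) / 3.0
```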

The two sources of stochasticity are conceptually and computationally quite distinct. In (A) we do not know the exact equations of motion and we solve instead phenomenological equations. There is no systematic way in which we can approach the exact equations of motion. For example, in the Langevin approach the friction and the random force are rarely extracted from a microscopic model. This makes it necessary to use a rather arbitrary selection of parameters, such as the amplitude of the random force or the friction coefficient. On the other hand, the equations in (B) are based on atomic information and it is the solution that is approximate. For example, to compute a trajectory we make the ad hoc assumption of a Gaussian distribution of numerical errors. In the present article we also argue that because of practical reasons it is not possible to ignore the numerical errors, even in approach (A).  [c.264]
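A minimal sketch of approach (A): one Euler-Maruyama step of the Langevin equation, in which the friction coefficient gamma and the Gaussian random force are phenomenological inputs linked only by the fluctuation-dissipation relation. The function and argument names are illustrative.

```python
import numpy as np

def langevin_step(x, v, force, mass, gamma, kT, dt, rng):
    """One Euler-Maruyama step of the Langevin equation
        m dv = (F(x) - m*gamma*v) dt + sqrt(2*m*gamma*kT) dW.
    The random-force amplitude is tied to gamma by fluctuation-dissipation;
    both are phenomenological choices, not derived from a microscopic model.
    """
    noise = rng.normal(size=np.shape(v))
    v = v + dt * (force(x) / mass - gamma * v) \
          + np.sqrt(2.0 * gamma * kT * dt / mass) * noise
    x = x + dt * v
    return x, v
```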

Figure 5 Continuum reaction field approaches for electrostatic free energies. (a) A two-step approach. The mutation introduces a positive charge near the center of a protein (shown in tube representation). The mutation in the fully solvated protein (left) is decomposed into two steps. Step I: The mutation is performed with a finite cap of explicit water molecules (shown in stick representation); the system is otherwise surrounded by vacuum. Step II: The two finite models (before and after mutation) are transferred into bulk solvent, treated as a dielectric continuum. The transfer free energy is obtained from continuum electrostatics. (From Ref. 25.) (b) Molecular dynamics with periodic boundary conditions; on-the-fly reaction field calculation. One simulation cell is shown. For each charge q, interactions with groups within the cutoff are calculated in microscopic detail; everything beyond is viewed as a homogeneous dielectric medium, producing a reaction field on q [55]. The mutation is introduced using MD or MC simulations. As shown, for many of the charges the medium beyond the cutoff is not truly homogeneous, being made up of both solvent and solute groups. (c) Spherical boundary conditions with continuum reaction field [56]. The region within the sphere (large circle) is simulated with MD or MC and explicit solvent; the region outside is treated as a dielectric continuum, which produces a reaction field on each charge within the sphere. If the sphere is smaller than the protein (as here), the outer region is heterogeneous and the reaction field calculation is rather difficult.
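The simplest analytic stand-in for the continuum step in these schemes is the Born model of a single charge in a spherical cavity; real calculations of the kind shown in the figure solve the Poisson(-Boltzmann) equation numerically, so the sketch below is only an order-of-magnitude illustration under that crude assumption.

```python
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
AVOGADRO = 6.02214076e23

def born_solvation_energy(charge_e, radius_nm, eps_solvent=78.5):
    """Born estimate of the reaction-field (solvation) free energy, kJ/mol:
        dG = -(q^2 / (8*pi*eps0*a)) * (1 - 1/eps_solvent)
    """
    q = charge_e * E_CHARGE
    a = radius_nm * 1e-9
    dG = -(q**2 / (8.0 * math.pi * EPS0 * a)) * (1.0 - 1.0 / eps_solvent)
    return dG * AVOGADRO / 1000.0

print(born_solvation_energy(1.0, 0.2))  # roughly -340 kJ/mol for a +1 ion, 2 A radius
```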
Until the 18th century, man-made materials such as bronze, steel and porcelain were not anatomised; indeed, they were not usually perceived as having any "anatomy", though a very few precocious natural philosophers did realise that such materials had structure at different scales. A notable exemplar was Rene de Reaumur (1683-1757), who deduced a good deal about the fine-scale structure of steels by closely examining fracture surfaces; in his splendid History of Metallography, Smith (1960) devotes an entire chapter to the study of fractures. This approach did not require the use of the microscope. The other macroscopic evidence for fine structure within an alloy came from the examination of metallic meteorites. An investigator of one collection of meteorites, the Austrian Aloys von Widmanstatten (1754-1849), had the happy inspiration to section and polish one meteorite and etch the polished section, and he observed the image shown in Figure 3.4, which was included in an atlas of illustrations of meteorites published by his assistant Carl von Schreibers in Vienna, in 1820 (see Smith 1960, p. 150). This microstructure is very much coarser  [c.72]

Basically there are two approaches to the fracture of a material. These are usually described as the microscopic and the continuum approaches. The former approach utilises the fact that the macroscopic fracture of the material must involve the rupture of atomic or molecular bonds. A study of the forces necessary to break these bonds should, therefore, lead to an estimate of the fracture strength of the material. In fact such an estimate is usually many times greater than the measured strength of the material. This is because any real solid contains multitudes of very small inherent flaws and microcracks which give rise to local stresses far in excess of the average stress on the material. Therefore although the stress calculated on the basis of the cross-sectional area might appear quite modest, in fact the localised stress at particular defects in the material could quite possibly have reached the fracture stress level. When this occurs the failure process will be initiated and cracks will propagate through the material. As there is no way of knowing the value of the localised stress, the strength is quoted as the average stress on the section and this is often surprisingly small in comparison with the theoretical strength.  [c.120]

There are a number of other ways of obtaining an estimate of surface area, including such obvious ones as direct microscopic or electron-microscopic examination. The rate of charging of a polarized electrode surface can give relative areas. Bowden and Rideal [46] found, by this method, that the area of a platinized platinum electrode was some 1800 times the geometric or apparent area. Joncich and Hackerman [47] obtained areas for platinized platinum very close to those given by the BET gas adsorption method (see Section XVII-5). The diffuseness of x-ray diffraction patterns can be used to estimate the degree of crystallinity and hence particle size [48,49]. One important general approach, useful for porous media, is that of permeability determination; although somewhat beyond the scope of this book, it deserves at least a brief mention.  [c.580]
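For the BET gas-adsorption route mentioned here, the monolayer capacity is usually obtained from a linear fit of the transformed isotherm, and the area follows from the adsorbate cross-section; the sketch below works under those standard assumptions (nitrogen cross-section 0.162 nm^2, adsorbed volumes in cm^3 STP/g).

```python
import numpy as np

def bet_surface_area(p_rel, v_adsorbed, cross_section_nm2=0.162):
    """Specific surface area (m^2/g) from the linearised BET equation
        (p/p0) / (v (1 - p/p0)) = 1/(vm c) + ((c - 1)/(vm c)) (p/p0),
    fitted over the usual 0.05 < p/p0 < 0.35 range.
    p_rel: relative pressures p/p0; v_adsorbed: cm^3(STP)/g.
    """
    p_rel = np.asarray(p_rel, dtype=float)
    v_adsorbed = np.asarray(v_adsorbed, dtype=float)
    y = p_rel / (v_adsorbed * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)           # monolayer capacity, cm^3(STP)/g
    n_m = v_m / 22414.0 * 6.02214076e23       # adsorbate molecules per gram
    return n_m * cross_section_nm2 * 1e-18    # m^2/g
```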

The non-conserved variable ξ(x, t) is a broken-symmetry variable; it is the instantaneous position of the Gibbs surface, and it is the translational symmetry in the z direction that is broken by the inhomogeneity due to the liquid-vapour interface. In a more microscopic statistical mechanical approach [2], it is related to the number density fluctuation δρ(x, z, t) as  [c.727]

In the TST limit, the remaining task strictly speaking does not belong to the field of reaction kinetics; it is a matter of obtaining sufficiently accurate reactant and transition state structures and charge distributions from quantum chemical calculations, constructing sufficiently realistic models of the solvent and the solute-solvent interaction potential, and calculating from these ingredients values of Gibbs free energies of solvation and activity coefficients. In many cases, a microscopic description may prove a task too complex, and one rather has to use simplifying approximations to characterize influences of different solvents on the kinetics of a reaction in terms of some macroscopic physical or empirical solvent parameters. In many cases, however, this approach is sufficient to capture the kinetically significant contribution of the solvent-solute interactions.  [c.834]
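In that TST limit the rate follows from the Eyring expression, with the solvent entering through the Gibbs free energy of activation (which is where the solvation free energies and activity coefficients appear); a minimal sketch:

```python
import math

K_B = 1.380649e-23        # J/K
H_PLANCK = 6.62607015e-34  # J*s
R_GAS = 8.314462618       # J/(mol*K)

def eyring_rate(delta_g_activation_kj_mol, temperature=298.15, kappa=1.0):
    """TST (Eyring) rate constant in s^-1:
        k = kappa * (k_B T / h) * exp(-dG_act / (R T))
    kappa is a transmission coefficient (1 in pure TST)."""
    return kappa * (K_B * temperature / H_PLANCK) * math.exp(
        -delta_g_activation_kj_mol * 1000.0 / (R_GAS * temperature))

print(eyring_rate(80.0))  # dG_act = 80 kJ/mol -> k ~ 6e-2 s^-1 at 298 K
```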

The treatment of this section has been based on an assumed nonlinear surface response and has dealt entirely with electromagnetic considerations of excitation and radiation from the interface. A complete theoretical picture, however, includes developing a microscopic description of the surface nonlinear susceptibility. In the discussion in section B1.5.4 we will introduce some simplified models. In this context, an important first approximation for many systems of chemical interest may be obtained by treating the surface nonlinearity as arising from the composite of individual molecular contributions. The molecular response is typically assumed to be that of the isolated molecule, but in the summation for the surface nonlinear response, we take into account the orientational distribution appropriate for the surface or interface, as we discuss later. Local-field corrections may also be included [41, 42]. Such analyses may then draw on the large and well-developed literature concerning the second-order nonlinearity of molecules [43, 44]. If we are concerned with the response of the surface of a clean solid, we must typically adopt a different approach, one based on delocalized electrons. This is a challenging undertaking, as a proper treatment of the linear optical properties of surfaces of solids is already difficult [45]. Nonetheless, in recent years significant progress has been made in developing a fundamental theory of the nonlinear response of surfaces of both metals [46, 47,  [c.1278]

The atomic force microscope (AFM) provides one approach to the measurement of friction in well defined systems. The AFM allows measurement of friction between a surface and a tip with a radius of the order of 5-10 nm (figure C2.9.3(a)). It is the true realization of a single asperity contact with a flat surface which, in its ultimate form, would measure friction between a single atom and a surface. The AFM allows friction measurements on surfaces that are well defined in terms of both composition and structure. It is limited by the fact that the characteristics of the tip itself are often poorly understood. It is very difficult to determine the radius, structure and composition of the tip; however, these limitations are being resolved. The AFM has already allowed the spatial resolution of friction forces that exhibit atomic periodicity and chemical specificity [3, 10, 13].  [c.2745]

Ageno [ ], Blumenfeld [M] and possibly others have emphasized that biological systems are constructions; a living cell is much closer to a mechanical clock than to a bowl of consommé. To characterize the latter, a statistical approach is adequate, in which the motions of an immense number of individual particles are subsumed into a few macroscopic parameters such as temperature and pressure. But one does not usually need to know the pressure when analysing the working of a clock. The energy contained in a given system can be divided into two categories: (a) the multitude of microscopic or thermal motions sufficiently characterized by the temperature and (b) the (usually small number of) macroscopic, highly correlated motions, whose existence turns the construction into a machine. The total energy  [c.2827]

The uranium content of a sample can be determined by fluorimetry, α-spectrometry, neutron activation analysis, x-ray microanalysis with a scanning-transmission electron (STEM) microscope, mass spectrometry, and by cathodic stripping voltammetry (8). In most cases, measurements of environmental or biological materials require preliminary sample preparations such as ashing and dissolution in acid, followed by either solvent extraction or ion exchange. For uranium isotope analysis, inductively coupled plasma-mass spectrometry may also be used (81). Another uranium detection technique that has become very popular within the last few years is x-ray absorption near edge structure (XANES) spectroscopy. This method can provide information about the oxidation state or local structure of uranium in solution or in the solid state. The approach has recently been used to show that U(VI) was reduced to U(IV) by bacteria in uranium wastes (82), to determine the uranium speciation in soils from former U.S. DOE uranium processing facilities (83,84), and the mode of U(VI) binding to montmorillonite clays (85,86).  [c.323]

Density. The density of carbon fibers ranges upward from approximately 1.5 g/cm3. For PAN-based carbon fibers the values are approximately 1.77 g/cm3 for standard modulus (240 GPa modulus) and approach 2.0 g/cm3 for ultrahigh modulus grades exceeding 500 GPa (72.5 x 10^6 psi). Densities of pitch-based carbon fibers are higher than PAN, ie, between 2.05-2.20 g/cm3. Carbon fiber density is typically less than that of single-crystal graphite, 2.26 g/cm3, because of the imperfect packing of the graphene layer planes and because of internal microporosity present within polymer-derived carbon fibers. These pores are very small, generally less than 3 nm in diameter, and are not penetrated by the helium or mercury used in commercial pycnometers or visible under scanning electron microscopes. Specific information on the size and distribution of porosity can be obtained using small-angle x-ray diffraction, which suggests a microporosity of PAN fibers ranging from 2-5%. By comparing the interlayer spacing to the geometric density it is  [c.6]

The best known of the free energy force fields is the Empirical Conformational Energy Program for Peptides (ECEPP) [40]. ECEPP parameters (both internal and external) were derived primarily on the basis of crystal structures of a wide variety of peptides. Such an approach yields significant savings in computational costs when sampling large numbers of conformations; however, microscopic details of the role of solvent on the biological molecules are lost. This type of approach is useful for the study of protein folding [41,42] as well as protein-protein or protein-ligand interactions [43].  [c.15]

Rossmann suggested that the canyons form the binding site for the rhinovirus receptor on the surface of the host cells. The receptor for the major group of rhinoviruses is an adhesion protein known as ICAM-1. Cryoelectron microscopic studies have since shown that ICAM-1 indeed binds at the canyon site. Such electron micrographs of single virus particles have a low resolution and details are not visible. However, it is possible to model components, whose structure is known to high resolution, into the electron microscope pictures and in this way obtain rather detailed information, an approach pioneered in studies of muscle proteins as described in Chapter 14.  [c.338]

Specific SEM techniques have been devised to optimize the topographical data that can be obtained. Stereo imaging consists of two images taken at angles of incidence a few degrees apart. Stereo images, in conjunction with computerized frame storage and image processing, can provide 3D images with the quality normally ascribed to optical microscopy. Another approach is confocal microscopy. This method improves resolution and contrast by eliminating scattered and reflected light from out-of-focus planes. Apertures are used to eliminate all light but that from the focused plane on the sample. Both single (confocal scanning laser microscope, CLSM) and multiple (tandem scanning reflected-light microscope, TSM or TSRLM) beam and aperture methods have been employed.  [c.702]

As mentioned earlier, Heycock and Neville, at the same time as Ewing and Rosenhain were working on slip, pioneered the use of the metallurgical microscope to help in the determination of phase diagrams. In particular, the delineation of phase fields stable only at high temperatures, such as the β field in the Cu-Sn diagram (Figure 3.7), was made possible by the use of micrographs of alloys quenched from different temperatures, like those shown in Figure 3.11. The use of micrographs showing the identity, morphology and distribution of diverse phases in alloys and ceramic systems has continued ever since; after World War II this approach was immeasurably reinforced by the use of the electron microprobe to provide compositional analysis of individual phases in materials, with a resolution of a micrometre or so. An early text focused on the microstructure of steels was published by the American metallurgist Albert Sauveur (1863-1939), while an  [c.86]

In practice, it is only the electron-bombardment approach which can be used to study the distribution of elements in a sample on a microscopic scale. The instrument was invented in its essentials by a French physicist, Raimond Castaing (1921-1998) (Figure 6.7). In 1947 he joined ONERA, the French state aeronautics laboratory on the outskirts of Paris, and there he built the first microprobe analyser as a doctoral project. (It is quite common in France for a doctoral project to be undertaken in a state laboratory away from the university world.) The suggestion came from the great French crystallographer Andre Guinier, who wished to determine the concentration of the pre-precipitation zones in age-hardened alloys, less than a micrometre in thickness. Castaing's preliminary results were presented at a conference in Delft in 1949, but the full flowering of his research was reserved for his doctoral thesis (Castaing 1951). This must be the most cited thesis in the history of materials science, and has been described as "a document of great interest as well  [c.227]

The methods of compositional analysis, using either energy-dispersive or wavelength-dispersive analysis, are also now available on transmission electron microscopes (TEMs); the instrument is then called an analytical transmission electron microscope. Another method, in which the energy loss of the image-forming electrons is matched to the identity of the absorbing atoms (electron energy loss spectrometry, EELS), is also increasingly applied in TEMs, and recently this approach has been combined with scanning to form EELS-generated images.  [c.230]

Using this integrated approach, ranging from microscopic to macroscopic, we obtain a relationship between the structure of the interface and its strength for a broad range of interfaces. In this section, we develop the vector percolation theory of fracture and apply it to several cases involving (1) bulk fracture, (2) fracture by disentanglement, (3) polymer welding and healing, (4) copolymer reinforcement of incompatible polymer interfaces, (5) polymer-solid adhesion and (6) thermosets.  [c.368]


See pages that mention the term The Microscopic Approach: [c.232] [c.691] [c.691] [c.139] [c.731] [c.74] [c.464] [c.488] [c.238] [c.887] [c.2953] [c.419]
See chapters in:

Physical chemistry of surfaces  -> The Microscopic Approach