
Basic concept

The basic principle of all interferometers may be summarized as follows (Fig. 4.24). The incident light wave with intensity I0 is divided into two or more partial beams with amplitudes Ak, which pass different optical path lengths sk = n·xk (where n is the refractive index) before they are again superimposed at the exit of the interferometer. Since all partial beams come from the same source, they are coherent as long as the maximum path difference does not exceed the coherence length (Sect. 2.8). The total amplitude of the transmitted wave, which is the superposition of all partial waves, depends on the amplitudes Ak and on the phases φk = φ0 + 2π·Δsk/λ of the partial waves. It is therefore sensitively dependent on the wavelength λ. [Pg.121]

The maximum transmitted intensity is obtained when all partial waves interfere constructively. This gives the condition for the optical path difference Δsik = si − sk, namely [Pg.121]

The condition (4.33) for maximum transmission of the interferometer applies not only to a single wavelength λ but to all λm for which [Pg.121]

It is important to realize that from one interferometric measurement alone one can only determine λ modulo m·δλ, because all wavelengths λ = λ0 + m·δλ are equivalent with respect to the transmission of the interferometer. One therefore has at first to measure λ within one free spectral range using other [Pg.121]

It is more conveniently expressed in terms of frequency. With ν = c/λ, (4.33) yields Δs = m·c/νm and the free spectral frequency range [Pg.140]
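As a small numerical sketch of these relations (not taken from the text; the path difference Δs = 1 cm and the wavelengths near 500 nm are illustrative assumptions), the free spectral range δν = c/Δs and the two-beam transmission I_T/I0 = cos²(πΔs/λ) can be evaluated directly:

```python
import numpy as np

c = 2.998e8          # speed of light, m/s
delta_s = 0.01       # assumed optical path difference, m (1 cm)

# Free spectral range in frequency: delta_nu = c / delta_s
free_spectral_range = c / delta_s
print(f"free spectral range = {free_spectral_range / 1e9:.1f} GHz")

# Two-beam interferometer with equal amplitudes:
# I_T = I0 * cos^2(pi * delta_s / lambda), maximum whenever delta_s = m * lambda
lam = np.linspace(499.9e-9, 500.1e-9, 5)   # wavelengths near 500 nm
I_rel = np.cos(np.pi * delta_s / lam) ** 2
for L, I in zip(lam, I_rel):
    print(f"lambda = {L * 1e9:.3f} nm  ->  I_T/I0 = {I:.3f}")
```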

The basic principle of all interferometers may be summarized as follows (see Fig. 4.22). The incident light wave with intensity I0 is divided into two or more partial beams with amplitudes Aj, which pass different optical path lengths sj = n·xj (n = refractive index) before they are again superimposed at the exit of the interferometer. Since all partial beams come from the [Pg.138]

According to (2.30), the transmitted intensity IT is proportional to the square of the total amplitude, [Pg.139]

Examples of devices in which only two partial beams interfere are the Michelson interferometer and the Mach-Zehnder interferometer. Multiple-beam interference is used, for instance, in the Fabry-Perot interferometer and in multilayer dielectric coatings of highly reflecting mirrors. [Pg.139]

Some interferometers exploit the optical birefringence of certain crystals to produce two partial waves with mutually orthogonal polarization. The phase difference between the two waves is generated by the different refractive indices for the two polarizations. An example of such a polarization interferometer is the Lyot filter [4.13] used in dye lasers to narrow the spectral linewidth (see Sect. 7.3). [Pg.139]

Two important concepts of the theory are the cluster and the percolation threshold. Percolation theory defines a cluster as a group of neighboring positions occupied by the same component. Two positions are considered neighbors when they share a side in the lattice. An infinite or percolating cluster is defined as a cluster that extends through the system and connects all the sides of the lattice [71]. [Pg.114]

The percolation threshold (pc) is the concentration of a component at which there is a maximum probability of appearance of an infinite or percolating cluster of this [Pg.114]

Furthermore, the concentration at which a component starts to percolate the system is usually related to a change in the properties of the system, which will now be more strongly affected by this component. This is known as a critical point. Close to the critical point important changes can take place, for example, changes in the release mechanism of the active agent and modification of the tablet structure. [Pg.115]
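The cluster and threshold ideas above can be made concrete with a small site-percolation sketch (illustrative only, not from the text; the lattice size and occupancy probabilities are assumptions). Sites are occupied at random with probability p, side-sharing clusters are labelled, and the lattice is checked for a spanning cluster:

```python
import numpy as np
from scipy import ndimage

def spans_lattice(p, size=100, seed=0):
    """Return True if a top-to-bottom (percolating) cluster exists at occupancy p."""
    rng = np.random.default_rng(seed)
    occupied = rng.random((size, size)) < p
    # Label clusters; the default structure connects side-sharing neighbours only
    labels, _ = ndimage.label(occupied)
    top = set(labels[0, :]) - {0}
    bottom = set(labels[-1, :]) - {0}
    return bool(top & bottom)   # a cluster touching both edges percolates

for p in (0.45, 0.55, 0.65):
    print(p, spans_lattice(p))
# For 2-D site percolation on a square lattice the threshold is near p_c ~ 0.593.
```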

Some of the basic concepts related to MCRs are briefly described in the following text in order to familiarize the reader with this field and its characteristics. [Pg.3]

1 Clarifying Terminology: One-Pot, Domino/Cascade, Tandem, and MCRs [Pg.3]

The previous terms are probably familiar to most chemists, but they have crucial differences that are important to know in order to distinguish each term from the others. The term one-pot reaction includes reactions that involve multiple chemical transformations between reagents carried out in a single reactor. Thus, MCRs fall into the category of one-pot reactions, because a single reactor is required for carrying out the reaction and multiple chemical transformations are involved. [Pg.3]

Furthermore, Fogg and dos Santos categorized the different types of multicatalyzed one-pot reactions in 2004 [25], some years after Tietze set the definition of domino reactions [12]. In this categorization, domino/cascade catalysis, tandem catalysis, and multicatalytic one-pot reactions were distinguished depending on certain factors, such as the moment when the (pre)catalysts are added and the number [Pg.3]

With all the aforementioned concepts defined, it is clear that MCRs are one-pot reactions that might also fall under the category of domino/cascade or tandem reactions. A reaction is a domino/cascade or tandem MCR when it has the characteristics of one of these types of reactions in addition to including three or more reagents that react to form a final product. [Pg.3]


Actually, the surfactant likes to be neither in water nor in oil, because one part of the molecule is always lyophobic, which is why micelles are formed to hide it away from the solvent. Hence, it may be said that in type I phase behaviour the surfactant dislikes oil more than water, and in type II it dislikes water more than oil. In type III phase behaviour, the surfactant dislikes both phases equally and seeks a third alternative, e.g. forming a bicontinuous microemulsion. In thermodynamic terms, this simply means that the chemical potential of the surfactant in such a microemulsion phase is lower than when it is adsorbed at the curved interface of a drop. [Pg.86]

In this chapter, we will focus on the formulation of systems in which the surfactant has equal affinity for both O and W phases. These formulations form bicontinuous microemulsions of zero mean curvature and have important properties such as minimum interfacial tension and maximum solubilisation. This condition was called the optimum formulation in the 1970s, because it matches the attainment of an ultralow interfacial tension that guarantees enhanced oil recovery from petroleum reservoirs, which was the driving force behind the research effort on microemulsions (see Chapter 10, Section 10.3 of this book) [3,4]. High-solubilisation-performance microemulsions, which are able to cosolubilise approximately equal amounts of oil and water with less than 15-20% surfactant, are attainable only at an optimum formulation. [Pg.86]

Abstract The basic concepts of adsorption phenomena of gases on the surface of solid materials are presented and discussed in brief. Different types of adsorption processes are characterized by their molecular mechanism and energy of adsorption or desorption respectively. Technically important classes of sorbent materials are mentioned and characterized. The concepts of mass and volume of an adsorbed phase are illustrated with regard to the experimental techniques available today for investigation. [Pg.17]

In this chapter we will discuss some of the basic concepts used to describe adsorption phenomena of pure and mixed gases on the surface of solids. We prefer here a physical point of view, restricted to physisorption phenomena in which adsorbed molecules (admolecules) are always preserved and are not subject to chemical reactions or catalysis. Also, we always have industrial applications of physisorption processes in mind, i.e. we prefer simple, phenomenological concepts based on macroscopic experiments, often embedded within the framework of thermodynamics. That is, we take into account only those aspects of the molecular situation of an adsorption system which have been, or at least can be, proved experimentally and are not subject to mere speculation. [Pg.17]

Adsorption phenomena can be due to several different molecular mechanisms. These are described and characterized briefly by their respective enthalpies in Sect. 2. Several classes of sorbent materials used for different industrial purposes, such as separation of gas mixtures, recovery of volatile solvents, or energetic purposes like adsorption-based air conditioning systems, are presented in Sect. 3. This section is complemented by an overview of the most frequently used methods to characterize porous sorbent materials, given in Sect. 4. The basic concepts of mass and - to a lesser extent - volume of a sorbed phase are discussed in Sect. 5. In Sect. 6 a short overview of the [Pg.17]

The purpose of this section is to introduce the basic concept of power conversion. Power conversion encompasses a wide range of applications, electronic components, circuit topologies, and related technical issues. An in-depth discussion of this technology is beyond the scope of this chapter and is best left to texts and papers authored by specialists in power electronics. Three instructive publications are an introduction to the technology by Bose [4], a technology review by Bose [26], and a Sandia National Labs overview report [27]. [Pg.318]

Power converter terminology can be confusing. Traditionally, a.c. to d.c. converters were referred to as rectifiers, d.c. to a.c. converters as inverters, d.c. to d.c. converters as choppers, a.c. to a.c. converters (at the same frequency) as a.c. power controllers, and a.c. to a.c. converters (at different frequencies) as cyclo-converters [26]. Power electronic systems often combine multiple conversion processes and are often simply referred to as converters or power-conditioning systems. [Pg.318]

A basic converter concept is shown in Fig. 10.5 [27]. As illustrated, the battery can supply an alternating, single-phase, square-wave voltage (current) to the load if the P and N switches are alternately closed and opened. The idealized load current and voltage waveforms would be rich in harmonics and, thus, a poor substitute for a [Pg.318]
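The "rich in harmonics" remark can be quantified: an ideal square wave of amplitude V contains only odd harmonics, with the n-th harmonic having amplitude 4V/(nπ). A minimal sketch (the 48 V battery voltage is an illustrative assumption, not from the text):

```python
import math

V = 48.0  # assumed battery voltage, volts

# Fourier series of an ideal +/-V square wave: only odd harmonics are present,
# and the n-th harmonic has amplitude 4*V/(n*pi)
fundamental = 4 * V / math.pi
for n in range(1, 10, 2):
    amp = 4 * V / (n * math.pi)
    print(f"harmonic {n}: {amp:5.1f} V  ({100 * amp / fundamental:3.0f}% of fundamental)")
```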

The load could be a utility distribution line coupled to the converter via a transformer. (Utilities would likely require transformer coupling to provide d.c. to a.c. isolation.) If that were the case, then the same topology could be used to charge the battery by opening and closing the P and N switch pairs at the appropriate times. In fact, if the switches were replaced by appropriately oriented diodes, the topology would be that of the familiar full-wave rectifier. [Pg.319]

In many utility applications, the d.c. energy storage source would interface a [Pg.319]

Chapter 3 introduced the basic concepts of bond pricing and analysis. This chapter builds on those concepts and reviews the work conducted in those fields. Term-structure modeling is possibly the most heavily covered subject in the financial economics literature. A comprehensive summary is outside the scope of this book. This chapter, however, attempts to give a solid background that should allow interested readers to deepen their understanding by referring to the accessible texts listed in the References section. This chapter reviews the best-known interest rate models. The following one discusses some of the techniques used to fit a smooth yield curve to market-observed bond yields. [Pg.67]

Term-structure modeling is based on theories concerning the behavior of interest rates. Such models seek to identify elements or factors that may explain the dynamics of interest rates. These factors are random, or stochastic, meaning their future levels cannot be predicted with certainty. Interest rate models therefore use statistical processes to describe the factors' stochastic properties and so arrive at reasonably accurate representations of interest rate behavior. [Pg.67]

The first term-structure models described in the academic literature explain interest rate behavior in terms of the dynamics of the short rate. This term refers to the interest rate for a period that is infinitesimally small. (Note that spot rate and zero-coupon rate are terms often used to [Pg.67]
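As a hedged illustration of what a short-rate model looks like in practice, the sketch below simulates one of the best-known examples, the Vasicek model dr = a(b − r)dt + σ dW, with an Euler scheme. The model choice and all parameter values are assumptions made for illustration, not the chapter's own example.

```python
import numpy as np

def vasicek_path(r0=0.03, a=0.5, b=0.04, sigma=0.01, T=5.0, steps=1000, seed=1):
    """Euler-discretised Vasicek short-rate path: dr = a*(b - r)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.empty(steps + 1)
    r[0] = r0
    for i in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * dW
    return r

path = vasicek_path()
print(f"short rate after 5 years: {path[-1]:.4f} (mean-reversion level b = 0.04)")
```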

The Radical Pair Theory of CIDNP: A. Basic Concepts [Pg.56]

Overhauser limit (Closs and Closs, 1969; Closs, 1969; Closs and Trifunac, 1969); (v) polarization was observed in radical systems where the radical lifetimes were longer than the nuclear relaxation times in the individual radicals involved (Ward and Lawler, 1967; Lawler, 1967; Fischer, 1969). [Pg.57]

All these observations forced theorists to discard the Overhauser explanation. The radical pair model was put forward (Closs and Closs, [Pg.57]

The basic postulates of radical pair theory may be summarized as [Pg.57]

In dealing with the mechanical behavior of mesoscopic systems, the following concepts could be of importance: [Pg.441]

Critical temperatures (Tc) that are related to atomic cohesive energy represent the thermal stability of a specimen such as solid-liquid, liquid-vapor, or ferromagnetic, ferroelectric, and superconductive phase transitions, or glass transition in amorphous states. [Pg.442]

From an experimental point of view, the values of the bulk modulus B and the stress σ can be measured by equating the external mechanical stimulus to the response of the interatomic bonding of the solid. [Pg.442]

and Y are proportional to the binding energy per unit volume under [Pg.442]

From an atomistic and dimensional point of view, the terms B, Y, and the stress, as well as the surface tension and surface energy, have the same dimension (Pa, or J m-3) because they are all intrinsically proportional to the sum of bond energy per unit volume. The numerical expressions in Eq. (22.2) apply in principle to any substance in any phase and to any deformation process, including elastic, plastic, recoverable, or non-recoverable, without contribution from extrinsic artifacts. The fact that the [Pg.442]

9.8 SOUND REDUCTION IN AIR-HANDLING SYSTEMS  9.8.1 Basic Concepts [Pg.790]

The subject of acoustics involving sound transmission is of prime importance in industrial ventilation. Correct system design will ensure that the designer provides a system that will not give rise to complaints regarding noise levels. [Pg.790]

Consider the continuous oscillations of a tuning fork. These oscillations generate successive compressions and rarefactions that travel outward through the air. The human ear, when receiving these pressure variations, transfers them to the brain, where they are interpreted as sound. Therefore, the phenomenon of sound is a pressure variation at a fixed point in the air or in another elastic medium, such as water, gas, or solid. [Pg.790]

This pressure variation can be considered as the transfer of a pressure wave in space. In the same way, when a stone is thrown into a lake, the ripples generated move radially from the point of entry of the stone. But this motion is only apparent: a floating buoy will stay in the same horizontal position. It does not move radially in space; only the perturbation moves. [Pg.790]

These phenomena can be considered as work processes. In fact, pressure is force per unit surface, and work is the product of force and the displacement of the force. Work is equivalent to energy, and energy cannot be created or destroyed. [Pg.791]

In the years following these early studies, the basic concepts have remained largely the same, except that detection limits have been improved with technological advances. Recent work has focused on compositional ratios or signatures of the light hydrocarbon gases and their relationship to known hydrocarbon products in the investigated area (Weismann, 1980; Jones and Drozd, 1983). [Pg.141]

Emphasis has also been placed on the fundamental principles of surface seepage, and the interpretation of the data. It is the opinion of the authors that the overall acceptance of microseep technology in the West has been hindered not only by the emphasis and success of seismic methods but also because of the lack of a comprehensive and public surface geochemistry database. There are, by comparison, more publications on geochemical survey data and basic concepts in the Soviet and Russian literature. As a consequence, many of our discussions rely on experience gained in the private sector in the West, supplemented by literature published in the East. [Pg.141]

Identifying secondary responses generated by leakage of hydrocarbons at the surface has merit and has been reported by many investigators. These include the use of (1) soil microbes (Soli, 1954, 1957; Kartsev et al., 1959; Sealey, 1974a; Sealey, 1974b); (2) reduction effects (Pirson et al., 1969; Donovan, 1974; Ferguson, 1975); (3) carbon and [Pg.141]

Others have noted changes in resistivity or radioactive signatures above accumulations due to the seepage and possible interaction of ascending fluids and solutions with the encapsulating medium. In some cases the actual removal or addition of soluble chemical species has been noted. [Pg.142]

It appears therefore that the direct detection of hydrocarbon gases is not the only means of identifying areas of active microseepage, but that a myriad of other possible secondary techniques can be used either as adjuncts, or as solitary techniques in themselves, to infer the presence of hydrocarbons in the subsurface environment. Most of these utilise the detection and subsequent analysis of gaseous hydrocarbons, while other methods employ the detection and analysis of liquid hydrocarbons, nonhydrocarbon gases, the presence and relative concentration of bacteria, and even the presence (or absence) of inorganic compounds and elements. For the most part, however, methods that directly measure the hydrocarbon content of soils or soil atmospheres have met with the most acceptance. [Pg.143]

Problems at the end of the chapter consist of three different types: (a) Basic Concepts (True/False), which seek to test the reader's comprehension of the key concepts in the chapter; (b) Short Exercises, which seek to test the reader's ability to compute the required parameters for a simple data set using simple or no technological aids (this section also includes proofs of theorems); and (c) Computational Exercises, which require not only a solid comprehension of the basic material but also the use of appropriate software to easily manipulate the given data sets. [Pg.131]

Determine if the following statements are true or false and state why this is the case. [Pg.131]

If the residuals are distributed so that they are increasing in magnitude as the X value increases, then it can be concluded that the model is adequate. [Pg.131]

Weighted least squares can correct for the error structure. [Pg.131]

If then it can be concluded that there is no relationship between the [Pg.132]

The conservation of energy and momentum is the fundamental requirement that determines the behavior of the SEs in metals, semiconductors, and ionic compounds irradiated by particles. Although we shall not deal with the basic physics of elementary collision processes in our context of chemical kinetics, let us briefly summarize some important results of collision dynamics which we need for the further discussion. If a particle of mass mP and (kinetic) energy EP collides with an SE of mass mS in a crystal, the fraction of EP which is transferred in this collision process to the SE is given by [Pg.317]

EP is the initial energy in the laboratory frame and q denotes the electric charge [Pg.317]

Subsequent to the collision, the most important event concerning kinetics is the displacement of regular SEs and the formation of Frenkel-type point defects. The corresponding formation reaction is [Pg.318]

We conclude that a crystal which is continuously irradiated with particles of sufficient kinetic energy and in which no further reactions (e.g., phase formations) take place becomes more and more supersaturated with point defects. Recombination starts if the defects can move fast enough by thermal activation. A steady state is reached when the rates of defect production and annihilation (by recombination) are equal. In the homogeneous crystal, the change in local defect concentration (cd) over time is given by (see Section 5.3.3) [Pg.318]

The second term on the right-hand side of Eqn. (13.5) describes the rate of recombination. In the case of diffusion-controlled recombination, these rate constants may be calculated in terms of defect diffusivities and steady-state concentrations. Without radiation, ċd = 0, and the Frenkel equilibrium requires that cv·cA = K/k. If a steady state is attained under irradiation, the rate of radiation-produced defects (ċP) adds to the thermal production rate, and the sum is equal to the recombination rate. Therefore, [Pg.318]
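A hedged numerical sketch of the balance just described: production (thermal plus radiation-induced) competes with bimolecular recombination until dc/dt = 0. The simplification cv = ci = c and all rate constants below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative rate constants (arbitrary units), not taken from the text
K_thermal = 1.0e-6    # thermal Frenkel-pair production rate
c_rad     = 1.0e-3    # radiation-induced production rate (zero without irradiation)
k_rec     = 1.0e2     # recombination rate constant

def integrate(c0=0.0, dt=1e-3, t_end=50.0):
    """Explicit Euler integration of dc/dt = K + c_rad - k*c**2 (taking c_v = c_i = c)."""
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (K_thermal + c_rad - k_rec * c * c)
        t += dt
    return c

c_steady = integrate()
# Analytic steady state: production = recombination  ->  c = sqrt((K + c_rad)/k)
print(c_steady, np.sqrt((K_thermal + c_rad) / k_rec))
```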

The term absorption as used in this chapter refers to the transfer of one or more components of a gas phase to a liquid phase in which it is soluble. Stripping is exactly the reverse: the transfer of a component from a liquid phase in which it is dissolved to a gas phase. The same basic principles apply to both operations; however, for convenience, the discussions that follow refer primarily to the absorption case. [Pg.340]

Technically, the liquid used in a gas absorption process is referred to as the absorbent and the component absorbed is called the absorbate. As used in this text, these terms are considered synonymous with solvent and solute, respectively. In practical usage, the absorbent often is designated as the lean solution or the rich solution depending on whether it is entering or leaving the absorber. [Pg.340]

The operation of absorption can be categorized on the basis of the nature of the interaction between absorbent and absorbate into the following three general types: [Pg.340]

Reversible Reaction. This type of absorption is characterized by the occurrence of a chemical reaction between the gaseous component being absorbed and a component in the liquid phase to form a compound that exerts a significant vapor pressure of the absorbed component. An example is the absorption of carbon dioxide into a monoethanolamine solution. This type of system is quite difficult to analyze because the vapor-liquid equilibrium curve is not linear and the rate of absorption may be affected by chemical reaction rates. [Pg.340]

TABLE 6.1-1. Typical Applications of Absorption for Product Recovery [Pg.341]

Physical Solution. In this case, the component being absorbed is more soluble in the liquid absorbent than the other gases with which it is mixed but does not react chemically with the absorbent. As a result, the equilibrium concentration in the liquid phase is primarily a function of partial pressure in the gas phase. One example of this type of absorption operation is the recovery of light hydrocarbons with oil. This type of system has been the subject of a great many studies and is analyzed quite readily. It becomes more complicated when many components are involved and attempts are made to include subtle interactions and heat effects in the analysis. [Pg.340]
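For the physical-solution case, the statement that the liquid-phase equilibrium concentration is set mainly by partial pressure is just Henry's law, x = p/H. The Henry's constant and gas composition below are illustrative assumptions:

```python
# Henry's law sketch for physical absorption: x_equilibrium = p_partial / H
H = 1.2e3          # assumed Henry's constant, atm per mole fraction (illustrative)
y = 0.05           # mole fraction of the solute in the gas phase
P_total = 10.0     # total pressure, atm

p_partial = y * P_total          # partial pressure of the solute
x_equilibrium = p_partial / H    # equilibrium liquid-phase mole fraction
print(f"equilibrium liquid mole fraction = {x_equilibrium:.2e}")
```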

The primary purpose of this section is to establish a basis for understanding the nature of elementary excitations in solid azide compounds. The discussion deals with basic concepts and overlaps the field of thermochemistry. This facilitates a selective digression into the energetics of azide compound reactivity. While comments on the interpretation of experimental results are made here, a comprehensive review of the experimental literature appears in Section D. [Pg.207]

For an ionic solid the lattice energy is defined as the energy necessary to separate the constituent ions infinitely far apart. It can be expressed as [Pg.208]

The electronic levels of crystalline solids separate into bands of allowed and forbidden energies [53]. A solid whose highest occupied band (valence band) is completely filled and separated from the lowest unoccupied (conduction) band is an insulator. Ionic solids are typically insulators. In this one-electron band description, the lowest electronic excitation corresponds to a transition from the top of the valence to the bottom of the conduction band (a band-gap excitation). Direct band-gap transitions do not involve simultaneous emission or absorption of a phonon, whereas indirect ones do. [Pg.210]

The conduction band is assumed to arise primarily from states of the neutral cation, with perhaps some admixture of states. Its position is approximately at −(I1 − U), where I1 is the cation's first ionization energy, and it typically lies near or above the vacuum level. But the band is broadened considerably by overlap of neutral cation wave functions, and this effect can bring the bottom edge of the conduction band below the vacuum level. The magnitude of the difference is the electron affinity of the crystalline solid, χ. [Pg.210]

The highest valence band can be of either N3− or cation (M+) nature, or perhaps a mixture of both. The azide band lies A + U below vacuum (A is the azide ion's electron affinity). The degree of band broadening depends on interazide [Pg.210]

Several textbooks and articles have been written on various aspects of the field of toxicology, most notably Casarett & Doull's Toxicology: The Basic Science of Poisons [7], which offers clear, concise descriptions of key concepts of toxicology. [Pg.326]

However, a useful starting point for a nontoxicologist is the open-access toxicology tutorial at http://sis.nlm.nih.gov/enviro/toxtutor/Tox1. The following sections summarize key points in the online tutorial, which can also be studied in more detail in the Casarett & Doull's textbook. [Pg.327]

Many contributing factors determine the effect of a given toxicant in a particular situation. The age, species, and sex of an exposed organism each influences the toxicant's action. Additionally, the chemical form, dosage, and route of exposure (dermal, gastrointestinal tract, etc.) of the toxicant are critical factors. Together, these variables govern the amount of the substance that enters the body and thereby its ultimate effect. Toxicokinetics will be discussed more fully in subsequent sections. [Pg.327]

Adverse effects can also be classified as chronic, subchronic, subacute, or acute. Chronic toxicity refers to cumulative damage after months or years of exposure to a toxicant. Subchronic usually describes an incidence of exposure that lasts several weeks or months. Subacute indicates an exposure event that is limited but repeated more than once. Acute toxicity is the term for an immediate and often severe effect that is apparent after a single dose. A single compound may exert different effects at different exposure levels. For example, one acute effect of benzene is central nervous system depression, while chronic benzene exposure may cause bone marrow toxicity. [Pg.327]

Structure or number. In this way, toxicants may act on whole organisms, specific cell types, or a single biological molecule. The diversity of toxicants is reflected in the myriad effects they produce in living systems. [Pg.328]

With thermal separation of mixtures, usually in open systems, an exchange of heat and mass occurs at the phase interface. When phase equilibrium is reached, no further heat or mass transfer takes place. Basic concepts and equations are now introduced [1.20] in order to describe phase equilibria: [Pg.14]

Therefore, the Gibbs free enthalpy is that part of the enthalpy which, at a reversible change of state, can be converted into other types of energy. [Pg.14]

For a closed system (constant mass or for a pure phase), the total differential for the free enthalpy dG is [Pg.14]

For a reversible change of state it follows from the first law of thermodynamics that  [Pg.14]

With constant mass, the free enthalpy is only a function of pressure and temperature. [Pg.15]

Accident scenarios leading to vapor cloud explosions, flash fires, and BLEVEs were described in the previous chapter. Blast effects are a characteristic feature of both vapor cloud explosions and BLEVEs. Fireballs and flash fires cause damage primarily from heat effects caused by thermal radiation. This chapter describes the basic concepts underlying these phenomena. [Pg.47]

Pohl (1986), Okano et al. (1987), Park and Quate (1987), Kuk and Silverman (1989), and Tiedje and Brown (1990). Vibration and the vibration isolation problem are ubiquitous in mechanical engineering, and there are excellent textbooks about them (Timoshenko, Young, and Weaver, 1974; Frolov and Furman, 1990). We start this chapter with a description of the basic concepts in vibration isolation through the analysis of a one-dimensional system, followed by a discussion of environmental vibration and various examples of vibration isolation systems for STM and AFM. [Pg.237]

Much of the physics of vibration isolation in STM can be illustrated by a vibrating system with one degree of freedom, as shown in Fig. 10.1 (Frolov and Furman, 1990; Park and Quate, 1987). Also, the formalism developed in this section will be useful for understanding the feedback system we will discuss later. [Pg.237]

The frame of the instrument always has vibrations transmitted from the ground and the air. The displacement of the frame is described by a function of time, X(t). The STM is represented by a mass M mounted on the frame. The problem of vibration isolation is to devise a proper mounting that minimizes the vibration transferred to the mass, that is, minimizes the displacement of the mass M, x(t). The basic method for vibration isolation is to mount the mass to the frame through a soft spring, as shown in Fig. 10.1. The restoring force of the spring acting on the mass is [Pg.237]

By introducing standard parameters, the natural frequency f0 (or the natural circular frequency ω0 = 2πf0) and the damping constant γ, [Pg.238]

The right-hand side, f(t), represents the effect of the force transmitted from the frame to the mass. [Pg.238]
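The point of the soft-spring mounting can be illustrated with the standard base-excitation transmissibility of a damped one-degree-of-freedom system (a sketch of the textbook result, not of this chapter's specific derivation; f0 = 1 Hz and 5% damping are assumed values):

```python
import numpy as np

def transmissibility(f, f0=1.0, zeta=0.05):
    """Base-excitation transmissibility |x/X| of a damped 1-DOF spring-mass system."""
    r = f / f0
    return np.sqrt(1 + (2 * zeta * r) ** 2) / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

# Assumed numbers: a 1 Hz suspension with 5% damping
for f in (0.5, 1.0, 2.0, 10.0, 100.0):
    print(f"{f:6.1f} Hz  ->  |x/X| = {transmissibility(f):.4f}")
# Well above f0 the transmitted vibration falls off roughly as (f0/f)^2,
# which is why STM stages use very soft (low-f0) suspensions.
```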

We all have needs, requirements, wants, and expectations. Needs are essential for life, to maintain certain standards, or essential for products and services to fulfill the purpose for which they have been acquired. Requirements are what we request of others and may encompass our needs, but often we don't fully realize what we need until after we have made our request. For example, now that we own a mobile phone we discover we really need hands-free operation when using the phone while driving a vehicle. Hence our requirements at the moment of sale may or may not express all our needs. Our requirements may include wants - what we would like to have but do not need: nice to have but not essential. Expectations are implied needs or requirements. They have not been requested because we take them for granted - we regard them to be understood within our particular society as the accepted norm. They may be things to which we are accustomed, based on fashion, style, trends, or previous experience. Hence one expects sales staff to be polite and courteous, electronic products to be safe and reliable, policemen to be honest, etc. [Pg.19]

A product which possesses features that satisfy customer needs is a quality product. Likewise, one that possesses features which dissatisfy customers is not a quality product. So the final arbiter on quality is the customer. The customer is the only one who can decide whether the quality of the products and services you supply is satisfactory and you will be conscious of this either by direct feedback or by loss of sales, reduction in market share, and, ultimately, loss of business. [Pg.20]

There are other considerations in understanding the word quality, such as grade and class. These are treated in ISO 8402:1994 but will be addressed briefly here so as to give a complete picture. [Pg.20]

Let us look at some examples to illustrate the point. Food is a type of entity. Transport is another entity. Putting aside the fact that in the food industry the terms class and grade [Pg.20]

Now take another example from the service industry: accommodation. There are various categories, such as rented, leased, and purchased. In the rented category there are hotels, inns, guest houses, apartments, etc. It would be inappropriate to compare hotels with guest houses or apartments with inns. They are each in a different class. Hotels are a class of accommodation within which there are grades such as 5 star, 4 star, 3 star, etc., indicating the facilities offered. [Pg.21]

A polymer is a large molecule built up from numerous smaller molecules. These large molecules may be linear, slightly branched, or highly interconnected. In the latter case the structure develops into a large three-dimensional network. [Pg.1]

The small molecules used as the basic building blocks for these large molecules are known as monomers. For example the commercially important material poly(vinyl chloride) is made from the monomer vinyl chloride. The repeat unit in the polymer usually corresponds to the monomer from which the polymer was made. There are exceptions to this, though. Poly(vinyl alcohol) is formally considered to be made up of vinyl alcohol (CH2CHOH) repeat units but there is, in fact, no such monomer as vinyl alcohol. The appropriate molecular unit exists in the alternative tautomeric form, ethanal CH3CHO. To make this polymer, it is necessary first to prepare poly(vinyl ethanoate) from the monomer vinyl ethanoate, and then to hydrolyse the product to yield the polymeric alcohol. [Pg.1]

The size of a polymer molecule may be defined either by its mass (see Chapter 6) or by the number of repeat units in the molecule. This latter indicator of size is called the degree of polymerisation, DP. The relative molar mass of the polymer is thus the product of the relative molar mass of the repeat unit and the DP. [Pg.1]
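A quick arithmetic illustration of that relation (the DP of 1000 is an assumed value; the repeat unit mass is that of vinyl chloride, C2H3Cl):

```python
# Relative molar mass of a polymer = (molar mass of the repeat unit) * DP
repeat_unit_mass = 62.5   # g/mol for the poly(vinyl chloride) repeat unit, C2H3Cl
DP = 1000                 # assumed degree of polymerisation

polymer_molar_mass = repeat_unit_mass * DP
print(f"M(polymer) = {polymer_molar_mass:.0f} g/mol")   # 62500 g/mol
```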

There is no clear cut boundary between polymer chemistry and the rest of chemistry. As a very rough guide, molecules of relative molar mass of at least 1000 or a DP of at least 100 are considered to fall into the domain of polymer chemistry. [Pg.1]

The vast majority of polymers in commercial use are organic in nature, that is they are based on covalent compounds of carbon. This is also true of the [Pg.1]

For thermodynamic reasons, an electrochemical reaction can occur only within a definite region of potentials: a cathodic reaction at electrode potentials more negative, an anodic reaction at potentials more positive than the equilibrium potential of that reaction. This condition only implies the possibility that the electrode reaction will occur in the corresponding region of potentials; it provides no indication of whether the reaction will actually occur, and if so, what its rate will be. The answers are provided not by thermodynamics but by electrochemical kinetics. [Pg.79]

In the case of redox reactions, polarization also depends on the nature of the nonconsumable electrode at which a given reaction occurs (for the equilibrium potential, to the contrary, no such dependence exists). Hence, the term reaction will be understood as a reaction occurring at a specified electrode. [Pg.79]


In the electrochemical literature, the concept of electrode polarization has three meanings  [Pg.80]

The phenomenon of change in electrode potential under current flow [Pg.80]

To create a chiral center at an sp3-hybridized carbon requires a chiral environment to stereodirect the reaction. This chiral environment may exist as a chirally substituted sp3-hybridized carbon, on which appropriate substitution creates a new molecule with the same or inverted chirality at the former chiral center, or as a chiral arrangement near a prochiral (prechiral) sp2 carbon. [Pg.97]

In the case of an sp3-hybridized carbon containing three different substituents (prochiral), the substitution of a fourth kind of substituent for one of the two identical substituents creates a new chiral center. [Pg.98]

Stereodirection of either the substitution or the addition requires a chiral environment. This chiral environment may be either in the same molecule [Pg.98]

Modifying reagents: (S,S)-Tartaric acid [(S,S)-TA], (S)-Malic acid, (R)-Glutamic acid, (R)-Valine [Pg.98]

The bulk rock composition vector is a linear combination of the mineral compositions (equivalently, the mixture composition vector is a linear combination of the end-member composition vector). The non-negative coefficients of this linear [Pg.2]
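A hedged sketch of that statement in code: given mineral (end-member) composition vectors, the non-negative mixing coefficients can be recovered by non-negative least squares. The mineral and bulk compositions below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical oxide compositions (wt%) of three minerals
minerals = np.array([
    [60.0, 45.0,   0.0],   # SiO2 row
    [25.0, 10.0,   0.0],   # Al2O3 row
    [15.0, 45.0, 100.0],   # MgO row
])
bulk = np.array([50.0, 16.0, 34.0])   # hypothetical bulk rock composition

# Non-negative coefficients of the linear combination minerals @ x ~ bulk
coeffs, residual = nnls(minerals, bulk)
print("mineral proportions:", np.round(coeffs, 3), " residual:", round(residual, 3))
```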

Proto-oncogenes found in mammalian cells have homology to genes found in transforming retroviruses. Well more than 60 proto-oncogenes have been recog- [Pg.3]

Let us first consider interfaces at equilibrium. Any stress (osmotic or shear stress) imposed on the emulsion increases the amount of interface, leading to a modification of the free energy. For a monodisperse collection of N droplets of radius a, the total interfacial area of the undeformed droplets is S0 = 4πNa². If the emulsion is compressed so that each droplet is pressing against its neighbors through [Pg.127]

A similar approach can be adopted for the bulk shear modulus. When a small strain is applied to a solid, the latter is stressed, and one can measure the resulting stress. At low deformation, the bulk shear stress, τ, is proportional to the strain, γ, following Hooke's law: [Pg.127]

Clearly, UV and EB radiation have a great deal in common, as shown above. However, there are also differences. Besides the nature of their interaction with matter (high-energy electrons penetrate, whereas photons cause only surface effects), there are issues concerning the capital investment and the chemistry involved. [Pg.2]

Without any doubt, the UV irradiation process is the lower-cost option, since the equipment is simpler, smaller, and considerably less expensive to [Pg.2]

Ionization of organic molecules requires higher energy. The ionization process generates positive ions and secondary electrons. When reacting with suitable monomers (e.g., acrylates), positive ions are transformed into free radicals. Secondary electrons lose their excess energy, become thermalized, and add to the monomer. The radical anions formed this way are a further source of radicals capable of inducing a fast transformation.  [Pg.3]

In industrial irradiation processes, either UV photons with energies between 2.2 and 7.0 eV or accelerated electrons with energies between 100 and 300 keV are used. Fast electrons transfer their energy to the molecules of the reactive substance (liquid or solid) during a series of electrostatic interactions with the outer-sphere electrons of the neighboring molecules. This [Pg.3]

In summary, UV and electron beam technology improves productivity, speeds up production, lowers cost, and makes new and often better products. At the same time, it uses less energy, drastically reduces polluting emissions, and eliminates flammable and polluting solvents. [Pg.4]

The fact that the hydrogen ion is an important chemical species in these reactions is indicative of the major role that carbonic acid plays in influencing the pH and buffer capacity of natural waters. Furthermore, the activity of the carbonate anion in part determines the degree of saturation of natural waters with respect to carbonate minerals. Determination of the activity or concentration of CO3²⁻ is not an easy task; nevertheless, it is necessary for the interpretation of a myriad of processes, including carbonate mineral and cement precipitation-dissolution and recrystallization reactions. [Pg.1]

The relative proportions of the different carbonic acid system species can be calculated using equilibrium constants. If thermodynamic constants are used, activities must be employed instead of concentrations. The activity of the ith dissolved species (ai) is related to its concentration (mi) by an activity coefficient [Pg.1]

Approaches for calculating activity coefficients will be discussed later in this chapter. Three important concepts are introduced here. The first is that in dilute solutions the activity coefficient approaches 1 as the concentration of all electrolytes approaches zero. The second is that the activity coefficient must be calculated on the same scale (e.g., molality, molarity, etc.) as that used to express concentration. The third is that activity in the gas phase is expressed as fugacity. Because the fugacity coefficient for CO2 is greater than 0.999 under all but the most extreme conditions for sediment geochemistry (e.g., deep subsurface), the partial pressure of CO2 (PCO2) may reasonably be used in place of fugacity. [Pg.2]

Based on these considerations, it is possible to write the expressions for the carbonic acid system thermodynamic equilibrium constants (Kj). [Pg.2]

Values for these constants are a function of temperature and pressure. [Pg.2]
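A small sketch of how such constants are used in practice: given pH and total dissolved inorganic carbon, the species distribution follows from the ionization fractions. The K1 and K2 values below are typical round numbers for 25 °C, the DIC is an assumed value, and activity coefficients are taken as 1 for simplicity.

```python
# Assumed, illustrative equilibrium constants near 25 C (activities ~ concentrations)
K1 = 10 ** -6.35    # H2CO3* <-> H+ + HCO3-
K2 = 10 ** -10.33   # HCO3-  <-> H+ + CO3^2-

def speciation(pH, DIC=2.0e-3):
    """Return concentrations (mol/L) of H2CO3*, HCO3-, CO3^2- for a given pH and DIC."""
    h = 10 ** -pH
    denom = h * h + K1 * h + K1 * K2          # ionization-fraction denominator
    a0, a1, a2 = h * h / denom, K1 * h / denom, K1 * K2 / denom
    return DIC * a0, DIC * a1, DIC * a2

for pH in (6.0, 8.2):
    h2co3, hco3, co3 = speciation(pH)
    print(f"pH {pH}: H2CO3*={h2co3:.2e}  HCO3-={hco3:.2e}  CO3^2-={co3:.2e}")
```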

The preceding chapter closed with a discussion of kinetic methods which presume investigations of non-stationary, time-dependent relaxation processes of optical polarization of the angular momenta in a molecular ensemble. Another possibility, which also permits us to introduce a time scale, consists of the application of an external magnetic field. Indeed, an angular momentum J produces a corresponding proportional (collinear) magnetic moment μJ: [Pg.104]

We assume here that the Bohr magneton μB is a positive quantity. The numerical value of the Bohr magneton and some other fundamental constants, as recommended by CODATA, the Committee on Data for Science and Technology of the International Council of Scientific Unions, are presented in Table 4.1 [103]. [Pg.104]

Quantity | Symbol | Value | Units | Relative uncertainty (ppm) [Pg.105]

at a positive g-factor, precession of J takes place in a clockwise direction if viewed from the tip of the B-vector, and in a counterclockwise direction at negative gj (see Fig. 4.1(a)). [Pg.105]

The problem of the Landé factors in molecules is complex, and their signs may differ. Some information on this point will be presented in Section 4.5. We wish to remind the reader that the direction of B naturally determines the quantization axis z. The frequency ωJ fixes the time scale, and this permits us, as we will see presently, to obtain direct (in one sense) information on relaxation processes and/or to study molecular magnetism. [Pg.105]
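As a numeric sketch of the time scale set by ωJ (the g-factor and field strength below are assumed illustrative values; molecular g-factors can be far smaller than 1): ωJ = gJ·μB·B/ħ.

```python
# Larmor (Zeeman) precession frequency: omega_J = g_J * mu_B * B / hbar
mu_B = 9.274e-24      # Bohr magneton, J/T
hbar = 1.055e-34      # reduced Planck constant, J*s

g_J = 1.0             # assumed Lande factor (illustrative)
B = 0.1               # assumed magnetic field, tesla

omega_J = g_J * mu_B * B / hbar
print(f"omega_J = {omega_J:.3e} rad/s  (~{omega_J / (2 * 3.141592653589793) / 1e9:.2f} GHz)")
```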

Enzymes are proteins that are catalysts of biochemical reactions. They usually exist in very low concentrations in cells, where they increase the rate of a reaction without altering its equilibrium position; i.e., both forward and reverse reaction rates are enhanced by the same factor. This factor is usually around 10⁸ to 10¹². [Pg.228]

Although phenomena of fermentation and digestion had long been known, the first clear recognition of an enzyme was made by Payen and Persoz (Ann. Chim. (Phys), 53, 73, 1833) when they found that an alcohol precipitate of malt extract contained a thermolabile substance that converted starch into sugar. [Pg.228]

The above-mentioned substance was called diastase (Greek for "separation") because of its ability to separate soluble dextrin from the insoluble envelopes of starch grains. Diastase became a generally applied term for these enzyme mixtures until 1898, when Duclaux suggested the use of the suffix -ase in the name of an enzyme; this naming convention still holds today. [Pg.228]

Many enzymes were purified from a large number of sources, but it was J. B. Sumner who was the first to crystallize one. The enzyme was urease from jack beans. For his travail, which took over 6 years (1924-1930), he was awarded the 1946 Nobel prize. The work demonstrated once and for all that enzymes are distinct chemical entities. [Pg.228]

Carbon dioxide gas dissolves readily in water and is spontaneously hydrated to form carbonic acid, which rapidly dissociates to a proton and a bicarbonate ion  [Pg.228]

We consider a microscopic polyatomic system consisting of N nuclei and n electrons (1-4). Let the positions of the nuclei be described by the radius vectors Rα (α = 1, ..., N). If the polyatomic system is free of external force, the total linear momentum is conserved and thus its center of mass moves with a constant velocity vector (5). Consequently, a new coordinate system with its origin fixed at the center of mass can be introduced (the center-of-mass coordinate system), where the description of the polyatomic system can be simplified. Since the position of the center of mass of the entire polyatomic system practically coincides with the position of the center of mass of the nuclear subsystem, the number of degrees of freedom, F, of the nuclei in the center-of-mass system can be reduced by 3 due to the translation of the center of mass, and by 3 connected with the overall rotation about the center of mass (in the case of a linear polyatomic system, the reduction due to the overall rotation is only by 2) (5); i.e., the number of independent nuclear coordinates is F = 3N − 6 (3N − 5). The radius vectors Rα can then be expressed in terms of F generalized coordinates Qj (5): [Pg.248]

The coordinates Qj generate an F-dimensional vector space (or manifold) M, the configuration space of the nuclei. The positions of the nuclei in the configuration space M are given by a single point, the system point Q = {Qj}. Analogously, let the positions of the electrons in the center-of-mass system be given by the coordinates q = {qu}, u = 1, ..., 3n. [Pg.248]

These coordinates will be employed as variables in the equations of motion describing the time evolution of the polyatomic system. [Pg.248]

The use of the quantum treatment in dealing with processes in polyatomic systems is rather limited (6,7). Nevertheless, the quantum formulation implies the most general features of the problem, so that it is convenient to commence our consideration with the quantum equations of motion. [Pg.248]

The total wave function ψ(Q, q, t) of the polyatomic system in question satisfies the time-dependent Schrödinger equation: [Pg.248]

The presence of any of several functional groups is likely to impart photolability to drug molecules. These include carbonyl (C=O), nitroaromatic, N-oxide, alkene (C=C), aryl chloride, weak C-H and O-H bonds, sulfides, and polyenes. Some of these functional groups impart photolability as a result of their chromophoric properties (e.g., carbonyl) and some of them impart photolability by virtue of their weak covalent bonds (e.g., O-H bonds). A list of several common bonds and their respective bond energies (E) and the corresponding wavelengths (λ) is presented in Table 1. [Pg.79]

As an illustration, upon absorption of a photon having a wavelength equal to or shorter than 332 nm, a drug molecule that incorporates a C-O bond absorbs [Pg.79]
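The correspondence between a bond energy and the threshold wavelength quoted here is just λ = NA·h·c/E. A short check (the 360 kJ/mol figure is an assumed, typical single-bond energy, not a value from Table 1):

```python
# Threshold wavelength for breaking a bond of molar energy E: lambda = N_A * h * c / E
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro constant, 1/mol

E = 360e3            # assumed bond energy, J/mol (roughly a C-O single bond)
lam = N_A * h * c / E
print(f"threshold wavelength ~ {lam * 1e9:.0f} nm")   # about 332 nm
```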

The rate at which an ion exchange reaction proceeds is a complex function of several physico-chemical processes, such that the overall reaction rate may be influenced by the separate or combined effects of: [Pg.134]

Concentration gradients in both phases
Electrical charge gradients in both phases
Ionic interactions in either phase
Exchanger properties (structure, functional group)
Chemical reactions in either phase [Pg.134]

As yet, no unique, analytical and readily integrated mathematical function of the type −dC̄A/dt = f(C̄A) exists for describing the kinetics, where C̄A is the resin-phase concentration of the counter-ion A initially in the exchanger, B the ion in solution, and t the elapsed time. However, analytical solutions of the rate equations are available which account for the observed rate behaviour under specified circumstances or boundary conditions. [Pg.134]

Studies of ion exchange reactions on organic exchangers have identified the possible rate controlling steps to be  [Pg.134]

Coupled diffusion or transport of counter-ions in the external solution phase. [Pg.134]

A prodrug by definition is inactive and must be converted into an active species within the biological system. There are a variety of mechanisms by which this conversion may be accomplished. Generally, the conversion to an active form is most often carried out by metabolizing enzymes within the body. Conversion to an active form may also be accomplished by chemical means (e.g., hydrolysis or decarboxylation), although this is less common. Chemical transformation does not depend on the presence or relative amounts of metaboliz- [Pg.142]

Sulindac is administered orally, absorbed in the small intestine, and subsequently reduced to the active species. Administration of the inactive form has the benefit of reducing the gastrointestinal (GI) irritation associated with the sulfide. This example also illustrates one of the problems associated with this approach, namely, participation of alternate metabolic paths that may inactivate the compound. In this case, after absorption of sulindac, irreversible metabolic oxidation of the sulfoxide to the sulfone can also occur to give an inactive compound. [Pg.143]

A portion of the incident light, however, may be reflected by the surface of the cell or absorbed by the cell wall or solvent. To focus attention on the compound of interest, elimination of these factors is necessary. This is achieved using a reference cell identical to the sample cell, except that the compound of interest is omitted from the solvent in the reference cell. The transmittance (T) through this reference cell is Ir divided by I0; the transmittance for the compound in solution then is defined as Is divided by Ir. In practice the reference cell is inserted and the instrument adjusted to an arbitrary scale reading of 100 (corresponding to 100% transmittance), after which the percent transmittance reading is made on the sample. As we increase the concentration of the compound in solution, we find that transmittance varies inversely and logarithmically with concentration. Consequently, it is more convenient to define a new term, absorbance (A), that will be directly proportional to con- [Pg.62]

Analytically, the amount of light absorbed or transmitted is related mathematically to the concentration of the analyte in question by Beer s law. [Pg.63]

Beer's Law: Relationship between Transmittance, Absorbance, and Concentration [Pg.63]

This equation forms the basis of quantitative analysis by absorption photometry. When b is 1 cm and c is expressed in moles per liter, the symbol ε is substituted for the constant a. The value of ε is a constant for a given compound at a given wavelength under prescribed conditions of solvent, temperature, pH, etc., and is called the molar absorptivity. The nomenclature of spectrophotometry is summarized in Table 3-2. Values for ε are useful to characterize compounds, establish their purity, and compare sensitivities of measurements obtained on derivatives. Pure bilirubin, for example, when dissolved in chloroform at 25 °C, has a molar absorptivity of 60,700 ± 1600 at 453 nm. The molecular weight of bilirubin is 584. Hence a solution containing 5 mg/L (0.005 g/L) should have an absorbance of [Pg.63]
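A quick check of the arithmetic the excerpt sets up (using the numbers quoted above and assuming a 1 cm path length): A = ε·b·c with c in mol/L.

```python
# Beer's law: A = epsilon * b * c, with c in mol/L and b in cm
epsilon = 60700      # molar absorptivity of bilirubin in chloroform at 453 nm (from the excerpt)
MW = 584             # molecular weight of bilirubin (from the excerpt)
b = 1.0              # assumed path length, cm

mass_conc = 0.005                # g/L (5 mg/L), as in the excerpt
c = mass_conc / MW               # molar concentration, mol/L
A = epsilon * b * c
print(f"A = {A:.3f}")            # roughly 0.52
```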

The molar absorptivity of the complex between ferrous iron and s-tripyridyltriazine is 22,600, whereas that with 1,10-phenanthroline is 11,000. Thus for a given concentration of iron, s-tripyridyltriazine produces a complex with an absorbance about twice that of the complex with 1,10-phenanthroline. Consequently, s-tripyridyltriazine is a more sensitive reagent to use in the measurement of iron. [Pg.63]

Different factors governing the reactions and yields of conjugation (Table 11.2) apply equally to haptenation. The stability and solubility of the hapten, and the nature of the groups available for conjugation also influence the yield. [Pg.280]

The choice of the carrier is important. The most common carriers are serum albumin of various species (generally quite soluble; Erlanger et al., 1959), keyhole limpet hemocyanin, thyroglobulin, ovalbumin, or fibrinogen. Different carriers can be used for immunization and assays or, alternatively, antibodies to the carrier can be removed by absorption. [Pg.280]

The linkage of haptens to proteins generally occurs at the most [Pg.280]

In this monograph, semiconductors and covalent or partially covalent insulators are considered. These materials differ from metals by the existence, at low temperature, of a fully occupied electronic band (the valence band or VB) separated by an energy gap or band gap (Eg) from an empty higher energy band (the conduction band or CB). When Eg reduces to zero, like in mercury telluride, the materials are called semimetals. In metals, the highest occupied band is only partially filled with electrons such that the electrons in this band can be accelerated by an electric field, however small it is. [Pg.1]

From an optical viewpoint, on the other hand, the difference between semiconductors and insulators lies in the value of Eg. The accepted boundary is usually set at 3 eV (see Appendix A for the energy units), and materials with Eg below this value are categorized as semiconductors, but crystals considered as semiconductors, like the wurtzite forms of silicon carbide and gallium nitride, have band gaps larger than 3 eV, so this value is somewhat arbitrary. The translation into the electrical resistivity domain depends on the value of Eg, and also on the effective masses of the electrons and holes and on their mobilities. The solution is not unique; moreover, the boundary is not clearly defined. Semi-insulating silicon carbide 4H-polytype samples with reported room-temperature resistivities of the order of 10¹⁰ Ω cm could constitute the [Pg.1]

In a category of materials known as Mott insulators, like MnO, CoO or NiO, with band gaps of 4.8, 3.4, and 1.8 eV, respectively ([2], and references therein), the upper energy band made from 3d states is partially occupied, which should result in metallic conduction. The insulating behaviour of these compounds is attributed to a strong intra-atomic Coulomb interaction, which results in the formation of a gap between the filled and empty 3d states [35]. [Pg.2]

A consequence of the existence of an electronic band gap is that at sufficiently low temperature, intrinsic semiconductors or insulators show no absorption of photons related to electronic processes for energies below Eg. Conversely, photons with energies above Eg are strongly absorbed by optical transitions between the valence and conduction bands, and this absorption is called fundamental or intrinsic. [Pg.2]

Extrinsic semiconductors are materials containing foreign atoms (FAs) or atomic impurity centres that can release electrons in the CB or trap an electron from the VB with energies smaller than Eg (from neutrality conservation, trapping an electron from the VB is equivalent to the release of a positive hole in the otherwise filled band). These centres can be inadvertently present in the material or introduced deliberately by doping, and, as intrinsic, the term extrinsic refers to the electrical conductivity of such materials. The electron-releasing entities are called donors and the electron-accepting ones acceptors. When a majority of the impurities or dopants in a material is of [Pg.2]

A solution is formed by the addition of a solid solute to the solvent. The solid dissolves, forming the homogeneous solution. At a given temperature there is a maximum amount of solute that can dissolve in a given amount of solvent. When this maximum is reached the solution is said to be saturated. The amount of solute required to make a saturated solution at a given condition is called the solubility. [Pg.1]

Solubilities of common materials vary widely, even when the materials appear to be similar. Table 1.2 lists the solubility of a number of inorganic species (Mullin 1997 and Myerson et al. [Pg.1]

The first five species all have calcium as the cation but their solubilities vary over several orders of magnitude. At 20 °C the solubility of calcium hydroxide is 0.17g/100g water while that of calcium iodide is 204 g/100 g water. The same variation can be seen in the six sulfates listed in Table 1.2. Calcium sulfate has a solubility of 0.2g/100g water at 20 °C while ammonium sulfate has a solubility of 75.4 g/100 g water. [Pg.1]

Compound | Chemical Formula | Solubility (g anhydrous/100 g H2O) [Pg.1]

Given: a 1 molar solution of NaCl at 25 °C; density of solution = 1.042 g/cm³; molecular weight (MW) of NaCl = 58.44. [Pg.2]
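A worked sketch of the conversion this example sets up, using the quoted density and molecular weight, from molarity to grams of NaCl per 100 g of water:

```python
# Convert a 1 molar NaCl solution to g NaCl per 100 g water
molarity = 1.0          # mol/L
MW = 58.44              # g/mol NaCl
density = 1.042         # g/cm^3 (= g/mL) of the solution

mass_solution = density * 1000.0        # g of solution per litre
mass_NaCl = molarity * MW               # g of NaCl per litre
mass_water = mass_solution - mass_NaCl  # g of water per litre

g_per_100g_water = 100.0 * mass_NaCl / mass_water
print(f"{g_per_100g_water:.2f} g NaCl per 100 g water")   # about 5.94
```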

The interactions between the stationary phase, the mobile phase and the solute in reversed phase liquid chromatography may be considered [Pg.76]

The major problem in defining the separation mechanism resides in a lack of understanding of the nature of the surface coverage of the bonded groups on the stationary phase. The simplest description of the surface is that it consists of a brush-like structure (Karch et al., [Pg.77]

We begin with the notion of a random experiment, which is a procedure or an operation whose outcome is uncertain, and consider some aspects of events themselves before considering the probability theory associated with events. [Pg.8]

The collection of all possible outcomes of a particular experiment is called a sample space. We will use the letter S to denote a sample space. An event is any collection of possible outcomes of an experiment, that is, any subset of S. For example, if we plan to roll a die twice, the experiment is the actual rolling of the die two times, and none, one, or both of these two events will have occurred after the experiment is carried out. In the first five examples we have discrete sample spaces; in the last two we have continuous sample spaces. [Pg.8]

Example 2.1 Sample Space and Events from Random Experiments and Real Data. The following are some examples of random experiments and the associated sample spaces and events  [Pg.8]

Assume you toss a coin once. The sample space is S = {H, T}, where H = head and T = tail, and the event of a head is {H}. [Pg.8]

Assume you count the number of defective welds in a car body. The sample space is S = {0, 1, ..., N}, where N = total number of welds. The event that the number of defective welds is no more than two is {0, 1, 2}. [Pg.9]
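The same bookkeeping is easy to reproduce programmatically; the sketch below enumerates the two-roll sample space and builds events as plain subsets (N = 100 welds is an arbitrary illustrative choice).

```python
from itertools import product

# Sample space for rolling a die twice: all ordered pairs (first roll, second roll).
S_two_rolls = set(product(range(1, 7), repeat=2))
print(len(S_two_rolls))                 # 36 outcomes

# An event is just a subset of S, e.g. "the sum of the two rolls is 7".
E_sum7 = {outcome for outcome in S_two_rolls if sum(outcome) == 7}
print(sorted(E_sum7))

# Defective welds in a body with N welds: S = {0, 1, ..., N};
# the event "no more than two defective welds" is {0, 1, 2}.
N = 100
S_welds = set(range(N + 1))
E_at_most_two = {0, 1, 2}
print(E_at_most_two.issubset(S_welds))  # True
```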

There are many definitions of a system (e.g., in Rechtin 1991), but let us use a relatively simple definition introduced several years ago by Mark Maier and Eberhard Rechtin (2000)  [Pg.77]

A system is a set of different elements so connected or related as to perform a unique function not performable by the elements alone. [Pg.77]

A system's elements are understood as separate entities, each with a specific function that is usually different from the function of the entire system. Let us use a glass box model to present a general model of a system. It has interconnected elements and relationships between these elements, which are called feedbacks. Our system is active in a given environment and communicates with its environment through input and output. Input represents [Pg.77]

For example, when a model of our system represents a skeleton structural system, its input may include gravity, wind, and earthquake forces. Its output is the pressure transferred from the structure through the foundation to [Pg.78]

From a different perspective, we may say that input represents the action or impact of the environment on a given system operating in this environment, while the system reacts through its output; we have here an action-reaction model. Another interpretation is that a system transforms its input into output, and this transformation model is particularly important for us, as we will see in Section 4.6, where we discuss various engineering design models. [Pg.78]

Minimum Temperature Approach (ΔTmin) For a feasible heat transfer between the hot and cold composite streams, a minimum temperature approach must be specified, which corresponds to the closest temperature difference between the two composite curves on the T-H axes. This minimum temperature approach is termed the network temperature approach and is denoted ΔTmin. Maximal Process Heat Recovery The overlap between the hot and cold composite curves represents the maximal amount of heat recovery for a given ΔTmin. In other words, the heat available from the hot streams in the hot composite curve can be heat-exchanged with the cold streams in the cold composite curve in the overlap region. [Pg.159]

Hot and Cold Utility Requirement The overshoot at the top of the cold composite represents the minimum amount of external heating (QH), while the overshoot at the bottom of the hot composite represents the minimum amount [Pg.159]

Pinch Point The location of ΔTmin is called the process pinch. In other words, the pinch point occurs at ΔTmin. When the hot and cold composite curves move closer, to a specified ΔTmin, the heat recovery reaches its maximum and the hot and cold utilities reach their minimum. Thus, the pinch point becomes the bottleneck for further reduction of hot and cold utilities. Process changes must be made if further utility reduction is pursued. [Pg.159]
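These targets can be computed without drawing the curves by cascading heat through shifted temperature intervals (the problem-table view of the same construction). The sketch below does this for a hypothetical four-stream problem; the stream data and ΔTmin are illustrative, not taken from the text.

```python
# Hypothetical four-stream example: (type, supply T in C, target T in C, CP in kW/K).
streams = [
    ("hot",  250.0,  40.0, 0.15),
    ("hot",  200.0,  80.0, 0.25),
    ("cold",  20.0, 180.0, 0.20),
    ("cold", 140.0, 230.0, 0.30),
]
dt_min = 10.0   # network minimum temperature approach, K

# Shift hot streams down and cold streams up by dt_min/2 so that any feasible
# match in shifted temperatures respects the approach temperature.
shifted = [(kind,
            ts - dt_min / 2 if kind == "hot" else ts + dt_min / 2,
            tt - dt_min / 2 if kind == "hot" else tt + dt_min / 2,
            cp)
           for kind, ts, tt, cp in streams]

# Temperature interval boundaries, hottest first.
bounds = sorted({t for _, ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

# Net heat surplus of each interval, cascaded from the top downwards.
cascade, heat = [0.0], 0.0
for upper, lower in zip(bounds, bounds[1:]):
    cp_net = sum(cp if kind == "hot" else -cp
                 for kind, ts, tt, cp in shifted
                 if max(ts, tt) >= upper and min(ts, tt) <= lower)
    heat += cp_net * (upper - lower)
    cascade.append(heat)

q_hot_min = -min(cascade)              # minimum hot utility
q_cold_min = cascade[-1] + q_hot_min   # minimum cold utility
pinch_shifted = bounds[cascade.index(min(cascade))]

print(f"QH,min = {q_hot_min:.1f} kW, QC,min = {q_cold_min:.1f} kW, "
      f"pinch at shifted T = {pinch_shifted:.1f} C")
# -> QH,min = 7.5 kW, QC,min = 10.0 kW, pinch at shifted T = 145.0 C
```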

Strain hardening occurs during transient creep, which is induced by pure glide. Mobile dislocations, present at the start of creep, continue to move under the influence of an effective stress, which slowly declines as the mobile dislocations are trapped in the network. The total dislocation density, equal to the sum of the mobile and network densities, remains constant. It is clear that strain is a function of stress and increases with stress. [Pg.419]

Secondary creep, or stage II creep, is often referred to as steady-state or linear creep. During tertiary creep, or stage III creep, the creep rate begins to accelerate as the cross-sectional area of the specimen decreases due to necking, which decreases the effective area of the specimen. If stage III is allowed to proceed, fracture will occur. The instantaneous strain, ε0, is obtained immediately upon loading; this is not a creep deformation, since it is not dependent on time and is, by its nature, elastic. However, plastic strain also contributes in this case. [Pg.419]

In Fig. 6.1b, a minimum, constant creep rate, which is an important design parameter, is shown. The magnitude of the minimum creep rate on the strain-time relation (see Fig. 6.1a) is associated with steady-state creep and is stress and temperature dependent. Two criteria are commonly applied to alloys: (a) the stress needed to produce a creep rate of 0.1 × 10⁻³ %/h (or 1 % in 1 × 10⁴ h) and (b) the stress needed to produce a creep rate of 0.1 × 10⁻⁴ %/h, namely 1 % in [Pg.420]

Several empirical models have been suggested for creep. Andrade was the first to consider creep, in 1914. He considered creep to be the superposition of transient and viscous creep terms (discussed in the next section, dealing with creep in polycrystalline materials). Since creep is a thermally activated process, the minimum secondary-creep rate may be described by an Arrhenius equation (see McLean [15]) as: [Pg.421]

A and a are constants, and Q0 is the activation energy for creep at zero stress. A is also known as the frequency or pre-exponential factor. [Pg.421]
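The Arrhenius equation itself falls on the preceding page of the source and is not reproduced here; the sketch below uses one common reading of it, a rate of the form A·exp(a·σ)·exp(−Q0/RT), purely as an illustration. The numerical values of A, a and Q0 are placeholders, not constants from the text.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def min_creep_rate(stress_mpa, temp_k, a_pref=1e3, a_coeff=0.05, q0_j_mol=300e3):
    """Minimum (steady-state) creep rate of the Arrhenius type,
    eps_dot = A * exp(a * sigma) * exp(-Q0 / (R*T)).
    A (a_pref), a (a_coeff) and Q0 are illustrative placeholders."""
    return a_pref * math.exp(a_coeff * stress_mpa) * math.exp(-q0_j_mol / (R * temp_k))

for t_k in (800.0, 900.0, 1000.0):
    print(f"T = {t_k:6.0f} K, sigma = 100 MPa -> "
          f"eps_dot = {min_creep_rate(100.0, t_k):.3e} (arbitrary units)")
```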

The kinetics of high-temperature corrosion differ from those of corrosion at ambient temperature in three respects: [Pg.365]

Although there are no aqueous electrolytes, high-temperature corrosion is an electrochemical process involving anodic and cathodic partial reactions. The metal oxides generated at the corroding surface, or molten salts present at the surface, form the electrolyte. [Pg.365]

During low-temperature oxidation, oxide films grow by high-field conduction (Section 8.1) rather than by solid-state diffusion, because the diffusion coefficients are too small. Under these conditions the thickness of the oxide layer does not exceed a few nanometers. In contrast, in high-temperature corrosion, volume diffusion and grain-boundary diffusion are the principal transport mechanisms by which oxide layers grow. As a consequence, their thickness can reach much larger values. [Pg.365]

At ambient temperature, the rate of charge transfer at the metal-electrolyte interface often limits the corrosion rate (Chapter 4). Because charge-transfer reactions generally exhibit a higher activation energy than diffusion phenomena, their rate [Pg.365]

At high temperatures, certain gases that are normally considered harmless can undergo a chemical reaction with metals or with non-metallic phases present in an alloy such as carbides. The most common gases found in high temperature corrosion are  [Pg.366]

Coagulation is the process whereby destabilisation (aggregation) of a suspension is effected by reducing the electrical double layer repulsion between particles through changes in the nature and concentration of the ions in the suspending electrolyte solution. Coagulant refers to the chemical or substance added to the suspension to effect the destabilisation. [Pg.129]

Flocculation is the process whereby a long-chain polymer (or polyelectrolyte) causes particles to aggregate, often by forming bridges between them. Flocculant refers to the chemical or substance added to a suspension to accelerate the rate of flocculation or to strengthen the flocs formed during flocculation. [Pg.129]

Sludge conditioners (sometimes called deliquoring aids) are those chemicals or substances added to a thickened suspension to promote deliquoring and/or to strengthen flocs prior to deliquoring. [Pg.129]

These definitions provide the basis for the descriptions in Sections 3.2 and 3.3. [Pg.129]

Chemical reactions that can occur in either direction are called reversible reactions. Most reversible reactions do not go to completion. That is, even when reactants are mixed in stoichiometric quantities, they are not completely converted to products. [Pg.668]

Reversible reactions can be represented in general terms as follows, where the capital letters represent formulas and the lowercase letters represent the stoichiometric coefficients in the balanced equation. [Pg.668]

The double arrow (⇌) indicates that the reaction is reversible, that is, both the forward [Pg.668]

Chemical equilibrium exists when two opposing reactions occur simultaneously at the same rate. [Pg.668]

Chemical equilibria are dynamic equilibria; that is, individual molecules are continually reacting, even though the overall composition of the reaction mixture does not change. In a system at equilibrium, the equilibrium is said to lie toward the right if more C and D are present than A and B (product-favored), and to lie toward the left if more A and B are present (reactant-favored). [Pg.668]
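A small numerical check of which way such a reaction will shift compares the reaction quotient Q with the equilibrium constant K. The sketch below does this for a hypothetical A + B ⇌ 2 C mixture; the concentrations and K value are invented for illustration.

```python
def reaction_direction(species, k_eq):
    """Compare the reaction quotient Q with K for a reversible reaction.
    species maps each name to (concentration, stoichiometric coefficient, side)."""
    q_num = q_den = 1.0
    for conc, nu, side in species.values():
        if side == "product":
            q_num *= conc ** nu
        else:
            q_den *= conc ** nu
    q = q_num / q_den
    if q < k_eq:
        return q, "net forward reaction (shifts toward products)"
    if q > k_eq:
        return q, "net reverse reaction (shifts toward reactants)"
    return q, "at equilibrium"

# Hypothetical mixture for A + B <=> 2 C with K = 50 (illustrative numbers only).
mix = {"A": (0.10, 1, "reactant"),
       "B": (0.20, 1, "reactant"),
       "C": (0.50, 2, "product")}
print(reaction_direction(mix, 50.0))   # Q = 12.5 < K, so the forward reaction proceeds
```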

Intuitively, a graph is a set of arcs whose endpoints are the nodes of the [Pg.487]

We say node n is incident with arc j (or conversely) when n is one of the endpoints of j. By convention, we shall not consider the case when the two endpoints of an arc coincide (a so-called self-loop); as will be seen in Chapter 3, such arcs play no role in balancing problems. Hence any arc has precisely two distinct endpoints, say ni and nj for arc aij above. In certain applications, however, it is useful to consider the case when some node is not incident with any arc (the isolated node in Fig. A-1); such a node is called isolated. On the other hand, we admit the possibility that more than one arc is incident with the same couple of endpoints (as two of the arcs in Fig. A-1). Such arcs are called parallel (multiple). A simple graph contains no multiple arcs. [Pg.487]

In formal mathematical language, any of the lists determines the incidence relation for the graph: arc j and node n are incident/nonincident. The relation can be written by assigning to each couple (n, j) (n ∈ N, j ∈ J) either 1 (incident) or 0 (nonincident). So the set of nodes (N), the set of arcs (J), and the incidence relation determine the graph, say G. Any of the lists represents an economical way of storing graph G in the memory of a computer. [Pg.489]

A number of problems of graph theory can be solved using only the above definition of G. In the balance schemes that are the matter of our interest, we assign in addition a direction (orientation) to every arc j. In the intuitive picture, an arc thus becomes an arrow; in Fig. A-1, for example [Pg.489]

so arc (stream) j goes from n1 to n2. In List L1, the information can be completed by assigning a given order to the endpoints. We thus have [Pg.489]
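In code, this oriented incidence structure is just a table of (tail, head) pairs plus the node-arc incidence matrix built from it. The node and arc names in the sketch below are hypothetical, chosen only to mirror the n1, n2, j notation above.

```python
# Minimal sketch of the oriented-graph bookkeeping described above:
# nodes, arcs as (tail, head) pairs, and the node-arc incidence matrix
# (+1 if the arc leaves the node, -1 if it enters, 0 otherwise).
nodes = ["n1", "n2", "n3", "n4"]
arcs = {"j1": ("n1", "n2"),      # stream j1 goes from n1 to n2
        "j2": ("n2", "n3"),
        "j3": ("n2", "n4"),
        "j4": ("n4", "n3")}

incidence = {n: {j: 0 for j in arcs} for n in nodes}
for j, (tail, head) in arcs.items():
    incidence[tail][j] = +1
    incidence[head][j] = -1

# A node is isolated if it is incident with no arc at all.
isolated = [n for n in nodes if all(v == 0 for v in incidence[n].values())]

for n in nodes:
    print(n, [incidence[n][j] for j in arcs])
print("isolated nodes:", isolated)
```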

All substances are poisons. There is none which is not a poison. The right dose differentiates a poison from a remedy. [Pg.87]

As recent experience has shown, however, it is dangerous to underestimate the resourcefulness of a determined adversary. While the wide and effective dispersal of CW agents generally requires some sort of sophisticated delivery system, and while some of the public's understandable alarm is exaggerated, a crude apparatus, even a crop-dusting plane or a simple canister in the hands of a fanatical subway rider, could cause a devastating amount of injury. [Pg.87]

Maybe the easiest way to begin thinking about chemical weapons is to compare them to other types of armaments—to distinguish them from what they are not. Chemical weapons are in the first place different from kinetic weapons, such as bullets, other projectiles, and shrapnel, which create casualties using force of impact. The lethality of a kinetic weapon depends on its size and its force at impact, so dense materials like steel and lead (and, more recently, the even denser metal uranium, in depleted form) are the chosen materials. [Pg.88]

Radiating (nuclear) weapons produce energy in the form of an explosive blast, in addition to gamma rays and neutrons that destroy unprotected tissue, particularly DNA. (Thus, mustard agents and T2 mycotoxin, because of their similar effects, are sometimes referred to as radiomimetic. ) Enhanced radiation warheads, or neutron bombs, minimize the destruction of materials while maximizing lethalities among enemy personnel. [Pg.88]

Another metric used to classify chemical agents is speed of action, a measure of the delay between exposure and effect. Fast-acting poisons, such as nerve agents and cyanide, can cause symptoms to appear almost instantaneously and might cause fatalities in as little as a few minutes. Slower-acting agents like mustard can, depending on the amount of exposure, take hours to effect serious injury. [Pg.89]

According to the definition and functions of a catalyst, catalytic activity is a measure of the degree to which the catalyst accelerates a chemical reaction [Pg.544]

HREELS (High-Resolution Electron Energy Loss Spectroscopy): adsorption species [Pg.545]

TPSR (Temperature-Programmed Surface Reduction): properties [Pg.545]

Activity is the most important property of a catalyst, and there are many ways to evaluate catalytic activity. Different measurement methods of activity can be adopted according to the purpose, whether it is the development of a new catalyst, the improvement of an existing catalyst, production control, or the measurement of kinetic parameters for catalysts and for the foundation of theories of catalysis, and according to the reaction and its required conditions (such as strongly exothermic or endothermic reactions, high or low temperature, high or low pressure). [Pg.546]

We should introduce the definition of the specific rate, as shown in (7.3), (7.4) and (7.5), because the reaction rate is referred to the volume, mass or surface area of the catalyst. [Pg.546]
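Equations (7.3)-(7.5) are not reproduced in this excerpt. The usual way such specific rates are written, given here only as a reminder of the standard forms and with generic symbols that are not necessarily those of the source, normalizes the rate of change of moles to the catalyst volume, mass or surface area:

```latex
r_V = \frac{1}{V_{\mathrm{cat}}}\,\frac{dn_i}{dt}, \qquad
r_W = \frac{1}{W_{\mathrm{cat}}}\,\frac{dn_i}{dt}, \qquad
r_S = \frac{1}{S_{\mathrm{cat}}}\,\frac{dn_i}{dt}
```

where V_cat, W_cat and S_cat denote the catalyst volume, mass and surface area, respectively.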

When interpreting the data from a stream method, one must be concerned about the fact that the physical properties of the interrogation zone may not be isotropic. For example, in the literature on the Coulter Counter, a well-known resistazone stream counter, it was assumed for a long time that the signal generated in the instrument was independent of the location of the fineparticle within the zone. More recent work, however, has established that, if the fineparticle is close to the wall of the cylinder of [Pg.170]

The search for biomarkers is not a simple task. There are several challenges in the pathway to a validated biomarker. These challenges include sample availability, the use of large numbers of samples for validation, technical issues in experimental design, assay development, the necessity for bioinformatics, and asking the right question so that the results of the experiment will have meaning and be of practical use in the clinic. [Pg.507]

The recent advancements in separation technology, mass spectrometry, and informatics for the biological sciences have been very useful in accomplishing this [Pg.507]

However, biomarker discovery is one of the most difficult types of projects in biology. This is partly due to the level of complexity and inherent inconsistencies that are present in biological systems [7]. In addition, the pathology of a disease is rarely simple and there can be closely related conditions that complicate the diagnosis and thus the search for biomarkers. [Pg.508]

To this end it is important to take a multidisciplinary approach to biomarker discovery. There are a number of difficult tasks to be accomplished in the process of biomarker discovery, each requiring expertise in a different field: separation technology, medicine, pathology of the disease, the chemistry of the type of molecule that is the target of the study, and statistical analysis. The conclusion from these facts is that a multidisciplinary approach to biomarker discovery is necessary. Once the group is assembled, it is then vital that members of the group listen to each other about the capabilities and weaknesses in each area of a project before starting the actual work. [Pg.508]

Adsorption is actually the first step of a heterogeneous chemical reaction process and occurs on flat surfaces, porous solids, or smooth planes and films. Understanding this process is crucial to explaining the activity and selectivity of a chemical reaction. It is a well-studied phenomenon, but initially it was interpreted differently. Berzelius [1] was one of the first to note that adsorption is a process where surface tension causes the condensation of gases in pores. He showed that the vapor pressure in a small drop is much larger than in the bulk fluid and proposed the following relation: [Pg.27]
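The relation itself is not reproduced in this excerpt. For orientation only, the expression usually quoted for the enhanced vapour pressure over a small drop is the Kelvin equation, given here as the standard form rather than necessarily the exact form intended by the text:

```latex
\ln\frac{p_r}{p_\infty} \;=\; \frac{2\,\gamma\,V_m}{r\,R\,T}
```

where p_r is the vapour pressure over a drop of radius r, p_∞ the vapour pressure over a flat surface, γ the surface tension and V_m the molar volume of the liquid; for a concave meniscus in a pore the sign of the right-hand side reverses, which is what drives capillary condensation.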

This equation is commonly used to calculate the condensation of fluids in capillary pores. However, it is a limited equation, depending on the properties of the different fluids and solids, mainly on whether or not the solid is porous. In fact, it was observed that the amount of adsorbed gas per unit mass varied significantly. [Pg.27]

Besides, there is a difference between the adsorption and absorption phenomena. In the first case, the gas is bound directly at the surface, while in the second case it is dissolved in the bulk. If the mass of absorbent doubles, then the amount of absorbed [Pg.27]

The nature of the gas and the surface properties influence the adsorption process. Thus, for example, H2 and CO are adsorbed in different forms on metals or on oxides of Pt or Co, whether on flat or porous solid surfaces, as reported in several experiments [1]. The advanced knowledge of surface science has improved the observation of adsorption processes, whether of a physical or chemical nature, which depend on the adsorbed molecule and on the geometry and form of adsorption. Thus, for example, benzene and ethylene may be adsorbed on surfaces in different forms, horizontally or vertically and in the σ or π form, respectively, which may influence the amount of adsorption on the surface. [Pg.28]

Generally, one can say that a gas can be adsorbed forming one layer (monolayer) or several layers, which are of a physical or chemical nature, while liquids are usually condensed at the surface or in capillary pores. The nature depends on the binding energy between the gas or fluid (adsorbate) and the surface (adsorbent) when: [Pg.28]
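For the monolayer case, the simplest quantitative picture is a single-site isotherm; the sketch below evaluates the Langmuir form, offered only as an illustration of monolayer coverage and not as a model discussed in the excerpt. The equilibrium constant is an arbitrary placeholder.

```python
def langmuir_coverage(p, k_ads):
    """Fractional monolayer coverage from the Langmuir isotherm,
    theta = K*p / (1 + K*p).  K (k_ads) is an illustrative placeholder."""
    return k_ads * p / (1.0 + k_ads * p)

for p_bar in (0.01, 0.1, 1.0, 10.0):
    print(f"p = {p_bar:5.2f} bar -> theta = {langmuir_coverage(p_bar, 2.0):.3f}")
```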

Figure 7.1 Atomic rearrangements that accompany the motion of an edge dislocation as it moves in response to an applied shear stress. (a) The extra half-plane of atoms is labeled A. (b) The dislocation moves one atomic distance to the right as A links up to the lower portion of plane B; in the process, the upper portion of B becomes the extra half-plane. (c) A step forms on the surface of the crystal as the extra half-plane exits. [Pg.218]

Moffatt, and J. Wulff, The Structure and Properties of Materials, Vol. III, Mechanical Behavior, p. 70. Copyright 1965 by John Wiley & Sons, New York.) [Pg.219]

Generally speaking, we can divide liquid crystalline phases into two distinctly different types: the ordered and the disordered. For the ordered phase, the theoretical framework invoked for describing the physical properties of liquid crystals is closer in form to that pertaining to solids; it is often called elastic continuum theory. In this case various terms and definitions typical of solid materials (e.g., elastic constant, distortion energy, torque, etc.) are commonly used. Nevertheless, the interesting fact about liquid crystals is that in such an ordered phase they still possess many properties typical of liquids. In particular, they flow like liquids and thus require hydrodynamical theories for their complete description. These are explained in further detail in the next chapter. [Pg.22]

Liquid crystals in the disordered or isotropic phase behave very much like ordinary fluids of anisotropic molecules. They can thus be described by theories pertaining to anisotropic fluids. There is, however, one important difference. [Pg.22]

Near the isotropic-nematic phase transition temperature, liquid crystals exhibit some highly correlated pretransitional effects. In general, the molecules become highly susceptible to external fields, and their responses tend to slow down considerably. [Pg.22]

In the next few sections we introduce some basic concepts and definitions, such as order parameter, short- and long-range order, phase transition, and so on, which form the basis for describing the ordered and disordered phases of liquid crystals. [Pg.22]

When a voltage is applied to a polymer fiber, the current can be carried by either electrons or ions. The conductivity associated with the movement of electrons is called electronic conductivity, while the conductivity caused by the movement of ionic species is called ionic conductivity. Both electronic and ionic conductivities contribute to the so-called electrical conductivity. [Pg.367]

In theory, the overall conductivity (σ) of a polymer fiber is governed by the following equation: [Pg.368]

In practice, it is difficult to calculate the conductivity of polymer fibers by using Equation 18.1. The conductivity of polymer fibers can instead be obtained by carrying out an electrical resistance measurement. According to Ohm's law, the electrical resistance (R) of a fiber can be obtained by: [Pg.369]
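From such a resistance measurement, the conductivity follows from the sample geometry via R = L/(σA), i.e. σ = L/(R·A). The sketch below shows this conversion; the fiber length, diameter and resistance reading are hypothetical values used only for illustration.

```python
import math

def fiber_conductivity(resistance_ohm, length_m, diameter_m):
    """Electrical conductivity (S/m) of a fiber from a two-point resistance
    measurement, using R = L / (sigma * A)  =>  sigma = L / (R * A)."""
    area = math.pi * (diameter_m / 2.0) ** 2   # circular cross-section assumed
    return length_m / (resistance_ohm * area)

# Hypothetical measurement: 1 cm of a 50-micrometre fiber reading 2.0 Mohm.
print(f"{fiber_conductivity(2.0e6, 0.01, 50e-6):.3e} S/m")
```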

In a solid, the nuclei and the electrons in perpetual motion are the sources of an intense electromagnetic field, varying very rapidly in space as well as in time. The calculation of this field is all the more difficult since, at the atomic level, only a quantum description is appropriate. The traditional approach developed by Maxwell consists of dividing the sample into regions that are small with respect to the macroscopic scale but large on the atomic scale, of a few hundred cells approximately. The Maxwell field is an average carried out over such a region and also in time. We can [Pg.414]

The potential created at a point R by one of these regions is given by  [Pg.415]

Imposing that each region be electrically neutral, the basic contribution is dipolar in nature. The polarization density P = Σi pi / V is the dipolar moment per unit volume. [Pg.415]

Considering the fact that a region can be assimilated to a point on the macroscopic scale, the polarization density may vary in space. In the following, it will be assumed to be uniform. In the absence of an applied electric field, the polarization density is most often zero because of the cancellation of the contributions of all electric charges. If not, the polarization density is said to be spontaneous. With application of an external electric field, an induced polarization density is established. [Pg.415]

Spontaneous polarization must be invariant under all the symmetry operations of the crystal. A non-zero polarization density exists only for 10 of the 32 point-symmetry groups. Such crystals are referred to as pyroelectric (see Table 11.8), since the polarization density is a function of temperature. [Pg.415]


The detectability of critical defects with CT depends on the final image quality and the skill of the operator, see Figure 2. The basic concepts of image quality are resolution, contrast, and noise. Image quality is generally described by the signal-to-noise ratio (SNR), the modulation transfer function (MTF) and the noise power spectrum (NPS). SNR is the quotient of a signal and its variance, MTF describes the contrast as a function of spatial frequency and NPS in turn describes the noise power at various spatial frequencies [1, 3]. [Pg.209]
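As a small illustration of the SNR idea, the sketch below measures it on a synthetic uniform image region, using the common working definition of mean signal divided by its standard deviation; the region size, noise level and use of NumPy are assumptions for the example, not details from the text.

```python
import numpy as np

def snr(region):
    """Signal-to-noise ratio of a uniform image region: mean signal divided
    by its standard deviation (one common working definition)."""
    region = np.asarray(region, dtype=float)
    return region.mean() / region.std(ddof=1)

# Hypothetical flat region of a CT slice with additive Gaussian noise.
rng = np.random.default_rng(0)
flat_region = 1000.0 + rng.normal(0.0, 25.0, size=(64, 64))
print(f"SNR ~ {snr(flat_region):.1f}")
```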

Benninghoven A, Rudenauer F G and Werner H W 1987 Secondary Ion Mass Spectrometry: Basic Concepts, Instrumental Aspects, Applications, and Trends (New York: Wiley)... [Pg.319]

In this chapter we shall first outline the basic concepts of the various mechanisms for energy redistribution, followed by a very brief overview of collisional intermolecular energy transfer in chemical reaction systems. The main part of this chapter deals with true intramolecular energy transfer in polyatomic molecules, which is a topic of particular current importance. Stress is placed on basic ideas and concepts. It is not the aim of this chapter to review in detail the vast literature on this topic; we refer to some of the key reviews and books [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 and 32] and the literature cited therein. These cover a variety of aspects of the topic, and further, more detailed references will be given throughout this review. We should mention here the energy transfer processes which are of fundamental importance but are beyond the scope of this review, such as electronic energy transfer by mechanisms of the Förster type [33, 34] and related processes. [Pg.1046]

A 3.13.2 BASIC CONCEPTS FOR INTER- AND INTRAMOLECULAR ENERGY TRANSFER... [Pg.1046]

Having introduced the basic concepts and equations for various energy redistribution processes, we will now... [Pg.1049]

The present article reviews basic concepts of semiconductor physics and devices with emphasis on current problems. Further details can be found in the references. [Pg.2877]

P. Deuflhard, M. Dellnitz, O. Junge, and Ch. Schütte. Computation of essential molecular dynamics by subdivision techniques I: Basic concept. Preprint SC 96-45, Konrad Zuse Zentrum, Berlin (1996)... [Pg.115]

This section describes briefly some of the basic concepts and methods of automatic 3D model builders. However, interested readers are referred to Chapter II, Section 7.1 in the Handbook, where a more detailed description of the approaches to automatic 3D structure generation and the developed program systems is given. [Pg.96]

In this section, the basic concepts of reaction retrieval are explained. The first example is concerned with finding an efficient way to reduce a 3-methylcyclohex-2-enone derivative to the corresponding 3-methylcyclohex-2-enol compound (see Figure 5-24). As this is a conventional organic reaction, the CIRX database should contain valuable information on how to synthesize this product easily. [Pg.264]

To understand the basic concepts of force field calculations... [Pg.319]

To have an overview of the algorithms and basic concepts used to perform molecular dynamics simulations... [Pg.319]

The concept of feature trees as molecular descriptors was introduced by Rarey and Dixon [12]. A similarity value for two molecules can be calculated, based on molecular profiles and a rough mapping. In this section only the basic concepts are described. More detailed information is available in Ref. [12]. [Pg.411]

Development of weighted residual finite element schemes that can yield stable solutions for hyperbolic partial differential equations has been the subject of a considerable amount of research. The most successful outcome of these attempts is the development of the streamline upwinding technique by Brooks and Hughes (1982). The basic concept in the streamline upwinding is to modify the weighting function in the Galerkin scheme as... [Pg.54]
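The expression that follows "as..." is not shown in this excerpt. A common textbook form of the streamline-upwind (Petrov-Galerkin) modification, offered here as a sketch rather than as the exact equation of Brooks and Hughes, replaces the Galerkin weight W_i by

```latex
\tilde{W}_i \;=\; W_i \;+\; \frac{\alpha h}{2}\,\frac{\mathbf{u}}{\lVert \mathbf{u} \rVert}\cdot\nabla W_i
```

where u is the local velocity, h a characteristic element length and α an upwinding parameter; the perturbation acts only along the streamline direction, which is what stabilizes the hyperbolic terms.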

To illustrate the basic concepts described in this section we consider the following worked example. [Pg.55]

To describe the basic concept of the Gaussian elimination method we consider the following system of simultaneous algebraic equations... [Pg.200]
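Since the system of equations itself is not reproduced in this excerpt, the sketch below applies the method to a small hypothetical 3x3 system; it implements the textbook procedure of forward elimination with partial pivoting followed by back substitution.

```python
def gauss_solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    a is a list of rows, b the right-hand-side list; both are copied."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    # Forward elimination
    for k in range(n - 1):
        # Pivot: bring the largest |a[i][k]| to row k for numerical stability.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

print(gauss_solve([[ 2.0,  1.0, -1.0],
                   [-3.0, -1.0,  2.0],
                   [-2.0,  1.0,  2.0]],
                  [8.0, -11.0, -3.0]))   # [2.0, 3.0, -1.0]
```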

Recent years have witnessed an increase in the number of people using computational chemistry. Many of these newcomers are part-time theoreticians who work on other aspects of chemistry the rest of the time. This increase has been facilitated by the development of computer software that is increasingly easy to use. It is now so easy to do computational chemistry that calculations can be performed with no knowledge of the underlying principles. As a result, many people do not understand even the most basic concepts involved in a calculation. Their work, as a result, is largely unfocused and often third-rate. [Pg.1]

The section on applications examines the same techniques from the standpoint of the type of chemical system. A number of techniques applicable to biomolecular work are mentioned, but not covered at the level of detail presented throughout the rest of the book. Likewise, we only provide an introduction to the techniques applicable to modeling polymers, liquids, and solids. Again, our aim was to not repeat in unnecessary detail information contained elsewhere in the book, but to only include the basic concepts needed for an understanding of the subjects involved. [Pg.397]

Another objective is to discuss briefly recent and major trends in the field of methine dye color. Indeed, because of its relatively simple structure, the thiazole ring has been chosen in the past for studying color-structure relations. Using Brooker's basicity concepts (5), numerous valuable attempts in different countries succeeded in establishing semiempirical rules for explaining the effects of structural changes on color. [Pg.24]

Benninghoven, A., Rudenauer, F.G., and Werner, H.W., Secondary Ion Mass Spectrometry Basic Concepts, Instrumental Aspects, Applications and Trends, Wiley, New York, 1987. [Pg.449]

The words "basic concepts" in the title define what I mean by "fundamental." This is the primary emphasis in this presentation. Practical applications of polymers are cited frequently (after all, it is these applications that make polymers such an important class of chemicals), but in overall content the stress is on fundamental principles. "Foundational" might be another way to describe this. I have not attempted to cover all aspects of polymer science, but the topics that have been discussed lay the foundation, built on the bedrock of organic and physical chemistry, from which virtually all aspects of the subject are developed. There is an enormous literature in polymer science; this book is intended to bridge the gap between the typical undergraduate background in polymers, which frequently amounts to little more than occasional "relevant" examples in other courses, and the professional literature on the subject. [Pg.726]

According to these basic concepts, molecular recognition implies a complementary, lock-and-key type fit between molecules. The lock is the molecular receptor and the key is the substrate that is recognised and selected to give a defined receptor-substrate complex, a coordination compound or a supermolecule. Hence molecular recognition is one of the three main pillars, fixation, coordination, and recognition, that lay the foundation of what is now called supramolecular chemistry (8-11). [Pg.174]

Formaldehyde polymers have been known for some time (1), and early investigations of formaldehyde polymerization contributed significantly to the development of several basic concepts of polymer science (2). Polymers of higher aliphatic homologues of formaldehyde are also well known (3) and frequently referred to as aldehyde polymers (4). Some have curious properties, but none are commercially important. [Pg.56]

The processes used commercially for the manufacture of film and sheeting materials are generally similar in basic concept, but variations in equipment or process conditions are used to optimize output for each type of film or sheeting material. The nature of the polymer to be used, its formulation with plasticizers (qv), fillers (qv), flow modifiers, stabilizers, and other modifiers, as well as its molecular weight and distribution, are all critical to the... [Pg.378]

The basic concepts of a gas-fluidized bed are illustrated in Figure 1. Gas velocity in fluidized beds is normally expressed as a superficial velocity, U, the gas velocity through the vessel assuming that the vessel is empty. At a low gas velocity, the solids do not move; this constitutes a packed bed. As the gas velocity is increased, the pressure drop increases until the drag plus the buoyancy forces on the particle overcome its weight and any interparticle forces. At this point, the bed is said to be minimally fluidized, and this gas velocity is termed the minimum fluidization velocity. The bed expands slightly at this condition, and the particles are free to move about (Fig. 1b). As the velocity is increased further, bubbles can form. The solids movement is more turbulent, and the bed expands to accommodate the volume of the bubbles. [Pg.69]
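A common back-of-the-envelope estimate of the minimum fluidization velocity, not taken from this text, is the Wen and Yu correlation; the sketch below evaluates it for hypothetical sand-in-air values chosen only for illustration.

```python
def u_mf_wen_yu(dp_m, rho_p, rho_g, mu_g, g=9.81):
    """Minimum fluidization velocity (m/s) from the Wen and Yu correlation,
    Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7, with the Archimedes number
    Ar = rho_g*(rho_p - rho_g)*g*dp^3 / mu^2.  Offered only as a common
    estimate; it is not a correlation discussed in the excerpt."""
    ar = rho_g * (rho_p - rho_g) * g * dp_m**3 / mu_g**2
    re_mf = (33.7**2 + 0.0408 * ar) ** 0.5 - 33.7
    return re_mf * mu_g / (rho_g * dp_m)

# Hypothetical 300-micrometre sand particles fluidized with ambient air.
print(f"U_mf ~ {u_mf_wen_yu(300e-6, 2600.0, 1.2, 1.8e-5):.3f} m/s")
```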

Estimation of the influence of end groups by using their topological indexes, Oq and E, has been proposed. The first parameter, Oq, characterizes the shift of the MO modes and the level positions of a PMD containing end groups relative to unsubstituted polymethines. Thus it corresponds to the end-group basicity concept (7). The parameter Oq was found to be related directly to the electron donor ability 1 = lim ( q. The other index, E,... [Pg.491]

In order to operate a process facility in a safe and efficient manner, it is essential to be able to control the process at a desired state or sequence of states. This goal is usually achieved by implementing control strategies on a broad array of hardware and software. The state of a process is characterized by specific values for a relevant set of variables, e.g., temperatures, flows, pressures, compositions, etc. Both external and internal conditions, classified as uncontrollable or controllable, affect the state. Controllable conditions may be further classified as controlled, manipulated, or not controlled. Excellent overviews of the basic concepts of process control are available (1-6). [Pg.60]


See other pages where Basic concept is mentioned: [Pg.759]    [Pg.1519]    [Pg.2467]    [Pg.98]    [Pg.327]    [Pg.205]    [Pg.75]    [Pg.71]    [Pg.327]    [Pg.492]    [Pg.157]    [Pg.157]    [Pg.146]    [Pg.154]    [Pg.227]    [Pg.513]    [Pg.551]    [Pg.87]    [Pg.244]   

A Review of Basic Bonding Concepts

Activity coefficients basic concepts

Activity concentrations basic concepts

Activity-based costing basic concepts

Alumina basic concepts

An Overview and Basic Scientific Concepts

An Overview of Some Basic Concepts in Catalysis

Animal cells basic concepts

Assay basic concepts

BASIC CHEMICAL CONCEPTS

BASIC CONCEPT AND ADVANTAGES

BASIC CONCEPTS IN PROBABILITY THEORY

BASIC CONCEPTS OF CHEMICAL BONDING

BASIC CONCEPTS OF EOPS

BASIC CONCEPTS OF THE PLASMA

Bacteria basic concepts

Basic Attic Method Concepts

Basic Concept and Experimental Realization

Basic Concept and Processing Modes of Crystallization

Basic Concept of Orbital-Dependent Functionals

Basic Concept of Rate

Basic Concept of Spin Decoupling

Basic Concepts Piezo-, Pyro- and Ferroelectricity

Basic Concepts and Calculations

Basic Concepts and Definitions

Basic Concepts and Mixing Mechanisms

Basic Concepts and Models

Basic Concepts and Process Variables

Basic Concepts and Processes

Basic Concepts and Properties

Basic Concepts and Terminology

Basic Concepts and Terminology Used to Describe the Combined Action of Chemicals in Mixtures

Basic Concepts in Chemical Kinetics—Determination of the Reaction Rate Expression

Basic Concepts in Supramolecular Chemistry

Basic Concepts of Capillary Electrophoresis

Basic Concepts of Chemical Kinetics

Basic Concepts of Chemistry

Basic Concepts of Coagulation

Basic Concepts of Continuation Methods

Basic Concepts of Distillation

Basic Concepts of Electrochemistry

Basic Concepts of Expert Systems

Basic Concepts of Fibrinolysis

Basic Concepts of Fuzzy Sets

Basic Concepts of HPLC

Basic Concepts of Kinetic Theory

Basic Concepts of Mechanics

Basic Concepts of Microwave Chemistry

Basic Concepts of Molecular Interaction Energy Values

Basic Concepts of Molecular Symmetry Character Tables

Basic Concepts of Nuclear Shieldings and Chemical Shifts

Basic Concepts of Poverty and Social Risk Management

Basic Concepts of Quantum Mechanics

Basic Concepts of Targeting

Basic Concepts of Toxicology

Basic Concepts of Triglyceride Transport

Basic Concepts of the Process

Basic Electrocatalytic Concepts

Basic Kinetic Concepts and Situations

Basic Mass-Transfer Concepts

Basic Mathematical Concepts

Basic Mechanistic Concepts Kinetic versus Thermodynamic Control, Hammonds Postulate, the Curtin-Hammett Principle

Basic Operations and Number Concepts

Basic Photophysical and Photochemical Concepts

Basic Physical Concepts

Basic Rheological Concepts

Basic Statistical Concepts

Basic Stoichiometric Concepts

Basic Terms and Concepts

Basic Theoretical Concepts

Basic Theories and Concepts

Basic concept of the thermal

Basic concept of the thermal explosion theory

Basic concepts and methods

Basic concepts of amorphous semiconductors

Basic concepts of calorimetry

Basic concepts of coherence

Basic concepts of individual processes

Basic concepts of potential energy surfaces

Basic concepts of rheological behaviour

Basic concepts of signal processing

Basic function Lewis concept

Basic function protonic concept

Basic structural unit concept

Basic terms and concepts in vacuum technology

Basicity equilibrium concept

Basicity, concept

Basicity, concept groups

Bonding I Basic Concepts

Calorimetry, basic concepts

Capillarity basic concepts

Carbonate minerals basic concepts

Catalysis basic concepts

Chemical Bonding I Basic Concepts

Chemical bonding, basic concepts

Chemical bonding, basic concepts Lewis structures

Chemical bonding, basic concepts compounds

Chemical sensors basic concepts

Chemical shift basic concepts

Chirality basic concepts

Chromatography basic concepts

Column chromatography basic concepts

Concentration units basic concepts

Coordination chemistry basic concepts

Corrosion basic concepts

Crystallization: Basic Concepts and Industrial Applications, First Edition. Edited by Wolfgang Beckmann

Decoupling basic concept

Dependable computing basic concepts

Design concept, basic

Development basic concepts

Dose-response relationship basic concepts

Electricity: basic concepts

Electrolyte basic concepts

Electron Spin Resonance - Basic Concepts

Electronic band theory, basic concepts

Emulsion stability basic concepts

Energy metabolism basic concepts

Enzymes basic concepts

Expert systems basic concepts

First-Order ECirre Mechanism Basic Concepts

Fracture Mechanics basic concepts

Galactic chemical evolution basic concepts and issues

High-performance liquid basic concepts

Homogeneous catalysis basic concepts

Interest rate modeling concepts, basic

Introduction and Basic Concepts

Introduction and First Basic Concepts

Introduction to Basic Concepts

Inverse models/modeling basic concepts

Ionizing radiation basic concepts

Knowledge of basic statistical concepts

Lattice energies Some basic concepts

Linear systems approach basic concepts

Liquid chromatography basic concepts

Liquid crystals basic concepts

Magnetism basic concepts

Managed care basic concepts

Mechanics, basic concepts

Mechanistic analysis, perspectives in modern voltammetry: basic concepts

Mechanistic analysis, perspectives in modern voltammetry: basic concepts and

Micro basic concepts

Microwave basic concepts

Molecular magnets basic concepts

NUCLEAR MAGNETIC RESONANCE SPECTROSCOPY PART ONE BASIC CONCEPTS

Neural networks basic concepts

Nonlinear optics basic concepts

Optical basic concept

Optical techniques basic concepts

Orbital symmetry basic concept

Origin and Basic Concepts

Percolation in Electroactive Polymers Basic Concepts

Pharmacology basic concepts

Phase Transitions Basic Concepts

Plasma basic concepts

Potential energy surface basic concepts

Presentation of the basic concepts faults, errors and failures

Product Life Cycle — The Basic Concept

Radiation, basic concepts

Radiation, basic concepts emission

Radiation, basic concepts excitation energy

Radiation, basic concepts particles

Recycling basic concepts

Review of Basic Concepts and Terminology

Rheology basic concepts

Rubber elasticity: basic concepts and

Rubber elasticity: basic concepts and behavior

Secure computing basic concepts

Solubility basic concepts

Solutions - Basic Definitions and Concepts

Some Basic Concepts

Some Basic Concepts in Photoconductivity

Some Basic Definitions and Concepts

Statistics basic concepts

Stereochemistry basic concepts

Strength basic concepts

Structural reaction injection molding basic concepts

Supramolecular basic concepts

The Basic Concept of Single-Train Plants

The Basic Vocabulary and Concepts of Light Scattering

The basic concepts

The molecular beam method basic concepts and examples of bimolecular reaction studies

Thermodynamic basic concepts

Thermodynamics basic concepts

Thermodynamics basics free energy concept

Transition dipole moment basic concepts

Transition state theory basic concepts

Two basic concept

Unified theory basic concepts

Valence bond theory basic concepts

Virtual screening basic concepts

Voltammetry, perspectives in modern: basic concepts and mechanistic analysis
