Big Chemical Encyclopedia

Models theory behind

The NMR experimental methods for studying chemical exchange are all fairly routine experiments, used in many other NMR contexts. To interpret these results, a numerical model of the exchange, as a function of rate, is fitted to the experimental data. It is therefore necessary to look at the theory behind the effects of chemical exchange. Much of the theory is developed for intermediate exchange, and this is the most complex case. However, with this theory, all of the rest of chemical exchange can be understood. [Pg.2092]
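As a minimal sketch of the intermediate-exchange regime described above (not taken from the source excerpt), the standard coalescence condition for two equally populated sites separated by Δν can be computed as follows; the 100 Hz separation is an illustrative value:

```python
import math

def coalescence_rate(delta_nu_hz):
    """Exchange rate (s^-1) at which two equally populated NMR lines
    separated by delta_nu_hz (in Hz) merge into one broad line:
    k_c = pi * delta_nu / sqrt(2)."""
    return math.pi * delta_nu_hz / math.sqrt(2.0)

# two lines 100 Hz apart coalesce near k ~ 222 s^-1
print(round(coalescence_rate(100.0), 1))
```

Below k_c the two resonances are resolved (slow exchange); above it a single averaged line progressively narrows (fast exchange).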

In the sections below, we describe several studies in which flat-histogram methods were used to examine phase equilibria in model systems. The discussion assumes the reader is familiar with this general family of techniques and the theory behind them, so it may be useful to consult the material in Chap. 3 for background reference. Although the examples provided here entail specific studies, their general form and the principles behind them serve as useful templates for using flat-histogram methods in novel phase equilibria calculations. [Pg.372]

The theory behind molecular vibrations is a science of its own, involving highly complex mathematical models and abstract theories, and it fills entire books. In practice, almost none of that is needed for building or using vibrational spectroscopic sensors. The simple, classical mechanical analogue of mass points connected by springs is more than adequate. [Pg.119]
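The mass-and-spring analogue mentioned above can be sketched for a diatomic oscillator; the CO force constant below is a textbook-style illustrative value, not a quantity from the source:

```python
import math

AMU = 1.66053906660e-27   # atomic mass constant, kg
C_CM = 2.99792458e10      # speed of light, cm/s

def wavenumber(k_force, m1_amu, m2_amu):
    """Harmonic wavenumber (cm^-1) of two masses on a spring:
    nu_tilde = (1 / (2*pi*c)) * sqrt(k / mu), mu = reduced mass."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    return math.sqrt(k_force / mu) / (2.0 * math.pi * C_CM)

# CO with k ~ 1902 N/m gives ~2170 cm^-1, close to the observed CO stretch
print(round(wavenumber(1902.0, 12.0, 15.995)))
```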

XPS is among the most frequently used techniques in catalysis. It yields information on the elemental composition, the oxidation state of the elements and, in favorable cases, on the dispersion of one phase over another. When working with flat layered samples, depth-selective information is obtained by varying the angle between the sample surface and the analyzer. Several excellent books on XPS are available [5,8,17-20]. In this section we first describe briefly the theory behind XPS, then the instrumentation, and finally we illustrate the type of information that XPS offers about catalysts and model systems. [Pg.54]
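The core relation behind XPS is the photoelectric energy balance, E_B = hν − E_kin − φ. A minimal sketch, with the kinetic energy and spectrometer work function below chosen as illustrative values:

```python
def binding_energy(hv_ev, e_kin_ev, work_function_ev=4.5):
    """Photoelectron binding energy (eV): E_B = hv - E_kin - phi,
    where phi is the spectrometer work function (illustrative default)."""
    return hv_ev - e_kin_ev - work_function_ev

# Al K-alpha excitation (1486.6 eV); a photoelectron measured at 1402.0 eV
print(round(binding_energy(1486.6, 1402.0), 1))  # 80.1
```

The binding energy, referenced against tabulated values, identifies the element and its oxidation state.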

This chapter is intended to provide a basic understanding of the effect of an electric field on reactivity descriptors and of its application. Section 25.2 will focus on the definitions of the reactivity descriptors used to understand chemical reactivity, along with the local hard-soft acid-base (HSAB) semiquantitative model for calculating interaction energy. In Section 25.3, we discuss specifically the theory behind the effects of an external electric field on reactivity descriptors. Some numerical results are presented in Section 25.4. In Section 25.5, we discuss work describing the effect of other perturbation parameters. In Section 25.6, we present our conclusions and prospects. [Pg.364]

The theory behind every measurement method can be generalised by Eq. (1) [1]. Some quantity (or quantities, the measurands) is measured that has a specific relationship to the sought quantity. The measurand can be regarded as a stochastic variable associated with an uncertainty, which implies that the sought quantity is also a random variable. The mathematical relationship depends on the physical model, that is, the model of the physical phenomenon of interest, for example temperature, pressure, or volume flow. The physical model always includes limitations, which implies that the measurement method has restrictions; that is, it will only function within a certain measuring range and under the assumptions of the model. [Pg.50]
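Eq. (1) itself is not reproduced in the excerpt, but the idea that an uncertain measurand propagates into an uncertain sought quantity can be sketched with generic first-order (Gaussian) uncertainty propagation; the ideal-gas example and its uncertainties are illustrative assumptions:

```python
def propagate(f, x, sigma, h=1e-6):
    """First-order uncertainty propagation by central differences:
    sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2."""
    y0 = f(*x)
    var = 0.0
    for i, (xi, si) in enumerate(zip(x, sigma)):
        step = h * max(abs(xi), 1.0)
        xp = list(x); xp[i] = xi + step
        xm = list(x); xm[i] = xi - step
        deriv = (f(*xp) - f(*xm)) / (2.0 * step)
        var += (deriv * si) ** 2
    return y0, var ** 0.5

# sought quantity n = P*V/(R*T) from measured pressure, volume, temperature
R = 8.314
n, sn = propagate(lambda P, V, T: P * V / (R * T),
                  x=[101325.0, 1.0e-3, 298.15],
                  sigma=[200.0, 2.0e-6, 0.5])
print(round(n, 5), round(sn, 5))
```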

The theory behind the method used to measure the burning rate was not explicitly presented by Lamb et al. Obviously, the mathematical model must be based on bed properties such as packing ratio, loading density, and bed height, as well as on the trolley speed. No discussion of the method's limitations and assumptions is presented. [Pg.57]

Modern quantitative spectroscopy of hot stars has two aspects: the analysis of photospheric lines and of stellar wind lines. The first is by now established as an almost classical tool to determine stellar parameters. NLTE model atmosphere and line formation calculations yield Teff, log g and abundances with high precision (see recent reviews by Husfeld (this meeting), Kudritzki (1987), Kudritzki and Hummer (1986), Kudritzki (1985)). The second aspect, however, the quantitative analysis of stellar wind lines, is still at its very beginning. For a long time the stellar wind lines were used only to determine mass-loss rates M and terminal velocities v. While these studies were pioneering and of enormous importance, it was also clear that only very approximate calculations were done with respect to NLTE ionization and excitation and to the radiative transfer in stellar winds. Thus, stellar wind lines could be used only in a qualitative, comparative sense, with no theory behind them that allowed the determination of precise and reliable numbers. [Pg.114]

The theories behind continuum solvation models have been presented extensively in various reviews [1-4] and in other contributions to this book, so we do not repeat them here, focusing instead on their application to calculations of the NMR... [Pg.130]

The Avrami model (19,20) states that in a given system under isothermal conditions at a temperature lower than the melting temperature, the degree of crystallinity or fractional crystallization (F) as a function of time (t) (Fig. 11) is described by Equation 5. Although the theory behind this model was developed for perfect crystalline bodies like most polymers, the Avrami model has been used to describe TAG crystallization in simple and complex models (5,9,13,21,22). Thus, the classical Avrami sigmoidal behavior in a plot of F against crystallization time is also observed in TAG crystallization in vegetable oils. This crystallization behavior consists of an induction period for crystallization, followed by an increase of the F value associated with the acceleration in the rate of volume or mass production of crystals, until finally a metastable crystallization plateau is reached (Fig. 11). [Pg.69]
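Equation 5 is not reproduced in the excerpt; in its commonly cited form the Avrami expression is F(t) = 1 − exp(−k·tⁿ). A minimal sketch, with illustrative rate constant k and exponent n (not values from the source), reproducing the induction/acceleration/plateau shape described above:

```python
import math

def avrami(t, k, n):
    """Fractional crystallization F(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

def half_time(k, n):
    """Time at which F = 0.5: t_half = (ln 2 / k) ** (1/n)."""
    return (math.log(2.0) / k) ** (1.0 / n)

k, n = 1e-4, 3.0  # illustrative constants
for t in (5.0, 15.0, 25.0, 40.0):   # induction, acceleration, plateau
    print(t, round(avrami(t, k, n), 3))
print(round(half_time(k, n), 1))
```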

Changes in moisture content affect charged species in foods that are not part of the chemical equation, but that may impart their own effects upon reaction rate. Reactions that involve proton and electron transport, which include hydrolysis, Maillard browning, oxidation, and almost every critical shelf-life-limiting reaction in foods, will be affected by the presence of ions. This is part of the theory behind the Debye-Hückel equation. This model describes the effect of ionic strength on the reaction rate constant in dilute solutions ... [Pg.364]
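The equation itself is elided in the excerpt; the standard Debye-Hückel limiting-law form of the primary kinetic salt effect, log10(k/k0) = 2·A·zA·zB·√I, can be sketched as follows (whether the source uses exactly this form is an assumption; the 0.01 m NaCl example is illustrative):

```python
import math

A_25C = 0.509  # Debye-Huckel constant for water at 25 C

def ionic_strength(conc_z):
    """I = 1/2 * sum(c_i * z_i**2) over (concentration, charge) pairs."""
    return 0.5 * sum(c * z * z for c, z in conc_z)

def rate_constant(k0, zA, zB, I):
    """Primary kinetic salt effect (limiting law, dilute solution only):
    log10(k / k0) = 2 * A * zA * zB * sqrt(I)."""
    return k0 * 10.0 ** (2.0 * A_25C * zA * zB * math.sqrt(I))

# 0.01 m NaCl: like-charged reactants (+1, +1) speed up as I rises
I = ionic_strength([(0.01, 1), (0.01, -1)])
print(round(rate_constant(1.0, 1, 1, I), 3))
```

For oppositely charged reactants the exponent is negative, so added salt slows the reaction instead.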

Once a dose metric is selected and estimated, a dose extrapolation model can be applied to estimate cancer risk. The choice of the model will be driven by the likely mechanism of action of the chemical or agent. For example, if the substance is a genotoxic material, such as radiation, a linear model would be used. A threshold model or nonlinear model might be used if the chemical or agent is not genotoxic (Paustenbach 2002; Williams and Paustenbach 2002). The general theory behind both models is discussed below. [Pg.768]
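The contrast between the two model families can be sketched in minimal form; the slope and threshold values below are illustrative placeholders, not parameters from the source:

```python
def linear_risk(dose, slope):
    """Linear no-threshold model: excess risk proportional to dose."""
    return slope * dose

def threshold_risk(dose, slope, threshold):
    """Threshold model: no excess risk below the threshold dose."""
    return slope * max(0.0, dose - threshold)

# at a dose below the threshold, only the linear model predicts excess risk
print(linear_risk(2.0, 1e-3))
print(threshold_risk(2.0, 1e-3, 5.0))
print(threshold_risk(8.0, 1e-3, 5.0))
```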

In the last chapter, the theory behind nonlinear mixed effects models was introduced. In this chapter, practical issues related to nonlinear mixed effects modeling will be introduced. Due to space considerations not all topics will be given the coverage they deserve, e.g., handling missing data. What is intended is a broad coverage of problems and issues routinely encountered in actual population pharmacokinetics (PopPK) analyses. The reader is referred to the original source material and references for further details. [Pg.267]

The theory behind linear viscoelasticity is simple and appealing. It is important to realize, however, that the applicability of the model for fluoropolymers is restricted to strains below the yield strain. One example comparing predictions based on linear viscoelasticity and experimental data for PTFE with 15 vol% glass fiber in the very small strain regime is shown in Fig. 11.4. [Pg.364]
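The small-strain restriction above can be illustrated with the simplest linear viscoelastic element; this one-element Maxwell sketch, with illustrative modulus and relaxation time, is not the chapter's actual fluoropolymer model:

```python
import math

def maxwell_stress(t, strain, modulus, tau):
    """Stress relaxation of a single Maxwell element after a step strain:
    sigma(t) = strain * E * exp(-t / tau). Linear viscoelasticity holds
    only for strains below the yield strain."""
    return strain * modulus * math.exp(-t / tau)

# 1% step strain, E = 500 MPa, relaxation time 10 s (illustrative)
for t in (0.0, 10.0, 30.0):
    print(t, round(maxwell_stress(t, 0.01, 500e6, 10.0)))
```

Real polymers are usually represented by a sum of such elements (a Prony series), but each term has this same exponential form.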

In this chapter we first develop some of the theory behind the distribution of trace elements and explain the physical laws used in trace element modelling. Then various methods of displaying trace element data are examined as a prelude to showing how trace elements might be used in identifying geological processes and in testing hypotheses. [Pg.102]
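One of the standard physical laws used in trace element modelling is the batch (equilibrium) melting equation; whether the chapter develops exactly this form is an assumption, and the parameter values below are illustrative:

```python
def batch_melting(c0, D, F):
    """Equilibrium (batch) partial melting:
    C_L = C_0 / (D + F * (1 - D)),
    with D the bulk partition coefficient and F the melt fraction."""
    return c0 / (D + F * (1.0 - D))

# an incompatible element (D << 1) is strongly enriched in small-degree melts
print(round(batch_melting(10.0, 0.01, 0.05), 1))
```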

The many theories behind the various models developed to calculate the solubility of polymers, and to predict the ability of liquids to dissolve them, are described clearly and in great detail by Burke (Burke, 1984). All define a term known as the solubility parameter for liquids and polymers using one or more of the intermolecular force components and represent the parameter in two or three dimensions. Calculating solubility parameters is a mathematically complex process which will not be discussed here. The most widely used method today for predicting whether a polymer is soluble in a liquid was developed by Charles M. Hansen in 1966. Hansen parameters (δ) for solvents and polymers are calculated from the dispersion force component (δd), polar component (δp) and hydrogen bonding component (δh) for each using the formula ... [Pg.96]
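The formula itself is elided in the excerpt; the commonly used Hansen distance is Ra² = 4(Δδd)² + (Δδp)² + (Δδh)², with RED = Ra/R0 < 1 predicting dissolution. A minimal sketch; the solvent/polymer triples and interaction radius below are textbook-style illustrative values, not data from the source:

```python
import math

def hansen_distance(a, b):
    """Hansen distance Ra between two (delta_d, delta_p, delta_h) triples
    in MPa**0.5: Ra**2 = 4*(dd)**2 + (dp)**2 + (dh)**2."""
    (d1, p1, h1), (d2, p2, h2) = a, b
    return math.sqrt(4.0 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

def is_good_solvent(solvent, polymer, R0):
    """RED = Ra / R0 < 1 predicts that the solvent dissolves the polymer."""
    return hansen_distance(solvent, polymer) / R0 < 1.0

acetone = (15.5, 10.4, 7.0)   # illustrative Hansen triple
polymer = (18.6, 10.5, 7.5)   # hypothetical PMMA-like triple
print(round(hansen_distance(acetone, polymer), 2))
print(is_good_solvent(acetone, polymer, 8.6))
```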

Although MD calculations resorting to interatomic potentials have been successful in many instances, the major shortcoming associated with this type of study is the reliance of the quantitative precision of the predicted property upon the accuracy of the empirical potential used to model interatomic interactions when interatomic distances are substantially different from those used to fit the model potential. Ab initio MD circumvents this problem entirely and will play a decisive role in the study of mantle phases under pressure. In the next section we outline the theory behind a new ab initio constant-pressure MD with variable cell shape (VCS). The following section illustrates its use as an efficient structural optimizer for two mantle phases, MgSiO3-perovskite and C2/c enstatite. Although these were 0 K calculations, finite temperature studies are similarly possible, the current limitation being simply computational power. [Pg.41]

The history of, and the theory behind, continuum solvation models have been described exhaustively in many reviews and articles in the past, so we prefer not to repeat them here. In addition, the number of theoretical developments on the one hand, and of numerical applications on the other, is so large and continuously increasing that we shall limit our attention to a brief review of the basic characteristics of those models which have gained wide acceptance and are in use by various research groups. [Pg.479]

