Extrapolation Procedures


This hierarchical extrapolation procedure can save a significant amount of computer time as it avoids a large fraction of the most time-consuming step, namely the exact evaluation of long-range interactions. Here, computational  [c.82]

Although we take a while before eventually casting these theories in forms which are directly applicable to polymers, the final results are highly practical. Throughout the chapter the presentation is aimed toward these eventual applications. We begin by comparing and contrasting the turbidity of solutions which scatter light with the absorbance of solutions which absorb light. We describe the experiments whereby scattering data are collected, and discuss the extrapolation procedures that must be followed to match experimental results with  [c.659]

The extrapolation ω → 0 gives the DC conductivity. A detailed description of this extrapolation procedure will be given elsewhere [15].  [c.278]
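
As a minimal numerical sketch (not the procedure of ref. [15], which is not reproduced here), one might fit the low-frequency conductivity with a low-order polynomial in ω and read off the intercept; the frequencies, data values and fit order below are invented for illustration:

    import numpy as np

    # Hypothetical low-frequency conductivity data; the actual extrapolation
    # procedure of ref. [15] is not given here and may differ in form.
    omega = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # frequency (arbitrary units)
    sigma = np.array([0.92, 0.95, 1.01, 1.10, 1.22])   # AC conductivity (arbitrary units)

    # Fit a low-order polynomial in omega and read off the omega -> 0 intercept
    # as an estimate of the DC conductivity.
    coeffs = np.polyfit(omega, sigma, deg=2)
    sigma_dc = np.polyval(coeffs, 0.0)
    print(f"estimated DC conductivity: {sigma_dc:.3f}")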

There is a rather limited number of methods for obtaining experimental surface energy and free energy values, and many of them are peculiar to special solids or situations. The only general procedure is the rather empirical one of estimating a solid surface tension from that of the liquid. Evidence from a few direct measurements (see Section VII-1A) and from nucleation studies (Section IX-3) suggests that a solid near its melting point generally has a surface tension 10-20% higher than the liquid, roughly in the proportion of the heat of sublimation to that of liquid vaporization, and a value estimated at the melting point can then be extrapolated to another temperature by means of an equation such as Eq. III-10.  [c.278]

The variation in qd with θ will not in general be the same as F(Q), since some adsorption will be occurring on all portions of the surface, so that the heat liberated on adsorption of dn moles will be a weighted average. There is one exception, however, namely adsorption at 0 K: the adsorption will occur sequentially on portions of increasing Q value, so that qd(θ) now gives F(Q). This circumstance was used by Drain and Morrison [1], who determined qd(θ) for argon, nitrogen, and oxygen on titanium dioxide at a series of temperatures and extrapolated to 0 K. The procedure is a difficult one and not without some approximations in the extrapolation. Clearly, it would be very desirable to find a way of solving the integral equation so that site or adsorption energy distributions could be obtained from data at customary temperatures.  [c.656]

The Gaussian theories Gaussian-1 and Gaussian-2, abbreviated as G1 and G2, are not basis sets, but they are similar to the basis set extrapolation mentioned in the previous paragraph. These model chemistries arose from the observation that certain levels of theory with certain basis sets tended always to give results with systematic errors for the equilibrium geometries of main group compounds. The procedure for obtaining these results consists of running a series of calculations with different basis sets and levels of theory and then plugging the energies into an equation that is meant to correct for systematic errors so energies are closer to the exact energy than with any of the individual methods. The results from this procedure have been good for equilibrium geometries of main group compounds. Results for other calculations such as transition structures or nonbonded interactions have been less encouraging. Gaussian theory is discussed in more detail in Chapter 4.  [c.83]

Most SCF programs do not actually compute orbitals from the previous iteration orbitals in the way that is described in introductory descriptions of the SCF method. Most programs use a convergence acceleration method, which is designed to reduce the number of iterations necessary to converge to a solution. The method of choice is usually Pulay's direct inversion in the iterative subspace (DIIS) method. Some programs also give the user the capability to modify the DIIS method, for example by adding a damping factor. These modifications can be useful for fixing convergence problems, but a significant amount of experience is required to know how best to modify the procedure. Turning off the DIIS extrapolation can help a calculation converge, but usually requires many more iterations.  [c.195]

Further problems arise if measurements of the rate of nitration have been made at temperatures other than 25 °C; under these circumstances two procedures are feasible. The first is discussed in 8.2.2 below. In the second, the rate profile for the compound under investigation is corrected to 25 °C by use of the Arrhenius parameters, and then further corrected for protonation to give the calculated value of log10 k2fb at 25 °C, and thus the calculated rate profile for the free base at 25 °C. The obvious disadvantage is the inaccuracy which arises from the Arrhenius extrapolation, and the fact that, as mentioned above, it is not always known which acidity functions are appropriate.  [c.152]

The standard procedure is to measure D at several different initial concentrations, using the procedure just described, and then to extrapolate the results to c = 0. We symbolize the resulting limiting value D°. This value can be interpreted in terms of Eq. (9.79), which is derived by assuming γ → 1 and therefore requires extreme dilution. It is apparent from Eqs. (9.79) and (9.5) that D° depends on the ratio T/η0, as well as on the properties of the solute itself. In order to reduce experimental (subscript ex) values of D° to some standard condition (subscript s), it is conventional to write  [c.634]
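
A sketch of such a reduction, assuming only that D° scales as T/η0 as stated above (the book's own equation is not reproduced here, so the exact form below is an inference); the numerical values are illustrative:

    def reduce_D0_to_standard(D0_ex, T_ex, eta_ex, T_s=293.15, eta_s=1.002e-3):
        """Reduce an experimental limiting diffusion coefficient to standard
        conditions, assuming D0 scales as T/eta0 (a Stokes-Einstein-like form).

        This is an inferred form of the conventional correction; the book's
        own equation may carry additional factors.
        """
        return D0_ex * (T_s / T_ex) * (eta_ex / eta_s)

    # Example: D0 measured at 25 C in water, reduced to 20 C water.
    D0_20 = reduce_D0_to_standard(D0_ex=6.9e-11, T_ex=298.15, eta_ex=0.890e-3)
    print(f"D0 at standard conditions: {D0_20:.3e} m^2/s")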

Experimentally, Rθ is measured at a series of different c2's and θ's, which makes the extrapolations of Kc2/Rθ at constant θ to c2 = 0 and of Kc2/Rθ at constant c2 to θ = 0 equally feasible. In the next section we shall examine a specific graphical technique which combines these two extrapolations in a single procedure.  [c.703]

Until now we have looked at various aspects of light scattering under several limiting conditions, specifically c2 = 0, θ = 0, or both. Actual measurements, however, are made at finite values of both c2 and θ. In the next section we shall consider a method of treating experimental data that consolidates all of the various extrapolations into one graphical procedure.  [c.709]
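
The combined construction referred to here is the kind of double extrapolation usually carried out on a Zimm plot. A minimal numerical sketch with invented Kc2/Rθ data on a small grid of c2 and θ, performing the two extrapolations sequentially rather than on a single graph:

    import numpy as np

    # Hypothetical light-scattering data: Kc2/R_theta on a grid of
    # concentrations c2 (g/mL, rows) and angles theta (degrees, columns).
    c2 = np.array([1.0e-3, 2.0e-3, 3.0e-3])
    theta = np.array([30.0, 60.0, 90.0, 120.0])
    Kc_over_R = np.array([
        [2.10e-6, 2.22e-6, 2.35e-6, 2.47e-6],
        [2.35e-6, 2.47e-6, 2.60e-6, 2.72e-6],
        [2.60e-6, 2.72e-6, 2.85e-6, 2.97e-6],
    ])

    s2 = np.sin(np.radians(theta) / 2.0) ** 2

    # Step 1: at each c2, extrapolate Kc2/R_theta linearly to theta = 0.
    intercepts_theta0 = [np.polyval(np.polyfit(s2, row, 1), 0.0) for row in Kc_over_R]

    # Step 2: extrapolate those intercepts linearly to c2 = 0; the double
    # intercept estimates 1/M_w (the Zimm construction does both on one graph).
    double_intercept = np.polyval(np.polyfit(c2, intercepts_theta0, 1), 0.0)
    print(f"1/M_w estimate: {double_intercept:.3e}  ->  M_w ~ {1.0/double_intercept:.3e} g/mol")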

The procedures used for estimating the service life of solid rocket and gun propulsion systems include physical and chemical tests after storage at elevated temperatures under simulated field conditions, modeling and simulation of propellant strains and bond line characteristics, measurements of stabilizer content, periodic surveillance tests of systems received after storage in the field, and extrapolation of the service life from the detailed data obtained (21—33).  [c.34]

Stream water quality commonly varies greatly in response to water discharge; thus, a single year of record is not adequate for reliable extrapolation, and in any exacting comparison of historical data this factor needs to be taken into account. From the beginning, it has been a general policy in the USGS surface water quality program to locate sampling sites at or near gaging stations where records of stream flow are obtained. Until about 1970, many of the USGS water quality records were based on daily sampling, generally with determinations of specific electrical conductance (an indicator of total cation and anion concentration) on each sample, but with extensive analyses performed only on composited daily samples. Composites usually contained 10 daily samples. However, where stream discharge and other factors caused substantial day-to-day changes in specific conductance, the composite period was shortened to prevent mixing of chemically dissimilar samples and to give a clearer indication of the stream chemistry variability. Annual averages of these analyses, weighted by time or discharge, were used to summarize the records. After 1970, complete analyses were done on single samples collected at various time intervals ranging from semimonthly to quarterly, and analyses of composite samples were no longer made. Analytical procedures changed from time to time as improved instrumentation and techniques became available.  [c.202]

The solid line in Figure 3 represents the potential vs the measured (or the applied) current density. Measured or applied current is the current actually measured in an external circuit, i.e., the amount of external current that must be applied to the electrode in order to move the potential to each desired point. The corrosion potential and corrosion current density can also be determined from the potential vs measured current behavior, which is referred to as a polarization curve rather than an Evans diagram, by extrapolation of either or both the anodic or cathodic portions of the curve. This latter procedure does not require specific knowledge of the equilibrium potentials, exchange current densities, and Tafel slope values of the specific reactions involved. Thus Evans diagrams, constructed from information contained in the literature, and polarization curves, generated by experimentation, can be used to predict and analyze uniform and other forms of corrosion. Further treatment of these subjects can be found elsewhere (1-3, 6, 18).  [c.277]
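
A hedged sketch of this Tafel-type extrapolation from a measured polarization curve: straight lines are fitted to the anodic and cathodic Tafel regions of E versus log|i|, and their intersection estimates the corrosion potential and corrosion current density. All data values below are invented for illustration:

    import numpy as np

    # Hypothetical Tafel-region data (illustrative values, not from the text).
    # Cathodic branch: potentials well below E_corr; anodic branch: well above.
    E_cath = np.array([-0.60, -0.55, -0.50, -0.45])          # V vs reference
    i_cath = np.array([1.0e-3, 4.0e-4, 1.6e-4, 6.3e-5])      # A/cm^2 (magnitude)
    E_an   = np.array([-0.25, -0.20, -0.15, -0.10])
    i_an   = np.array([6.3e-5, 1.6e-4, 4.0e-4, 1.0e-3])

    # Fit each branch as E = a + b*log10(|i|), where b is the Tafel slope.
    b_c, a_c = np.polyfit(np.log10(i_cath), E_cath, 1)
    b_a, a_a = np.polyfit(np.log10(i_an),   E_an,   1)

    # Extrapolate both lines; their intersection estimates E_corr and i_corr.
    log_i_corr = (a_a - a_c) / (b_c - b_a)
    E_corr = a_a + b_a * log_i_corr
    print(f"E_corr ~ {E_corr:.3f} V,  i_corr ~ {10**log_i_corr:.2e} A/cm^2")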

There was less agreement between calculated and experimental energy values. The use of 6-31G, the best procedure in energy calculations of three-membered rings, yielded a value too low by more than 40 kJ mol⁻¹ in the case of diazirine: the bond separation energy was calculated as -45 kJ mol⁻¹, whereas the experimental value is +0.4 kJ mol⁻¹. Vibrational correction and extrapolation to 0 K would reduce this difference by several kJ mol⁻¹.  [c.197]

On occasion one will find that heat-transfer-rate data are available for a system in which mass-transfer-rate data are not readily available. The Chilton-Colburn analogy provides a procedure for developing estimates of the mass-transfer rates based on heat-transfer data. Extrapolation of experimental jD or jH data obtained with gases to predict liquid systems (and vice versa) should be approached with caution, however. When pressure-drop or friction-factor data are available, one may be able to place an upper bound on the rates of heat and mass transfer, according to Eq. (5-308).  [c.625]
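
A minimal sketch of the Chilton-Colburn estimate, assuming the common form jH = jD so that k_c = h/(ρ cp)·(Pr/Sc)^(2/3); the property values below are illustrative, and the caution about extrapolating between gas and liquid systems still applies:

    def mass_transfer_from_heat(h, rho, cp, Pr, Sc):
        """Estimate a mass-transfer coefficient k_c (m/s) from a heat-transfer
        coefficient h (W/m^2/K) via the Chilton-Colburn analogy j_H = j_D,
        which gives k_c = h/(rho*cp) * (Pr/Sc)**(2/3).

        A sketch only: property values and the validity of the analogy for
        the system at hand must be checked, as the text warns.
        """
        return h / (rho * cp) * (Pr / Sc) ** (2.0 / 3.0)

    # Example: air at roughly ambient conditions (illustrative property values).
    k_c = mass_transfer_from_heat(h=50.0, rho=1.2, cp=1005.0, Pr=0.71, Sc=0.60)
    print(f"estimated k_c ~ {k_c:.3e} m/s")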

Overall KGa data may be obtained from tower-packing vendors for many of the established commercial gas-absorption processes. Such data often are based either upon tests in large-diameter test units or upon actual commercial operating data. Since extrapolation to untried operating conditions is not recommended, the preferred procedure for applying the traditional design method is equivalent to duplicating a previously successful commercial installation. When this is not possible, then a commercial demonstration at the new operating conditions may be required, or else one could consider using some of the more rigorous methods described later.  [c.1364]

Once the model parameters have been estimated, analysts should perform a sensitivity analysis to establish the uniqueness of the parameters and the model. Figure 30-9 presents a procedure for performing this sensitivity analysis. If the model will ultimately be used for exploration of other operating conditions, analysts should use the results of the sensitivity analysis to establish the error in extrapolation that will result from database/model interactions, database uncertainties, plant fluctuations, and alternative models. These sensitivity analyses and subsequent extrapolations will assist analysts in determining whether the results of the unit test will lead to results suitable for the intended purpose.  [c.2556]

Many engineering components (e.g. tie bars in furnaces, super-heater tubes, high-temperature pressure vessels in chemical reaction plants) are expected to withstand moderate creep loads for long times (say 20 years) without failure. The loads or pressures they can safely carry are calculated by methods such as those we have just described. But there are dangers. One would like to be able to test new materials for these applications without having to run the tests for 20 years and more. It is thus tempting to speed up the tests by increasing the load to get observable creep in a short test time. Now, if this procedure takes us across the boundary between two different types of mechanism, we shall have problems about extrapolating our test data to the operating conditions. Extrapolation based on power-law creep will be on the dangerous side, as shown in Fig. 19.6. So beware changes of mechanism in long extrapolations.  [c.192]
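
A small numerical illustration of the danger, with invented constants: a power-law fit obtained at high test stresses underestimates the creep rate at service stress once a linear (diffusional) mechanism takes over.

    # Illustrative only: two creep mechanisms with invented constants.
    # Power-law creep dominates at the high stresses used in accelerated tests;
    # a linear-viscous (diffusional) mechanism takes over at service stresses.
    A, n = 1.0e-12, 5.0          # power-law creep:    rate = A * sigma**n
    B = 8.1e-7                   # diffusional creep:  rate = B * sigma
    # (units arbitrary; constants chosen so the mechanisms cross near sigma ~ 30)

    def creep_rate(sigma):
        return max(A * sigma**n, B * sigma)   # the faster mechanism dominates

    sigma_test, sigma_service = 100.0, 10.0
    extrapolated = A * sigma_service**n       # power-law fit from test data only
    actual = creep_rate(sigma_service)        # diffusional creep now dominates
    print(f"extrapolated rate {extrapolated:.2e}  vs actual {actual:.2e}")
    # The extrapolation underestimates the service creep rate: the dangerous side.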

Fig. 2.41. Basic procedure for EELS quantification exercised for BN: (a) background extrapolation and subtraction.
If batch tests are performed with an initial slurry concentration below that of the critical, the average concentration of the compression zone will exceed the critical value because it will consist of sludge layers compressed over varying time lengths. A method for estimating the required time to pass from the critical solids content to any specified underflow concentration is as follows. First, extrapolate the compression curve to the critical point or zero time. Then locate the time tc at which the upper interface (between the supernatant liquid and the slurry) is at a height halfway between the initial height, Z0, and the extrapolated zero-time compression zone height. This time represents the period in which all the solids are at the critical dilution and go into compression. The retention time is computed as t - tc, where t is the time when the solids reach the specified underflow concentration. The procedure is illustrated in Figure 49. The determination of the required volume for the compression zone should be based on estimates of the time each layer has been in compression. The volume for the compression zone is the sum of the volume occupied by the solids plus the volume of the entrapped fluid. This may be expressed as  [c.413]

The investigated specimens had nice and flat reflecting surfaces, and equivalent samples (i.e., with the same film thickness) gave similar results. The roughness of the surface and the finite CNT size effects were taken into account by coating the investigated specimens with a thin gold layer. Such gold coated samples were used as references. The average thickness of the film is generally smaller than the expected penetration depth of light in the far-infrared. Therefore, we looked at the influence of the substrate on the measured total reflectivity of the substrate-CNT film composite. Even though for film thicknesses above 3 μm we did not find any qualitative or quantitative change in the reflectivity spectra due to the substrate, we appropriately took into account the effects due to multiple reflections and interferences at the film and substrate interface. Further details of the experimental procedure and data analysis can be found in refs. 12 and 13. The corrected and intrinsic reflectivity of the CNT films differ only by a few percent in intensity (particularly in FIR) but not in the overall shape from the measured one. The real part [c.92]

As we noted in Chapter 7, the CBS family of methods all include a component which extrapolates from calculations using a finite basis set to the estimated complete basis set limit. In this section, we very briefly introduce this procedure.  [c.278]
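
As an illustration of the idea only (this is a generic Helgaker-style 1/X^3 two-point extrapolation of correlation energies, not the specific pair-natural-orbital extrapolation component used inside the CBS-n model chemistries):

    def cbs_two_point(E_X, E_Y, X, Y, power=3):
        """Generic two-point extrapolation of correlation energies to the
        complete-basis-set limit, assuming E(n) = E_CBS + A / n**power
        (a Helgaker-style 1/X^3 form).  This is NOT the CBS-n component
        itself, only an illustration of basis-set extrapolation.
        """
        return (X**power * E_X - Y**power * E_Y) / (X**power - Y**power)

    # Example: hypothetical MP2 correlation energies with cc-pVTZ (X=3) and
    # cc-pVQZ (Y=4) basis sets, in hartree.
    E_cbs = cbs_two_point(E_X=-0.29520, E_Y=-0.30310, X=3, Y=4)
    print(f"estimated CBS-limit correlation energy: {E_cbs:.5f} hartree")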


Direct Inversion in the Iterative Subspace (DIIS). This procedure was developed by Pulay and is an extrapolation procedure. It has proved to be very efficient in forcing convergence, and in reducing the number of iterations at the same time. It is now one of the most commonly used methods for helping SCF convergence. The idea is as follows. As the iterative procedure runs, a sequence of Fock and density matrices (F0, F1, F2,... and D0, D1, D2,...) is produced. At each iteration it is also assumed that an estimate of the error (E0, E1, E2,...) is available, i.e. how far the current Fock/density matrix is from the converged solution. The converged solution has an error of zero, and the DIIS method forms a linear combination of the error indicators which in a least squares sense is a minimum (as close to zero as possible). In the function space generated by the previous iterations we try to find the point with lowest error, which is not necessarily one of the points actually calculated. It is common to use the trace (sum of diagonal elements) of the matrix product of the error matrix with itself as a scalar indicator of the error.  [c.73]
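
A textbook-style sketch of the coefficient solve, using the bordered B matrix built from the trace-based error indicators mentioned above; the example error matrices are random stand-ins for real SCF error matrices:

    import numpy as np

    def diis_coefficients(error_mats):
        """Solve for DIIS coefficients c_i (constrained to sum to 1) that
        minimise, in a least-squares sense, the norm of the combined error
        sum_i c_i E_i built from the stored error matrices E_i.

        A sketch of Pulay's scheme; production SCF codes add safeguards
        against ill-conditioned B matrices.
        """
        n = len(error_mats)
        B = np.zeros((n + 1, n + 1))
        for i, Ei in enumerate(error_mats):
            for j, Ej in enumerate(error_mats):
                # scalar error indicator: trace of the product of error matrices
                B[i, j] = np.trace(Ei @ Ej.T)
        B[:n, n] = -1.0          # Lagrange-multiplier border
        B[n, :n] = -1.0
        rhs = np.zeros(n + 1)
        rhs[n] = -1.0
        return np.linalg.solve(B, rhs)[:n]

    # Example with small random matrices standing in for real SCF errors.
    rng = np.random.default_rng(0)
    errs = [rng.standard_normal((4, 4)) * 10.0 ** (-k) for k in range(3)]
    c = diis_coefficients(errs)
    print("DIIS coefficients:", c, " sum =", c.sum())
    # The extrapolated Fock matrix would then be F_new = sum_i c[i] * F_i.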

Newton-Raphson methods can be combined with extrapolation procedures, the best known of these being perhaps the GDIIS (Geometry Direct Inversion in the Iterative Subspace), which is directly analogous to the DIIS for electronic wave functions described in Section 3.8.1. In the GDIIS method the NR step is not taken from the last geometry, but from an interpolated point with a corresponding interpolated gradient.  [c.335]

When atoms not explicitly included in the trajectory are created based on internal coordinates, the generated molecule or part thereof is effectively rigid. Contributions to the free energy from internal degrees of freedom are ignored. To estimate contributions from internal degrees of freedom, the free energy must be averaged over a correctly weighted series of conformations which cover the range of potential motion. There will be a net contribution to the free energy only if the probability of sampling a given configuration is different in the initial and final states. Thus, only those degrees of freedom affected by the environment need be considered. To weight the configurations appropriately we must separate the difference in the potential energy between the reference and alternate state, ΔV (ΔV = VB − VA), into a sum of inter- and intramolecular terms, ΔV = ΔVinter + ΔVintra. Note, if the molecule is rigid the intramolecular term is a constant and can be ignored. If not, configurations must be assigned Boltzmann weights based exclusively on the intramolecular term, as this has not been included in the Hamiltonian of the initial state. Configurations could, for example, be generated using a Monte Carlo procedure considering only ΔVintra. In the extrapolation itself, double counting must be avoided by considering only the intermolecular  [c.158]
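
A minimal sketch of the reweighting idea, with invented energies: generated conformations are Boltzmann-weighted by the intramolecular term only, and the exponential average is then taken over the intermolecular term, so that the intramolecular contribution is not counted twice:

    import numpy as np

    kB = 0.0019872041  # Boltzmann constant, kcal/(mol K)
    T = 300.0
    beta = 1.0 / (kB * T)

    # Hypothetical energies (kcal/mol) for a set of generated conformations:
    # intramolecular terms used only for weighting, intermolecular terms
    # entering the exponential average.  All values are invented.
    dV_intra = np.array([0.0, 0.8, 1.5, 0.3, 2.2])
    dV_inter = np.array([-1.2, -0.4, 0.6, -0.9, 1.1])

    # Boltzmann weights from the intramolecular term only (it was not part of
    # the Hamiltonian used to generate the reference ensemble).
    w = np.exp(-beta * dV_intra)
    w /= w.sum()

    # Exponential (free energy perturbation style) average of the
    # intermolecular term only, avoiding double counting as in the text.
    dA = -kB * T * np.log(np.sum(w * np.exp(-beta * dV_inter)))
    print(f"estimated free energy contribution: {dA:.2f} kcal/mol")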

Fatigue testing of polymers may consist of static fatigue, i.e., creep rupture, or dynamic fatigue. The ASTM procedure for dynamic fatigue, D671 (205), is a constant amplitude force technique in flexural mode utilizing a carefully prepared specimen that allows even stress distribution over the test span. However, the test is limited to a single frequency of 30 Hz. For many polymers, this induces an unacceptable temperature rise that can affect the results, if not lead to thermal failure in which the specimen distorts (206-208). Other modes of cyclic fatigue include tension, compression, or shear (209,210), which use sinusoidal-, square-, and sawtooth-wave forms for applying stress, although varying strain can also be used in cyclic fatigue testing (211-213). Notching of specimens and high or low temperatures have been employed to accelerate the failure via embrittlement of the materials. This can also lead to unwarranted extrapolation of the test data without additional tests (214).  [c.153]

Another instance in which the constant-temperature method is used involves the direct application of experimental KGa values obtained at the desired conditions of inlet temperatures, operating pressure, flow rates, and feed-stream compositions. The assumption here is that, regardless of any temperature profiles that may exist within the actual tower, the procedure of working the problem in reverse will yield a correct result. One should be cautious about extrapolating such data very far from the original basis and be careful to use compatible equilibrium data.  [c.1360]

Therefore, in applying either of these two procedures it is necessary to run the test in a vessel having an average bed depth close to that expected in a full-scale thickener. This requires a very large sample, and it is more convenient to carry out the test in a cylinder having a volume of 1 to 4 liters. The calculated unit area value from this test can be extrapolated to full-scale depth by carrying out similar tests at different depths to determine the effect on unit area. Alternatively, an empirical relationship can be used which is effective in applying a depth correction to laboratory cylinder data over normal operating ranges. The unit area calculated by either the Wilhelm and Naide approach or the direct method is multiplied by a factor equal to (h/H)^n, where h is the average depth of the pulp in the cylinder, H is the expected full-scale compression zone depth, usually taken as 1 m, and n is the exponent calculated from Fig. 18-85. For conservative design purposes, the minimum value of this factor that should be used is 0.25.  [c.1680]
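
A one-function sketch of this depth correction, with a placeholder exponent (n must be read from Fig. 18-85 for the actual material):

    def depth_corrected_unit_area(unit_area_lab, h, H=1.0, n=0.5):
        """Apply the empirical depth correction to a unit area obtained from
        a laboratory cylinder test: multiply by (h/H)**n, where h is the
        average pulp depth in the cylinder (m), H the expected full-scale
        compression zone depth (m, usually 1 m) and n the exponent from
        Fig. 18-85.  The exponent used here is only a placeholder.
        """
        factor = max((h / H) ** n, 0.25)   # conservative floor quoted in the text
        return unit_area_lab * factor

    print(depth_corrected_unit_area(unit_area_lab=0.8, h=0.3, H=1.0, n=0.5))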

The background signal, IB, contributing to the kth inner-shell edge must be subtracted from the total energy-loss intensity to obtain the signal, Ik, of the ionization loss itself. As already mentioned in Sect. 2.3.3.2 the background often follows a power law (IB = A·E^(-r)). This fit can be used to extrapolate the background in the higher-loss region; for inner-shell losses of approximately 100 eV and below, the use of a polynomial fit is sometimes more suitable. The procedure of background extrapolation and subtraction is demonstrated for the B-K edge of boron nitride in Fig. 2.41a where, in addition to the recorded edge profile, the extrapolated background and the net edge are shown.  [c.65]
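
A minimal sketch of the power-law background fit and subtraction on a synthetic spectrum (all counts and the edge shape are invented; real analyses choose the pre-edge fitting window and check the fit quality carefully):

    import numpy as np

    # Synthetic EEL spectrum around the B-K edge (~188 eV); all counts invented.
    E = np.arange(150.0, 260.0, 1.0)                       # energy loss (eV)
    background_true = 1.0e7 * E ** -2.8                    # power-law background
    edge = np.where(E >= 188.0, 120.0 * np.exp(-(E - 188.0) / 40.0), 0.0)
    spectrum = background_true + edge

    # Fit I_B = A * E**(-r) in a pre-edge window (a linear fit in log-log form).
    pre = (E >= 160.0) & (E <= 185.0)
    slope, lnA = np.polyfit(np.log(E[pre]), np.log(spectrum[pre]), 1)   # slope = -r
    background_fit = np.exp(lnA) * E ** slope              # extrapolated background

    # Net edge signal I_k after background subtraction (post-edge region).
    net = spectrum - background_fit
    post = E >= 188.0
    print(f"integrated net B-K edge counts: {net[post].sum():.1f}")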

The combination of EELS and transmission electron microscopy affords different experimental facilities for imaging of element distributions (see also Sect. 2.3.2) depending on the electron optics of the microscope and the particular type of spectrometer. When the electron microscope can be operated in the STEM mode, two-dimensional energy-selective images can easily be gathered by applying the signal of a serial-detection spectrometer to the brightness control of a cathode-ray tube. In general, for an element map three individual images have to be taken - two background images (also termed pre-edge images) and one post-edge image. This procedure is, therefore, called the three-window method [2.171, 2.173, 2.174] and enables determination of the net contribution of an inner-shell edge to image brightness after background extrapolation and subtraction. The image processing necessary is quite similar to the handling of an ionization edge in EEL spectroscopy to obtain the net edge profile. Local differences between the low-loss spectra can, however, also be used for element-specific imaging, e.g. for Al layers on SiO2 where in cross-section the Al-containing regions appear bright because of a plasmon energy of approximately 15 eV compared with 23 eV for silicon oxide. When a PEELS is run on a STEM the element distribution can be visualized along a line by recording series of spectra (Fig. 2.40b). For a STEM the lateral resolution is essentially fixed by the diameter of  [c.67]
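
A simplified per-pixel sketch of the three-window calculation, assuming a single energy loss per window rather than integrated energy windows; all image values and energies below are invented:

    import numpy as np

    def three_window_map(pre1, pre2, post, E1, E2, E3):
        """Three-window elemental map: per pixel, fit a power-law background
        I = A * E**(-r) to the two pre-edge images (recorded at energy losses
        E1 and E2), extrapolate it to the post-edge energy E3 and subtract.

        A simplified per-pixel sketch; real implementations integrate over
        finite energy windows and treat noise and negative values carefully.
        """
        r = np.log(pre1 / pre2) / np.log(E2 / E1)          # per-pixel exponent
        A = pre1 * E1 ** r
        background = A * E3 ** (-r)
        return post - background

    # Tiny synthetic example (2x2 "images", arbitrary counts, energies in eV).
    pre1 = np.array([[100.0, 90.0], [80.0, 95.0]])
    pre2 = np.array([[80.0, 72.0], [64.0, 76.0]])
    post = np.array([[90.0, 60.0], [55.0, 85.0]])
    print(three_window_map(pre1, pre2, post, E1=160.0, E2=180.0, E3=200.0))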

As is usual in all spectroscopic methods, for quantification the original signal characteristic of an individual element inside the excited volume, which can be correlated with the content of this element, must be extracted from the raw-data spectrum. Hence, quantitative EDXS analysis involves the determination of the background contribution, background subtraction, and counting the net intensities of the characteristic X-ray peaks. There are different possible ways of fitting the background curve underlying the whole EDX spectrum. For thin-foil analysis it is nearly a horizontal line of rather low intensity. Thus, one procedure used to determine the background signal below an X-ray peak is to assume a straight-line contribution that is subtracted. Another method of background extrapolation comprises averaging of the background intensities in two windows of identical width just below and above the characteristic peak. It is also possible to model the background by Kramers law [4.108] defining the number NBack of bremsstrahlung photons as a function of the photon energy E  [c.204]
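
A sketch of the two-window background estimate described above, applied to a synthetic thin-foil spectrum (the peak limits, window width, and nearly flat background are assumptions; the Si K-alpha energy of about 1.74 keV is used only as an example):

    import numpy as np

    def net_peak_counts(spectrum, energies, peak_lo, peak_hi, win):
        """Estimate net counts in a characteristic X-ray peak by the
        two-window method: average the background in windows of width `win`
        (keV) just below and above the peak region and subtract it.

        A simplified sketch; window choices and peak limits are assumptions.
        """
        peak = (energies >= peak_lo) & (energies <= peak_hi)
        below = (energies >= peak_lo - win) & (energies < peak_lo)
        above = (energies > peak_hi) & (energies <= peak_hi + win)
        bg_per_channel = 0.5 * (spectrum[below].mean() + spectrum[above].mean())
        return spectrum[peak].sum() - bg_per_channel * peak.sum()

    # Synthetic thin-foil spectrum: nearly flat background plus one peak.
    energies = np.arange(1.0, 3.0, 0.01)                                  # keV
    spectrum = 20.0 + 500.0 * np.exp(-((energies - 1.74) / 0.03) ** 2)    # Si K-alpha
    print(f"net peak counts: {net_peak_counts(spectrum, energies, 1.65, 1.83, 0.10):.0f}")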

At high doses (hundreds of rem) and high dose rates, the effects are fairly well known (Glasstone and Dolan, 1977). The problem is at low doses, where the correlation between exposure and cancers is poor because of experimental difficulties from competing effects. The procedure presented in BEIR II linearly related data from large exposures to cancer. This relationship is about 1.5E-4 cancers/person-rem for adults and is extrapolated to low exposures. (Since the population of the U.S. is about 2.65E8 and the background radiation is 0.1 rem/yr, then 2.65E8 x 0.1 x 1.5E-4  [c.328]
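
The sentence above is cut off at the page break; the product it sets up is straightforward arithmetic on the quoted figures:

    population = 2.65e8        # approximate U.S. population used in the text
    dose_per_year = 0.1        # background dose, rem per person per year
    risk = 1.5e-4              # cancers per person-rem (linear extrapolation)

    cancers_per_year = population * dose_per_year * risk
    print(f"{cancers_per_year:.0f} cancers per year from the linear extrapolation")
    # roughly 4.0e3 per year under this linear extrapolation of the quoted figures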

Although there are no direct measurements of the thermal expansion coefficient of this alloy at various pressures, compressibility versus temperature measurements have been made. From these compressibility data and the thermodynamic identity dβ/dP = -dk/dT, the initial slope of the thermal-expansion-pressure relation can be computed at atmospheric pressure. This initial slope is found to be +1.7 x 10⁻⁶ °C⁻¹ GPa⁻¹, which is in contradiction to the behavior we have inferred from our high-pressure measurements. However, the extrapolation of initial slopes at atmospheric pressure to high pressures, where there are large changes in the magnetic interactions, is clearly an uncertain procedure. Thus, the most likely behavior of the thermal expansion coefficient with pressure is an initial small increase followed by a continual decrease in slope until a large negative slope is obtained.  [c.122]

Figure 3-1 shows calculated plots of Eq. (3-16) for hypothetical systems in which k1/k2 has the values 1 and 5. It is evident from the example in which k1 = 5k2 that the curvature persists well into the reaction and that unambiguous identification of the terminal linear portion may be difficult. The long extrapolation to find eg is also uncertain. The accuracy of this procedure depends upon the ratios k1/k2 and  [c.64]


See pages that mention the term Extrapolation Procedures : [c.2826]    [c.112]    [c.300]    [c.164]    [c.165]    [c.167]    [c.269]    [c.2338]    [c.83]    [c.660]    [c.91]    [c.480]    [c.336]    [c.26]   
See chapters in:

Introduction to computational chemistry  -> Extrapolation Procedures