Big Chemical Encyclopedia


Error bound method

Here φ̃ij = gigj and φ̃kl = gkgl are used to approximate the electron densities φij and φkl. The attractiveness of this approach is due to the simple structure of the error bound δ, which can be calculated separately for each pair of densities φij and φkl, yielding a number of integrals proportional to n². One can then obtain the approximated density by minimizing δ with respect to all parameters in φ̃ij and φ̃kl. This method is called the error bound method. Its results have been shown to be quite successful in many cases. [Pg.105]
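The minimization step can be sketched generically. The quadratic form of δ, the target density, and the single-parameter Gaussian trial function below are all illustrative assumptions for the sketch, not the bound used in the text; for a parameter entering linearly, the minimizer of a quadratic δ has a closed form:

```python
import numpy as np

# Grid and an assumed "exact" density phi to be approximated by a*g(x).
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
phi = 0.7 * np.exp(-x**2)          # target density (assumption for the example)
g = np.exp(-x**2)                  # fixed trial function (assumption)

def delta(a):
    """Quadratic error bound delta(a) = integral (phi - a*g)^2 dx on the grid."""
    return np.sum((phi - a * g) ** 2) * dx

# Minimizing delta over the single linear parameter a gives a closed form:
# a_opt = <phi, g> / <g, g>.
a_opt = np.sum(phi * g) * dx / (np.sum(g * g) * dx)
```

With several nonlinear parameters in the trial functions, the same δ would instead be minimized numerically.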

From the derivation of the method (4) it is obvious that the scheme is exact for constant-coefficient linear problems (3). Like the Verlet scheme, it is also time-reversible. For the special case A = 0 it reduces to the Verlet scheme. It is shown in [13] that the method has an O(Δt²) error bound over finite time intervals for systems with bounded energy. In contrast to the Verlet scheme, this error bound is independent of the size of the eigenvalues λk of A. [Pg.423]
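The A = 0 limiting case, the velocity Verlet scheme, and its time-reversibility can be checked directly; the harmonic-oscillator force and the step count below are illustrative choices:

```python
def verlet(q, p, force, dt, steps):
    """Velocity Verlet: second-order accurate and exactly time-reversible
    (reversing the momenta and integrating the same number of steps
    returns to the initial state, up to round-off)."""
    f = force(q)
    for _ in range(steps):
        p += 0.5 * dt * f
        q += dt * p
        f = force(q)
        p += 0.5 * dt * f
    return q, p

# Harmonic oscillator: force(q) = -q, exact energy (p^2 + q^2)/2.
force = lambda q: -q
qf, pf = verlet(1.0, 0.0, force, 0.01, 1000)

# Time-reversibility: flip the momentum and integrate back.
qb, pb = verlet(qf, -pf, force, 0.01, 1000)
```

The energy error of the forward trajectory stays bounded at O(Δt²), while the reversed trajectory recovers the initial state to round-off.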

Babuška, I., 1971. Error-bounds for finite element method. Numer. Math. 16, 322-333. [Pg.108]

Rigorous error bounds are discussed for linear ordinary differential equations solved with the finite difference method by Isaacson and Keller (Ref. 107). Computer software exists to solve two-point boundary value problems. The IMSL routine DVCPR uses the finite difference method with a variable step size (Ref. 247). Finlayson (Ref. 106) gives FDRXN for reaction problems. [Pg.476]
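A minimal sketch of the finite difference method for a two-point boundary value problem; the particular equation y'' + y = 0 with y(0) = 0, y(π/2) = 1 (exact solution sin x) and the grid size are illustrative choices, not an example from the references cited above:

```python
import numpy as np

# Second-order central differences on a uniform grid of n interior points:
# (y[i-1] - 2*y[i] + y[i+1]) / h^2 + y[i] = 0.
n = 49
h = (np.pi / 2) / (n + 1)
x = np.linspace(h, np.pi / 2 - h, n)

A = (np.diag(np.full(n, -2.0 / h**2 + 1.0))
     + np.diag(np.full(n - 1, 1.0 / h**2), 1)
     + np.diag(np.full(n - 1, 1.0 / h**2), -1))
b = np.zeros(n)
b[-1] = -1.0 / h**2          # boundary value y(pi/2) = 1 moved to the RHS

y = np.linalg.solve(A, b)
err = np.max(np.abs(y - np.sin(x)))   # truncation error shrinks like O(h^2)
```

A variable-step implementation such as the IMSL routine mentioned above would adapt h instead of using a uniform grid.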

The numerator is a random, normally distributed variable whose precision may be estimated as √N; its fractional error is √N/N = 1/√N. For example, if a certain type of component has had 100 failures, there is a 10% error in the estimated failure rate if there is no uncertainty in the denominator. Estimating the error bounds by this method has two weaknesses: 1) the approximate mathematics, and 2) the case of no failures, for which the estimated probability is zero, which is absurd. A better way is to use the chi-squared estimator (equation 2.5.3.1) for failure per time, or the F-number estimator (equation 2.5.3.2) for failure per demand. (See Lambda Chapter 12.) [Pg.160]
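The 1/√N scaling of the fractional error can be checked with a quick simulation; the true rate of 100 failures per observation period and the sample size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many observation periods, each accumulating failures at a true
# mean rate of 100; the scatter of the counts is the estimation error.
counts = rng.poisson(lam=100, size=200_000)

# For a Poisson count N the standard deviation is sqrt(N), so the relative
# error of the estimated rate should be close to 1/sqrt(100) = 10%.
relative_error = counts.std() / counts.mean()
```

The chi-squared estimator recommended in the text replaces this normal approximation with exact confidence bounds, which also remain meaningful for zero observed failures.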

It should also be acknowledged that in recent years computational quantum chemistry has achieved a number of predictions that have since been experimentally confirmed (45-47). On the other hand, since numerous anomalies remain even within attempts to explain the properties of atoms in terms of quantum mechanics, the field of molecular quantum mechanics can hardly be regarded as resting on a firm foundation (48). Also, as many authors have pointed out, the vast majority of ab initio research judges its methods merely by comparison with experimental data and does not seek to establish internal criteria to predict error bounds theoretically (49-51). The message to chemical education must, therefore, be not to emphasize the power of quantum mechanics in chemistry and not to imply that it necessarily holds the final answers to difficult chemical questions (52). [Pg.17]

The D0 of 1.40 eV for Al2 is within the error bounds of the experimental value of 1.55 ± 0.15 eV determined by Stearns and Kohl (46) using a Knudsen cell mass spectrometric method and assuming a ground state. [Pg.22]

Almost all contemporary ab initio molecular electronic structure calculations employ basis sets of Gaussian-type functions in a pragmatic approach in which no error bounds are determined but the accuracy of a calculation is assessed by comparison with quantities derived from experiment[1][2]. In this quasi-empirical[3] approach each basis set is calibrated[4] for the treatment of a particular range of atoms, for a particular range of properties, and for a particular range of methods. Molecular basis sets are almost invariably constructed from atomic basis sets. In 1960, Nesbet[5] pointed out that molecular basis sets containing only the functions necessary to reach the atomic Hartree-Fock limit, the isotropic basis set, cannot possibly account for polarization in molecular interactions. Two approaches to the problem of constructing molecular basis sets can be identified ... [Pg.158]

Figure 9. Average elemental C concentrations by the reflectance method in the Los Angeles Basin, 1958-1972, in μg/m³ ((Q) air monitoring station). Error bounds represent a 95% confidence interval on the long-term means.
Similar methods have been used to integrate thermodynamic properties of harmonic lattice vibrations over the spectral density of lattice vibration frequencies.21,34 Very accurate error bounds are obtained for properties like the heat capacity,34 using just the moments of the lattice vibrational frequency spectrum.35 These moments are known35 in terms of the force constants and masses and lattice type, so that one need not actually solve the lattice equations of motion to obtain thermodynamic properties of the lattice. In this way, one can avoid the usual stochastic method36 in lattice dynamics, which solves a random sample of the (factored) secular determinants for the lattice vibration frequencies. Figure 3 gives a typical set of error bounds to the heat capacity of a lattice, derived from moments of the spectrum of lattice vibrations.34 Useful error bounds are obtained... [Pg.93]
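The key point above is that spectral moments are available without diagonalization: the moments of the squared-frequency spectrum equal traces of powers of the dynamical matrix, which depend only on force constants, masses, and lattice type. A minimal sketch for an assumed one-dimensional harmonic chain with unit masses and springs (an illustrative model, not the lattices of refs. 34-35):

```python
import numpy as np

# Dynamical matrix D of a periodic 1-D chain, unit masses and force constants.
N = 8
D = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
D[0, -1] = D[-1, 0] = -1.0     # periodic boundary conditions

# Moments of the squared-frequency spectrum from traces of powers of D --
# no solution of the lattice equations of motion is required.
mu1 = np.trace(D) / N          # first moment of omega^2
mu2 = np.trace(D @ D) / N      # second moment of omega^2

# For verification only: the eigenvalues the moment method avoids computing.
w2 = np.linalg.eigvalsh(D)
```

The trace-based moments agree with the averages over the explicit eigenvalue spectrum, which is exactly what lets moment methods bound properties such as the heat capacity without diagonalizing the lattice problem.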

It should be clear that the extrapolation methods suggested in this section do not have rigorous error bounds like those developed in Section III. However, the extrapolation methods do furnish estimates of the spectral density itself, rather than only averages of the spectral density. Furthermore, these estimates satisfy all known conditions on the spectral density discussed in Section II. They are (a) positive functions, (b) with correct moments, insofar as they are known, and (c) satisfy any known asymptotic behavior at the ends of the frequency intervals. In a number of test cases with known positive continuous functions with known asymptotic behavior, estimates generally were correct to within a few per cent, even when only a few (say 10) moments were given. A typical spectral density obtained in this way for a lattice vibration problem is plotted in Figure 4.34 The results are similar to those obtained numerically for the same problem, by solving a random sample of secular equations for the lattice vibrations.36... [Pg.96]

The usefulness of spectral densities in nonequilibrium statistical mechanics, spectroscopy, and quantum mechanics is indicated in Section I. In Section II we discuss a number of known properties of spectral densities, which follow from only the form of their definitions, the equations of motion, and equilibrium properties of the system of interest. These properties, particularly the moments of spectral density, do not require an actual solution to the equations of motion, in order to be evaluated. Section III introduces methods which allow one to determine optimum error bounds for certain well-defined averages over spectral densities using only the equilibrium properties discussed in Section II. These averages have certain physical interpretations, such as the response to a damped harmonic perturbation, and the second-order perturbation energy. Finally, Section IV discusses extrapolation methods for estimating spectral densities themselves, from the equilibrium properties, combined with qualitative estimates of the way the spectral densities fall off at high frequencies. [Pg.97]

A. Ralston. Runge-Kutta Methods with Minimum Error Bounds. Mathe. Comput., 16 431-437,1962. [Pg.833]

Sample preparation in NLC and NCE is the most important step in analysis due to the nano nature of these modalities. The sampling should be carried out in such a way as to avoid changes in the chemical composition of the sample. The quantitative values of species depend on the strategy adopted in sample preparation. Extraction recoveries may vary from one species to another and they should, consequently, be assessed independently for each compound as well as for the compounds together. Materials with an integral analyte, that is, bound to the matrix in the same way as the unknown, and preferably labeled (radioactive labeling), would be necessary; this is called method validation. As discussed above, few papers have described off- and online sample preparation methods on microfluidic devices. Of course, online methods are superior due to lower risk of contamination and error. Not much work has been carried out on online nanosample preparation devices, and more research is needed. Briefly, to get maximum extraction of analytes, sample preparation should be handled very carefully. [Pg.138]

To check that the method can be used for isobaric data, a set of perfect data are generated and random errors added to x, y, T, and π in turn and all together to see what effect they have on our standard procedure. For large samples we expect 68% of the sample values to lie within one standard deviation of the perfect value of the selected variable. In the case of small samples, e.g., twelve data, error bounds are calculated using binomial probabilities for each of the above variables so that, with probability of 0.95, we expect 41-95% of the sample observations to lie within one standard deviation of the perfect value of the selected variable (the normal distribution is assumed). Twelve is a common number of data points with salt-saturated solutions, and this shows the desirability of taking more experimental observations. [Pg.50]
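The binomial calculation behind the 41-95% interval can be sketched directly: with twelve observations, 41-95% corresponds to 5 to 11 observations falling within one standard deviation, each with the normal probability 0.6827:

```python
from math import comb

p = 0.6827   # probability a single observation lies within one sigma (normal)
n = 12       # sample size from the text

def binom_prob(lo, hi):
    """P(lo <= X <= hi) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(lo, hi + 1))

# 41%..95% of 12 observations corresponds to 5..11 observations within 1 sigma.
coverage = binom_prob(5, 11)
```

The resulting probability exceeds 0.95, consistent with the interval quoted in the text.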

This alternate form of the equations produces a faster convergence, as shown in an example given by Wigley (41), and also converges more rapidly than Newton-Raphson. EQT employs an additional control on the continued fraction method which generates monotone sequences (43,44). Its chief virtues are strict error bounds and increased stability with respect to a range of analyses of aqueous solutions used as input. [Pg.863]

The data of five mixtures of three dyes were treated by using the proposed method. The error bound, ε, was taken as ten fluorescence intensity units of the Hitachi 850 instrument. [Pg.80]

Since the equilibrium concentrations of the reagents are determined independently from each other, additional errors bound up with the use of the indirect concentrations of the reagents do not arise. The results of the treatment of the obtained potentiometric data are presented in Tables 3.6.3 and 3.6.4. To show the advantages of the proposed method as compared with the standard titration routine, the values of pMeO obtained according to the conventional method are collected in the same tables, the values calculated using the proposed method being denoted by an asterisk. [Pg.254]

One comment should be made. The Lanczos method was suggested by Lanczos in the 1950s. For about 20 years this approach was kept in the background because of its well-known instability. In the 1970s Paige obtained important results clarifying the behavior of the simple Lanczos process in computer arithmetic. In the 1990s Druskin and Knizhnerman managed to obtain the following important result for computer arithmetic: the Lanczos process is unstable by itself, but the error bounds remain stable with respect to round-off. [Pg.629]
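A minimal sketch of the simple (unreorthogonalized) Lanczos process discussed above; the diagonal test matrix and iteration count are illustrative choices:

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain Lanczos: build the m x m tridiagonal T whose eigenvalues (Ritz
    values) approximate the extreme eigenvalues of the symmetric matrix A.
    No reorthogonalization -- in floating point the basis V gradually loses
    orthogonality, which is the instability discussed in the text, yet the
    extreme Ritz values remain accurate (the Paige/Druskin-Knizhnerman result)."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Test matrix with known spectrum 1..50; after 30 steps the extreme Ritz
# values are already close to the true extreme eigenvalues.
A = np.diag(np.arange(1.0, 51.0))
T = lanczos(A, np.ones(50), 30)
ritz = np.linalg.eigvalsh(T)
```

In exact arithmetic the columns of V would be orthonormal; in practice ghost copies of converged eigenvalues can appear, but the computed extremes stay reliable.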

A well-developed statistical inference of the estimators and exists (Rubinstein and Shapiro 1993). That inference aids in the construction of stopping rules, validation analysis, and error bounds for obtained solutions and, furthermore, suggests variance reduction methods that may substantially enhance the rate of convergence of the numerical procedure. For a discussion of this topic and an application to two-stage stochastic programming with recourse, we refer to Shapiro and Homem-de-Mello (1998). [Pg.2636]

In signal processing, we can use either a linear or a nonlinear approximation method. The linear wavelet-based approximation picks wavelet coefficients from the coarsest level to the finest level, while the nonlinear wavelet-based approximation selects wavelet coefficients adaptively, e.g., it takes the N largest coefficients in absolute value. In both approaches N is either fixed or chosen, for instance, to satisfy a predetermined error bound. The wavelet coefficients selected by the above approximation methods are usually treated as the compressed data. [Pg.821]
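The nonlinear selection step can be sketched with a hand-rolled orthonormal Haar transform (rather than any particular wavelet library); the piecewise-constant test signal is an illustrative assumption:

```python
import numpy as np

def haar(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    x = np.asarray(x, float).copy()
    out, n = [], len(x)
    while n > 1:
        s = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # smooth (scaling) part
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail part
        out.append(d)
        x[: n // 2] = s
        n //= 2
    return np.concatenate([x[:1]] + out[::-1])

def ihaar(c):
    """Inverse of haar()."""
    c = np.asarray(c, float).copy()
    n = 1
    while n < len(c):
        s, d = c[:n].copy(), c[n:2 * n].copy()
        x = np.empty(2 * n)
        x[0::2] = (s + d) / np.sqrt(2)
        x[1::2] = (s - d) / np.sqrt(2)
        c[:2 * n] = x
        n *= 2
    return c

def keep_largest(c, N):
    """Nonlinear approximation: keep only the N largest-magnitude coefficients."""
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-N:]
    out[idx] = c[idx]
    return out

# A piecewise-constant signal is compressed exactly by 2 Haar coefficients.
signal = np.array([1.0] * 8 + [5.0] * 8)
kept = keep_largest(haar(signal), 2)
```

Because the transform is orthonormal, the squared error of the truncated reconstruction equals the summed squares of the dropped coefficients, which is what makes choosing N against a predetermined error bound straightforward.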

For the Lekner method, upper error bounds can be given easily since the Bessel functions drop essentially exponentially fast, allowing for a simple approximation of the sum by an integral. These error estimates are much less sharp than the error estimates for the Ewald-type methods, but here the error bound only enters logarithmically into the computation time, so that excessive accuracy has only a small impact on the overall performance. [Pg.78]
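The sum-by-integral bounding trick can be illustrated with a plain exponential tail standing in for the Bessel-function decay (an illustrative assumption, not the actual Lekner terms): for a decreasing summand f, the tail sum over k > N is bounded above by the integral of f from N to infinity.

```python
import math

def tail_sum(alpha, N, terms=10_000):
    """Numerically sum the exponentially decaying tail sum_{k>N} exp(-alpha*k)."""
    return sum(math.exp(-alpha * k) for k in range(N + 1, N + 1 + terms))

def tail_bound(alpha, N):
    """Integral upper bound: sum_{k>N} f(k) <= int_N^inf f(t) dt
    for decreasing f; here f(t) = exp(-alpha*t), so the bound is
    exp(-alpha*N) / alpha."""
    return math.exp(-alpha * N) / alpha
```

The bound is not sharp, but since it only enters the cutoff choice logarithmically, its looseness costs little in practice, as the text notes.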

Upper error bounds can be found easily by approximating the sums with integrals (see [54]). As for the Lekner sum, the additional accuracy has to be paid for with only a small decrease of computational performance; therefore, MMM is the method of choice if high accuracy is required. [Pg.82]

