Big Chemical Encyclopedia


Error bounds

From the derivation of the method (4) it is obvious that the scheme is exact for constant-coefficient linear problems (3). Like the Verlet scheme, it is also time-reversible. For the special case A = 0 it reduces to the Verlet scheme. It is shown in [13] that the method has an O(Δt²) error bound over finite time intervals for systems with bounded energy. In contrast to the Verlet scheme, this error bound is independent of the size of the eigenvalues λk of A. [Pg.423]
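As a rough numerical illustration of the contrast drawn above (not taken from [13]; the test oscillator and step size are my choices), the sketch below integrates x″ = −λx with the velocity Verlet scheme at a fixed step size and shows how the Verlet error grows with the eigenvalue λ, which is exactly the dependence the bound in [13] avoids:

```python
import numpy as np

def verlet(lam, dt, n_steps, x0=1.0, v0=0.0):
    """Velocity Verlet for x'' = -lam * x, i.e. force f(x) = -lam * x."""
    x, v = x0, v0
    a = -lam * x
    for _ in range(n_steps):
        x += dt * v + 0.5 * dt**2 * a
        a_new = -lam * x
        v += 0.5 * dt * (a + a_new)
        a = a_new
    return x

dt, T = 0.01, 1.0
n = int(T / dt)
for lam in [1.0, 100.0, 10_000.0]:
    w = np.sqrt(lam)
    exact = np.cos(w * T)              # exact solution for x(0)=1, v(0)=0
    err = abs(verlet(lam, dt, n) - exact)
    print(f"lambda = {lam:>8.0f}  |error| = {err:.2e}")   # error grows with lambda
```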

In the following we devise, following [14], an efficiently implementable scheme which leads to favorable error bounds independently of the highest frequencies under the mere assumption that the system has bounded energy. The scheme will be time-reversible, and robust in the singular limit of the mass ratio m/M tending to 0. [Pg.428]

For the combined scheme (21), (23), second-order error bounds are derived in [14]. These bounds hold independently of the size of the eigenvalues of T, and without assumptions about the smoothness of the solution, which in general is highly oscillatory. [Pg.428]

Babuška, I., 1971. Error-bounds for finite element method. Numer. Math. 16, 322-333. [Pg.108]

The solvent effects are generally less than 1 ppm, well within the error bounds given by the standard deviations of the calculated shifts. It is possible, however, that larger deviations may be observed under extreme conditions. [Pg.253]

Rigorous error bounds are discussed for linear ordinary differential equations solved with the finite difference method by Isaacson and Keller (Ref. 107). Computer software exists to solve two-point boundary value problems. The IMSL routine DVCPR uses the finite difference method with a variable step size (Ref. 247). Finlayson (Ref. 106) gives FDRXN for reaction problems. [Pg.476]
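As a generic sketch of the finite difference method for a linear two-point boundary value problem (this is not the DVCPR or FDRXN code; the model problem u″ = u with u(0) = 0, u(1) = 1 is my choice), the central-difference discretization below exhibits the expected O(h²) error against the exact solution sinh(x)/sinh(1):

```python
import numpy as np

def solve_bvp(n):
    """Central-difference solution of u'' = u, u(0) = 0, u(1) = 1."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior grid points
    # (u[i-1] - 2 u[i] + u[i+1]) / h^2 = u[i]  ->  tridiagonal system
    A = (np.diag(np.full(n, -2.0 - h**2))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = np.zeros(n)
    b[-1] = -1.0                                  # boundary value u(1) = 1
    u = np.linalg.solve(A, b)
    exact = np.sinh(x) / np.sinh(1.0)
    return np.max(np.abs(u - exact))

for n in [10, 20, 40]:
    print(f"n = {n:3d}  max error = {solve_bvp(n):.2e}")  # halving h cuts error ~4x
```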

Structure calculation algorithms in general assume that the experimental list of restraints is completely free of errors. This is usually true only in the final stages of a structure calculation, when all errors (e.g., in the assignment of chemical shifts or NOEs) have been identified, often in a laborious iterative process. Many effects can produce inconsistent or incorrect restraints, e.g., artifact peaks, imprecise peak positions, and insufficient error bounds to correct for spin diffusion. [Pg.264]

Restraints due to artifacts may, by chance, be completely consistent with the correct structure of the molecule. However, the majority of incorrect restraints will be inconsistent with the correct structural data (i.e., the correct restraints and information from the force field). Inconsistencies in the data produce distortions in the structure and violations in some restraints. Structural consistency is often taken as the final criterion to identify problematic restraints. It is, for example, the central idea in the bound-smoothing part of distance geometry algorithms, and it is intimately related to the way distance data are usually specified: the error bounds are set wide enough that all data are geometrically consistent. [Pg.264]
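A minimal sketch of the triangle-inequality stage of bound smoothing (production distance geometry codes do considerably more, e.g., tetrangle inequalities and lower-bound smoothing): a Floyd-Warshall pass tightens the upper bounds, and any lower bound that then exceeds its upper bound flags a geometrically inconsistent restraint.

```python
import numpy as np

def smooth_bounds(upper, lower):
    """Triangle-inequality bound smoothing on symmetric distance-bound matrices.

    Returns tightened upper bounds and the list of inconsistent atom pairs.
    """
    n = upper.shape[0]
    u = upper.copy()
    # Floyd-Warshall: d(i,j) <= d(i,k) + d(k,j) tightens the upper bounds
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if u[i, k] + u[k, j] < u[i, j]:
                    u[i, j] = u[i, k] + u[k, j]
    bad = [(i, j) for i in range(n) for j in range(i + 1, n)
           if lower[i, j] > u[i, j]]       # lower bound exceeds upper: inconsistent
    return u, bad

# Three atoms: a 9 A lower bound on the 0-2 pair cannot be satisfied
# if both the 0-1 and 1-2 distances are at most 4 A.
upper = np.array([[0, 4, 20], [4, 0, 4], [20, 4, 0]], float)
lower = np.array([[0, 2, 9], [2, 0, 2], [9, 2, 0]], float)
u, bad = smooth_bounds(upper, lower)
print(u[0, 2], bad)    # smoothed upper bound 8.0; pair (0, 2) flagged
```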

Techniques to use for evaluations have been discussed by Cox and Tikvart (42), Hanna (43) and Weil et al. (44). Hanna (45) shows how resampling of evaluation data will allow use of the bootstrap and jackknife techniques so that error bounds can be placed about estimates. [Pg.334]
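A minimal sketch of the bootstrap idea (illustrative only; the synthetic data and the fractional-bias statistic are my choices, and a percentile interval stands in for whatever construction Hanna uses): resample observed/predicted pairs with replacement and read error bounds for the performance statistic off the resampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired data: observed and model-predicted concentrations
observed = rng.lognormal(mean=1.0, sigma=0.5, size=200)
predicted = observed * rng.lognormal(mean=0.1, sigma=0.3, size=200)

def fractional_bias(obs, pred):
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

# Bootstrap: resample pairs with replacement, recompute the statistic
stats = []
for _ in range(2000):
    idx = rng.integers(0, len(observed), len(observed))
    stats.append(fractional_bias(observed[idx], predicted[idx]))

lo, hi = np.percentile(stats, [2.5, 97.5])     # 95% percentile error bounds
print(f"FB = {fractional_bias(observed, predicted):.3f},"
      f" 95% bounds [{lo:.3f}, {hi:.3f}]")
```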

For the air quality manager to place model estimates in the proper perspective to aid in making decisions, it is becoming increasingly important to place error bounds about model estimates. In order to do this effectively, a history of model performance under circumstances similar to those of common model use must be established for the various models. It is anticipated that performance standards will eventually be set for models. [Pg.338]

The Committee is unable to determine whether the absolute probabilities of accident sequences in WASH-1400 are high or low, but it is believed that the error bounds on those estimates are, in general, greatly understated. This is due in part to an inability to quantify common cause failures, and in part to some questionable methodological and statistical procedures. [Pg.4]

The numerator is a random, normally distributed variable whose standard deviation may be estimated as √N; its fractional error is therefore √N/N = 1/√N. For example, if a certain type of component has had 100 failures, there is a 10% error in the estimated failure rate, assuming no uncertainty in the denominator. Estimating the error bounds by this method has two weaknesses: 1) the approximate mathematics, and 2) the case of no failures, for which the estimated probability is zero, which is absurd. A better way is to use the chi-squared estimator (equation 2.5.3.1) for failure per time or the F-number estimator (equation 2.5.3.2) for failure per demand. (See Chapter 12.) [Pg.160]
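A minimal sketch of the chi-squared estimator mentioned above (the cited equations are not reproduced here; this is the standard two-sided chi-squared interval for a Poisson failure rate, in which N failures in cumulative time T give bounds χ²(α/2, 2N)/2T and χ²(1−α/2, 2N+2)/2T). Unlike the 1/√N rule, it yields a sensible nonzero upper bound even when no failures have been observed:

```python
from scipy.stats import chi2

def failure_rate_bounds(n_failures, t_hours, conf=0.90):
    """Two-sided chi-squared confidence bounds on a Poisson failure rate.

    With N failures in cumulative exposure time T, the bounds are
    chi2(alpha/2, 2N) / 2T  and  chi2(1 - alpha/2, 2N + 2) / 2T.
    """
    alpha = 1.0 - conf
    if n_failures == 0:
        lower = 0.0                      # no failures observed: lower bound is zero
    else:
        lower = chi2.ppf(alpha / 2.0, 2 * n_failures) / (2.0 * t_hours)
    upper = chi2.ppf(1.0 - alpha / 2.0, 2 * n_failures + 2) / (2.0 * t_hours)
    return lower, upper

# Even with zero observed failures the upper bound stays finite and nonzero.
for n in [0, 10, 100]:
    lo, hi = failure_rate_bounds(n, t_hours=1.0e6)
    print(f"N = {n:3d}: lambda in [{lo:.2e}, {hi:.2e}] per hour")
```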

It should also be acknowledged that in recent years computational quantum chemistry has achieved a number of predictions that have since been experimentally confirmed (45-47). On the other hand, since numerous anomalies remain even within attempts to explain the properties of atoms in terms of quantum mechanics, the field of molecular quantum mechanics can hardly be regarded as resting on a firm foundation (48). Also, as many authors have pointed out, the vast majority of ab initio research judges its methods merely by comparison with experimental data and does not seek to establish internal criteria to predict error bounds theoretically (49-51). The message to chemical education must, therefore, be not to emphasize the power of quantum mechanics in chemistry and not to imply that it necessarily holds the final answers to difficult chemical questions (52). [Pg.17]

It is indeed a great honor to be invited to contribute to this memorial volume. I should say from the outset that I never met Löwdin but nevertheless feel rather familiar with at least part of his wide-ranging writing. In 1986 I undertook what I believe may have been the first PhD thesis in the new field of philosophy of chemistry. My topic was the question of the reduction of chemistry to quantum mechanics. Not surprisingly, this interest very soon brought me to the work of Löwdin and in particular his analysis of rigorous error bounds in ab initio calculations (Löwdin, 1965). [Pg.91]

The D0 of 1.40 eV for Al2 is within the error bounds of the experimental value of 1.55 ± 0.15 eV determined by Stearns and Kohl (46) using a Knudsen cell mass spectrometric method and assuming a ground state. [Pg.22]

A tighter and local estimate on the generalization error bound can be derived by observing locally the maximum encountered empirical error. Consider a given dyadic multiresolution decomposition of the input space and, for simplicity, let us assume piecewise constant functions as approximators. In a given subregion of the input space, let p be the set of... [Pg.191]

Since both the temperature dependence of the characteristic ratio and that of the density are known, the prediction of the scaling model for the temperature dependence of the tube diameter can be calculated using Eq. (53); the exponent α = 2.2 is known from the measurement of the volume-fraction dependence. The solid line in Fig. 30 represents this prediction. The predicted temperature coefficient of 0.67 ± 0.1 × 10⁻³ K⁻¹ differs from the measured value of 1.2 ± 0.1 × 10⁻³ K⁻¹. The discrepancy between the two values appears to be beyond the error bounds. Apparently, the scaling model, which covers only geometrical relations, is not in a position to describe simultaneously the dependences of the entanglement distance on the volume fraction or the flexibility. This may suggest that, in addition to the purely geometric interactions, collective dynamic processes could also be responsible for the formation of the localization tube. [Pg.57]

Certainly, nonlinearities in real data can have several possible causes, both chemical (e.g., interactions that make the true concentrations of any given species differ from what would be expected or calculated solely from what was introduced into the sample; such interactions can also change the underlying absorbance bands) and physical (such as the stray light we simulated). Approximating these nonlinearities with a Taylor expansion is a risky procedure unless the error bound of the approximation is known a priori, and in any case it remains an approximation, not an exact solution. In the case of our simulated data the nonlinearity was logarithmic, so even a second-order Taylor expansion would be of limited accuracy. [Pg.155]
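A minimal numeric check of that caveat (illustrative, not the authors' simulation): for a logarithmic response log(1+x), the second-order Taylor polynomial about zero has a Lagrange remainder bounded by x³/3 for x ≥ 0, and the actual error becomes a sizable fraction of that bound as x grows.

```python
import numpy as np

x = np.linspace(0.0, 0.5, 6)
taylor2 = x - x**2 / 2.0              # 2nd-order Taylor of log(1+x) about 0
actual_err = np.abs(np.log1p(x) - taylor2)
bound = x**3 / 3.0                    # Lagrange remainder bound for x >= 0

for xi, e, b in zip(x, actual_err, bound):
    print(f"x = {xi:.1f}: |error| = {e:.5f} <= bound {b:.5f}")
```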

Gordon, R. G. (1968). Error bounds in equilibrium statistical mechanics. Journal of Mathematical Physics 9, 655-663. [Pg.414]

Problems are encountered in practice in the application of this procedure. The scatter in the literature data, both for V o/pH and for equilibrium constants, is quite large. Also, it is difficult to evaluate to what extent... [Pg.94]

To further appreciate the indistinguishability of these functions, we show the error bounds of a, assuming that the data came from an SPC instrument. Clearly, Poisson noise always greatly exceeds the differences except for D3. For 10⁴ peak counts in SPC data, differentiation between any of these functions is impossible, except perhaps for D3; noise hides the differences. While we have selected only a small set of possible exponentials, there is clearly a continuum of possible lifetimes and preexponential factors that would be similarly indistinguishable. [Pg.96]
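A minimal sketch of the comparison being made (the decay parameters are hypothetical; the D1-D3 functions of the source are not reproduced): point-wise 1σ Poisson error bounds √N for a decay peaking at 10⁴ counts are set against the difference between a single exponential and a nearby double exponential with the same mean lifetime.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)        # time, in units of the lifetime
peak = 1.0e4                           # peak counts in the SPC data

# Two hypothetical decays: a single exponential vs. a close double exponential
d1 = peak * np.exp(-t / 2.0)
d2 = peak * (0.75 * np.exp(-t / 1.8) + 0.25 * np.exp(-t / 2.6))

poisson_sigma = np.sqrt(d1)            # 1-sigma Poisson error bound per channel
diff = np.abs(d1 - d2)

frac = np.mean(diff < poisson_sigma)
print(f"difference below 1-sigma Poisson noise in {100*frac:.0f}% of channels")
```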

Case II - Analyte Detection (A assumed). Here, the analyte- rather than signal-detection limit is calculated, but the systematic error in A, applied in the estimation of x from Equation 2c, imposes systematic error bounds which must be applied to the analyte detection limit. The limit is no longer purely probabilistic in nature. [Pg.55]
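A minimal numeric sketch of that propagation (the numbers are hypothetical and Equation 2c is not reproduced): converting a signal detection limit to an analyte detection limit through the assumed sensitivity A, a relative systematic error bound on A maps directly into systematic bounds on the analyte limit.

```python
# Hypothetical values: signal detection limit S_D and assumed sensitivity A
s_d = 30.0          # signal detection limit (counts)
a_assumed = 2.0     # assumed sensitivity (counts per unit analyte)
delta = 0.10        # 10% systematic error bound on A

x_d = s_d / a_assumed                       # nominal analyte detection limit
x_lo = s_d / (a_assumed * (1.0 + delta))    # systematic bounds on the limit
x_hi = s_d / (a_assumed * (1.0 - delta))
print(f"x_D = {x_d:.2f}, systematic bounds [{x_lo:.2f}, {x_hi:.2f}]")
```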

Other practices which tend to underestimate the true detection limits and add confusion to the uniform evaluation of results by the public include varied (or no) treatment of interference, avoidance of systematic error bound estimation, and consideration of Poisson counting errors only. A further problem, which has emerged with the prevalence of microprocessors and proprietary computer software, is the effect of hidden algorithms and inaccessible source code: data evaluation operations (Op) are not known to the user, and possible source code deficiencies and blunders cannot be readily assessed. [Pg.57]

Figure 1. Plots showing the Calibration Process. A. Response transformation to constant variance: examples showing a. too little, b. appropriate, and c. too much transformation power. B. Amount transformation in conforming to a (linear) model. C. Construction of p. confidence bands about the regressed line, q. response error bounds, and intersection of these to determine r. the estimated amount interval.
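A minimal sketch of the construction in panel C (synthetic calibration data; the pointwise Wald-type band below is my choice and need not match the source's exact band): the confidence band about the regressed line is intersected with the response error bound y0 ± ts to bracket the estimated amount interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic calibration: amounts x, responses y with constant variance
x = np.linspace(1.0, 10.0, 10)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.05, x.size)

n = x.size
b, a = np.polyfit(x, y, 1)                       # slope, intercept
resid = y - (a + b * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual standard deviation
t = stats.t.ppf(0.975, n - 2)
sxx = np.sum((x - x.mean())**2)

def band(x0):
    """Half-width of the 95% confidence band about the regressed line."""
    return t * s * np.sqrt(1.0 / n + (x0 - x.mean())**2 / sxx)

# Unknown sample: observed response y0 with its own error bound y0 +/- t*s
y0 = 5.2
grid = np.linspace(0.0, 12.0, 2001)
# An amount is retained if its band overlaps the response error bound
ok = np.abs(a + b * grid - y0) <= band(grid) + t * s
print(f"point estimate x0 = {(y0 - a) / b:.2f}, "
      f"interval [{grid[ok].min():.2f}, {grid[ok].max():.2f}]")
```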
