Generalization error

This situation, despite the increasing reliability, is very undesirable. A considerable effort will be needed to revise the shape of the potential functions so that transferability is greatly enhanced and the number of atom types can be reduced. After all, there is only one type of carbon: it has mass 12 and charge 6, and that is all that matters. What is obviously most needed is to incorporate essential many-body interactions in a proper way. In all present non-polarisable force fields, many-body interactions are incorporated in an average way into pair-additive terms. In general, errors in one term are compensated by parameter adjustments in other terms, and the resulting force field is only valid for a limited range of environments. [Pg.8]

CPU time. In response to these slow and rigorous calculations, many fast heuristic approaches have been developed that are based on intuitive concepts such as docking [10], matching pharmacophores [19], or linear free energy relationships [20]. A disadvantage of many simple heuristic approaches is their susceptibility to generalization error [17], whereby the accuracy of the predictions is limited to the training data. [Pg.326]

There are two sources of generalization error (Girosi and Anzel-... [Pg.169]

In recent years, some theoretical results have seemed to defeat the basic principle of induction, namely that no mathematical proof of the validity of the model can be derived. More specifically, the universal approximation property has been proved for different sets of basis functions (Hornik et al., 1989, for sigmoids; Hartman et al., 1990, for Gaussians), in order to justify the bias of NN developers toward these types of basis functions. This property basically establishes that, for every function, there exists a NN model that exhibits arbitrarily small generalization error. This property, however, should not be erroneously interpreted as a guarantee of small generalization error. Even though there might exist a NN that could... [Pg.170]
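
The gap between these two notions can be made concrete with a small numerical experiment. The following sketch (my own illustration, not from the cited works) fits a one-hidden-layer sigmoid model with fixed hidden weights to a handful of noisy samples: a wide enough network can drive the empirical risk to essentially zero, while the generalization error measured against the true function need not follow it down. All names (target, sigmoid_features) and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)

# Few noisy training points; a dense grid stands in for the true function.
x_train = rng.uniform(0, 1, 15)
y_train = target(x_train) + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(0, 1, 500)

def sigmoid_features(x, centers, width=20.0):
    # One sigmoid basis function per center: a one-hidden-layer net with
    # fixed hidden weights, where only the output weights are fitted.
    return 1.0 / (1.0 + np.exp(-width * (x[:, None] - centers[None, :])))

for n_hidden in (5, 50):
    centers = np.linspace(0, 1, n_hidden)
    Phi = sigmoid_features(x_train, centers)
    w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    train_err = np.mean((Phi @ w - y_train) ** 2)
    gen_err = np.mean((sigmoid_features(x_test, centers) @ w - target(x_test)) ** 2)
    print(f"{n_hidden:3d} hidden units: empirical risk {train_err:.4f}, "
          f"generalization error {gen_err:.4f}")
```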

Algorithm 1 requires the a priori selection of a threshold, ε, on the empirical risk, which will indicate whether the model needs adaptation to retain its accuracy, with respect to the data, at a minimum acceptable level. At the same time, this threshold serves as a termination criterion for the adaptation of the approximating function. When (and if) a model is reached whose generalization error is smaller than ε, learning will have concluded. For that reason, and since, as shown earlier, some error is unavoidable, the selection of the threshold should reflect our preference on how close, and in what sense, we would like the model to be to the real function. [Pg.178]
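
A minimal sketch of such an adaptation loop, assuming a toy polynomial approximator; the PolyModel class and its refine step are hypothetical stand-ins for the algorithm's actual basis functions, chosen only to make the role of ε concrete.

```python
import numpy as np

class PolyModel:
    """Toy approximator: a polynomial fit whose degree grows on refinement."""
    def __init__(self, degree=1):
        self.degree = degree
        self.coeffs = np.zeros(degree + 1)

    def fit(self, X, y):
        self.coeffs = np.polyfit(X, y, self.degree)

    def predict(self, X):
        return np.polyval(self.coeffs, X)

    def refine(self, X, y):
        self.degree += 1     # hypothetical adaptation step
        self.fit(X, y)

def empirical_risk(model, X, y):
    return np.mean((model.predict(X) - y) ** 2)

def adapt_until_acceptable(model, X, y, eps, max_steps=20):
    # eps plays the double role described in the text: it flags when the
    # model needs adaptation and terminates learning once the empirical
    # risk falls below the minimum acceptable level.
    model.fit(X, y)
    for _ in range(max_steps):
        if empirical_risk(model, X, y) <= eps:
            break
        model.refine(X, y)
    return model, empirical_risk(model, X, y)

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 40)
y = np.sin(3 * X) + rng.normal(0, 0.05, X.size)
model, risk = adapt_until_acceptable(PolyModel(), X, y, eps=0.01)
print(f"stopped at degree {model.degree} with empirical risk {risk:.4f}")
```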

A tighter and local estimate of the generalization error bound can be derived by observing locally the maximum encountered empirical error. Consider a given dyadic multiresolution decomposition of the input space and, for simplicity, let us assume piecewise constant functions as approximators. In a given subregion of the input space, let p be the set of... [Pg.191]
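
The construction can be sketched as follows, assuming a one-dimensional input space on [0, 1): partition it dyadically, fit a piecewise constant approximator (the local mean) in each subregion, and record the maximum empirical error encountered there. Function and variable names are illustrative, not from the source.

```python
import numpy as np

def local_max_errors(x, y, level):
    """Piecewise-constant fit on a dyadic partition of [0, 1) and the
    maximum empirical error observed within each subregion."""
    n_bins = 2 ** level
    bins = np.minimum((x * n_bins).astype(int), n_bins - 1)
    errs = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            local_fit = y[mask].mean()            # piecewise-constant approximator
            errs[b] = np.abs(y[mask] - local_fit).max()
    return errs

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.02, x.size)

for level in (1, 3, 5):
    errs = local_max_errors(x, y, level)
    print(f"level {level}: per-region max error {np.nanmax(errs):.3f} "
          f"(worst subregion), {np.nanmin(errs):.3f} (best)")
```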

Note that the variance does not depend on the true value x, and the mean estimator x̄ has the least variance. The finite sampling bias is the difference between the estimate x̄ and the true value x, and represents the finite sampling systematic part of the generalized error. [Pg.201]
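
A small numerical illustration of both statements, assuming Gaussian samples. The exponential average is chosen here as a generic example of a nonlinear finite-sample estimator, in the spirit of free-energy estimates; it is not the book's specific expression.

```python
import numpy as np

rng = np.random.default_rng(3)
true_x, sigma, n, trials = 5.0, 1.0, 10, 200_000

samples = rng.normal(true_x, sigma, (trials, n))
x_bar = samples.mean(axis=1)

# Variance of the mean estimator: sigma^2 / n, independent of true_x.
print(f"var(x_bar) = {x_bar.var():.4f}  (theory: {sigma**2 / n:.4f})")

# The mean estimator itself is unbiased ...
print(f"bias of x_bar = {x_bar.mean() - true_x:+.4f}")

# ... but a nonlinear function of finite-sample averages is not.
# Example: -log<exp(-x)> estimated from n samples, the kind of
# exponential average that appears in free-energy estimates.
g_hat = -np.log(np.exp(-samples).mean(axis=1))
g_true = true_x - sigma**2 / 2          # exact value for a Gaussian
print(f"finite sampling bias = {g_hat.mean() - g_true:+.4f}")
```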

Albarède, F. & Provost, A. (1977). Petrological and geochemical mass balance: an algorithm for least-squares fitting and general error analysis. Computers & Geosciences, 3, 309-26. [Pg.526]

In general, errors tend to be more systematic at a given ab initio or DFT level and may therefore often be taken into account by suitable corrections. Errors in semiempirical calculations are normally less uniform and thus harder to correct. [Pg.243]

At the end of Section 1, it was mentioned that measurement techniques are subject to errors, and bias was also mentioned. In general, errors are of three types: (1) systematic errors, which produce a known bias in the data; (2) avoidable blunders that are known to have occurred, or were found later to have occurred, the so-called determinate errors; and (3) random errors, also called indeterminate errors, which occur but can neither be identified nor directly compensated. Correction... [Pg.16]

Subsequently, other workers developed numerous variations and generalizations of the basic method. Most methods can be summarized by the accompanying generalized error-reduction algorithm. [Pg.122]
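
As a concrete sketch, the following implements one common form of the error-reduction iteration, assuming the Fourier-magnitude phase-retrieval setting with a known object support; constraints vary by method, so treat this as illustrative rather than as the specific algorithm referenced in the text.

```python
import numpy as np

def error_reduction(fourier_magnitude, support, n_iter=200, seed=0):
    """Generic error-reduction iteration (Gerchberg-Saxton style):
    alternately enforce the measured Fourier magnitude and the
    object-domain support constraint."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_magnitude.shape) * support   # random start
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier-domain constraint: keep phases, impose measured magnitude.
        G = fourier_magnitude * np.exp(1j * np.angle(G))
        g = np.fft.ifft2(G).real
        # Object-domain constraint: zero outside support, non-negative inside.
        g = np.clip(g, 0, None) * support
    return g

# Tiny self-test: recover a known object from its Fourier magnitude.
obj = np.zeros((32, 32))
obj[10:20, 12:22] = np.random.default_rng(1).random((10, 10))
support = np.zeros_like(obj)
support[10:20, 12:22] = 1.0
rec = error_reduction(np.abs(np.fft.fft2(obj)), support)
print("relative residual:", np.linalg.norm(rec - obj) / np.linalg.norm(obj))
```

Error reduction is known to stagnate and may converge to the twin image, so the printed residual is a diagnostic rather than a guaranteed-small quantity.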

The general errors associated with this technique are reserved for comparison at the end of this section. [Pg.172]

These general errors can be broken into two categories. The first concerns sampling: the problems of getting the sample from where it is into the GC. The second is the GC system itself. [Pg.202]

Chueh and Swanson (15) have proposed values for different molecular groups to estimate the molar liquid heat capacity, Cp, at room temperature (T = 293 K). This method is accurate and more general. Errors for the Chueh-Swanson method rarely exceed 2 to 3%... [Pg.698]
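
The mechanics of any such group-contribution estimate are easy to sketch. Note that the group values below are placeholders, not the published Chueh-Swanson parameters; substitute the tabulated values (and any structural corrections the method prescribes) before real use.

```python
# Sketch of a group-contribution estimate of molar liquid heat capacity.
# The group values are PLACEHOLDERS, not the published Chueh-Swanson
# parameters -- substitute the tabulated values.
GROUP_CP = {          # J/(mol K) at 293 K -- hypothetical numbers
    "CH3": 36.8,
    "CH2": 30.4,
    "OH":  44.8,
}

def liquid_cp(groups: dict[str, int]) -> float:
    """Sum the contribution of each molecular group (count x group value)."""
    return sum(GROUP_CP[g] * n for g, n in groups.items())

# Example: ethanol = CH3 + CH2 + OH
print(f"Cp(ethanol) ~ {liquid_cp({'CH3': 1, 'CH2': 1, 'OH': 1}):.1f} J/(mol K)")
```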

All measurements are accompanied by a certain amount of error, and an estimate of its magnitude is necessary to validate results. The error cannot be eliminated completely, although its magnitude and nature can be characterized. It can also be reduced with improved techniques. In general, errors can be classified as random and systematic. If the same experiment is repeated several times, the individual measurements cluster around the mean value. The differences are due to unknown factors that are stochastic in nature and are termed random errors. They have a Gaussian distribution and equal probability of being above or below the mean. On the other hand, systematic errors tend to bias the measurements in one direction. Systematic error is measured as the deviation from the true value. [Pg.6]
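
A brief simulation makes the distinction concrete: random errors scatter measurements symmetrically around the mean, while a systematic error shifts the mean itself away from the true value. The numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
true_value = 10.00

# Repeated measurements: random (Gaussian) error scatters symmetrically
# around the mean; a systematic error shifts every measurement one way.
random_err = rng.normal(0.0, 0.05, 1000)
systematic_err = 0.12                      # e.g. a miscalibrated instrument

measured = true_value + systematic_err + random_err
print(f"mean            : {measured.mean():.3f}")
print(f"random spread   : {measured.std():.3f} (clusters around the mean)")
print(f"systematic bias : {measured.mean() - true_value:+.3f} "
      f"(deviation of the mean from the true value)")
```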

Various methods have been proposed to measure the importance of inputs (Sarle, 1998) and are likely to be useful in different applications of neural nets. The two most common notions of importance are predictive importance and causal importance. Predictive importance is concerned with the increase in generalization error when an input is omitted from a network. Causal importance is concerned with situations in which an individual wants to quantify the relationship between input value manipulation and consequent output change. [Pg.153]
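
Predictive importance can be approximated without retraining by destroying the information in one input at a time, for example by permutation. The sketch below uses a linear least-squares fit as a stand-in for a trained network and measures the error increase on the fitting data; in practice one would use a held-out set so that the increase reflects generalization error.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: output depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 500)

# "Network": a linear least-squares fit standing in for a trained neural
# net; the importance measure itself is model-agnostic.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline = np.mean((X @ w - y) ** 2)

# Predictive importance: increase in error when an input carries no
# information -- approximated here by permuting that input's column.
for i in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    err = np.mean((Xp @ w - y) ** 2)
    print(f"input {i}: error rises from {baseline:.3f} to {err:.3f}")
```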

Sentence Structure: generally error-free syntax; effective sentence variety. [Pg.152]

Grammar-Usage-Mechanics: generally error-free grammar, usage, or mechanics. [Pg.152]


See other pages where Generalization error is mentioned: [Pg.197]    [Pg.198]    [Pg.359]    [Pg.407]    [Pg.551]    [Pg.364]    [Pg.169]    [Pg.169]    [Pg.169]    [Pg.171]    [Pg.181]    [Pg.181]    [Pg.182]    [Pg.182]    [Pg.191]    [Pg.192]    [Pg.194]    [Pg.195]    [Pg.199]    [Pg.122]    [Pg.38]    [Pg.475]    [Pg.186]    [Pg.39]    [Pg.56]    [Pg.94]    [Pg.163]    [Pg.166]    [Pg.182]    [Pg.377]   
See also in source: [Pg.326]




Error analysis general

General Error Analysis - Common to both Volumetric and Gravimetric

Generalization error, sources

Human error generally

Standard errors more generally
