
Self-prediction

Addictive choices and other losses of self-control will often follow stimuli that occasion appetites for them, somewhat as Loewenstein describes. However, these appetites are better seen as reward-dependent processes that are part of the recursive self-prediction that Darwin and many others have elucidated rather than as the transferred reflexes of two-factor theory. Individuals can remember reward values with great accuracy but avoid rehearsing experiences like panic or drug craving lest they be lured back into them. Thus these experiences are often unreportable in practice. Once aroused, these processes function as self-confirming prophecies and, therefore, may seem both explosive and coercive. [Pg.234]

Abbreviations used in table (see also caption to Table 5.2): HB - hydrogen bonding potential energy term; self-bb - prediction of side-chain conformation onto the template backbone of the modeled sequence; non-self - prediction of side-chain conformation onto a backbone borrowed from a homologous protein structure. [Pg.196]

Self-prediction. It may be tempting to use self-prediction to estimate the MSEP for a model. In self-prediction the whole calibration set is used to... [Pg.345]

So, if there is no alternative to self-prediction, it may be used with great care in an attempt to select the number of factors, but in general its use is dangerous and should be avoided. [Pg.347]
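As a concrete illustration, here is a minimal sketch of self-prediction: the same calibration set is used both to fit the model and to estimate the MSEP, which is why the estimate is optimistic. The use of numpy, scikit-learn's PLSRegression, and the data shapes are illustrative assumptions, not details from the source text.

```python
# Minimal sketch of self-prediction (assumed tooling: numpy + scikit-learn PLS).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def self_prediction_msep(X, y, n_factors):
    """Fit on the whole calibration set, then predict that same set."""
    model = PLSRegression(n_components=n_factors).fit(X, y)
    y_hat = model.predict(X).ravel()
    # Optimistic: every sample helped build the model that predicts it.
    return np.mean((y - y_hat) ** 2)

# Illustrative use on random data: the error keeps shrinking as factors are added,
# so this quantity cannot reliably indicate the optimum model size.
# rng = np.random.default_rng(0)
# X, y = rng.normal(size=(50, 200)), rng.normal(size=50)
# print([round(self_prediction_msep(X, y, k), 3) for k in range(1, 6)])
```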

Full cross-validation. Ideally, what is needed is a hybrid of self-prediction and training and test set validation. We need to use all observations in both model formation and validation without encountering the problems of self-prediction. Full cross-validation (Geisser, 1975; Stone, 1974) attempts to do just this. The term cross-validation is often applied to the partitioning form of training and test set validation, and it is from this that full cross-validation was developed (from here on we will use the term cross-validation to refer to full cross-validation). [Pg.348]

The problem of choosing a good partition still remains, but we now have a model based on all the observations and an MSEP that is also based on all the observations, with no self-prediction involved. [Pg.349]
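A hedged sketch of full cross-validation in its leave-one-out form: every observation contributes to model building and to the MSEP, but no observation is ever predicted by a model that was built with it. PLSRegression and the leave-one-out partitioning are illustrative choices, not taken from the source.

```python
# Sketch of full (leave-one-out) cross-validation for MSEP estimation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def cross_validation_msep(X, y, n_factors):
    residuals = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        # The left-out sample plays no part in building the model that predicts it.
        model = PLSRegression(n_components=n_factors).fit(X[train_idx], y[train_idx])
        residuals.append(y[test_idx] - model.predict(X[test_idx]).ravel())
    return np.mean(np.concatenate(residuals) ** 2)
```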

Fig. 9. A PRESS plot for a self-prediction validation of a training set of NIR diffuse reflectance spectra of 50 samples of wheat. Note that the PRESS value continues to decrease as new factors are added. There is no clear indication of the optimum number of factors for this model.
This method is an attempt to compromise between a full cross-validation (which is very slow but gives the best estimate of the model's performance when it is applied to unknown samples) and a self-prediction (which is very fast but gives limited information about the predictive ability of the model). [Pg.126]
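The excerpt does not name the compromise method, so the sketch below assumes a segmented (k-fold) cross-validation, a common middle ground between a full cross-validation and a self-prediction; the segment count and the use of PLSRegression are likewise assumptions.

```python
# Sketch of segmented (k-fold) cross-validation as a speed/rigour compromise.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def segmented_cv_msep(X, y, n_factors, n_segments=5):
    residuals = []
    folds = KFold(n_splits=n_segments, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X):
        # Each segment is held out once, so only n_segments models are fitted.
        model = PLSRegression(n_components=n_factors).fit(X[train_idx], y[train_idx])
        residuals.append(y[test_idx] - model.predict(X[test_idx]).ravel())
    return np.mean(np.concatenate(residuals) ** 2)
```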

Applying the F test to PRESS values from a self-prediction generally does not work. This is because the F test is primarily designed to find the statistically optimum number of factors for predicting samples that were not included when the model was built. In the self-prediction scheme, every sample is already included in the model, so the test gives no information on the performance of the model with true unknowns. This is one more reason why one of the other validation methods should be used to optimize the number of factors for the model. [Pg.131]
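For contrast, here is a hedged sketch of how an F test is commonly applied to cross-validated (not self-prediction) PRESS values, in the spirit of the Haaland and Thomas approach: pick the smallest number of factors whose PRESS is not significantly larger than the minimum PRESS. The 0.75 probability threshold and the degrees of freedom are conventions assumed here, not details from the source.

```python
# Sketch of an F test on cross-validated PRESS values (assumed conventions noted above).
import numpy as np
from scipy.stats import f as f_dist

def factors_by_f_test(press, n_samples, threshold=0.75):
    press = np.asarray(press, dtype=float)        # press[k-1] = PRESS for k factors
    k_min = int(np.argmin(press))                 # factor count with the smallest PRESS
    ratios = press / press[k_min]                 # F ratios against the minimum
    prob = f_dist.cdf(ratios, n_samples, n_samples)
    for k, p in enumerate(prob):
        if p < threshold:                         # first model not significantly worse
            return k + 1
    return k_min + 1
```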

There is one additional method for detecting outliers in discriminant analysis models: look at a plot of the predicted Mahalanobis distances (either from a cross-validation or from a self-prediction) to see whether any samples stand out (Fig. 13). [Pg.188]
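A brief sketch of computing Mahalanobis distances for such an outlier plot; working in a score space (for example PCA or PLS scores) and pooling all samples into a single covariance estimate are assumptions made for illustration, not the source's exact procedure.

```python
# Sketch: Mahalanobis distance of each sample from the centroid of its score space.
import numpy as np

def mahalanobis_distances(scores):
    centered = scores - scores.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(centered, rowvar=False))
    # Quadratic form x^T S^-1 x evaluated per sample.
    return np.sqrt(np.einsum("ij,jk,ik->i", centered, cov_inv, centered))

# d = mahalanobis_distances(model_scores)  # unusually large d values stand out in the plot
```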

The accuracy of the consensus prediction regularities obtained by these three strategies is evaluated according to four indicators of the recognition and prediction abilities of the integral decision rule, i.e., the results of self-prediction, leave-one-out cross-validation, split-half cross-validation, and double leave-one-out cross-validation. [Pg.390]

Self-Prediction. The activity of each one of the N compounds in the training set is calculated without any changes in the QL matrix or recalculation of the decision rules. [Pg.390]
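A hedged illustration of the difference between self-prediction (recognition of the training compounds) and leave-one-out prediction; the nearest-centroid classifier below merely stands in for the decision rules of the text and is purely an assumption.

```python
# Sketch: recognition (self-prediction) versus leave-one-out accuracy for a classifier.
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import LeaveOneOut, cross_val_score

def recognition_and_prediction(X, labels):
    clf = NearestCentroid()
    # Self-prediction: the rules are built once and applied back to the training set.
    recognition = clf.fit(X, labels).score(X, labels)
    # Leave-one-out: each compound is predicted by rules built without it.
    prediction = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
    return recognition, prediction
```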

A summary of the adequacy of the decision rules in predicting the expressed activity of structurally similar compounds is shown in Table 12.7. In this case, the maximum values of the accuracy indicators in all strategies reach 100% only for the self-prediction model. In the leave-one-out and split-half cross-validations, the maximum values of the three indicators were 91%, 100%, and 96%, respectively.

