Big Chemical Encyclopedia


Nonclassification, Supervised Learning Problems

With two exceptions, nonclassification supervised learning problems comprise all supervised learning problems with continuous-valued outputs. The exceptions are heteroassociative and autoassociative binary-output problems such as mapping, data compression, and dimension reduction. [Pg.118]

We consider the continuous-valued output case first. PMs for these problems are usually a function of a residual or error term (O_i − D_i), where O_i is the observed value of an output PE for the ith case and D_i is the desired value of the same output PE for the ith case. Observed values are actual output PE values generated by the ANN. Desired values are correct values (i.e., values the ANN is trying to learn) and are sometimes called target values. Be warned that different authors use the same terminology to mean different things; some use observed and desired with the opposite meanings. [Pg.118]

Common PMs include (1) the average (or mean) absolute error (or deviation), Σ|O_i − D_i|/N, where the sum is over i and N is the number of cases; (2) the average (or mean) squared error (sometimes called PRESS or SEC), Σ(O_i − D_i)²/N; (3) the root-mean-square error (RMSE), which most authors take as [Σ(O_i − D_i)²/N]^(1/2) but which others take as [Σ(O_i − D_i)²]^(1/2)/N; and (4) the Pearson product-moment correlation coefficient, or simply the correlation coefficient. This coefficient is defined as follows:

r = Σ(O_i − Ō)(D_i − D̄) / [Σ(O_i − Ō)² Σ(D_i − D̄)²]^(1/2)

where Ō and D̄ are the means of the observed and desired values, respectively. [Pg.119]
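The four PMs listed above can be sketched in a few lines of code. This is an illustrative implementation, not from the source; the function name `performance_measures` and the example values are assumptions.

```python
import math

def performance_measures(observed, desired):
    """Compute the four common PMs from the text for paired
    observed (O_i) and desired (D_i) output-PE values."""
    n = len(observed)
    residuals = [o - d for o, d in zip(observed, desired)]
    mae = sum(abs(r) for r in residuals) / n   # (1) mean absolute error
    mse = sum(r * r for r in residuals) / n    # (2) mean squared error (PRESS/SEC)
    rmse = math.sqrt(mse)                      # (3) RMSE, first convention
    # (4) Pearson product-moment correlation coefficient
    o_bar = sum(observed) / n
    d_bar = sum(desired) / n
    cov = sum((o - o_bar) * (d - d_bar) for o, d in zip(observed, desired))
    var_o = sum((o - o_bar) ** 2 for o in observed)
    var_d = sum((d - d_bar) ** 2 for d in desired)
    r = cov / math.sqrt(var_o * var_d)
    return mae, mse, rmse, r

# Usage with made-up observed/desired values:
mae, mse, rmse, r = performance_measures([1.1, 1.9, 3.2], [1.0, 2.0, 3.0])
```

Note that the second RMSE convention mentioned in the text, [Σ(O_i − D_i)²]^(1/2)/N, divides by N after taking the square root and so gives a smaller value for N > 1.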

A PM that is often used in jackknife calculations is the cross-validated correlation coefficient, denoted typically by r_cv or q. For one output PE, it is given by

q² = 1 − Σ(O_i − D_i)² / Σ(D_i − D̄)²

where each O_i is the prediction for a case held out of training. [Pg.119]
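A minimal sketch of the jackknife (leave-one-out) calculation follows. The function names are hypothetical, and the "model" here is a trivial stand-in (predict the mean of the remaining N − 1 cases) rather than a retrained ANN, which the source would use.

```python
def q_squared(predictions, desired):
    """Cross-validated correlation coefficient q^2 = 1 - PRESS/SS,
    where predictions are leave-one-out predictions for each case."""
    d_bar = sum(desired) / len(desired)
    press = sum((p - d) ** 2 for p, d in zip(predictions, desired))
    ss = sum((d - d_bar) ** 2 for d in desired)
    return 1.0 - press / ss

def jackknife_predictions(ys):
    """Leave-one-out predictions from a trivial 'model' that predicts
    the mean of the remaining N-1 cases (stand-in for retraining an ANN
    on each reduced data set)."""
    n = len(ys)
    total = sum(ys)
    return [(total - y) / (n - 1) for y in ys]

# Usage: q^2 can be negative when held-out predictions are worse
# than simply predicting the mean of the desired values.
preds = jackknife_predictions([1.0, 2.0, 3.0, 4.0])
q2 = q_squared(preds, [1.0, 2.0, 3.0, 4.0])
```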

Finally, a few brief comments regarding the exceptional heteroassociative and autoassociative binary-output problems. Although these are not classification problems, we feel that classification PMs are appropriate here because the outputs are either zero or one. In most cases you should probably apply these PMs globally if you are interested in the compression, reduction, or mapping of an entire data set. On occasion, however, a few of the output PEs may not perform well, degrading the quality of the compression, and so on; PMs applied to individual output PEs may be helpful in such cases. [Pg.120]
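The global-versus-per-PE distinction above can be illustrated with a simple accuracy measure on thresholded binary outputs. This is a sketch under assumed conventions (function name, 0.5 threshold, row-per-case layout), not the source's procedure.

```python
def per_pe_accuracy(observed, desired, threshold=0.5):
    """Classification-style accuracy for binary outputs, reported
    per output PE (column) and globally over the whole data set.
    observed/desired are lists of cases, each a list of PE values."""
    n_cases = len(observed)
    n_pes = len(observed[0])
    per_pe = []
    for j in range(n_pes):
        # A hit is when the thresholded observed value matches the
        # desired binary value for this PE.
        hits = sum(1 for i in range(n_cases)
                   if (observed[i][j] >= threshold) == (desired[i][j] >= 0.5))
        per_pe.append(hits / n_cases)
    global_acc = sum(per_pe) / n_pes  # overall fraction of correct bits
    return per_pe, global_acc

# Usage: the second output PE performs poorly (0.6 vs desired 0),
# which the per-PE view exposes but the global figure averages away.
per_pe, overall = per_pe_accuracy([[0.9, 0.6], [0.2, 0.8]],
                                  [[1, 0], [0, 1]])
```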





© 2024 chempedia.info