Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Monotonic learning

Some concepts are learnable only via non-monotonic learning [Jantke 91]. Other concepts are learnable only via inconsistent learning [Lange and Wiehagen 91]. A host of learnability results inserts the corresponding refined identification types into the partially ordered set of identification types [Jantke 89, 91]. [Pg.38]

The relative error recorded on successive items within a seven-item sequence-learning task by five-year-olds, for an arbitrary colour task (depicted by triangles) and the two monotonic size tasks (depicted by circles). [Pg.267]

One other network that has been used with supervised learning is the radial basis function (RBF) network. Radial functions are relatively simple in form and, by definition, must increase (or decrease) monotonically with distance from a certain reference point; Gaussian functions are one example. In an RBF network, the inputs are fed to a layer of RBFs, whose outputs are in turn weighted to produce the output of the network. If the RBFs are allowed to move or to change size, or if there is more than one hidden layer, then the RBF network is non-linear. An RBF network is shown schematically for the case of n inputs and m basis functions in Fig. 3. The generalized regression neural network, a special case of the RBF network, has been used notably in understanding in vitro-in vivo correlations. [Pg.2401]
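As a minimal sketch of the idea described above (not the specific networks of the excerpt), an RBF layer with fixed Gaussian centers reduces training to a linear least-squares fit of the output weights; the function names, the width value, and the toy sine target are all illustrative assumptions:

```python
import numpy as np

def gaussian_rbf(X, centers, width):
    """Gaussian radial basis: each output depends only on the
    distance between an input point and one of the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf_network(X, y, centers, width):
    """With centers and widths held fixed, the output weights are
    found by ordinary linear least squares."""
    Phi = gaussian_rbf(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, width, w):
    return gaussian_rbf(X, centers, width) @ w

# Toy usage: approximate sin(x) on [0, 2*pi] with m = 10 basis functions
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(0, 2 * np.pi, 10)[:, None]
w = fit_rbf_network(X, y, centers, width=0.8)
err = np.max(np.abs(predict(X, centers, width=0.8, w=w) - y))
```

Letting the centers or widths adapt during training, as the excerpt notes, would make the model non-linear in its parameters and require iterative optimization instead of a single least-squares solve.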

One can learn about the structure of a system by following its dynamics. The orientational relaxation dynamics of water confined between mica surfaces has been investigated by MD simulations. Wide heterogeneity in the dynamics of water adjacent to the strongly hydrophilic mica surface has been observed. Analysis of the survival probabilities reveals a 10-fold increase in the survival times for water that is directly in contact with the mica surface and a non-monotonic variation in the survival times moving away from the mica surface to the bulk-like... [Pg.206]
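A hedged sketch of how a continuous survival probability of the kind analyzed above can be computed: the input layout (a boolean frame-by-molecule occupancy array for one hydration layer) and the function name are assumptions for illustration, not the analysis code used in the study:

```python
import numpy as np

def survival_probability(in_layer, max_lag):
    """Continuous survival probability S(t) from a boolean array
    in_layer[frame, molecule]: the probability that a molecule found
    in the layer at some time origin is still there, without ever
    having left, t frames later (averaged over all origins)."""
    n_frames, _ = in_layer.shape
    S = np.zeros(max_lag + 1)
    counts = np.zeros(max_lag + 1)
    for t0 in range(n_frames):
        present = in_layer[t0]
        stayed = present.copy()
        for lag in range(min(max_lag, n_frames - 1 - t0) + 1):
            stayed &= in_layer[t0 + lag]   # must remain continuously
            S[lag] += stayed.sum()
            counts[lag] += present.sum()
    return S / np.maximum(counts, 1)

# Toy check: molecules that never leave give S(t) = 1 at every lag
occ = np.ones((20, 3), dtype=bool)
S = survival_probability(occ, max_lag=5)
```

Fitting the decay of S(t) for successive layers away from the surface is one way the layer-dependent survival times contrasted in the excerpt can be extracted.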

Figure 8.30 shows the arithmetic means of the misclassification rates for the learning set and the test set for various Z, as a function of the number of descriptors. For the learning set, the MCE decreases monotonically with increasing number of descriptors and with increasing Z. The latter is also true for the test set. With respect to the number of... [Pg.348]
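The MCE plotted in the figure is simply the fraction of wrongly classified samples; a minimal sketch (the function name is an assumption, and the labels below are toy data, not values from Figure 8.30):

```python
import numpy as np

def misclassification_rate(y_true, y_pred):
    """Fraction of samples assigned to the wrong class (MCE)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

# Toy usage: 1 of 5 predictions wrong
mce = misclassification_rate([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])
```

Computed separately on the learning set and the test set, a monotonically falling learning-set MCE alongside a stagnating test-set MCE is the usual signature of overfitting as descriptors are added.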

The inequality (2.17) shows that the actual risk of the learning machine consists of two parts: the first term on the right-hand side of the inequality is the empirical risk (corresponding to the training errors), and the second term is called the "VC confidence", which depends on the VC dimension h of the set of functions and on the number of sample points. The VC confidence is a monotonically increasing function of h, for any number of sample points. [Pg.33]
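For reference, the standard Vapnik form that a bound like (2.17) usually takes (the exact constants may differ in the text): with probability at least $1-\eta$, for $\ell$ sample points and a function class of VC dimension $h$,

```latex
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}
```

The second term grows with $h$ at fixed $\ell$, which is the monotonic dependence on the VC dimension noted above.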







© 2024 chempedia.info