
Training vector

A set of training vectors was assembled from this data. Let n_s be the number of training samples. Each input vector v_c,i, with i ∈ {1, ..., n_s}, corresponded to a horizontal line from the illuminated Mondrian. Since they worked with simulated data, the corresponding reflectances v_r,i were known. Let n_x be the width of the Mondrian image; each vector v_c,i and v_r,i therefore contained n_x data samples. The training samples were collected into two matrices, C and R, each of size n_s x n_x. They assumed that the transform from input to output, i.e. from measured intensities to reflectances, is linear. [Pg.193]
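Under that linearity assumption, the transform relating the two matrices can be estimated by ordinary least squares. The following is a minimal numpy sketch; the sizes, variable names, and synthetic data are our own illustration, not taken from the source:

```python
import numpy as np

# Minimal sketch (not the authors' code): learn a linear map from
# measured intensities C to known reflectances R by least squares.
# Shapes follow the text: both matrices are n_s x n_x.
rng = np.random.default_rng(0)
n_s, n_x = 200, 64                      # hypothetical sizes
C = rng.uniform(size=(n_s, n_x))        # stand-in for measured intensities
W_true = rng.normal(size=(n_x, n_x))    # hypothetical ground-truth transform
R = C @ W_true                          # simulated reflectances

# Solve R ≈ C W for W in the least-squares sense.
W, *_ = np.linalg.lstsq(C, R, rcond=None)
print(np.allclose(W, W_true, atol=1e-8))  # True on this noiseless data
```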

There are two learning paradigms that determine how a network relates to its environment. In supervised learning (learning with a teacher), a teacher provides output targets for each input pattern and corrects the network's errors explicitly. The teacher has knowledge of the environment (in the form of a historical set of input-output data), so the neural network is provided with the desired response whenever a training vector is available. The... [Pg.62]
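The explicit error correction described here can be made concrete with a single linear neuron trained by the delta rule; this is a generic illustration of the supervised paradigm, not the network discussed in the source:

```python
import numpy as np

# Illustrative sketch of supervised learning: for each training vector
# the "teacher" supplies a target, and the error corrects the weights.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 3))      # input patterns
targets = X @ np.array([0.5, -2.0, 1.0])   # teacher's desired responses

w = np.zeros(3)
lr = 0.1
for epoch in range(50):
    for x, t in zip(X, targets):
        y = w @ x              # network output for this training vector
        w += lr * (t - y) * x  # explicit correction driven by the teacher's target
print(w)                       # approaches [0.5, -2.0, 1.0]
```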

The Shuttle dataset from NASA contains 9 numerical attributes, 43,500 training vectors and 14,500 test vectors. There are 6 classes. FSM initialization gives 7 network nodes and 88% accuracy. Increasing the accuracy on the training set to 94%, 96% and 98% requires a total of 15, 18 and 25 network nodes, respectively... [Pg.338]

Only those training vectors that lie at the class boundaries or are margin errors have nonzero Lagrange multipliers. These prototypes, which determine the construction of the decision function, are termed support vectors. [Pg.199]
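This can be checked directly with any soft-margin SVM implementation; the sketch below uses scikit-learn (our choice of library, not one named in the source) on synthetic data to inspect which training vectors end up as support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Two Gaussian blobs; most training vectors lie far from the boundary.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(+2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only boundary points and margin errors receive nonzero multipliers,
# so the support vectors are a small subset of the 200 training vectors.
print(len(clf.support_))           # indices of the support vectors
print(clf.support_vectors_.shape)  # their coordinates
print(clf.dual_coef_)              # signed multipliers alpha_i * y_i
```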

We have made some assumptions about how our example network functions. Many types of ANN operate as we have assumed, but some do not, and we now indicate these differences. The ANNs just described are heteroassociative because the desired outputs differ from the inputs. When the desired outputs are the same as the inputs for all the training vectors, the network is autoassociative. This circumstance naturally requires that the number of input PEs equal the number of output PEs. Some types of network, for example backpropagation, may be configured as either hetero- or autoassociative, whereas other types must be heteroassociative, and still others must be autoassociative. [Pg.62]
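The autoassociative case is simply the supervised setup with the inputs reused as targets. A minimal sketch of a linear autoencoder (the architecture, sizes, and learning rate are our own illustration) makes the equal-width constraint visible:

```python
import numpy as np

# Autoassociative training: the targets are the inputs themselves, so
# the output layer must have as many PEs as the input layer (4 here).
rng = np.random.default_rng(3)
X = rng.uniform(size=(50, 4))   # training vectors

n_hidden = 2
W1 = rng.normal(scale=0.1, size=(4, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 4))

lr = 0.05
for epoch in range(2000):
    H = X @ W1                  # linear hidden layer (a bottleneck)
    Y = H @ W2                  # reconstruction, same width as the input
    err = Y - X
    gW2 = H.T @ err / len(X)    # mean-squared-error gradients
    gW1 = X.T @ (err @ W2.T) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1
print(np.mean(err ** 2))        # reconstruction error falls with training
```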

Average quantization error q between the training vectors and their best-matching units (BMUs) on the map; it is used to measure the data representation accuracy. [Pg.897]
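Concretely, the average quantization error is the mean distance from each training vector to its BMU. A minimal sketch, with variable names and map size of our own choosing:

```python
import numpy as np

def avg_quantization_error(X, codebook):
    """Mean Euclidean distance from each training vector to its BMU."""
    # dists[i, j] = ||x_i - m_j||
    dists = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

rng = np.random.default_rng(4)
X = rng.uniform(size=(500, 3))        # training vectors
codebook = rng.uniform(size=(25, 3))  # code vectors of a hypothetical 5x5 map
print(avg_quantization_error(X, codebook))
```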

Let x(t) be the training vector presented at iteration t, {m_j} the set of code vectors, and m_c(t) the code vector nearest to x(t). Vector m_c is obtained from the equation ... [Pg.109]
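In the standard self-organizing map formulation, the truncated equation is the best-matching-unit condition ||x(t) - m_c(t)|| = min_j ||x(t) - m_j(t)||, usually followed by the neighborhood update m_j(t+1) = m_j(t) + alpha(t) h_cj(t) [x(t) - m_j(t)]. A minimal sketch of one iteration, with learning rate and neighborhood width chosen purely for illustration:

```python
import numpy as np

def som_step(x, codebook, lr=0.1, sigma=1.0):
    """One SOM iteration: find the BMU m_c, then pull neighbors toward x."""
    # BMU condition: ||x - m_c|| = min_j ||x - m_j||
    c = np.argmin(np.linalg.norm(codebook - x, axis=1))
    # Gaussian neighborhood over code-vector indices (1-D map assumed here)
    j = np.arange(len(codebook))
    h = np.exp(-((j - c) ** 2) / (2 * sigma ** 2))
    codebook += lr * h[:, None] * (x - codebook)
    return c

rng = np.random.default_rng(5)
codebook = rng.uniform(size=(10, 3))  # 10 code vectors on a 1-D map
for t in range(1000):
    x = rng.uniform(size=3)           # training vector x(t)
    som_step(x, codebook)
```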

Note that in (2.65) w is described as a linear combination of the training vectors. In a sense, the complexity of a function's representation by SVs is independent of the dimensionality of the input space X and depends only on the number of SVs. [Pg.48]
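For a linear kernel this expansion, w = sum_i alpha_i y_i x_i taken over the support vectors, can be verified numerically. The sketch below uses scikit-learn on synthetic data and is our own illustration, not the book's equation (2.65) verbatim:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(+2, 1, (100, 2))])
y = np.array([-1] * 100 + [+1] * 100)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_ holds alpha_i * y_i for the support vectors only, so the
# expansion of w runs over the SVs rather than all training vectors.
w = clf.dual_coef_ @ clf.support_vectors_
print(np.allclose(w, clf.coef_))  # True: w is a linear combination of the SVs
```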

