Big Chemical Encyclopedia


Learning, neuron

Minichiello, L., Korte, M., Wolfer, D., et al. Essential role for TrkB receptors in hippocampus-mediated learning. Neuron 1999; 24: 401-414. [Pg.433]

Walker MP, Brakefield T, Morgan A, Hobson JA, Stickgold R. Practice with sleep makes perfect: sleep-dependent motor skill learning. Neuron 2002; 35: 205-211. [Pg.332]

Neuronal networks are nowadays predominantly applied in classification tasks. Here, three kinds of networks are tested. First, the backpropagation network is used, because it is the most robust and common network. The other two networks considered in this study have architectures specially adapted for classification tasks. The Learning Vector Quantization (LVQ) network consists of a neuronal structure that represents the LVQ learning strategy. The Fuzzy Adaptive Resonance Theory (Fuzzy-ART) network is a sophisticated network with a very complex structure but high performance on classification tasks. Overviews of this extensive subject are given in [2] and [6]. [Pg.463]

Just like humans, ANNs learn from examples. The examples are delivered as input data. The learning process of an ANN is called training. In the human brain, the synaptic connections, and thus the connections between the neurons, are modified during learning; in an ANN, this corresponds to adapting the weights on the basis of training data. [Pg.454]

In unsupervised learning, the network tries to group the input data on the basis of similarities between these data. Those data points which are similar to each other are allocated to the same neuron or to closely adjacent neurons. [Pg.455]
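
As an illustration of this grouping behaviour, the sketch below implements a minimal one-dimensional self-organizing (Kohonen-style) map in Python; the data, network size and learning parameters are invented for the example and are not taken from the cited text.

```python
# Minimal sketch of unsupervised, winner-take-all grouping (illustrative only).
import numpy as np

def train_som_1d(data, n_neurons=5, lr=0.5, epochs=100, seed=0):
    """Map each input vector to the most similar neuron and pull that neuron
    (and its neighbours) towards the input."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_neurons, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            # winner = neuron whose weight vector is closest to the input
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            for j in range(n_neurons):
                # neighbourhood function: neurons close to the winner are updated more strongly
                h = np.exp(-((j - winner) ** 2) / 2.0)
                weights[j] += lr * h * (x - weights[j])
        lr *= 0.99  # slowly reduce the learning rate
    return weights

# Similar inputs end up allocated to the same or closely adjacent neurons:
data = np.array([[0.1, 0.2], [0.15, 0.22], [0.9, 0.8], [0.85, 0.75]])
w = train_som_1d(data)
print([int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data])
```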

A counter-propagation network is a method for supervised learning which can be used for prediction. It has a two-layer architecture where each neuron in the upper layer, the Kohonen layer, has a corresponding neuron in the lower layer, the output layer (see Figure 9-21). A trained counter-propagation network can be used as a look-up table: a neuron in one layer is used as a pointer to the other layer. [Pg.459]
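
The look-up-table behaviour can be sketched as follows; the weight matrices below are toy values chosen for illustration (assumptions, not the trained network from the text). The winning Kohonen neuron simply indexes the corresponding output-layer neuron.

```python
# Minimal sketch of the look-up step of a trained counter-propagation network.
import numpy as np

def cpn_predict(x, kohonen_weights, output_weights):
    """Find the Kohonen neuron closest to x and return the output vector
    stored in the corresponding output-layer neuron."""
    winner = np.argmin(np.linalg.norm(kohonen_weights - x, axis=1))
    return output_weights[winner]

# Toy example: 3 Kohonen neurons, 2-dimensional inputs, 1-dimensional outputs.
kohonen_weights = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
output_weights = np.array([[10.0], [20.0], [30.0]])
print(cpn_predict(np.array([0.45, 0.55]), kohonen_weights, output_weights))  # -> [20.]
```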

Neural networks model the functionality of the brain. They learn from examples, whereby the weights of the neurons are adapted on the basis of training data. [Pg.481]

Vinpocetine (2), another drug initially categorized as a cerebral vasodilator, is a member of the vinca alkaloid family of agents (7). However, interest in this compound as a potential drug for learning and memory deficits comes from its ability to act as a neuronal protectant. This compound was evaluated in 15 patients with AD over a one-year period and was ineffective in improving cognitive deficits or slowing the rate of decline (8). However, in studies of patients with chronic vascular senile cerebral dysfunction (9) and organic psychosyndrome (10), vinpocetine showed beneficial results. [Pg.93]

The human brain comprises many millions of interconnected units, known individually as biological neurons. Each neuron consists of a cell to which are attached several dendrites (inputs) and a single axon (output). The axon connects to many other neurons via connection points called synapses. A synapse produces a chemical reaction in response to an input. The biological neuron fires if the sum of the synaptic reactions is sufficiently large. The brain is a complex network of sensory and motor neurons that provide a human being with the capacity to remember, think, learn and reason. [Pg.347]
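
A minimal sketch of the firing rule just described, assuming illustrative weights and a threshold of 1.0 (values not taken from the text):

```python
# The neuron fires when the weighted sum of its synaptic inputs exceeds a threshold.
def neuron_fires(inputs, synaptic_weights, threshold=1.0):
    total = sum(x * w for x, w in zip(inputs, synaptic_weights))
    return total >= threshold

print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5]))  # True:  0.6 + 0.5 >= 1.0
print(neuron_fires([0, 1, 0], [0.6, 0.9, 0.5]))  # False: 0.9 < 1.0
```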

Artificial Neural Networks (ANNs) attempt to emulate their biological counterparts. McCulloch and Pitts (1943) proposed a simple model of a neuron, and Hebb (1949) described a technique which became known as Hebbian learning. Rosenblatt (1961) devised a single layer of neurons, called a Perceptron, that was used for optical pattern recognition. [Pg.347]
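
As a hedged sketch of the perceptron idea, the following trains a single layer of neurons with the classic error-correction rule on the (linearly separable) logical AND problem; the task, learning rate and epoch count are assumptions chosen for illustration, not details from the cited works.

```python
# Sketch of a single-layer perceptron trained with the classic perceptron rule.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # weights move only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # logical AND, a linearly separable pattern
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```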

The selected network had a 3-6-6-6-3 structure, i.e. input and output layers comprising 3 neurons each, separated by three hidden layers of 6 neurons. During learning, 4 million epochs were trained. The learning rate and momentum were initially set at 0.3 and 0.8, but were reduced in three steps to final values of 0.05 and 0.4 respectively. [Pg.360]
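
One possible way to express the stepped schedule described above is sketched below; the excerpt does not state at which epochs the reductions occur or what the intermediate values are, so the step boundaries and the two intermediate (learning rate, momentum) pairs are assumptions.

```python
# Sketch of a stepped learning-rate / momentum schedule: starts at 0.3 / 0.8 and
# ends, after three reductions, at 0.05 / 0.4. Intermediate values are assumed.
def schedule(epoch, total_epochs=4_000_000):
    steps = [(0.30, 0.80), (0.20, 0.65), (0.10, 0.50), (0.05, 0.40)]
    idx = min(int(4 * epoch / total_epochs), 3)
    return steps[idx]  # (learning_rate, momentum)

print(schedule(0))          # (0.3, 0.8)
print(schedule(3_999_999))  # (0.05, 0.4)
```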

The general principle behind most commonly used back-propagation learning methods is the delta rule, by which an objective function involving the squares of the output errors from the network is minimized. The delta rule requires that the sigmoidal function used at each neuron be continuously differentiable. This method identifies an error associated with each neuron for each iteration involving a cause-effect pattern. Therefore, the error for each neuron in the output layer can be represented as ... [Pg.7]
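
The excerpt's own equation is not reproduced above; for reference, the conventional textbook form of the delta-rule error for an output neuron with sigmoidal activation is

$$\delta_j = \big(t_j - o_j\big)\, o_j \big(1 - o_j\big),$$

where $t_j$ is the desired (target) output and $o_j$ the actual output of neuron $j$; the corresponding weight update is $\Delta w_{ij} = \eta\, \delta_j\, o_i$ with learning rate $\eta$.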

We mentioned above that a typical problem for a Boltzmann Machine is to obtain a set of weights such that the states of the visible neurons take on some desired probability distribution. For example, the task may be to teach the net to learn that the first component of an N-component input vector has value +1 40% of the time. To accomplish this, a Boltzmann Machine uses the familiar gradient-descent technique, but not on the energy of the net; instead, it maximizes the relative entropy of the system. [Pg.534]
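
For context, the standard formulation (not shown in the excerpt) measures the relative entropy between the desired distribution $P^{+}(v)$ of visible states and the distribution $P^{-}(v)$ produced by the freely running net,

$$G = \sum_{v} P^{+}(v)\,\ln\frac{P^{+}(v)}{P^{-}(v)},$$

and adjusts each weight in proportion to the difference between the clamped and free co-activation statistics, $\Delta w_{ij} = \eta\,\big(p^{+}_{ij} - p^{-}_{ij}\big)$.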

Pseudo-Code Implementation: The Boltzmann Machine Learning Algorithm proceeds in two phases: (1) a positive, or learning, phase and (2) a negative, or unlearning, phase. It is summarized below in pseudo-code. It is assumed that the visible neurons are further subdivided into input and output sets as shown schematically in figure 10.8. [Pg.535]
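
The book's own pseudo-code is not reproduced in this excerpt; the outline below is a sketch of the standard two-phase procedure, written as Python-style pseudo-code in which every name (net, clamp_visible, anneal_to_equilibrium, and so on) is a hypothetical placeholder, not an actual API.

```python
# Sketch of one two-phase Boltzmann Machine learning step (placeholder methods).
def boltzmann_learning_step(net, training_pairs, eta):
    # --- positive ("learning") phase: clamp the visible neurons to the data ---
    for inputs, outputs in training_pairs:
        net.clamp_visible(inputs, outputs)
        net.anneal_to_equilibrium()        # let the hidden neurons settle stochastically
        net.accumulate_coactivations("+")  # estimate p+_ij over the training set

    # --- negative ("unlearning") phase: let the network run freely ---
    net.unclamp_visible()
    net.anneal_to_equilibrium()
    net.accumulate_coactivations("-")      # estimate p-_ij

    # --- weight update: reduce the gap between clamped and free statistics ---
    for (i, j) in net.connections():
        net.w[i, j] += eta * (net.p_plus[i, j] - net.p_minus[i, j])
```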

Just as was the case with simple perceptrons, the multi-layer perceptron's fundamental problem is to learn to associate given inputs with desired outputs. The input layer consists of as many neurons as are necessary to set up some natural correspondence between the input neurons and the input-fact set. [Pg.540]

The output layer likewise consists of as many neurons as are necessary to set up a natural correspondence between the output neurons and the output-fact set. Using the same example of learning the alphabet, the output space might consist of 26 neurons, one for each letter of the alphabet. A perfect association between input and output facts would be to have - for each input letter - the value of the output neuron corresponding to that letter equal to one and all other output neurons equal to zero. [Pg.541]
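
A small sketch of this one-neuron-per-letter output coding (illustrative code, not from the text):

```python
# 26 output neurons; a perfect association sets the neuron for the presented
# letter to one and all other output neurons to zero.
import string

def one_hot_letter(letter):
    target = [0.0] * 26
    target[string.ascii_uppercase.index(letter.upper())] = 1.0
    return target

print(one_hot_letter("C"))  # 1.0 in position 2, zeros elsewhere
```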

E_f, as long as it is differentiable and is minimised by O_f = S_f. One interesting form that has a natural interpretation in terms of learning the probabilities of a set of hypotheses represented by the output neurons has recently been suggested by Hopfield [hopf87] and Baum and Wilczek [baum88b] ...
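
The equation itself is cut off in this excerpt; the form usually associated with these references is the cross-entropy (relative-entropy) error

$$E = -\sum_{f}\sum_{j}\Big[\, s_j^{f}\,\ln o_j^{f} + \big(1 - s_j^{f}\big)\,\ln\big(1 - o_j^{f}\big) \Big],$$

where $s_j^{f}$ is the desired and $o_j^{f}$ the actual value of output neuron $j$ for fact $f$. This is supplied here for context and may differ in notation from the original.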

In our discussion of Hopfield nets in section 10.6, we found that the maximal number of patterns that can be stored before their stability is impaired is some linear function of the size of the net, N_max = aN, where 0 < a < 1 and N is the number of neurons in the net (see sections 10.6.6 and 10.7). A similar question can of course be asked of perceptrons: how many input-output fact pairs can a perceptron of given size learn? [Pg.550]
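
For a sense of scale (using the standard estimate from the Hopfield-net literature, not a value given in this excerpt): with $a \approx 0.138$, a net of $N = 100$ neurons can reliably store only about $N_{\max} \approx 0.138 \times 100 \approx 14$ patterns before retrieval degrades.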

