
The Mathematical Basis of Backpropagation

The mathematical interlude that follows is a justification of the formulae in Box 1. If you are interested only in using neural networks, and not in the background mathematics, you may want to skip this section.

The gradient of the error with respect to the weights can be expanded using the chain rule:
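The equation itself did not survive extraction. In standard backpropagation notation (an assumption here, since the Box 1 formulae are not reproduced: $E$ is the network error, $w_{ij}$ the weight from node $j$ to node $i$, $\mathrm{net}_i = \sum_k w_{ik} y_k$ the net input to node $i$, and $y_j$ the activity of node $j$), the expansion reads:

$$
\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial \mathrm{net}_i}\,\frac{\partial \mathrm{net}_i}{\partial w_{ij}}
$$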

The first term is the error in unit i, while the second can be written as:
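This equation is also missing from the extract; with $\mathrm{net}_i = \sum_k w_{ik} y_k$ as assumed above, the standard form of the second factor is simply the activity of the sending node:

$$
\frac{\partial \mathrm{net}_i}{\partial w_{ij}} = \frac{\partial}{\partial w_{ij}} \sum_k w_{ik}\, y_k = y_j
$$

Writing $\delta_i = \partial E / \partial \mathrm{net}_i$ for the error in unit $i$, the weight gradient is then $\partial E / \partial w_{ij} = \delta_i\, y_j$.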

To find this term, we need to calculate the activity and error for all relevant network nodes. For input nodes, the activity is merely the input signal x. For all other nodes, the activity is propagated forward:
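The forward-propagation formula is likewise missing; in the same assumed notation, with $f$ the node's activation function, the standard form is:

$$
y_i = f(\mathrm{net}_i) = f\!\Bigl(\sum_j w_{ij}\, y_j\Bigr)
$$

where the sum runs over all nodes $j$ feeding into node $i$.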

Since the activity of unit i depends on the activity of all nodes closer to the input, we need to work through the layers one at a time, from input to output. As feedforward networks contain no loops that feed the output of one node back to a node earlier in the network, there is no ambiguity in doing this.
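As an illustration of this layer-by-layer pass, here is a minimal Python sketch (not from the source; the function names, the matrix layout of weights, and the choice of a sigmoid activation are all assumptions):

```python
import numpy as np

def sigmoid(net):
    # A common choice for the activation function f; the source does not name one.
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, weights):
    # Propagate activity from the input layer to the output layer.
    # weights[k] holds the weights w_ij between layer k and layer k+1,
    # with shape (nodes in layer k+1, nodes in layer k).
    activity = x                   # for input nodes, the activity is just the input signal x
    for W in weights:              # one step per layer, strictly from input to output
        net = W @ activity         # net_i = sum_j w_ij * y_j
        activity = sigmoid(net)    # y_i = f(net_i), propagated forward
    return activity

# Usage: a small 3-4-2 network with random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
print(forward(np.array([0.5, -1.0, 2.0]), weights))
```

Because the loop visits the weight matrices in order, each layer's activities are fully determined before the next layer uses them, which is exactly why the absence of loops in a feedforward network removes any ambiguity.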

