A method of training an artificial neural network.

  Most often used to describe the method of training a multilayer perceptron (MLP) developed by Paul Werbos in his 1974 Harvard PhD thesis. It remains the standard training technique in the overwhelming majority of modern MLP implementations.

  More specifically, backpropagation is a method of assigning incremental adjustments to the synapse weights in the neural net. When a test input is fed to the network and the resulting output does not match the expected output, an error vector (or possibly a tensor, for unusual network topologies) is formed. Minsky and Papert saw the problem of apportioning this error across the synapse weights in a way that encourages rapid learning as a major barrier to the usefulness of the MLP model. The technique treats the error as a surface over the multidimensional weight space and adjusts each weight by gradient descent to minimize that error. By summing cascades of partial derivatives (repeated application of the chain rule), an expression for the weight update at each layer can be derived. Written out in full, these expressions grow rapidly with distance from the output layer; backpropagation keeps the calculation tractable by computing an error term for each layer from the one above it and reusing it, so the cost of a full update pass grows only linearly with the number of layers.
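
  As a concrete illustration, the sketch below trains a tiny two-layer MLP on the XOR problem, the classic task that a single-layer perceptron cannot learn. The topology (2-3-1), sigmoid activation, learning rate, and use of numpy are illustrative assumptions rather than anything prescribed above; the point is the backward pass, where each layer's error term is derived from the layer above and used to adjust the weights by gradient descent.

    # Minimal backpropagation sketch (illustrative assumptions: 2-3-1 topology,
    # sigmoid activations, squared error, online gradient descent).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    W1 = rng.normal(scale=0.5, size=(3, 2))   # input-to-hidden synapse weights
    W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden-to-output synapse weights
    learning_rate = 0.5                       # illustrative choice

    def train_step(x, target):
        """One forward pass, one backward pass, one gradient-descent update."""
        global W1, W2
        # Forward pass.
        h = sigmoid(W1 @ x)                   # hidden activations
        y = sigmoid(W2 @ h)                   # network output
        # Error between actual and expected output.
        error = y - target
        # Backward pass: apportion the error across the weights via the chain
        # rule, reusing each layer's error term to compute the one below it.
        delta_out = error * y * (1.0 - y)               # output-layer error term
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)  # hidden-layer error term
        # Gradient-descent weight adjustments.
        W2 -= learning_rate * np.outer(delta_out, h)
        W1 -= learning_rate * np.outer(delta_hid, x)
        return 0.5 * float(np.sum(error ** 2))

    # XOR training set: not linearly separable, so a hidden layer is required.
    samples = [(np.array([0.0, 0.0]), np.array([0.0])),
               (np.array([0.0, 1.0]), np.array([1.0])),
               (np.array([1.0, 0.0]), np.array([1.0])),
               (np.array([1.0, 1.0]), np.array([0.0]))]

    for epoch in range(10000):
        for x, t in samples:
            train_step(x, t)

    for x, t in samples:
        print(x, "->", sigmoid(W2 @ sigmoid(W1 @ x)), "expected", t)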

  This technique has difficulty training the network when the error surface contains local minima that are substantially worse than the global minimum. Techniques to overcome this shortcoming include 'shocking' the network with a burst of random synapse weight deltas (sketched below) and simulated annealing.
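
  The 'shock' heuristic can be as simple as the sketch below: when the training error stops improving for a while, suggesting the weights are trapped in a local minimum, add a burst of random deltas to every synapse weight and continue training. The run_epoch helper, stall threshold, and noise scale are hypothetical placeholders rather than anything specified above.

    # 'Shock' heuristic sketch (assumptions: run_epoch is a hypothetical helper
    # that trains for one epoch and returns total error, e.g. a loop over
    # train_step from the previous sketch; thresholds are illustrative).
    import numpy as np

    rng = np.random.default_rng(1)

    def shock(weights, scale=0.1):
        """Add a burst of random deltas to every synapse weight matrix, in place."""
        for W in weights:
            W += rng.normal(scale=scale, size=W.shape)

    def train_with_shocks(weights, run_epoch, epochs=20000, patience=200, tol=1e-5):
        best_error = float("inf")
        stalled = 0
        for _ in range(epochs):
            err = run_epoch()
            if err < best_error - tol:
                best_error, stalled = err, 0
            else:
                stalled += 1
            if stalled >= patience:   # error stopped improving: likely a local minimum
                shock(weights)        # kick the weights out of the current basin
                stalled = 0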
