for modifying the weights of a feedforward neural network, usually for the purpose of training the network to do pattern classification.
Often referred to simply as "backprop", this algorithm begins after a training pattern has been presented to the network's input layer and the response has been propagated forward to the output layer. Because the correct output for the training pattern is known, the network's error can be calculated. The algorithm gets its name from the fact that this error measure is then propagated backward through the network, so that the weights can be adjusted along the negative gradient of the error surface, making the network slightly more correct for the given training pattern.
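The forward pass, error calculation, and backward weight update described above can be sketched as follows. This is a minimal illustration, not a reference implementation: the network size, learning rate, sigmoid activation, and squared-error measure are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed tiny network: 2 inputs -> 2 hidden units -> 1 output.
W1 = rng.normal(size=(2, 2))
W2 = rng.normal(size=(2, 1))
lr = 0.5  # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target):
    """One backprop step: forward pass, error, backward pass, update."""
    global W1, W2
    # Forward pass: present the pattern and propagate it to the output layer.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    # The correct output is known, so the error can be calculated.
    err = 0.5 * np.sum((y - target) ** 2)
    # Backward pass: propagate the error measure back through the network.
    delta_out = (y - target) * y * (1 - y)        # output-layer delta
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer delta
    # Adjust each weight along the negative gradient of the error.
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return err

# Repeated presentations of one training pattern shrink the error.
x, t = np.array([1.0, 0.0]), np.array([1.0])
errors = [train_step(x, t) for _ in range(200)]
```

After each step the network is slightly more correct for this pattern, so the recorded errors decrease over the 200 presentations.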