Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function.
Backpropagation requires a known, desired output for each input value in order to calculate the loss function gradient. It is therefore usually considered to be a supervised learning method. Backpropagation requires that the activation function used by the artificial neurons be differentiable.
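For concreteness, a minimal sketch of backpropagation with plain gradient descent on a two-layer network with sigmoid activations (differentiable, as required) and mean-squared-error loss; the layer sizes, learning rate, and toy data are illustrative assumptions, not from the sources above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy supervised data: inputs X with known, desired outputs Y.
X = rng.normal(size=(32, 4))
Y = rng.normal(size=(32, 1))

W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr = 0.1  # assumed learning rate

for step in range(100):
    # Forward pass.
    h = sigmoid(X @ W1)           # hidden activations
    y_hat = h @ W2                # network output
    loss = np.mean((y_hat - Y) ** 2)

    # Backward pass: gradient of the loss w.r.t. all weights.
    d_yhat = 2 * (y_hat - Y) / len(X)
    dW2 = h.T @ d_yhat
    dh = d_yhat @ W2.T
    dW1 = X.T @ (dh * h * (1 - h))  # sigmoid'(z) = h * (1 - h)

    # Gradient descent update.
    W1 -= lr * dW1
    W2 -= lr * dW2
```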
[1908.01580] The HSIC Bottleneck: Deep Learning without Back-Propagation (2019)(About) > we show that it is possible to learn classification tasks at near competitive accuracy **without
backpropagation**, by maximizing a surrogate of the mutual information between hidden representations and labels and
simultaneously minimizing the mutual dependency between hidden representations and the inputs...
the hidden units of a network trained in this way form useful representations. Specifically, fully competitive accuracy
can be obtained by freezing the network trained without backpropagation and appending and training a one-layer
network using conventional SGD to convert the representation to the desired format.
The training method uses an approximation of the [#information bottleneck](/tag/information_bottleneck_method).
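A minimal sketch of the biased empirical HSIC estimator with Gaussian kernels, and a per-layer objective of the shape described in the abstract: minimize dependence between a hidden representation and the input while maximizing dependence with the labels. The bandwidth `sigma` and trade-off weight `beta` are assumed placeholders, and the paper's exact formulation may differ (e.g., normalization); this shows only the general form.

```python
import numpy as np

def gaussian_gram(Z, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = np.sum(Z * Z, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    # Biased empirical HSIC: tr(K H L H) / (n - 1)^2, with H = I - (1/n) 1 1^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def hsic_bottleneck_objective(X, Z, Y, beta=100.0):
    # Minimize dependence between hidden representation Z and input X,
    # maximize dependence between Z and labels Y.
    Kx, Kz, Ky = gaussian_gram(X), gaussian_gram(Z), gaussian_gram(Y)
    return hsic(Kz, Kx) - beta * hsic(Kz, Ky)
```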
The backpropagation algorithm(About) presents a proof of the backpropagation algorithm based on a graphical approach, in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems in which only local information can be transported through the network.
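To illustrate the graph-labeling view (my own sketch, not the paper's construction): each node of the computation graph gets labeled with the gradient of the output with respect to it, and each label is computed only from the labels of the node's immediate successors and the local partial derivatives on its outgoing edges.

```python
from collections import defaultdict

# Computation graph for f(a, b) = (a + b) * a, evaluated at a=2, b=3.
a, b = 2.0, 3.0
s = a + b
f = s * a

# node -> list of (parent, d node / d parent): the only "local" information.
local_partials = {
    "f": [("s", a), ("a", s)],    # df/ds = a, df/da = s
    "s": [("a", 1.0), ("b", 1.0)],
    "a": [],
    "b": [],
}

# Label nodes in reverse topological order; each parent accumulates
# grad(child) * (d child / d parent) from its children only.
labels = defaultdict(float)
labels["f"] = 1.0
for node in ["f", "s", "a", "b"]:
    for parent, partial in local_partials[node]:
        labels[parent] += labels[node] * partial

print(labels["a"], labels["b"])  # df/da = s + a = 7.0, df/db = a = 2.0
```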