Parents:

Wikipedia
Backpropagation

Abbreviation for "backward propagation of errors", a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function. Backpropagation requires a known, desired output for each input value in order to calculate the loss-function gradient, so it is usually considered a supervised learning method. It also requires that the activation function used by the artificial neurons be differentiable.


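The definition above can be sketched concretely. Below is a minimal, hedged illustration of backpropagation for a tiny two-layer network with a sigmoid hidden layer and squared-error loss (my own toy example, not taken from any of the documents listed here): the chain rule yields the gradient of the loss with respect to every weight, and one gradient-descent step uses it to reduce the loss.

```python
# Toy backpropagation sketch (illustrative assumptions: sigmoid hidden
# layer, linear output, squared-error loss L = 0.5*||y - t||^2).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    z1 = W1 @ x
    h = sigmoid(z1)              # hidden activations
    y = W2 @ h                   # linear output layer
    return z1, h, y

def backprop(x, t, W1, W2):
    """Gradient of the loss w.r.t. all weights, via the chain rule."""
    _, h, y = forward(x, W1, W2)
    delta2 = y - t                           # dL/dy
    dW2 = np.outer(delta2, h)                # dL/dW2
    delta1 = (W2.T @ delta2) * h * (1 - h)   # through the sigmoid (needs differentiability)
    dW1 = np.outer(delta1, x)                # dL/dW1
    return dW1, dW2

rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = np.array([1.0, 0.0])                     # known, desired output
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

loss = lambda W1, W2: 0.5 * np.sum((forward(x, W1, W2)[2] - t) ** 2)
before = loss(W1, W2)
dW1, dW2 = backprop(x, t, W1, W2)
W1 -= 0.1 * dW1                              # gradient-descent update
W2 -= 0.1 * dW2
after = loss(W1, W2)
```

Note how the "known, desired output" `t` is required to form the error signal `delta2`, which is why the method is considered supervised.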
Related Tags:

9 Documents (Long List)

- [1908.01580] The HSIC Bottleneck: Deep Learning without Back-Propagation (2019)
*(About)*

> we show that it is possible to learn classification tasks at near competitive accuracy **without backpropagation**, by maximizing a surrogate of the mutual information between hidden representations and labels and simultaneously minimizing the mutual dependency between hidden representations and the inputs... the hidden units of a network trained in this way form useful representations. Specifically, fully competitive accuracy can be obtained by freezing the network trained without backpropagation and appending and training a one-layer network using conventional SGD to convert the representation to the desired format. The training method uses an approximation of the [#information bottleneck](/tag/information_bottleneck_method).

2019-08-15 - Tradeoff batch size vs. number of iterations to train a neural network - Cross Validated
*(About)*

2018-08-06 - Stanford Seminar - "Can the brain do back-propagation?" - Geoffrey Hinton
*(About)*

2018-01-22 - How the backpropagation algorithm works
*(About)*

2017-08-21 - The backpropagation algorithm
*(About)*

> a proof of the backpropagation algorithm based on a graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems in which only local information can be transported through the network.
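The "local information only" idea can be sketched as reverse-mode differentiation on a computational graph: each node stores just its parents and the local derivative of its operation, and gradients propagate backwards as a graph labeling. This is a toy illustration of the general idea under my own simplified assumptions, not the construction from the paper above.

```python
# Tiny reverse-mode autodiff sketch: gradients flow backwards through a
# DAG using only each node's locally stored derivative information.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_derivative)
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(out):
    # Topological order, so each node's gradient is complete before it
    # is propagated on to its parents.
    order, seen = [], set()
    def topo(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for p, _ in n.parents:
            topo(p)
        order.append(n)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += local * node.grad  # chain rule, purely local

x, y = Node(2.0), Node(3.0)
z = add(mul(x, y), x)        # z = x*y + x
backward(z)                  # dz/dx = y + 1, dz/dy = x
```

Each `parent.grad += local * node.grad` step uses only information held at that edge of the graph, which is the sense in which the algorithm needs only local transport.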

2017-08-21 - Calculus on Computational Graphs: Backpropagation -- colah's blog
*(About)*

2017-08-20 - Stacked Approximated Regression Machine: A Simple Deep Learning Approach
*(About)*

This paper seems too good to be true! They can train a VGG-like net VERY quickly to good accuracy, without backprop.

2016-09-03 - Derivation: Error Backpropagation & Gradient Descent for Neural Networks | The Clever Machine
*(About)*

2016-01-14 - Neural backpropagation - Wikipedia, the free encyclopedia
*(About)*

Properties

- sl:creationDate : 2016-01-03
- sl:creationTime : 2016-01-03T16:00:09Z
- sl:describedBy : https://en.wikipedia.org/wiki/Backpropagation
- rdf:type : sl:Tag
- skos:altLabel : Back Propagation@en
- skos:prefLabel : Backpropagation@en