About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Gregory Benton
- sl:arxiv_num : 2010.11882
- sl:arxiv_published : 2020-10-22T17:18:48Z
- sl:arxiv_summary : Invariances to translations have imbued convolutional neural networks with powerful generalization properties. However, we often do not know a priori what invariances are present in the data, or to what extent a model should be invariant to a given symmetry group. We show how to *learn* invariances and equivariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and augmentation parameters. With this simple procedure we can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations, on training data alone.
- sl:arxiv_title : Learning Invariances in Neural Networks
- sl:arxiv_updated : 2020-10-22T17:18:48Z
- sl:bookmarkOf : https://arxiv.org/abs/2010.11882
- sl:creationDate : 2020-10-25
- sl:creationTime : 2020-10-25T12:38:17Z
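The abstract's core procedure can be sketched on a toy problem. This is a minimal, illustrative reconstruction, not the paper's actual implementation (the paper works with neural networks and richer augmentation distributions): the data here is an even function (y = x², invariant to sign flips), the "network" is a cubic polynomial, and the augmentation distribution is a learnable sign-flip probability p. Predictions are averaged over sampled augmentations, and the loss is minimized jointly over the model coefficients and p; a small regularizer rewards broader augmentation (p near 0.5), while the data loss is free to push back when augmentation hurts the fit. All names, the flip parameterization, and the regularizer form are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = x**2 + 0.1 * rng.normal(size=200)   # target is invariant to x -> -x

c = np.zeros(4)    # polynomial coefficients c0..c3 (the "network" parameters)
rho = -2.0         # raw augmentation parameter; flip probability p = sigmoid(rho)
lam, lr_c, lr_rho = 1.0, 0.1, 0.1

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

powers = np.stack([x**j for j in range(4)])          # features of x, shape (4, N)
powers_neg = np.stack([(-x)**j for j in range(4)])   # features of the flipped input

for _ in range(2000):
    p = sigmoid(rho)
    # prediction averaged over the augmentation distribution:
    # f_avg(x) = (1 - p) * f(x) + p * f(-x)
    f_pos = c @ powers
    f_neg = c @ powers_neg
    f_avg = (1 - p) * f_pos + p * f_neg
    r = f_avg - y                                    # residuals
    # analytic gradients of MSE = mean(r^2) w.r.t. coefficients and p
    grad_c = 2.0 * ((1 - p) * powers + p * powers_neg) @ r / len(x)
    dmse_dp = 2.0 * np.mean(r * (f_neg - f_pos))
    # objective: MSE - lam * p * (1 - p); the second term rewards invariance,
    # and dp/drho = p * (1 - p) is the sigmoid chain-rule factor
    grad_rho = (dmse_dp - lam * (1.0 - 2.0 * p)) * p * (1.0 - p)
    c -= lr_c * grad_c
    rho -= lr_rho * grad_rho

p = sigmoid(rho)
mse = float(np.mean(((1 - p) * (c @ powers) + p * (c @ powers_neg) - y) ** 2))
print(f"learned flip probability p = {p:.2f}, fit MSE = {mse:.3f}")
```

Because the data really is invariant to sign flips, the data loss is indifferent to p, so the regularizer drives p toward 0.5 (full invariance) without degrading the fit; on non-invariant data the MSE term would resist. This mirrors the abstract's claim that the correct extent of invariance is recovered from training data alone.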