About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : David Lopez-Paz
- sl:arxiv_num : 1511.03643
- sl:arxiv_published : 2015-11-11T20:27:54Z
- sl:arxiv_summary : Distillation (Hinton et al., 2015) and privileged information (Vapnik &
Izmailov, 2015) are two techniques that enable machines to learn from other
machines. This paper unifies these two techniques into generalized
distillation, a framework to learn from multiple machines and data
representations. We provide theoretical and causal insight about the inner
workings of generalized distillation, extend it to unsupervised, semi-supervised
and multitask learning scenarios, and illustrate its efficacy on a variety of
numerical simulations on both synthetic and real-world data.@en
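The generalized distillation recipe summarized above combines the usual hard-label loss with an imitation loss on the teacher's temperature-softened predictions. A minimal sketch of that objective, assuming the standard form with imitation weight λ and temperature T (the function names and numeric constants here are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T yields softer probabilities.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generalized_distillation_loss(student_logits, hard_labels, soft_labels,
                                  lam=0.5, T=2.0):
    """Imitation objective: lam * CE(hard labels) + (1 - lam) * CE(soft labels).

    hard_labels: one-hot ground-truth targets.
    soft_labels: teacher's temperature-softened predictions (e.g. computed
    from a teacher trained on the privileged representation x*).
    """
    p = softmax(student_logits, T=1.0)  # student predictions vs. hard labels
    q = softmax(student_logits, T=T)    # tempered predictions vs. soft labels
    ce_hard = -np.mean(np.sum(hard_labels * np.log(p + 1e-12), axis=-1))
    ce_soft = -np.mean(np.sum(soft_labels * np.log(q + 1e-12), axis=-1))
    return lam * ce_hard + (1.0 - lam) * ce_soft
```

Setting lam=1 recovers ordinary supervised training; lam=0 trains purely on the teacher's soft labels, which is where the privileged information enters.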
- sl:arxiv_title : Unifying distillation and privileged information@en
- sl:arxiv_updated : 2016-02-26T02:21:52Z
- sl:bookmarkOf : https://arxiv.org/abs/1511.03643
- sl:creationDate : 2020-05-31
- sl:creationTime : 2020-05-31T10:42:51Z
- sl:relatedDoc : http://www.semanlink.net/doc/2020/04/1503_02531_distilling_the_kno