About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Rohan Anil
- sl:arxiv_num : 1804.03235
- sl:arxiv_published : 2018-04-09T20:56:03Z
- sl:arxiv_summary : Techniques such as ensembling and distillation promise model quality
improvements when paired with almost any base model. However, due to increased
test-time cost (for ensembles) and increased complexity of the training
pipeline (for distillation), these techniques are challenging to use in
industrial settings. In this paper we explore a variant of distillation which
is relatively straightforward to use as it does not require a complicated
multi-stage setup or many new hyperparameters. Our first claim is that online
distillation enables us to use extra parallelism to fit very large datasets
about twice as fast. Crucially, we can still speed up training even after we
have already reached the point at which additional parallelism provides no
benefit for synchronous or asynchronous stochastic gradient descent. Two neural
networks trained on disjoint subsets of the data can share knowledge by
encouraging each model to agree with the predictions the other model would have
made. These predictions can come from a stale version of the other model so
they can be safely computed using weights that only rarely get transmitted. Our
second claim is that online distillation is a cost-effective way to make the
exact predictions of a model dramatically more reproducible. We support our
claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet,
and the largest to-date dataset used for neural language modeling, containing
$6\times 10^{11}$ tokens and based on the Common Crawl repository of web data.@en
- sl:arxiv_title : Large scale distributed neural network training through online distillation@en
- sl:arxiv_updated : 2018-04-09T20:56:03Z
- sl:bookmarkOf : https://arxiv.org/abs/1804.03235
- sl:creationDate : 2020-06-06
- sl:creationTime : 2020-06-06T16:51:26Z
- sl:relatedDoc : http://www.semanlink.net/doc/2020/05/1706_00384_deep_mutual_learni
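The abstract describes the codistillation objective: each worker fits its own disjoint data shard while also being encouraged to agree with the predictions a stale copy of the other model would have made. Below is a minimal, hedged sketch of that idea, assuming PyTorch; the toy linear models and the `distill_weight` and `sync_every` values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of codistillation as described in the abstract, assuming PyTorch.
# `distill_weight`, `sync_every`, and the toy models are illustrative assumptions.
import torch
import torch.nn.functional as F

def codistill_step(model, stale_peer, batch, optimizer, distill_weight=0.5):
    """One worker's step: fit the labels, and also match the predictions
    a stale copy of the other model would have made on this batch."""
    x, y = batch
    logits = model(x)
    with torch.no_grad():                      # peer copy is frozen, possibly stale
        peer_logits = stale_peer(x)
    loss = F.cross_entropy(logits, y)          # ordinary supervised term
    loss = loss + distill_weight * F.kl_div(   # agreement (distillation) term
        F.log_softmax(logits, dim=-1),
        F.softmax(peer_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy demo: two models train on disjoint shards and only rarely exchange weights.
torch.manual_seed(0)
models = [torch.nn.Linear(8, 3) for _ in range(2)]
peers = [torch.nn.Linear(8, 3) for _ in range(2)]   # local stale copies of the peer
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]
shards = [[(torch.randn(16, 8), torch.randint(0, 3, (16,))) for _ in range(50)]
          for _ in range(2)]

sync_every = 10  # weights only rarely get transmitted
for step in range(50):
    for i in (0, 1):
        codistill_step(models[i], peers[i], shards[i][step], opts[i])
    if step % sync_every == 0:  # refresh each worker's stale copy of the other model
        for i in (0, 1):
            peers[i].load_state_dict(models[1 - i].state_dict())
```

Because each worker computes the agreement term against a locally held, rarely refreshed copy of its peer, communication stays cheap; per the abstract, this is what lets training keep speeding up past the point where extra parallelism stops helping synchronous or asynchronous SGD.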