About This Document
- sl:arxiv_author : Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le
- sl:arxiv_firstAuthor : Kevin Clark
- sl:arxiv_num : 1907.04829
- sl:arxiv_published : 2019-07-10T17:14:47Z
- sl:arxiv_summary : It can be challenging to train multi-task neural networks that outperform or
even match their single-task counterparts. To help address this, we propose
using knowledge distillation where single-task models teach a multi-task model.
We enhance this training with teacher annealing, a novel method that gradually
transitions the model from distillation to supervised learning, helping the
multi-task model surpass its single-task teachers. We evaluate our approach by
multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently
improves over standard single-task and multi-task training.@en
- sl:arxiv_title : BAM! Born-Again Multi-Task Networks for Natural Language Understanding@en
- sl:arxiv_updated : 2019-07-10T17:14:47Z
- sl:bookmarkOf : https://arxiv.org/abs/1907.04829
- sl:creationDate : 2020-05-12
- sl:creationTime : 2020-05-12T19:08:45Z
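The abstract's "teacher annealing" amounts to training against a target that interpolates between the single-task teacher's prediction and the gold label, with the mixing weight moved toward the gold label over the course of training. Below is a minimal PyTorch sketch of that idea; the function names (`anneal_lambda`, `bam_loss`) and the linear schedule are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch of teacher annealing as described in the abstract of
# arXiv:1907.04829. Names and the linear schedule are assumptions.
import torch
import torch.nn.functional as F

def anneal_lambda(step: int, total_steps: int) -> float:
    """Mixing weight that moves linearly from 0 (pure distillation)
    to 1 (pure supervised learning) as training progresses."""
    return min(step / total_steps, 1.0)

def bam_loss(student_logits: torch.Tensor,
             teacher_probs: torch.Tensor,
             gold_labels: torch.Tensor,
             step: int,
             total_steps: int) -> torch.Tensor:
    """Cross-entropy against an annealed target: early in training the
    target is mostly the teacher's soft prediction, later mostly the
    gold label, so the multi-task student can surpass its teachers."""
    lam = anneal_lambda(step, total_steps)
    gold_one_hot = F.one_hot(gold_labels, student_logits.size(-1)).float()
    target = lam * gold_one_hot + (1.0 - lam) * teacher_probs
    # Soft-target cross-entropy: -sum_k target_k * log p_k, averaged over batch.
    return -(target * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
```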