About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Yunzhi Yao
- sl:arxiv_num : 2106.13474
- sl:arxiv_published : 2021-06-25T07:37:05Z
- sl:arxiv_summary : Large pre-trained models have achieved great success in many natural language
processing tasks. However, when applied to specific domains, these models
suffer from domain shift and pose challenges for fine-tuning and online
serving due to latency and capacity constraints. In this paper, we present a
general approach to developing small, fast and effective pre-trained models for
specific domains. This is achieved by adapting off-the-shelf general
pre-trained models and performing task-agnostic knowledge distillation in
target domains. Specifically, we propose domain-specific vocabulary expansion
in the adaptation stage and employ corpus-level occurrence probability to
choose the size of the incremental vocabulary automatically. Then we systematically
explore different strategies to compress the large pre-trained models for
specific domains. We conduct our experiments in the biomedical and computer
science domains. The experimental results demonstrate that our approach achieves
better performance than the BERT-BASE model on domain-specific tasks while being
3.3x smaller and 5.1x faster than BERT-BASE. The code and pre-trained models are
available at https://aka.ms/adalm.@en
- sl:arxiv_title : Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains@en
- sl:arxiv_updated : 2021-06-29T05:42:13Z
- sl:bookmarkOf : https://arxiv.org/abs/2106.13474
- sl:creationDate : 2021-10-21
- sl:creationTime : 2021-10-21T18:24:46Z
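
The abstract above mentions selecting an incremental domain vocabulary via corpus-level occurrence probability. The snippet below is a minimal, illustrative sketch of that general idea only; the function name `expand_vocab`, the whitespace tokenization, and the `min_prob` threshold are assumptions for illustration and do not reflect the actual AdaLM implementation released at https://aka.ms/adalm.

```python
from collections import Counter

def expand_vocab(domain_corpus, base_vocab, candidate_tokens, min_prob=1e-5):
    """Hypothetical sketch: pick incremental vocabulary by corpus-level
    occurrence probability.

    Counts how often each candidate token occurs in the domain corpus,
    converts counts to probabilities, and keeps candidates above a
    probability threshold that are not already in the base vocabulary.
    """
    counts = Counter()
    total = 0
    for sentence in domain_corpus:
        # Whitespace tokens stand in for the subword units a real tokenizer would produce.
        for token in sentence.split():
            counts[token] += 1
            total += 1

    added = []
    for tok in candidate_tokens:
        prob = counts[tok] / total if total else 0.0
        if tok not in base_vocab and prob >= min_prob:
            added.append((tok, prob))

    # Highest-probability domain terms first.
    added.sort(key=lambda item: -item[1])
    return [tok for tok, _ in added]

if __name__ == "__main__":
    corpus = [
        "the protein kinase inhibits phosphorylation",
        "gene expression regulates protein folding",
    ]
    base = {"the", "of", "and"}
    candidates = ["protein", "kinase", "phosphorylation", "the"]
    print(expand_vocab(corpus, base, candidates, min_prob=0.05))
```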