About This Document
- sl:arxiv_firstAuthor : Yaqing Wang
- sl:arxiv_num : 2205.12410
- sl:arxiv_published : 2022-05-24T23:41:22Z
- sl:arxiv_summary : Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased cost for storing, sharing and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced, where small trainable components are injected into the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules -- given the underlying PEFT method of choice -- introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture of low-rank decomposition matrices like LoRA to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the computational cost and number of tunable parameters of the underlying PEFT method. By tuning only 0.1-0.2% of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks. (A minimal illustrative sketch of a mixture of LoRA-style modules follows this metadata list.)
- sl:arxiv_title : AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
- sl:arxiv_updated : 2022-11-02T02:47:17Z
- sl:bookmarkOf : https://arxiv.org/abs/2205.12410
- sl:creationDate : 2022-12-16
- sl:creationTime : 2022-12-16T23:51:49Z
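The summary above describes AdaMix's core idea at a high level: a mixture of adaptation modules (for example, LoRA-style low-rank matrices) added to each Transformer layer while most PLM weights stay frozen, with no extra serving cost over a single module. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's implementation: it assumes uniform stochastic routing to one expert during training and simple averaging ("merging") of the expert weights at inference; the class name `MixtureOfLoRA`, the dimensions, and the routing/merging choices are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class MixtureOfLoRA(nn.Module):
    """Illustrative mixture of low-rank (LoRA-style) adaptation modules.

    Training: one expert pair (A_i, B_i) is picked at random per forward pass.
    Inference: experts are averaged into a single pair, so the served module
    has the compute and parameter footprint of one LoRA adapter.
    """

    def __init__(self, d_in: int, d_out: int, rank: int = 8,
                 num_experts: int = 4, scale: float = 1.0):
        super().__init__()
        self.scale = scale
        self.num_experts = num_experts
        # One low-rank decomposition (B @ A) per expert; the base PLM weight
        # it augments is assumed to be frozen elsewhere.
        self.A = nn.Parameter(torch.randn(num_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, d_out, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Stochastic routing: choose one expert uniformly at random.
            i = torch.randint(self.num_experts, (1,)).item()
            A, B = self.A[i], self.B[i]
        else:
            # Merge experts by averaging so inference cost matches one adapter.
            A, B = self.A.mean(dim=0), self.B.mean(dim=0)
        return self.scale * (x @ A.t() @ B.t())


# Usage sketch: add the adapter output to a frozen linear layer's output.
frozen = nn.Linear(768, 768)
for p in frozen.parameters():
    p.requires_grad = False
adapter = MixtureOfLoRA(768, 768)
h = torch.randn(2, 16, 768)          # (batch, seq_len, hidden)
out = frozen(h) + adapter(h)
```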