About This Document
- sl:arxiv_firstAuthor : Haokun Liu
- sl:arxiv_num : 2205.05638
- sl:arxiv_published : 2022-05-11T17:10:41Z
- sl:arxiv_summary : Few-shot in-context learning (ICL) enables pre-trained language models to
perform a previously-unseen task without any gradient-based training by feeding
a small number of training examples as part of the input. ICL incurs
substantial computational, memory, and storage costs because it involves
processing all of the training examples every time a prediction is made.
Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning,
sparse update methods, etc.) offers an alternative paradigm where a small set
of parameters is trained to enable a model to perform the new task. In this
paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the
latter offers better accuracy as well as dramatically lower computational
costs. Along the way, we introduce a new PEFT method called (IA)$^3$ that
scales activations by learned vectors, attaining stronger performance while
introducing only a tiny number of new parameters. We also propose a
simple recipe based on the T0 model called T-Few that can be applied to new
tasks without task-specific tuning or modifications. We validate the
effectiveness of T-Few on completely unseen tasks by applying it to the RAFT
benchmark, attaining super-human performance for the first time and
outperforming the state-of-the-art by 6% absolute. All of the code used in our
experiments is publicly available.@en
- sl:arxiv_title : Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning@en
- sl:arxiv_updated : 2022-08-26T16:23:29Z
- sl:bookmarkOf : https://arxiv.org/abs/2205.05638
- sl:creationDate : 2022-12-15
- sl:creationTime : 2022-12-15T12:34:51Z
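The abstract above describes (IA)$^3$ as rescaling activations by learned vectors. A minimal NumPy sketch of that idea is shown below, applied to a frozen feed-forward block; the weight matrices, dimensions, and variable names are illustrative stand-ins, not the paper's actual implementation (which applies learned vectors to keys, values, and feed-forward activations inside T0):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, seq = 8, 16, 4

# Stand-ins for frozen pre-trained weights (not trained during PEFT)
W_in = rng.normal(size=(d_model, d_ff))
W_out = rng.normal(size=(d_ff, d_model))

# The (IA)^3 learned scaling vector: only d_ff new parameters.
# Initialized to ones so the modified model starts out identical
# to the frozen pre-trained model.
l_ff = np.ones(d_ff)

def ffn(x, l_ff):
    h = np.maximum(x @ W_in, 0.0)  # frozen up-projection + ReLU
    h = h * l_ff                   # (IA)^3: element-wise rescale of activations
    return h @ W_out               # frozen down-projection

x = rng.normal(size=(seq, d_model))
# At initialization (l_ff = ones), the block is unchanged:
assert np.allclose(ffn(x, np.ones(d_ff)), ffn(x, l_ff))
```

During fine-tuning, only `l_ff` (and its counterparts for keys and values) would receive gradients, which is why the method adds such a small number of trainable parameters relative to the frozen model.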