About This Document
- sl:arxiv_author : Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, Christopher Potts
- sl:arxiv_firstAuthor : Zhengxuan Wu
- sl:arxiv_num : 2404.03592
- sl:arxiv_published : 2024-04-04T17:00:37Z
- sl:arxiv_summary : Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. Here, we pursue this hypothesis by developing a family of $\textbf{Representation Finetuning (ReFT)}$ methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT). LoReFT is a drop-in replacement for existing PEFTs and learns interventions that are 10x-50x more parameter-efficient than prior state-of-the-art PEFTs. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, Alpaca-Eval v1.0, and GLUE. In all these evaluations, LoReFT delivers the best balance of efficiency and performance, and almost always outperforms state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft.
- sl:arxiv_title : ReFT: Representation Finetuning for Language Models
- sl:arxiv_updated : 2024-04-04T17:00:37Z
- sl:bookmarkOf : https://arxiv.org/abs/2404.03592
- sl:creationDate : 2024-04-08
- sl:creationTime : 2024-04-08T11:31:23Z
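
The abstract above describes LoReFT as a learned, low-rank intervention on the hidden representations of a frozen base model. As a rough illustration, here is a minimal PyTorch sketch of such an intervention, assuming the paper's LoReFT update h ← h + R^⊤(Wh + b − Rh), where R has orthonormal rows spanning a rank-r subspace. The class name, dimensions, and usage are illustrative assumptions; this is not the pyreft API (see the linked repository for the authors' implementation).

```python
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    """Sketch of a LoReFT-style edit: h <- h + R^T (W h + b - R h).

    R (rank x hidden_dim) has orthonormal rows spanning the edited subspace;
    W and b define the values the subspace projection is steered toward.
    Only these small matrices are trained; the base model stays frozen.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # Keep R's rows orthonormal via PyTorch's orthogonal parametrization.
        self.R = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_dim, rank, bias=False)
        )
        self.W = nn.Linear(hidden_dim, rank)  # computes W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # (W h + b - R h) lives in the rank-r subspace; multiplying by
        # R.weight maps it back to hidden_dim, so h is edited only
        # within that subspace.
        return h + (self.W(h) - self.R(h)) @ self.R.weight


# Hypothetical usage on activations of shape (batch, seq_len, hidden_dim):
h = torch.randn(2, 16, 768)
reft = LoReFTIntervention(hidden_dim=768, rank=4)  # roughly 2 * 4 * 768 params
print(reft(h).shape)  # torch.Size([2, 16, 768])
```

In the framing of the abstract, interventions like this are applied at chosen layers and token positions of the frozen model, which is where the 10x-50x parameter savings over weight-based PEFTs come from: only the small R, W, and b per intervention site are trained.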