About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Chengsong Huang
- sl:arxiv_num : 2307.13269
- sl:arxiv_published : 2023-07-25T05:39:21Z
- sl:arxiv_summary : Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a strategic framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a novel task, LoraHub enables the fluid combination of multiple LoRA modules, eradicating the need for human expertise. Notably, the composition requires neither additional model parameters nor gradients. Our empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest that LoraHub can effectively mimic the performance of in-context learning in few-shot scenarios, excluding the necessity of in-context examples alongside each inference input. A significant contribution of our research is the fostering of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks. We anticipate this resource will widen access to and spur advancements in general intelligence as well as LLMs in production. Code will be available at https://github.com/sail-sg/lorahub.
- sl:arxiv_title : LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
- sl:arxiv_updated : 2023-07-25T05:39:21Z
- sl:bookmarkOf : https://arxiv.org/abs/2307.13269
- sl:creationDate : 2023-08-08
- sl:creationTime : 2023-08-08T08:15:26Z
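The composition step described in the summary above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each LoRA module is exposed as a dict of per-layer low-rank factors `(A, B)`, and `few_shot_loss` is a hypothetical callback that scores a merged module on the few examples from the unseen task. The random-search loop is a self-contained stand-in for the gradient-free optimizer the paper relies on.

```python
# Minimal sketch of gradient-free LoRA composition (illustrative only).
# Assumes each module is {layer_name: (A, B)} with NumPy arrays of matching
# shapes across modules; `few_shot_loss` is a hypothetical evaluation hook.
import numpy as np

def compose_loras(lora_modules, weights):
    """Weighted element-wise sum of the A and B factors across modules."""
    merged = {}
    for layer in lora_modules[0]:
        A = sum(w * m[layer][0] for w, m in zip(weights, lora_modules))
        B = sum(w * m[layer][1] for w, m in zip(weights, lora_modules))
        merged[layer] = (A, B)
    return merged

def search_weights(lora_modules, few_shot_loss, iters=200, seed=0):
    """Random search as a stand-in for a gradient-free optimizer:
    sample candidate weight vectors and keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    n = len(lora_modules)
    best_w, best_loss = np.zeros(n), float("inf")
    for _ in range(iters):
        w = rng.uniform(-1.5, 1.5, size=n)  # weights may be negative
        loss = few_shot_loss(compose_loras(lora_modules, w))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss
```

Because only the small weight vector is searched, no gradients flow through the LLM and no new parameters are introduced, which matches the paper's claim that composition requires neither additional model parameters nor gradients; the released code should be treated as the authoritative implementation.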