About This Document
- sl:arxiv_author : Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen
- sl:arxiv_firstAuthor : Alexis Chevalier
- sl:arxiv_num : 2305.14788
- sl:arxiv_published : 2023-05-24T06:42:44Z
- sl:arxiv_summary : Transformer-based language models (LMs) are powerful and widely applicable
tools, but their usefulness is constrained by a finite context window and the
high computational cost of processing long text documents. We propose to
adapt pre-trained LMs into AutoCompressors. These models are capable of
compressing long contexts into compact summary vectors, which are then
accessible to the model as soft prompts. Summary vectors are trained with an
unsupervised objective, whereby long documents are processed in segments and
summary vectors from all previous segments are used in language modeling. We
fine-tune OPT models on sequences of up to 30,720 tokens and show that
AutoCompressors can utilize long contexts to improve perplexity. We evaluate
AutoCompressors on in-context learning by compressing task demonstrations. We
find that summary vectors are good substitutes for plain-text demonstrations,
increasing accuracy while reducing inference cost. Finally, we explore the
benefits of pre-computing summary vectors for large corpora by applying summary
vectors to retrieval-augmented language modeling. Overall, AutoCompressors
emerge as a simple and inexpensive solution for extending the context window of
LMs while speeding up inference over long contexts.@en
- sl:arxiv_title : Adapting Language Models to Compress Contexts@en
- sl:arxiv_updated : 2023-05-24T06:42:44Z
- sl:bookmarkOf : https://arxiv.org/abs/2305.14788
- sl:creationDate : 2023-06-04
- sl:creationTime : 2023-06-04T14:53:59Z
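The abstract above describes the core mechanism: a document is processed segment by segment, each segment is compressed into a few summary vectors, and the accumulated summary vectors are prepended to later segments as a soft prompt. Below is a minimal sketch of that loop using Hugging Face transformers and PyTorch. It is not the authors' implementation; the `facebook/opt-125m` checkpoint, the segment length `seg_len`, the number of summary vectors `k`, and the gradient truncation via `detach()` are all illustrative assumptions.

```python
# Sketch of segment-wise context compression into summary vectors,
# as described in the abstract (assumptions noted inline).
import torch
from transformers import AutoTokenizer, OPTModel

model_name = "facebook/opt-125m"   # small OPT stand-in (assumed checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = OPTModel.from_pretrained(model_name)
d = model.config.hidden_size

k = 4          # summary vectors produced per segment (assumed)
seg_len = 128  # tokens per segment (assumed)
# Learned embeddings for the appended "summary tokens".
summary_tokens = torch.nn.Parameter(torch.randn(k, d) * 0.02)

ids = tokenizer("A long document to compress. " * 200,
                return_tensors="pt").input_ids[0]

past_summaries = torch.empty(0, d)  # accumulated summary vectors (soft prompt)
for start in range(0, ids.size(0), seg_len):
    seg_emb = model.get_input_embeddings()(ids[start:start + seg_len])
    # Prepend all previous summary vectors as a soft prompt; append the
    # summary tokens, whose final hidden states compress this segment.
    inputs = torch.cat([past_summaries, seg_emb, summary_tokens], dim=0)
    hidden = model(inputs_embeds=inputs.unsqueeze(0)).last_hidden_state[0]
    new_summaries = hidden[-k:]
    # Detaching truncates backprop through older segments (a simplification);
    # the unsupervised objective would apply an LM head over the segment
    # positions of `hidden` here.
    past_summaries = torch.cat([past_summaries, new_summaries]).detach()
```

At inference time, summary vectors like `past_summaries` could be precomputed once and reused, which is how the abstract's in-context learning and retrieval-augmented settings cut cost: compressed demonstrations or passages stand in for their plain-text counterparts.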