About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Peng Xu
- sl:arxiv_num : 2310.03025
- sl:arxiv_published : 2023-10-04T17:59:41Z
- sl:arxiv_summary : Extending the context window of large language models (LLMs) has recently
become popular, while augmenting LLMs with retrieval has existed for years. The
natural questions are: i) Which is better for downstream tasks,
retrieval-augmentation or a long context window? ii) Can both methods be
combined to get the best of both worlds? In this work, we answer these
questions by studying both solutions using two state-of-the-art pretrained
LLMs: a proprietary 43B GPT and LLaMA2-70B. Perhaps surprisingly, we find that
an LLM with a 4K context window using simple retrieval-augmentation at
generation achieves performance comparable to a finetuned LLM with a 16K
context window (extended via positional interpolation) on long-context tasks,
while requiring much less computation. More importantly, we demonstrate that
retrieval can significantly improve the performance of LLMs regardless of their
extended context window sizes. Our best model, retrieval-augmented LLaMA2-70B
with a 32K context window, outperforms GPT-3.5-turbo-16k and Davinci003 in
terms of average score on seven long-context tasks, including question
answering and query-based summarization. It also outperforms its non-retrieval
LLaMA2-70B-32k baseline by a margin, while being much faster at generation. Our
study provides general insights for practitioners on choosing between
retrieval-augmentation and long-context extension of LLMs.@en
- sl:arxiv_title : Retrieval meets Long Context Large Language Models@en
- sl:arxiv_updated : 2023-10-04T17:59:41Z
- sl:bookmarkOf : https://arxiv.org/abs/2310.03025
- sl:creationDate : 2023-10-07
- sl:creationTime : 2023-10-07T14:35:23Z
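
The abstract above refers to "simple retrieval-augmentation at generation": chunk the long input, retrieve the chunks most relevant to the query, and fit only those into a standard (e.g. 4K-token) prompt. The sketch below is only an illustration under assumptions, not the paper's implementation: the paper uses learned retrievers, whereas this sketch substitutes a trivial word-overlap scorer, and helper names such as `build_prompt` and `chunk_words` are hypothetical, chosen so the example runs with no dependencies.

```python
# Minimal, hypothetical sketch of retrieval-augmentation at generation:
# split a long document into chunks, rank chunks against the query, and
# prepend the top-k chunks to the prompt so a fixed context window suffices.
# The word-overlap scorer below is a stand-in, NOT the paper's retriever.

from collections import Counter


def chunk_words(text: str, chunk_size: int = 300) -> list[str]:
    """Split a long document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]


def score(query: str, chunk: str) -> float:
    """Stand-in relevance score: shared-word count between query and chunk."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values())


def build_prompt(query: str, document: str, top_k: int = 5) -> str:
    """Prepend the top-k retrieved chunks to the question."""
    chunks = chunk_words(document)
    best = sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:top_k]
    context = "\n\n".join(best)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    long_document = "..."  # a document far longer than the model's context window
    print(build_prompt("What does the study conclude about retrieval?", long_document))
```

The resulting prompt is then passed to the LLM as usual; the point of the comparison in the paper is that this kind of retrieval step lets a short-context model handle long-document tasks without the cost of extending its context window.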