About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Weizhi Wang
- sl:arxiv_num : 2306.07174
- sl:arxiv_published : 2023-06-12T15:13:39Z
- sl:arxiv_summary : Existing large language models (LLMs) can only process fixed-size inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long histories. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader (a rough sketch follows the
metadata below). Such a decoupled memory design can easily cache and update
long-term past contexts for memory retrieval without suffering from memory
staleness. Enhanced with memory-augmented adaptation training, LongMem can
thus memorize long past contexts and use long-term memory for language
modeling. The proposed memory retrieval module can handle unlimited-length
context in its memory bank to benefit various downstream tasks. In a typical
configuration, LongMem enlarges the long-form memory to 65k tokens and can
thus cache many-shot extra demonstration examples as long-form memory for
in-context learning. Experiments show that our method outperforms strong
long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models memorize and utilize long-form content.
Our code is open-sourced at https://aka.ms/LongMem.
- sl:arxiv_title : Augmenting Language Models with Long-Term Memory
- sl:arxiv_updated : 2023-06-12T15:13:39Z
- sl:bookmarkOf : https://arxiv.org/abs/2306.07174
- sl:creationDate : 2023-06-13
- sl:creationTime : 2023-06-13T12:57:37Z
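
The decoupled design described in the abstract can be pictured with a minimal sketch. The snippet below is an illustration only, not the released LongMem implementation: names such as `MemoryBank` and `joint_attention`, the simple oldest-first eviction policy, and token-level retrieval (the paper retrieves at chunk granularity) are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Illustrative cache of attention key/value states from a frozen backbone."""

    def __init__(self, max_tokens=65_536, dim=64):
        self.max_tokens = max_tokens
        self.keys = torch.empty(0, dim)    # cached keys, one row per token
        self.values = torch.empty(0, dim)  # cached values, one row per token

    def cache(self, k, v):
        # Append new states; evict the oldest rows once the bank exceeds its budget.
        self.keys = torch.cat([self.keys, k])[-self.max_tokens:]
        self.values = torch.cat([self.values, v])[-self.max_tokens:]

    def retrieve(self, q, top_k=4):
        # Top-k inner-product retrieval of cached (key, value) pairs per query.
        scores = q @ self.keys.T                  # (n_q, n_mem)
        idx = scores.topk(top_k, dim=-1).indices  # (n_q, top_k)
        return self.keys[idx], self.values[idx]   # each (n_q, top_k, dim)

def joint_attention(q, local_k, local_v, mem_k, mem_v):
    """Fuse the local context with retrieved long-term memory in one attention pass."""
    n_q, dim = q.shape
    local_k = local_k.unsqueeze(0).expand(n_q, -1, -1)  # (n_q, n_local, dim)
    local_v = local_v.unsqueeze(0).expand(n_q, -1, -1)
    k = torch.cat([local_k, mem_k], dim=1)  # (n_q, n_local + top_k, dim)
    v = torch.cat([local_v, mem_v], dim=1)
    attn = F.softmax((k @ q.unsqueeze(-1)).squeeze(-1) / dim**0.5, dim=-1)
    return (attn.unsqueeze(1) @ v).squeeze(1)  # (n_q, dim)
```

For example, caching 1,000 past key/value rows and attending jointly over them and 128 local positions:

```python
bank = MemoryBank(dim=64)
bank.cache(torch.randn(1000, 64), torch.randn(1000, 64))  # frozen-backbone states
q = torch.randn(8, 64)                                    # side-network queries
mem_k, mem_v = bank.retrieve(q, top_k=4)
out = joint_attention(q, torch.randn(128, 64), torch.randn(128, 64), mem_k, mem_v)
```

Because the backbone stays frozen, the cached representations never drift as training proceeds, which is how the decoupled design avoids the memory-staleness problem the abstract mentions.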