About This Document
- sl:arxiv_author : Markus N. Rabe, Charles Staats
- sl:arxiv_firstAuthor : Markus N. Rabe
- sl:arxiv_num : 2112.05682
- sl:arxiv_published : 2021-12-10T17:25:07Z
- sl:arxiv_summary : We present a very simple algorithm for attention that requires $O(1)$ memory
with respect to sequence length and an extension to self-attention that
requires $O(\log n)$ memory. This is in contrast with the frequently stated
belief that self-attention requires $O(n^2)$ memory. While the time complexity
is still $O(n^2)$, device memory rather than compute capability is often the
limiting factor on modern accelerators. Thus, reducing the memory requirements
of attention allows processing of longer sequences than might otherwise be
feasible. We provide a practical implementation for accelerators that requires
$O(\sqrt{n})$ memory, is numerically stable, and is within a few percent of the
runtime of the standard implementation of attention. We also demonstrate how to
differentiate the function while remaining memory-efficient. For sequence
length 16384, the memory overhead of self-attention is reduced by 59X for
inference and by 32X for differentiation.@en
- sl:arxiv_title : Self-attention Does Not Need $O(n^2)$ Memory@en
- sl:arxiv_updated : 2022-10-10T16:36:47Z
- sl:bookmarkOf : https://arxiv.org/abs/2112.05682
- sl:creationDate : 2023-02-27
- sl:creationTime : 2023-02-27T12:58:02Z
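
The abstract above sketches the core trick: attention over the keys and values can be accumulated chunk by chunk, carrying only a running maximum of the scores (for numerical stability), a running normalizer, and a running weighted sum of values. Below is a minimal JAX sketch of that idea, not the authors' implementation; the function name `chunked_attention`, the `chunk_size` parameter, and the single-head shapes are illustrative assumptions. The paper's practical $O(\sqrt{n})$ version additionally processes the queries in chunks and uses checkpointing to keep differentiation memory-efficient.

```python
# Minimal sketch (illustrative, not the paper's code) of chunked,
# numerically stable attention: keys/values are consumed one chunk at a
# time, so the full [n_q, n_kv] score matrix is never materialized.
import jax.numpy as jnp

def chunked_attention(q, k, v, chunk_size=1024):
    """q: [n_q, d], k and v: [n_kv, d]. Peak memory is O(n_q * chunk_size)
    per score slice instead of O(n_q * n_kv) for the full matrix."""
    n_kv = k.shape[0]
    acc = jnp.zeros(q.shape)             # running sum of exp(s - m) @ v
    denom = jnp.zeros(q.shape[0])        # running sum of exp(s - m)
    m = jnp.full(q.shape[0], -jnp.inf)   # running max of scores per query

    for start in range(0, n_kv, chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        # Scores for this chunk only: shape [n_q, chunk_size].
        s = q @ k_c.T / jnp.sqrt(q.shape[-1])
        m_new = jnp.maximum(m, s.max(axis=-1))
        # Rescale the old accumulators to the new running max, then add
        # this chunk's contribution (exp(-inf) = 0 on the first chunk).
        correction = jnp.exp(m - m_new)
        p = jnp.exp(s - m_new[:, None])
        acc = acc * correction[:, None] + p @ v_c
        denom = denom * correction + p.sum(axis=-1)
        m = m_new
    return acc / denom[:, None]
```

Up to floating-point error, this returns the same result as the standard `softmax(q @ k.T / sqrt(d)) @ v`, while only ever holding `[n_q, chunk_size]` score slices and `[n_q, d]` accumulators in memory; splitting the queries into chunks as well is what brings the paper's implementation down to $O(\sqrt{n})$.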