sl:arxiv_summary : Retrieval-based language models (R-LM) model the probability of natural
language text by combining a standard language model (LM) with examples
retrieved from an external datastore at test time. While effective, a major
bottleneck of using these models in practice is the computationally costly
datastore search, which can be performed as frequently as every time step. In
this paper, we present RetoMaton - retrieval automaton - which approximates the
datastore search, based on (1) saving pointers between consecutive datastore
entries, and (2) clustering of entries into "states". This effectively results
in a weighted finite automaton built on top of the datastore, instead of
representing the datastore as a flat list. The creation of the automaton is
unsupervised, and a RetoMaton can be constructed from any text collection:
either the original training corpus or a corpus from another domain. Traversing
this automaton at inference time, in parallel with LM inference, reduces
perplexity by up to 1.85 points, or alternatively saves up to 83% of the nearest
neighbor searches over $k$NN-LM (Khandelwal et al., 2020) without hurting
perplexity. Our code and trained models are available at
https://github.com/neulab/retomaton .@en
sl:arxiv_title : Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval@en
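
For intuition, below is a minimal, self-contained sketch of the pointer-following idea the abstract describes. The names Datastore, knn_search, and traverse are hypothetical simplifications, not the released neulab/retomaton API; in particular, the paper clusters entries into automaton "states", which this toy version replaces with a raw set of live pointers.

import numpy as np

class Datastore:
    """Flat kNN-LM datastore: (context embedding, next token) pairs kept
    in corpus order, so entry i+1 continues the text of entry i."""
    def __init__(self, keys, next_tokens):
        self.keys = keys                      # (N, d) context embeddings
        self.next_tokens = next_tokens        # token that followed each key
        # (1) Pointer from each entry to its consecutive successor.
        self.pointer = list(range(1, len(next_tokens))) + [None]

def knn_search(query, ds, k):
    """The costly operation RetoMaton tries to skip: an exact
    k-nearest-neighbor search over the whole datastore."""
    dists = np.linalg.norm(ds.keys - query, axis=1)
    return np.argsort(dists)[:k].tolist()

def traverse(steps, ds, k=4):
    """One pass over the text, counting how many searches were skipped.
    `steps` yields (query embedding, token the LM emitted) per time step."""
    frontier = []                             # live datastore entries
    saved = total = 0
    for query, emitted_token in steps:
        total += 1
        if frontier:
            saved += 1                        # reuse pointers, skip the search
        else:
            frontier = knn_search(query, ds, k)
        # ...here the retrieved next_tokens would be interpolated with the
        # LM's distribution, as in kNN-LM (omitted in this sketch)...
        # Keep entries whose stored next token matched the emitted token
        # and follow their pointers: next step's candidates come for free.
        frontier = [ds.pointer[i] for i in frontier
                    if ds.next_tokens[i] == emitted_token
                    and ds.pointer[i] is not None]
    return saved, total

Whenever the frontier survives a step, the next step's candidates come from a handful of pointer lookups instead of a full nearest-neighbor search, which is where savings like the reported "up to 83% of the nearest neighbor searches" would come from under this simplified model.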