About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Eric Mitchell
- sl:arxiv_num : 2206.06520
- sl:arxiv_published : 2022-06-13T23:40:34Z
- sl:arxiv_summary : Even the largest neural networks make errors, and once-correct predictions
can become invalid as the world changes. Model editors make local updates to
the behavior of base (pre-trained) models to inject updated knowledge or
correct undesirable behaviors. Existing model editors have shown promise, but
also suffer from insufficient expressiveness: they struggle to accurately model
an edit's intended scope (examples affected by the edit), leading to inaccurate
predictions for test inputs loosely related to the edit, and they often fail
altogether after many edits. As a higher-capacity alternative, we propose
Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model
(SERAC), which stores edits in an explicit memory and learns to reason over
them to modulate the base model's predictions as needed. To enable more
rigorous evaluation of model editors, we introduce three challenging language
model editing problems based on question answering, fact-checking, and dialogue
generation. We find that only SERAC achieves high performance on all three
problems, consistently outperforming existing approaches to model editing by a
significant margin. Code, data, and additional project information will be made
available at https://sites.google.com/view/serac-editing.@en
- sl:arxiv_title : Memory-Based Model Editing at Scale@en
- sl:arxiv_updated : 2022-06-13T23:40:34Z
- sl:bookmarkOf : https://arxiv.org/abs/2206.06520
- sl:creationDate : 2022-07-07
- sl:creationTime : 2022-07-07T16:16:11Z
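
The abstract outlines SERAC's mechanism: edits are stored in an explicit memory, a learned scope model decides whether a test input falls under any stored edit, and a retrieval-augmented counterfactual model overrides the base model only for in-scope inputs. Below is a minimal sketch of that control flow; the names (`SeracEditor`, `scope_score`, `counterfactual_model`) are hypothetical stand-ins rather than the authors' actual API, and the scope test here is a toy placeholder for the learned classifier described in the paper.

```python
# Illustrative sketch of memory-based (SERAC-style) editing.
# All names are hypothetical; the real system uses learned scope-classifier
# and counterfactual models rather than these toy callables.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Edit = Tuple[str, str]  # (edit input, desired output)

@dataclass
class SeracEditor:
    base_model: Callable[[str], str]                  # frozen pre-trained model
    counterfactual_model: Callable[[str, Edit], str]  # predicts conditioned on a retrieved edit
    scope_score: Callable[[str, str], float]          # how strongly an edit's scope covers an input
    threshold: float = 0.5
    memory: List[Edit] = field(default_factory=list)  # explicit edit memory

    def apply_edit(self, edit_input: str, edit_output: str) -> None:
        # Editing never touches the base model's weights: just store the edit.
        self.memory.append((edit_input, edit_output))

    def __call__(self, x: str) -> str:
        if self.memory:
            # Retrieve the stored edit whose scope best matches the input.
            best = max(self.memory, key=lambda e: self.scope_score(x, e[0]))
            if self.scope_score(x, best[0]) >= self.threshold:
                # In scope: the counterfactual model modulates the prediction.
                return self.counterfactual_model(x, best)
        # Out of scope (or no edits): base model behavior is unchanged.
        return self.base_model(x)

# Toy usage with trivial stand-ins for the learned components.
editor = SeracEditor(
    base_model=lambda x: "base answer",
    counterfactual_model=lambda x, edit: edit[1],
    scope_score=lambda x, scope: float(x == scope),
)
editor.apply_edit("capital of Freedonia?", "Fredville")
assert editor("capital of Freedonia?") == "Fredville"  # in scope: edited
assert editor("capital of France?") == "base answer"   # out of scope: untouched
```

Because the base model stays frozen, applying or retracting an edit is just a memory update, which is what lets this style of editor keep working after many sequential edits where weight-updating editors degrade.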