About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Sandeep Subramanian
- sl:arxiv_num : 1909.03186
- sl:arxiv_published : 2019-09-07T04:33:26Z
- sl:arxiv_summary : We present a method to produce abstractive summaries of long documents that
exceed several thousand words via neural abstractive summarization. We perform
a simple extractive step before generating a summary, which is then used to
condition the transformer language model on relevant information before being
tasked with generating a summary. We show that this extractive step
significantly improves summarization results. We also show that this approach
produces more abstractive summaries compared to prior work that employs a copy
mechanism while still achieving higher ROUGE scores. Note: The abstract above
was not written by the authors, it was generated by one of the models presented
in this paper.@en
- sl:arxiv_title : On Extractive and Abstractive Neural Document Summarization with Transformer Language Models@en
- sl:arxiv_updated : 2019-09-07T04:33:26Z
- sl:bookmarkOf : https://arxiv.org/abs/1909.03186
- sl:creationDate : 2019-09-11
- sl:creationTime : 2019-09-11T18:15:42Z