About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Qianqian Xie
- sl:arxiv_num : 2208.09982
- sl:arxiv_published : 2022-08-21T23:09:29Z
- sl:arxiv_summary : Recently, neural topic models (NTMs) have been incorporated into pre-trained
language models (PLMs) to capture global semantic information for text
summarization. However, these methods remain limited in how they capture and
integrate that global semantic information. In this paper, we propose a novel
model, the graph contrastive topic-enhanced language model (GRETEL), which
combines a graph contrastive topic model with a pre-trained language model to
fully leverage both global and local contextual semantics for long-document
extractive summarization. To better capture global semantic information and
incorporate it into PLMs, the graph contrastive topic model integrates a
hierarchical transformer encoder with graph contrastive learning to fuse
semantic information from the global document context and the gold summary. In
this way, GRETEL encourages the model to extract salient sentences that are
topically related to the gold summary, rather than redundant sentences covering
sub-optimal topics. Experimental results on both general-domain and biomedical
datasets demonstrate that our proposed method outperforms state-of-the-art
(SOTA) methods.
- sl:arxiv_title : GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization
- sl:arxiv_updated : 2022-08-21T23:09:29Z
- sl:bookmarkOf : https://arxiv.org/abs/2208.09982
- sl:creationDate : 2022-08-24
- sl:creationTime : 2022-08-24T08:13:17Z
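The graph contrastive learning mentioned in the abstract is, in spirit, an InfoNCE-style objective: matched views (e.g. a document-context representation and its gold-summary representation) are pulled together while mismatched pairs in the batch act as negatives. The sketch below is illustrative only, not the paper's implementation; the function name, the use of in-batch negatives, and the temperature value are all assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss (illustrative, not GRETEL's code).

    anchors, positives: (n, d) arrays where row i of `positives` is the
    positive pair for row i of `anchors`; all other rows in the batch
    serve as negatives.
    """
    # L2-normalise so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (n, n) similarity matrix
    # Cross-entropy where the diagonal entry is the correct "class".
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

When the two views are well aligned the loss approaches zero; mismatched pairings yield a larger loss, which is the gradient signal that pulls topically related representations together.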