About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Xiaojing Liu
- sl:arxiv_num : 1903.11279
- sl:arxiv_published : 2019-03-27T07:47:12Z
- sl:arxiv_summary : Visually rich documents (VRDs) are ubiquitous in daily business and life.
Examples include purchase receipts, insurance policy documents, and customs
declaration forms. In VRDs, visual and layout information is critical for
document understanding, and the text in such documents cannot be serialized into
a one-dimensional sequence without losing information. Classic information
extraction models such as BiLSTM-CRF typically operate on text sequences and do
not incorporate visual features. In this paper, we introduce a graph-convolution-based
model to combine the textual and visual information presented in
VRDs. Graph embeddings are trained to summarize the context of a text segment
in the document and are further combined with text embeddings for entity
extraction. Extensive experiments show that our method
outperforms BiLSTM-CRF baselines by significant margins on two real-world
datasets. Ablation studies are also performed to evaluate the
effectiveness of each component of our model.@en
- sl:arxiv_title : Graph Convolution for Multimodal Information Extraction from Visually Rich Documents@en
- sl:arxiv_updated : 2019-03-27T07:47:12Z
- sl:bookmarkOf : https://arxiv.org/abs/1903.11279
- sl:creationDate : 2020-06-16
- sl:creationTime : 2020-06-16T09:27:40Z
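The abstract describes a pipeline in which graph embeddings summarize a text segment's document context and are then fused with text embeddings for entity extraction. A minimal sketch of that idea follows, with assumed toy dimensions, a fully connected segment graph, random layout edge features, and a simplified mean-pooling convolution; none of these choices come from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 text segments (nodes) in a document, each with an
# 8-dim text embedding; edges carry 2-dim layout features
# (e.g. relative x/y offsets between segment boxes). All values
# are random placeholders, not the paper's actual features.
num_nodes, text_dim, edge_dim = 4, 8, 2
text_emb = rng.normal(size=(num_nodes, text_dim))
edge_feat = rng.normal(size=(num_nodes, num_nodes, edge_dim))
adj = np.ones((num_nodes, num_nodes)) - np.eye(num_nodes)  # fully connected

def graph_conv_layer(h, edge_feat, adj, W):
    """One simplified graph-convolution step: each node aggregates its
    neighbors' states concatenated with the connecting edge's layout
    features, then applies a shared linear map and a ReLU."""
    n, _ = h.shape
    out = np.zeros_like(h)
    for i in range(n):
        msgs = [np.concatenate([h[j], edge_feat[i, j]])
                for j in range(n) if adj[i, j]]
        agg = np.mean(msgs, axis=0)          # mean-pool neighbor messages
        out[i] = np.maximum(agg @ W, 0.0)    # shared projection + ReLU
    return out

W = rng.normal(size=(text_dim + edge_dim, text_dim)) * 0.1
graph_emb = graph_conv_layer(text_emb, edge_feat, adj, W)

# Fuse: concatenate the graph (context) embedding with the original
# text embedding, as input to a downstream entity tagger such as a
# BiLSTM-CRF.
fused = np.concatenate([text_emb, graph_emb], axis=1)
print(fused.shape)  # → (4, 16)
```

The paper's actual model uses learned attention over neighbors and trains the graph layers jointly with the extractor; the mean-pooling step here is only a stand-in to show the shape of the computation.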