About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Geewook Kim
- sl:arxiv_num : 2111.15664
- sl:arxiv_published : 2021-11-30T18:55:19Z
- sl:arxiv_summary : Understanding document images (e.g., invoices) is a core but challenging task
since it requires complex functions such as reading text and a holistic
understanding of the document. Current Visual Document Understanding (VDU)
methods outsource the task of reading text to off-the-shelf Optical Character
Recognition (OCR) engines and focus on the understanding task with the OCR
outputs. Although such OCR-based approaches have shown promising performance,
they suffer from 1) high computational costs of running OCR; 2) inflexibility of
OCR models across languages and document types; and 3) propagation of OCR errors to
subsequent processing. To address these issues, in this paper we introduce a
novel OCR-free VDU model named Donut, which stands for Document understanding
transformer. As a first step in OCR-free VDU research, we propose a simple
architecture (i.e., a Transformer) with a pre-training objective (i.e.,
cross-entropy loss). Donut is conceptually simple yet effective. Through
extensive experiments and analyses, we show that this simple OCR-free VDU model, Donut,
achieves state-of-the-art performance on various VDU tasks in terms of both
speed and accuracy. In addition, we offer a synthetic data generator that makes
the model's pre-training flexible across various languages and domains. The
code, trained model, and synthetic data are available at
https://github.com/clovaai/donut.@en
- sl:arxiv_title : OCR-free Document Understanding Transformer@en
- sl:arxiv_updated : 2022-10-06T06:50:39Z
- sl:bookmarkOf : https://arxiv.org/abs/2111.15664
- sl:creationDate : 2023-02-13
- sl:creationTime : 2023-02-13T23:54:43Z
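
Below is a minimal usage sketch of the released model mentioned in the abstract, assuming the Hugging Face transformers integration of Donut (DonutProcessor / VisionEncoderDecoderModel) and the public naver-clova-ix/donut-base-finetuned-cord-v2 checkpoint; the checkpoint id, task prompt, and file name are assumptions drawn from the linked repository's ecosystem, not details stated in this record.

```python
# Hedged sketch: OCR-free document parsing with a Donut checkpoint via the
# Hugging Face transformers integration. The checkpoint id, task prompt, and
# image path are assumptions, not details from this record.
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

checkpoint = "naver-clova-ix/donut-base-finetuned-cord-v2"  # assumed public checkpoint
processor = DonutProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)
model.eval()

# Any document image; no external OCR step is needed.
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut is conditioned on a task-specific start prompt and generates a
# structured token sequence directly from the image.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
    )

# Strip special tokens and the leading task prompt, then convert the
# generated sequence into a JSON-like structure.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```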