About this document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Wenhu Chen
- sl:arxiv_num : 1909.02164
- sl:arxiv_published : 2019-09-05T00:25:17Z
- sl:arxiv_summary : The problem of verifying whether a textual hypothesis holds based on the
given evidence, also known as fact verification, plays an important role in the
study of natural language understanding and semantic representation. However,
existing studies are mainly restricted to dealing with unstructured evidence
(e.g., natural language sentences and documents, news, etc.), while verification
under structured evidence, such as tables, graphs, and databases, remains
under-explored. This paper specifically aims to study the fact verification
given semi-structured data as evidence. To this end, we construct a large-scale
dataset called TabFact with 16k Wikipedia tables as the evidence for 118k
human-annotated natural language statements, which are labeled as either
ENTAILED or REFUTED. TabFact is challenging since it involves both soft
linguistic reasoning and hard symbolic reasoning. To address these reasoning
challenges, we design two different models: Table-BERT and Latent Program
Algorithm (LPA). Table-BERT leverages the state-of-the-art pre-trained language
model to encode the linearized tables and statements into continuous vectors
for verification. LPA parses statements into programs and executes them against
the tables to obtain the returned binary value for verification. Both methods
achieve similar accuracy but still lag far behind human performance. We also
perform a comprehensive analysis to demonstrate great future opportunities. The
data and code of the dataset are provided at
https://github.com/wenhuchen/Table-Fact-Checking.@en
- sl:arxiv_title : TabFact: A Large-scale Dataset for Table-based Fact Verification@en
- sl:arxiv_updated : 2019-12-31T17:16:32Z
- sl:bookmarkOf : https://arxiv.org/abs/1909.02164
- sl:creationDate : 2019-12-01
- sl:creationTime : 2019-12-01T13:20:21Z
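
The Table-BERT model described in the abstract first flattens each table into a string so it can be encoded jointly with the statement. A minimal sketch of such a linearization step, assuming a "row i is : column is value" template (the paper experiments with several templates; `linearize_table` is a hypothetical helper name, not the authors' code):

```python
def linearize_table(headers, rows):
    """Flatten a table into a natural-language-like string for a
    BERT-style encoder, as in Table-BERT's linearization step.

    The template used here is an illustrative assumption; the paper
    compares multiple linearization strategies.
    """
    parts = []
    for i, row in enumerate(rows, start=1):
        # Pair each header with its cell value, e.g. "team is tigers".
        cells = " ; ".join(f"{h} is {v}" for h, v in zip(headers, row))
        parts.append(f"row {i} is : {cells} .")
    return " ".join(parts)

headers = ["team", "wins"]
rows = [["tigers", "10"], ["lions", "7"]]
print(linearize_table(headers, rows))
```

The flattened string would then be concatenated with the statement as a sentence pair and fed to the pre-trained language model for binary ENTAILED/REFUTED classification.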