About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Congying Xia
- sl:arxiv_num : 2104.11882
- sl:arxiv_published : 2021-04-24T04:41:15Z
- sl:arxiv_summary : Text classification is usually studied by labeling natural language texts
with relevant categories from a predefined set. In the real world, new classes
may keep challenging the existing system with limited labeled data. The
system should be intelligent enough to recognize upcoming new classes from a
few examples. In this work, we define a new task in the NLP domain, incremental
few-shot text classification, where the system incrementally handles multiple
rounds of new classes. In each round, a batch of new classes arrives with a
few labeled examples per class. Two major challenges exist in this new task:
(i) for the learning process, the system should incrementally learn new classes
round by round without re-training on the examples of preceding classes; (ii)
for performance, the system should perform well on new classes without much
loss on preceding classes. In addition to formulating the new task, we also
release two benchmark datasets in the incremental few-shot setting: intent
classification and relation classification. Moreover, we propose two entailment
approaches, ENTAILMENT and HYBRID, which show promise for solving this novel
problem.@en
- sl:arxiv_title : Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System@en
- sl:arxiv_updated : 2021-04-24T04:41:15Z
- sl:bookmarkOf : https://arxiv.org/abs/2104.11882
- sl:creationDate : 2022-10-25
- sl:creationTime : 2022-10-25T11:46:21Z