About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Thomas Müller
- sl:arxiv_num : 2203.14655
- sl:arxiv_published : 2022-03-28T11:16:46Z
- sl:arxiv_summary : We study the problem of building text classifiers with little or no
training data, commonly known as zero- and few-shot text classification. In recent
years, an approach based on neural textual entailment models has been found to give
strong results on a diverse range of tasks. In this work, we show that with proper
pre-training, Siamese Networks that embed texts and labels offer a competitive
alternative. These models allow for a large reduction in inference cost: constant in
the number of labels rather than linear. Furthermore, we introduce label tuning, a
simple and computationally efficient approach that adapts the models in a few-shot
setup by changing only the label embeddings. While it gives lower performance than
full model fine-tuning, this approach has the architectural advantage that a single
encoder can be shared by many different tasks. (A minimal sketch of both ideas
follows this record.)@en
- sl:arxiv_title : Few-Shot Learning with Siamese Networks and Label Tuning@en
- sl:arxiv_updated : 2022-03-28T11:16:46Z
- sl:bookmarkOf : https://arxiv.org/abs/2203.14655
- sl:creationDate : 2022-03-30
- sl:creationTime : 2022-03-30T16:14:44Z
- sl:relatedDoc : http://www.semanlink.net/doc/2022/03/thomas_muller_sur_twitter_pa
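The abstract above describes two ideas that a short sketch can make concrete: a Siamese (bi-encoder) setup in which the label names are embedded once and inputs are classified by similarity to those cached label embeddings, and label tuning, in which only the label embeddings are adapted on a few-shot set while the encoder stays frozen. The sketch below is an illustration under stated assumptions, not the authors' implementation: the `encode` stub, its hashed bag-of-words features, and the softmax cross-entropy objective used for tuning are all stand-ins for the paper's pre-trained Siamese encoder and training details.

```python
# Minimal sketch of embedding-based zero-shot classification plus label
# tuning. The encoder here is a hypothetical stand-in (hashed bag-of-words),
# NOT the pre-trained Siamese model from the paper.
import numpy as np

DIM = 64  # embedding dimension of the toy encoder

def encode(texts):
    """Hypothetical frozen text encoder: hash tokens into a bag-of-words
    vector and L2-normalize it. A real system would use a pre-trained
    Siamese/bi-encoder sentence model instead."""
    out = np.zeros((len(texts), DIM))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            out[i, hash(tok) % DIM] += 1.0
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-9)

# --- Zero-shot: embed the label names once, then score inputs by similarity.
labels = ["sports", "politics", "technology"]
label_emb = encode(labels)               # (L, d), computed once and cached

def predict(texts, label_emb):
    # One encoder pass per input; cost is constant in the number of labels,
    # since the (n, d) x (d, L) similarity matmul is negligible next to the
    # encoder forward pass.
    return encode(texts) @ label_emb.T   # (n, L) similarity scores

# --- Label tuning: adapt ONLY the label embeddings on a few-shot set,
# here with plain gradient descent on softmax cross-entropy (an assumed
# objective; the record does not specify the paper's exact loss).
def label_tune(texts, gold, label_emb, lr=0.5, epochs=50):
    X = encode(texts)                    # frozen features, encoded once
    W = label_emb.copy()                 # the only trainable parameters
    for _ in range(epochs):
        logits = X @ W.T
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(gold)), gold] -= 1.0   # grad of CE w.r.t. logits
        W -= lr * (p.T @ X) / len(gold)        # update label embeddings only
    return W

# Illustrative usage on a tiny few-shot set (outputs depend on the toy hash).
few_shot = ["the team won the match", "parliament passed the bill",
            "a new chip was announced"]
tuned = label_tune(few_shot, gold=np.array([0, 1, 2]), label_emb=label_emb)
print(predict(["the senate debated the budget"], tuned).argmax(axis=1))
```

This also shows why the abstract's two claims hold in this setup: inference stays constant in the number of labels because each input needs a single encoder pass against precomputed label embeddings (the entailment approach needs one pass per input-label pair), and a single frozen encoder can serve many tasks because each task carries only its own small tuned label matrix `W`.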