About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Weicheng Kuo
- sl:arxiv_num : 2303.16839
- sl:arxiv_published : 2023-03-29T16:42:30Z
- sl:arxiv_summary : The development of language models has moved from encoder-decoder to
decoder-only designs. In addition, common knowledge has it that the two
most popular multimodal tasks, the generative and contrastive tasks, tend to
conflict with one another, are hard to accommodate in one architecture, and
further need complex adaptations for downstream tasks. We propose a novel
paradigm of training with a decoder-only model for multimodal tasks, which is
surprisingly effective at jointly learning these disparate vision-language
tasks. This is done with a simple model, called MaMMUT. It consists of a single
vision encoder and a text decoder, and is able to accommodate contrastive and
generative learning by a novel two-pass approach on the text decoder. We
demonstrate that joint learning of these diverse objectives is simple,
effective, and maximizes the weight-sharing of the model across these tasks.
Furthermore, the same architecture enables straightforward extensions to
open-vocabulary object detection and video-language tasks. The model tackles a
diverse range of tasks while being modest in capacity. Our model achieves the
state of the art on image-text and text-image retrieval, video question
answering and open-vocabulary detection tasks, outperforming much larger and
more extensively trained foundational models. It shows very competitive results
on VQA and Video Captioning, especially considering its capacity. Ablations
confirm the flexibility and advantages of our approach.@en
- sl:arxiv_title : MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks@en
- sl:arxiv_updated : 2023-03-30T05:44:47Z
- sl:bookmarkOf : https://arxiv.org/abs/2303.16839
- sl:creationDate : 2023-04-25
- sl:creationTime : 2023-04-25T00:33:41Z
- sl:relatedDoc : http://www.semanlink.net/doc/2021/01/clip_connecting_text_and_images
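The abstract describes a single text decoder that serves both the contrastive and the generative objective through two forward passes. A minimal, hedged PyTorch sketch of that idea follows; it is not the authors' code, and the module names, dimensions, pooling, and loss wiring are illustrative assumptions. The reading assumed here (consistent with the paper) is that the contrastive pass uses bidirectional self-attention and no image cross-attention, while the generative pass uses causal masking plus cross-attention to the vision encoder's features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoderBlock(nn.Module):
    """Transformer block with self-attention and optional image cross-attention."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, image_feats=None, causal_mask=None):
        h = self.ln1(x)
        x = x + self.self_attn(h, h, h, attn_mask=causal_mask, need_weights=False)[0]
        if image_feats is not None:  # cross-attend to the image only in the generative pass
            h = self.ln2(x)
            x = x + self.cross_attn(h, image_feats, image_feats, need_weights=False)[0]
        return x + self.mlp(self.ln3(x))


class TwoPassTextDecoder(nn.Module):
    """Sketch of the two-pass idea: one shared decoder, two forward passes."""
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.blocks = nn.ModuleList([DecoderBlock(d_model, n_heads) for _ in range(n_layers)])
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, text_ids, image_feats):
        B, T = text_ids.shape
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=text_ids.device), diagonal=1)

        # Pass 1 (contrastive): bidirectional self-attention, no image cross-attention,
        # yielding a unimodal text embedding to contrast against the image embedding.
        x = self.embed(text_ids)
        for blk in self.blocks:
            x = blk(x, image_feats=None, causal_mask=None)
        text_emb = F.normalize(x.mean(dim=1), dim=-1)

        # Pass 2 (generative): causal self-attention plus cross-attention to the
        # vision-encoder features, trained with next-token prediction (captioning).
        y = self.embed(text_ids)
        for blk in self.blocks:
            y = blk(y, image_feats=image_feats, causal_mask=causal)
        logits = self.lm_head(y)
        return text_emb, logits


# Example usage with dummy tensors (vision encoder not shown; shapes are assumptions):
# decoder = TwoPassTextDecoder()
# text_ids = torch.randint(0, 32000, (2, 16))
# image_feats = torch.randn(2, 49, 512)   # e.g. patch features projected to d_model
# text_emb, logits = decoder(text_ids, image_feats)
```

In training, text_emb would feed a CLIP-style contrastive loss against the pooled image embedding, while logits would feed a next-token cross-entropy loss; both passes reuse the same decoder weights, which is the weight-sharing the abstract emphasizes.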