About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Jonas Pfeiffer
- sl:arxiv_num : 2302.11529
- sl:arxiv_published : 2023-02-22T18:11:25Z
- sl:arxiv_summary : Transfer learning has recently become the dominant paradigm of machine
learning. Pre-trained models fine-tuned for downstream tasks achieve better
performance with fewer labelled examples. Nonetheless, it remains unclear how
to develop models that specialise towards multiple tasks without incurring
negative interference and that generalise systematically to non-identically
distributed tasks. Modular deep learning has emerged as a promising solution to
these challenges. In this framework, units of computation are often implemented
as autonomous parameter-efficient modules. Information is conditionally routed
to a subset of modules and subsequently aggregated. These properties enable
positive transfer and systematic generalisation by separating computation from
routing and updating modules locally. We offer a survey of modular
architectures, providing a unified view over several threads of research that
evolved independently in the scientific literature. Moreover, we explore
various additional purposes of modularity, including scaling language models,
causal inference, programme induction, and planning in reinforcement learning.
Finally, we report various concrete applications where modularity has been
successfully deployed such as cross-lingual and cross-modal knowledge transfer.
Related talks and projects for this survey are available at
https://www.modulardeeplearning.com/.@en
- sl:arxiv_title : Modular Deep Learning@en
- sl:arxiv_updated : 2023-02-22T18:11:25Z
- sl:bookmarkOf : https://arxiv.org/abs/2302.11529
- sl:creationDate : 2023-02-23
- sl:creationTime : 2023-02-23T13:25:12Z
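
The summary above describes the core mechanism of the surveyed architectures: autonomous parameter-efficient modules, a router that conditionally selects a subset of them per input, and an aggregation step. As an illustrative sketch only (not code from the paper), adapter-style modules with learned top-k routing and weighted aggregation might look like the following; all class names, hyperparameters, and the residual aggregation choice are assumptions for clarity:

```python
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    """Illustrative sketch: a pool of small bottleneck modules with a learned
    router that selects a subset per example and aggregates their outputs.
    Names and defaults are hypothetical, not taken from the survey."""

    def __init__(self, d_model: int, n_modules: int = 8,
                 bottleneck: int = 16, top_k: int = 2):
        super().__init__()
        # Parameter-efficient modules: small adapter-style bottleneck MLPs.
        self.module_pool = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, bottleneck),
                          nn.ReLU(),
                          nn.Linear(bottleneck, d_model))
            for _ in range(n_modules)
        ])
        # Router: scores every module conditioned on the input.
        self.router = nn.Linear(d_model, n_modules)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Each example is routed to its top-k modules.
        scores = self.router(x)                           # (batch, n_modules)
        weights, idx = scores.topk(self.top_k, dim=-1)    # sparse selection
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            for k in range(self.top_k):
                module = self.module_pool[int(idx[b, k])]
                out[b] += weights[b, k] * module(x[b:b + 1]).squeeze(0)
        # Aggregate the selected module outputs with a residual connection.
        return x + out
```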