Self-Supervised Learning, Yann LeCun, Facebook AI Research | Dartmouth News. I will propose the hypothesis that **self-supervised learning of predictive world models is an essential missing ingredient of current approaches to AI**. With such models, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning, and "common sense", allowing us to fill in missing information: predicting the future from the past and present, or inferring the state of the world from noisy percepts.
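To make the "fill in missing information" idea concrete, here is a minimal masked-prediction sketch in PyTorch. It illustrates the general principle only, not LeCun's specific method; the `Predictor` network, the roughly 50% masking ratio, and the random stand-in data are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

# Minimal self-supervised sketch (illustration only): hide part of each
# input and train a small network to reconstruct the hidden values from
# the visible context. The supervisory signal comes from the data itself.
class Predictor(nn.Module):  # hypothetical toy model, not from the talk
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, 32)                    # stand-in for real sensory data
    mask = (torch.rand_like(x) > 0.5).float()  # 1 = visible, 0 = hidden
    pred = model(x * mask)                     # predict from the visible part
    # Loss is measured only on the hidden entries: predict the missing
    # information from what is observed -- no human labels required.
    loss = ((pred - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same recipe underlies word embeddings and masked language modeling: the prediction target is carved out of the input itself rather than annotated by hand.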
[1806.05662] GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations. Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data, and transferring the graphs to downstream tasks.
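A hedged sketch of the core idea, under simplifying assumptions: a graph predictor turns per-unit features into a row-normalized pairwise affinity matrix, and that graph is then reused to mix features for a downstream task. The `GraphPredictor` module, its dimensions, and the squared-ReLU affinity below are loose paraphrases of the paper, not its exact architecture, and the self-supervised objective that trains the graph predictor is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified GLoMo-style graph learning (loose paraphrase of the paper):
# affinities are squared ReLU'd key-query products, row-normalized so each
# unit's dependencies on the other units form a distribution.
class GraphPredictor(nn.Module):
    def __init__(self, dim, d_hid=64):
        super().__init__()
        self.key = nn.Linear(dim, d_hid)
        self.query = nn.Linear(dim, d_hid)

    def forward(self, x):                         # x: (batch, units, dim)
        k, q = self.key(x), self.query(x)
        scores = F.relu(torch.bmm(q, k.transpose(1, 2))) ** 2
        graph = scores / scores.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return graph                              # (batch, units, units)

# Transfer: the learned graph propagates features of a *different* task.
x = torch.randn(2, 10, 32)                        # units from unlabeled data
graph = GraphPredictor(32)(x)                     # pairwise dependency graph
task_emb = torch.randn(2, 10, 16)                 # downstream task embeddings
mixed = torch.bmm(graph, task_emb)                # graph-weighted mixing
```

The transferable artifact here is the graph itself (which units depend on which), rather than the unary feature vectors transferred by conventional pretraining.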