4th Workshop on Representation Learning for NLP (About)
Talks:
- Language emergence as representation learning (Marco Baroni)
> Language emergence among deep neural network agents that have to jointly solve a task. Recent findings suggest that the language-like code developed by such agents both differs from and resembles natural language in interesting ways. For example, the emergent code does not naturally represent general concepts, but rather very specific invariances in the perceptual input.
- Representations shaped by dialogue interaction (Raquel Fernández)
> When we use language to communicate with each other in conversation, we build an internal representation of our evolving common ground. Traditionally, in dialogue systems this is captured by an explicit dialogue state defined a priori. Can we develop dialogue agents that learn their own (joint) representations?
- Knowledgeable and Adversarially-Robust Representation Learning (Mohit Bansal)
- Modeling Output Spaces in Continuous-Output Language Generation (Yulia Tsvetkov)
A2N: Attending to Neighbors for Knowledge Graph Inference - ACL 2019 (About)
> State-of-the-art models for knowledge graph completion aim at learning a fixed embedding representation of entities in a multi-relational graph which can generalize to infer unseen entity relationships at test time. This can be sub-optimal as it requires memorizing and generalizing to all possible entity relationships using these fixed representations. We thus propose a novel **attention-based method to learn query-dependent representation of entities** which adaptively combines the relevant graph neighborhood of an entity leading to more accurate KG completion.
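The core idea — attending over an entity's graph neighborhood conditioned on the query relation, rather than using one fixed embedding — can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's actual model: the entity/relation embeddings, the additive edge representation, and the dot-product attention scores below are all simplifying assumptions (A2N's exact scoring and combination functions differ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy knowledge graph: 5 entities, 3 relations, embedding dim 4.
num_entities, num_relations, dim = 5, 3, 4
entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = rng.normal(size=(num_relations, dim))

# Neighborhood of a source entity: list of (relation, neighbor-entity) edges.
neighbors = [(0, 1), (1, 2), (2, 3)]

def softmax(x):
    x = x - x.max()  # numerical stability
    e = np.exp(x)
    return e / e.sum()

def query_dependent_embedding(source, query_rel, neighbors):
    """Build a query-specific representation of `source` by attending over
    its (relation, neighbor) edges, weighted by relevance to `query_rel`."""
    query = relation_emb[query_rel]
    # Represent each edge; here simply relation + neighbor embedding (an assumption).
    edge_reprs = np.stack([relation_emb[r] + entity_emb[e] for r, e in neighbors])
    # Score each edge against the query relation and normalize to attention weights.
    weights = softmax(edge_reprs @ query)
    # Attention-weighted neighborhood context, mixed with the entity's own embedding.
    context = weights @ edge_reprs
    return entity_emb[source] + context

rep = query_dependent_embedding(source=0, query_rel=1, neighbors=neighbors)
print(rep.shape)  # (4,)
```

The same entity thus gets a different representation for each query relation, which is the contrast with fixed-embedding completion models the abstract draws.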