**BERT, ELMo, & GPT-2: How Contextual are Contextualized Word Representations? | SAIL Blog** (2020-03-28)

**Adrian Gschwend on Twitter: "getting started with RDF and JavaScript!..."** (2020-03-12)

**Gilda** (2020-03-22)

**[2003.00330] Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective** (2020-02-29; bookmarked 2020-03-15)
Luis Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, Moshe Vardi
Reviews the state of the art on the use of GNNs as a model of neural-symbolic computing.
Abstract: Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state of the art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.

**In Gabon, a cave could reveal secrets 700 years old** (2020-03-09)

**[1905.06088] Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning** (2019-05-15; bookmarked 2020-03-15)
Artur d'Avila Garcez, Marco Gori, Luis C. Lamb, Luciano Serafini, Michael Spranger, Son N. Tran
Abstract: Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular, have reached unprecedented impact not only across research communities, but also over popular media channels. However, concerns about interpretability and accountability of AI have been raised by influential thinkers. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.
**Martynas Jusevicius on Twitter: "Is there a solution for entity recognition that would use a local #KnowledgeGraph to look for matches? Ideally any SPARQL datasource..."** (2020-03-13)

**[1909.07606] K-BERT: Enabling Language Representation with Knowledge Graph** (2019-09-17; bookmarked 2020-03-08)
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang
A knowledge-enabled language representation model (K-BERT) built with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. (Summarized in [Domain adaptation of word embeddings through the exploitation of in-domain corpora and knowledge bases (PhD Thesis 2021)](doc:2022/03/domain_adaptation_of_word_embed), p43)
Abstract: Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora, but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, too much knowledge incorporation may divert a sentence from its correct meaning; this is called the knowledge noise (KN) issue. To overcome KN, K-BERT introduces soft-position encoding and a visible matrix to limit the impact of the injected knowledge. K-BERT can easily inject domain knowledge into the model simply by being equipped with a KG, without pre-training by itself, because it is capable of loading model parameters from a pre-trained BERT. Our investigation reveals promising results in twelve NLP tasks. Especially in domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for knowledge-driven problems that require expert knowledge.

**Max Little on Twitter: "Causal bootstrapping - a simple way of doing causal inference using arbitrary machine learning algo..."** (2020-03-08)
> Techniques from causal inference, such as probabilistic causal diagrams and do-calculus, provide powerful (nonparametric) tools for drawing causal inferences from such observational data. However, these techniques are often incompatible with modern, nonparametric machine learning algorithms since they typically require explicit probabilistic models. Here, we develop causal bootstrapping for augmenting classical nonparametric bootstrap resampling with information on the causal relationship between variables.

**How Taiwan fended off the coronavirus | WORLD News Group** (2020-03-29)

**[1902.10197] RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space** (2019-02-26; bookmarked 2020-03-03)
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang
> We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links.
Abstract: We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
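As an aside on the RotatE entry above: the scoring function described in the abstract (each relation acts as an element-wise rotation in complex vector space) is compact enough to sketch directly. The snippet below is a toy illustration with NumPy and made-up embeddings, not the authors' implementation; it omits training and the self-adversarial negative sampling.

```python
import numpy as np

def rotate_score(h, r_phase, t):
    """RotatE plausibility score for a triple (h, r, t).

    h, t: complex entity embeddings of shape (k,).
    r_phase: real phase vector of shape (k,); the relation embedding is
    exp(i * r_phase), i.e. a unit-modulus rotation of each complex dimension.
    Higher (less negative) scores mean more plausible triples.
    """
    r = np.exp(1j * r_phase)                 # |r_i| = 1 by construction
    return -np.linalg.norm(h * r - t, ord=1) # distance after rotating h by r

# Toy example with random embeddings (k = 4 complex dimensions).
rng = np.random.default_rng(0)
k = 4
h = rng.normal(size=k) + 1j * rng.normal(size=k)
r_phase = rng.uniform(-np.pi, np.pi, size=k)
t_true = h * np.exp(1j * r_phase)            # exactly the rotation of h
t_rand = rng.normal(size=k) + 1j * rng.normal(size=k)

print(rotate_score(h, r_phase, t_true))      # ~0.0, the best possible score
print(rotate_score(h, r_phase, t_rand))      # clearly lower
```

Because a composition of rotations is a rotation, and the inverse of a rotation is a rotation, this parameterization can represent composition, inversion and (anti)symmetry patterns, which is the point the abstract makes.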
**Unsupervised NER using BERT - Hands-on NLP model review - Quora** (2020-03-06)
[GitHub](https://github.com/ajitrajasekharan/unsupervised_NER)

**[1911.02168] CoKE: Contextualized Knowledge Graph Embedding** (2019-11-06; bookmarked 2020-03-22)
Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Yajuan Lyu, Yong Zhu, Hua Wu, Jing Liu
A method to build contextualized entity and relation embeddings. Entities and relations may appear in different graph contexts. **Edges and paths, both formulated as sequences of entities and relations, are passed as input to a Transformer encoder to learn the contextualized representations.** [GitHub](https://github.com/PaddlePaddle/Research/tree/master/KG/CoKE)
Abstract: Knowledge graph embedding, which projects symbolic entities and relations into continuous vector spaces, is gaining increasing attention. Previous methods allow a single static embedding for each entity or relation, ignoring their intrinsic contextual nature, i.e., entities and relations may appear in different graph contexts, and accordingly, exhibit different properties. This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings. Two types of graph contexts are studied: edges and paths, both formulated as sequences of entities and relations. CoKE takes a sequence as input and uses a Transformer encoder to obtain contextualized representations. These representations are hence naturally adaptive to the input, capturing contextual meanings of entities and relations therein. Evaluation on a wide variety of public benchmarks verifies the superiority of CoKE in link prediction and path query answering. It performs consistently better than, or at least equally well as, the current state of the art in almost every case, in particular offering an absolute improvement of 21.0% in H@10 on path query answering. Our code is available at https://github.com/PaddlePaddle/Research/tree/master/KG/CoKE.
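To make the CoKE recipe above concrete, here is a minimal, illustrative sketch in PyTorch (invented sizes and vocabulary, not the official PaddlePaddle code): an edge or path query is encoded as a token sequence with a [MASK] slot, and a Transformer encoder predicts the missing entity.

```python
# Minimal sketch of the CoKE idea: an edge (h, r, ?) or a path (h, r1, r2, ?)
# is a token sequence, the missing entity is a [MASK] token, and a Transformer
# encoder predicts it. All sizes and names below are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCoKE(nn.Module):
    def __init__(self, vocab_size, dim=64, n_heads=4, n_layers=2, max_len=8):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)      # entities + relations + [MASK]
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(dim, vocab_size)         # score every vocabulary item

    def forward(self, seq):                           # seq: (batch, seq_len) token ids
        pos = torch.arange(seq.size(1), device=seq.device)
        x = self.tok(seq) + self.pos(pos)
        h = self.encoder(x)                           # contextualized representations
        return self.out(h)                            # (batch, seq_len, vocab_size)

# Toy usage: vocabulary = {0: [MASK], 1: paris, 2: capital_of, 3: france, ...}
model = TinyCoKE(vocab_size=10)
query = torch.tensor([[1, 2, 0]])                     # (paris, capital_of, [MASK])
logits = model(query)[:, -1, :]                       # scores for the masked slot
loss = nn.functional.cross_entropy(logits, torch.tensor([3]))  # target: france
```

The same entity gets a different representation depending on the edge or path it appears in, which is what distinguishes this from static embeddings such as TransE or RotatE.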
**In Kenya, the only female white giraffe and her calf killed by poachers** (2020-03-11)

**Combining knowledge graphs, quickly and accurately (2020)** (2020-03-19)
Entity matching at Amazon: a new [#entity alignment](/tag/entity_alignment) technique that factors in information about the graph in the vicinity of the entity name. A [#graph neural network](/tag/graph_neural_networks) that specifically addresses the problem of **merging multi-type knowledge graphs**.

**CS294-158-SP20 Deep Unsupervised Learning Spring 2020** (2020-03-08)
Covers two areas of deep learning in which labeled data is not required: deep generative models and self-supervised learning.

**LinkedDataHub - AtomGraph's open-source Knowledge Graph management system** (2020-03-05)
> It is as easy to use for graph data as WordPress is for web content

**Mapper Annotated Text Plugin | Elastic** (2020-03-14)
The documentation about annotated text fields. See also these Elastic discussion threads:
- <https://discuss.elastic.co/t/can-elasticsearch-handle-long-text/173991/2>
- <https://discuss.elastic.co/t/continued-support-for-annotated-text-plugin/218688>

**One-track minds: Using AI for music source separation** (2020-03-08)

**The biggest explosion ever observed since the Big Bang** (2020-03-01)
The event was so powerful that it reportedly carved out a cavity the size of 15 Milky Ways combined in the surrounding plasma.
> The Universe is a strange place.

**AmbiverseNLU: A Natural Language Understanding suite by Max Planck Institute for Informatics** (2020-03-13)

**DIY masks for all could help stop coronavirus - The Washington Post** (2020-03-29)

**Neuromorphic spintronics | Nature Electronics** (2020-03-02)

**[2003.08271] Pre-trained Models for Natural Language Processing: A Survey** (2020-03-18; bookmarked 2020-03-19)
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
Abstract: Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy with four perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.

**Coronavirus: Why You Must Act Now - Tomas Pueyo - Medium** (2020-03-11)

**Taiwan, a model in the fight against the coronavirus (RFI - 12/03/2020)** (2020-03-29)

**[2003.03384] AutoML-Zero: Evolving Machine Learning Algorithms From Scratch** (2020-03-06; bookmarked 2020-03-17)
Esteban Real, Chen Liang, David R. So, Quoc V. Le
> Fun AutoML-Zero experiments: Evolutionary search discovers fundamental ML algorithms from scratch, e.g., small neural nets with backprop.
> Can evolution be the "Master Algorithm"? ;)
Abstract: Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks, or on similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.
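A toy sketch of the kind of search AutoML-Zero describes: evolution over small programs built from basic mathematical operations, evaluated on a task, with tournament selection and removal of the oldest individual (regularized evolution). Everything here (the op set, the register machine, the regression task) is an illustrative assumption and far simpler than the paper's actual search space.

```python
import random
import numpy as np

np.seterr(all="ignore")  # random programs routinely overflow; ignore warnings

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "sin": lambda a, b: np.sin(a),
}

def random_instr(n_regs=4):
    # (op, input register a, input register b, output register)
    return (random.choice(list(OPS)), random.randrange(n_regs),
            random.randrange(n_regs), random.randrange(1, n_regs))

def run(program, x, n_regs=4):
    regs = [x] + [np.zeros_like(x) for _ in range(n_regs - 1)]
    for op, a, b, out in program:
        regs[out] = OPS[op](regs[a], regs[b])
    return regs[-1]                       # last register holds the prediction

def fitness(program, x, y):
    mse = np.mean((run(program, x) - y) ** 2)
    return -float(np.nan_to_num(mse, nan=1e9, posinf=1e9))  # higher is better

# Toy task: recover y = sin(x) * x from data.
x = np.linspace(-3, 3, 100)
y = np.sin(x) * x

population = [[random_instr() for _ in range(5)] for _ in range(50)]
for _ in range(2000):                     # regularized-evolution loop
    sample = random.sample(population, 10)                  # tournament
    parent = max(sample, key=lambda p: fitness(p, x, y))
    child = list(parent)
    child[random.randrange(len(child))] = random_instr()    # mutate one line
    population.append(child)
    population.pop(0)                     # remove the oldest individual

best = max(population, key=lambda p: fitness(p, x, y))
print("best MSE:", -fitness(best, x, y))
```

The paper's search space additionally separates setup, predict and learn functions and evaluates candidates on held-out tasks, which is what lets things like backpropagation emerge; none of that is attempted here.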
**[1909.03193] KG-BERT: BERT for Knowledge Graph Completion** (2019-09-07; bookmarked 2020-03-22)
Liang Yao, Chengsheng Mao, Yuan Luo
Pre-trained language models for knowledge graph completion. **Triples are treated as textual sequences.** (Hmm, I have seen this somewhere before. Ah, maybe [RDF2VEC](tag:rdf2vec)? // TODO check) The method takes the entity and relation descriptions of a triple as input and computes the scoring function of the triple with the KG-BERT language model. [GitHub](https://github.com/yao8839836/kg-bert)
> we first treat entities, relations and triples as textual sequences and turn knowledge graph completion into a sequence classification problem. We then fine-tune BERT model on these sequences for predicting the plausibility of a triple or a relation.
Abstract: Knowledge graphs are important resources for many artificial intelligence tasks but often suffer from incompleteness. In this work, we propose to use pre-trained language models for knowledge graph completion. We treat triples in knowledge graphs as textual sequences and propose a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples. Our method takes entity and relation descriptions of a triple as input and computes the scoring function of the triple with the KG-BERT language model. Experimental results on multiple benchmark knowledge graphs show that our method can achieve state-of-the-art performance in triple classification, link prediction and relation prediction tasks.
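The KG-BERT recipe quoted above (triples as textual sequences, scored by a fine-tuned sequence classifier) can be sketched with the Hugging Face transformers library. This is only an illustration of the idea, not the authors' code (see their GitHub for that); the example triple, separators and hyperparameters are made up, and the paper additionally uses segment embeddings to mark the three parts.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def triple_to_inputs(head_desc, relation, tail_desc):
    # Render the triple (head, relation, tail) as one text sequence,
    # separating the three parts with [SEP] tokens.
    text = f"{head_desc} [SEP] {relation} [SEP] {tail_desc}"
    return tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

# One positive training step (label 1 = plausible triple); in practice you
# would also sample corrupted triples as negatives (label 0).
inputs = triple_to_inputs("Steve Jobs, co-founder of Apple",
                          "founded",
                          "Apple Inc., technology company")
labels = torch.tensor([1])
outputs = model(**inputs, labels=labels)
outputs.loss.backward()   # fine-tune with an optimizer of your choice

# At inference time, the softmax over the two logits gives a plausibility score.
score = torch.softmax(outputs.logits, dim=-1)[0, 1].item()
```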
**Chengkai Li on Twitter: "Link prediction methods on knowledge graphs don't work..."** (2020-03-29)
[paper](/doc/2020/05/2003_08001_realistic_re_evalu)

**Google and HTTP** (2020-03-08)
> Google's "not secure" message means this: "Google tried to take control of the open web and this site said no."

**[2003.02320] Knowledge Graphs** (2020-03-04; bookmarked 2020-03-07)
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, Antoine Zimmermann
Draws together many topics & perspectives regarding Knowledge Graphs. 18 co-authors, led by Aidan Hogan. (Regarding language models for embedding, they refer to [Wang et al., Knowledge Graph Embedding: A Survey of Approaches and Applications](/doc/2019/05/knowledge_graph_embedding_a_su).)
Abstract: In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After a general introduction, we motivate and contrast various graph-based data models and query languages that are used for knowledge graphs. We discuss the roles of schema, identity, and context in knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We summarise methods for the creation, enrichment, quality assessment, refinement, and publication of knowledge graphs. We provide an overview of prominent open knowledge graphs and enterprise knowledge graphs, their applications, and how they use the aforementioned techniques. We conclude with high-level future research directions for knowledge graphs.

**Transformers are Graph Neural Networks | NTU Graph Deep Learning Lab** (2020-03-01)
> The key idea: Sentences are fully-connected graphs of words, and Transformers are very similar to Graph Attention Networks (GATs) which use multi-head attention to aggregate features from their neighborhood nodes (i.e., words).

[twitter](https://twitter.com/chaitjo/status/1233220586358181888)

**Chaitanya Joshi on Twitter: "Excited to share a blog post on the connection between #Transformers for NLP and #GraphNeuralNetworks"** (2020-03-01)
[about this blog post](/doc/2020/03/transformers_are_graph_neural_n)
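A small NumPy sketch of the claim made in the "Transformers are Graph Neural Networks" post above: one self-attention head is message passing on a fully connected graph of words, with the attention weights acting as soft edge weights. Shapes and weight matrices are illustrative.

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv):
    """One self-attention head over a 'sentence graph'.

    X: (n_words, d) node features (word embeddings).
    Every word attends to every other word, i.e. the graph is fully connected.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise edge scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over neighbors
    return weights @ V                               # aggregate neighbor messages

# A GAT layer does essentially the same aggregation, except the softmax runs
# only over the edges of a given graph instead of over all pairs of nodes.
rng = np.random.default_rng(0)
n_words, d = 5, 8
X = rng.normal(size=(n_words, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(attention_head(X, Wq, Wk, Wv).shape)           # (5, 8)
```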