How to do Unsupervised Clustering with Keras – Chengwei Zhang – Medium (2018-06-09)

Python Data Science Handbook (Jake VanderPlas) (2018-06-29)

A Tri-Partite Neural Document Language Model for Semantic Information Retrieval (ESWC conference, 2018) (2018-06-08)
From the abstract:
> Previous work in information retrieval has shown that using evidence, such as concepts and relations, from external knowledge sources could enhance retrieval performance... This paper presents a new tri-partite neural document language framework that leverages explicit knowledge to jointly constrain word, concept, and document learning representations to tackle a number of issues including polysemy and granularity mismatch.

The Natural Language Decathlon: Multitask Learning as Question Answering (2018), Salesforce Research (2018-06-21)

Aurélien Geron on Twitter (2018-06-10)
> In @TensorFlow 1.9, it is much easier to use Keras with the Data API: just pass data iterators, specify the number of steps per epoch, and you're good to go! Plus it works in both graph mode and eager mode, kudos to the TF team!… https://t.co/EH3hY50N0o
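A minimal sketch of the workflow the tweet describes. The toy data, model, and hyperparameters are invented for illustration; only the pattern of handing a tf.data pipeline plus steps_per_epoch to fit() comes from the tweet:

```python
import numpy as np
import tensorflow as tf

# Toy data, invented for illustration.
features = np.random.rand(1000, 20).astype("float32")
labels = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

# Build a tf.data pipeline; repeat() lets Keras keep drawing
# batches for as many epochs as it needs.
dataset = (tf.data.Dataset
           .from_tensor_slices((features, labels))
           .shuffle(1000)
           .batch(32)
           .repeat())

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Pass the dataset straight to fit(); since it repeats forever,
# steps_per_epoch tells Keras where each epoch ends.
model.fit(dataset, steps_per_epoch=1000 // 32, epochs=5)
```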
Extending NLP - Stardog (2018-06-14)

Improving Language Understanding with Unsupervised Learning (2018-06-12)
> can we develop one model, train it in an unsupervised way on a large amount of data, and then fine-tune the model to achieve good performance on many different tasks? Our results indicate that this approach works surprisingly well; the same core model can be fine-tuned for very different tasks with minimal adaptation.

A scalable, task-agnostic system based on a combination of two existing ideas: transformers and unsupervised pre-training. Unsupervised generative pre-training of a language model is followed by discriminative fine-tuning.

Un ticket pour le Soleil [A ticket to the Sun] | CNRS Le journal (2018-06-28)
The Parker Solar Probe will leave Florida this summer to venture closer to the star than any mission before it.

Training Classifiers with Natural Language Explanations (2018-06-23)

[1806.05662] GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations (2018-06-23)
> Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks.

Niger Islamic State hostage: 'They want to kill foreign soldiers' | The Guardian (2018-06-08)

Taskonomy | Stanford (2018-06-21)

2017 Deloitte State of Cognitive Survey (2018-06-07)

sebastianruder/NLP-progress: repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks (2018-06-23)

Télérama on Twitter: "Banksy Paris Invasion!" (2018-06-24)
> Banksy Paris Invasion! Arriving incognito as always, the famous street artist has already left two works marking his passage through the 18th and 19th arrondissements. They deliver a strong message to the French government. #Banksy #Paris #streetart https://t.co/AbiT6RfsEw… https://t.co/t302gOpZri

Why You Don’t Need Data Scientists – Kurt Cagle – Medium (2018-06-17)

Markup for Autos - schema.org (2018-06-19)

Sanjeev Arora on "A theoretical approach to semantic representations" - YouTube (2016) (2018-06-10)
Why do low-dimensional word vectors exist?
> a text corpus is imagined as being generated by a random walk in a latent variable space, and the word production is via a loglinear distribution. This model is shown to imply several empirically discovered past methods for word embedding like word2vec, GloVe, PMI etc.

[Related paper](/doc/?uri=http%3A%2F%2Fwww.aclweb.org%2Fanthology%2FQ16-1028)
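If I read the quote right, the generative model is: a discourse vector $c_t$ takes a slow random walk in a low-dimensional latent space, and at each step a word $w$ is emitted with loglinear probability (notation mine, not from the talk):

```latex
\Pr[w \mid c_t] \propto \exp\left(\langle c_t, v_w \rangle\right)
```

where $v_w$ is the low-dimensional vector of word $w$. The related TACL paper linked above shows how word2vec, GloVe, and PMI-style objectives fall out of this model.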
A Word Embedding Approach to Predicting the Compositionality of Multiword Expressions (2015) (2018-06-08)

Deep-learning-free Text and Sentence Embedding, Part 1 – Off the convex path (2018-06-25)
> introduction to extremely simple ways of computing sentence embeddings, which on many standard tasks beat many state-of-the-art deep learning methods

Related to [this paper](/doc/?uri=https%3A%2F%2Fopenreview.net%2Fforum%3Fid%3DSyK00v5xx) (BTW, it contains a good intro to text embeddings). A minimal sketch of the SIF method appears at the end of this list.

Understanding the Working of Universal Language Model Fine Tuning (ULMFiT) – Let the Machines Learn (2018-06-19)

Pl@ntNet: identify a plant from a photo (2018-06-06)

Deep-learning-free Text and Sentence Embedding, Part 2 – Off the convex path (2018-06-25)
> Can we design a text embedding with the simplicity and transparency of SIF while also incorporating word order information? yes we can.

[1806.01261] Relational inductive biases, deep learning, and graph networks (2018) (2018-06-13)
> generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI

[1806.04470] Design Challenges and Misconceptions in Neural Sequence Labeling (2018-06-28)
On the design challenges of constructing effective and efficient neural sequence labeling systems.

[1806.06259] Evaluation of sentence embeddings in downstream and linguistic probing tasks (2018-06-19)
A simple bag-of-words approach combined with a recently introduced language model for deep context-dependent word embeddings proved to yield better results on many tasks than sentence encoders trained on entailment datasets.
> We also show, however, that we are still far away from a universal encoder that can perform consistently across several downstream tasks.

Chatbots were the next big thing: what happened? – The Startup – Medium (2018-06-08)

Au Sahara, voyager devient un crime [In the Sahara, travelling becomes a crime] (2018-06-03)

Banksy peint les murs de Paris pour illustrer la crise des migrants [Banksy paints the walls of Paris to illustrate the migrant crisis] (2018-06-24)

Un Univers sans matière noire ? [A Universe without dark matter?] | CNRS Le journal (2018-06-08)

D2KLab/entity2rec: entity2rec generates item recommendations from knowledge graphs (2018-06-04)

Reinforcement Learning from scratch – Insight Data (2018-06-09)
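Returning to the two "Deep-learning-free Text and Sentence Embedding" posts above, here is a minimal sketch of the SIF embedding they discuss. The a/(a + p(w)) weighting and the principal-component removal follow Arora, Liang and Ma's ICLR 2017 paper; the function name and input format are my own choices.

```python
import numpy as np

def sif_embeddings(sentences, word_vecs, word_probs, a=1e-3):
    """Smooth Inverse Frequency (SIF) sentence embeddings.

    sentences  : list of token lists
    word_vecs  : dict token -> d-dimensional np.ndarray
    word_probs : dict token -> unigram probability p(w)
    a          : smoothing parameter (values around 1e-4..1e-3 are typical)
    """
    d = len(next(iter(word_vecs.values())))
    emb = np.zeros((len(sentences), d))
    for i, sent in enumerate(sentences):
        tokens = [t for t in sent if t in word_vecs]
        if not tokens:
            continue  # leave a zero vector for empty/out-of-vocabulary sentences
        # Weighted average: rare words keep weight close to 1,
        # very frequent words are strongly down-weighted.
        weights = [a / (a + word_probs.get(t, 0.0)) for t in tokens]
        emb[i] = np.average([word_vecs[t] for t in tokens],
                            axis=0, weights=weights)
    # Remove the projection onto the first singular vector, a common
    # component shared by most sentences.
    u = np.linalg.svd(emb, full_matrices=False)[2][0]
    return emb - np.outer(emb @ u, u)
```

Despite its simplicity (a weighted average plus one SVD), this is the method the posts report as beating many state-of-the-art deep learning encoders on standard tasks; Part 2 asks how to add word-order information while keeping that transparency.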