[Le Zarmatarey : contribution à l'histoire des populations d'entre Niger et Dallol Mawri / par Boubé Gado | Gallica](https://storage.googleapis.com/cantookhub-media-eden/45/6ae1e47bbb8a3f93751e43e51f4e8a54f892fd.pdf) (pdf, 2018-08-02)

Solid: Empowering people through choice (2018-08-03)

How a Cashless Society Could Embolden Big Brother - The Atlantic (2018-08-02)

Bioengineers Are Closer Than Ever To Lab-Grown Lungs | WIRED (2018-08-04)

Inversion of Control vs Dependency Injection - Stack Overflow (2018-08-04)

Learning Meaning in Natural Language Processing - The Semantics Mega-Thread (2018-08-14)

Simple guide to Neural Arithmetic Logic Units (NALU): Explanation, Intuition and Code (2018-08-21)
A neural network model that can learn simple to complex numerical functions with great extrapolation (generalisation) ability. (A minimal sketch of the cell follows this group of entries.)

Mathematics of Machine Learning and Deep Learning - Plenary talk at the International Congress of Mathematicians 2018 (2018-08-08)

[Mathematics of Machine Learning: An introduction](/doc/?uri=https%3A%2F%2Fwww.dropbox.com%2Fs%2Fy59petiffzq63gt%2Fmain.pdf%3Fdl%3D0) (article, 2018-08-08)

Serverless for data scientists (2018-08-29)

Using machine learning for concept extraction on clinical documents from multiple data sources (2011) (2018-08-13)

Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives (2018) (2018-08-12)
> A CNN for NLP learns which combinations of adjacent words are associated with a given concept.

(A toy text-CNN illustrating the quote follows this group of entries.)

How can I use machine learning to propose tags for content? - Quora (2018-08-07)

Jersey vs. RESTEasy: A JAX-RS Implementation Comparison - Genuitec (2018-08-05)

Automatic Keyphrase Extraction: A Survey of the State of the Art (2014) (2018-08-10)
[Same author](/doc/?uri=http%3A%2F%2Fwww.hlt.utdallas.edu%2F%7Evince%2Fpapers%2Fcoling10-keyphrase.pdf)

For what tasks is Pytorch preferable to Tensorflow? - Quora (2018-08-28)

La revanche des bactériophages sur CRISPR-Cas9 - CNRS (2018-08-02)

OpenAI’s Dota 2 defeat is still a win for artificial intelligence - The Verge (2018-08-28)
> a learning experience — for us and the machines

The Bullshit Web — Pixel Envy (2018-08-05)
> the essence of bullshit is an indifference to the way things really are

[1601.03764] Linear Algebraic Structure of Word Senses, with Applications to Polysemy (Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski; arXiv 2016-01-14; bookmarked 2018-08-28)
Abstract: Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 "discourse atoms" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.
> Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses.
> Each extracted word sense is accompanied by one of about 2000 "discourse atoms" that gives a succinct description of which other words co-occur with that word sense.

The success of the approach is mathematically explained using a variant of the random walk on discourses model ("random walk": a generative model for language). Under the assumptions of this model, there is a linear relationship between the vector of a word w and the vectors of the words in its contexts. It is not the average of the words in w's contexts, but for a given corpus the matrix of this linear relationship does not depend on w; it can be estimated, and so the embedding of a word can be computed from the contexts it appears in. [Related blog post](/doc/?uri=https%3A%2F%2Fwww.offconvex.org%2F2016%2F07%2F10%2Fembeddingspolysemy%2F). (A rough sparse-coding sketch follows this group of entries.)
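For the NALU guide above, a minimal PyTorch sketch of the cell it explains (from Trask et al., "Neural Arithmetic Logic Units", arXiv 1808.00508); initialization and shapes are simplified, so read it as an illustration rather than a reference implementation:

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Neural Arithmetic Logic Unit: a gated mix of an additive path and
    a multiplicative (exp-log) path, sharing one constrained weight matrix."""
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.eps = eps
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)

    def forward(self, x):
        # NAC weights are pushed towards {-1, 0, 1}: suited to add/subtract
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        add = x @ W.t()
        # log-space trick: sums of logs become products, enabling mul/div
        mul = torch.exp(torch.log(torch.abs(x) + self.eps) @ W.t())
        g = torch.sigmoid(x @ self.G.t())  # learned gate between the paths
        return g * add + (1 - g) * mul
```

The constrained weights and the log-space path are what give the cell its extrapolation ability: it learns actual arithmetic rather than memorizing the training range.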
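The quoted one-liner about CNNs for NLP is easy to see in code: in a toy 1D-convolution classifier, each filter fires on n-grams whose word combinations signal a concept, and max-over-time pooling asks "did this combination occur anywhere in the text?". All sizes below are arbitrary placeholders:

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, n_filters=64, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # kernel_size=3: each filter looks at trigrams of adjacent words
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))               # filter activations per position
        x = x.max(dim=2).values                    # max-over-time pooling
        return self.fc(x)
```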
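The recovery step in the word-senses paper is, at its core, sparse coding: approximate every word vector as a sparse combination of about 2000 learned atoms. The authors use a k-SVD-style solver; as a rough stand-in, scikit-learn's generic dictionary learning shows the shape of the computation (`word_vectors.npy` is a hypothetical vocab × dim matrix):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

embeddings = np.load("word_vectors.npy")   # placeholder: (vocab_size, dim)

# Learn ~2000 "discourse atoms"; each embedding is then approximated
# as a sparse linear combination of a few of them.
dl = MiniBatchDictionaryLearning(n_components=2000, alpha=1.0)
codes = dl.fit_transform(embeddings)       # sparse coefficients per word
atoms = dl.components_                     # (2000, dim) discourse atoms

# The atoms with the largest coefficients for a polysemous word
# approximate its distinct senses; each atom's nearby words describe
# the discourse that sense occurs in.
word_idx = 0                               # hypothetical word index
top_atoms = np.argsort(-np.abs(codes[word_idx]))[:5]
```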
Apache Shiro | Simple. Java. Security. (2018-08-05)

Commerce de peaux d’âne en Afrique, un conte moderne à la chinoise (2018-08-07)

The Best Textbooks on Every Subject (2018-08-19)
> Make progress by accumulation, not random walks.
> What if we could compile a list of the best textbooks on every subject? That would be extremely useful.

A Latent Variable Model Approach to PMI-based Word Embeddings (2016) (2018-08-28)
[Related YouTube video](/doc/?uri=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKR46z_V0BVw)
Based on a generative model (a random walk over words driven by a latent discourse vector), the paper gives a rigorous justification for models such as word2vec and GloVe, including the hyperparameter choices for the latter, and a mathematical explanation of why these word embeddings allow analogies to be solved using linear algebra. (A quick empirical check follows this group of entries.)

Un rapport pointe les failles des études internationales (et libérales) sur l’Afrique (2018-08-06)

Daily Python Tip on Twitter: "Wanna know which line of your function is eating all the time? Measure it with #lprun: …" (2018-08-16)
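A consequence the PMI paper makes rigorous is that analogies reduce to linear algebra on the embeddings. A quick empirical check with gensim (the pretrained model name is an assumption; any word2vec/GloVe vectors work):

```python
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # assumed gensim-data model name

# "king" - "man" + "woman" should land near "queen"
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```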
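The #lprun tip refers to the line_profiler package; minimal usage from IPython/Jupyter looks like this:

```python
# pip install line_profiler, then inside IPython/Jupyter:
%load_ext line_profiler

def slow(n):
    total = 0
    for i in range(n):
        total += i ** 2        # suspect line
    return total

# per-line timings for slow() during this one call
%lprun -f slow slow(100_000)
```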
Jim Ratcliffe, le brexiteur milliardaire qui part se réfugier… à Monaco (2018-08-14)
Jim Ratcliffe, the richest man in the United Kingdom and a Brexit supporter, is a fervent advocate of deregulation and of the lowest possible taxes.

Quantum Computing - IBM Q (2018-08-21)
IBM Q is an industry-first initiative to build commercially available universal quantum computers for business and science.

In Bed with Madonna (2018-08-04)
Documentary film chronicling Madonna's 1990 World Tour.

A Framework for Semi-supervised Concept Extraction from MOOC content (2017) (2018-08-12)

Using Machine Learning to Support Continuous Ontology Development (2010) (2018-08-07)

The problem with programming and how to fix it – Alarming Development (2018-08-05)
> Programming today is exactly what you’d expect to get by paying an isolated subculture of nerdy young men to entertain themselves for fifty years

[1802.04865] Learning Confidence for Out-of-Distribution Detection in Neural Networks (Terrance DeVries, Graham W. Taylor; arXiv 2018-02-13; bookmarked 2018-08-27)
Abstract: Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related to this is the task of out-of-distribution detection, where a network must determine whether or not an input is outside of the set on which it is expected to safely perform. To jointly address these issues, we propose a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed techniques which construct confidence based on the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, where we demonstrate that misclassified in-distribution examples can be used as a proxy for out-of-distribution examples.
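A rough sketch of the DeVries & Taylor idea: add a confidence head c in (0, 1), let the network "buy hints" by blending its prediction with the ground truth in proportion to (1 - c), and charge -log c for doing so. Details from the paper (budget scheduling, test-time input perturbation) are omitted, and the layer sizes are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)
        self.confidence = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.classifier(h), torch.sigmoid(self.confidence(h))

def loss_fn(logits, conf, target, lam=0.1):
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.size(1)).float()
    # low confidence pulls the prediction towards the true label ("hint")...
    blended = conf * probs + (1 - conf) * onehot
    nll = F.nll_loss(torch.log(blended + 1e-12), target)
    # ...but asking for hints is penalized, so c stays honest
    return nll + lam * -torch.log(conf + 1e-12).mean()
```

At test time, c itself is the out-of-distribution score: inputs unlike the training set tend to get low confidence.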
Text feature extraction based on deep learning: a review (2017) (2018-08-13)
First outlines the common methods used in text feature extraction, then surveys the deep learning methods frequently used for text feature extraction and their applications, and forecasts the application of deep learning to feature extraction.

Cloud Datalab – an interactive data-analysis tool | Google Cloud (2018-08-22)

Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification | Scientific Reports (2018-08-28)
> a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time

How to Prepare for a Machine Learning Interview - Semantic Bits (2018-08-07)
The things you have to master to become a machine learning expert.

L'écrivain V. S. Naipaul est mort, et c'était un prix Nobel de littérature tout sauf consensuel - Livres - Télérama.fr (2018-08-12)

Learning to Understand Phrases by Embedding the Dictionary (2016) (2018-08-23)
> The composed meaning of the words in a dictionary definition (a tall, long-necked, spotted ruminant of Africa) should correspond to the meaning of the word they define (giraffe)

(A toy definition encoder follows this group of entries.)

A Road to Common Lisp / Steve Losh (2018-08-28)

Tax haven link to rainforest destruction and illegal fishing - BBC News (2018-08-13)

[1601.00670] Variational Inference: A Review for Statisticians (David M. Blei, Alp Kucukelbir, Jon D. McAuliffe; arXiv 2016-01-04; bookmarked 2018-08-07)
Abstract: One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
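For "Learning to Understand Phrases by Embedding the Dictionary" above: the quoted idea can be trained directly. A toy encoder maps a definition to a vector and is regressed onto the pretrained embedding of the defined word (sizes are placeholders; the paper also evaluates bag-of-words encoders):

```python
import torch
import torch.nn as nn

class DefinitionEncoder(nn.Module):
    """Encode a dictionary definition; train so the output lands close
    (cosine or MSE loss) to the pretrained vector of the defined word."""
    def __init__(self, vocab_size=30_000, emb_dim=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)

    def forward(self, definition_ids):         # (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(definition_ids))
        return h[-1]                           # (batch, emb_dim)

# e.g. target: vector("giraffe"); input: the token ids of
# "a tall, long-necked, spotted ruminant of Africa"
```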
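The "closeness measured by Kullback-Leibler divergence" in the VI review above is made tractable through a standard identity: for any candidate density $q$,

```latex
\log p(x) = \underbrace{\mathbb{E}_{q(z)}[\log p(x,z)] - \mathbb{E}_{q(z)}[\log q(z)]}_{\mathrm{ELBO}(q)}
          + \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big)
```

Since $\log p(x)$ does not depend on $q$ and the KL term is nonnegative, maximizing the ELBO over the chosen family of densities is equivalent to minimizing the KL divergence to the (intractable) posterior.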
Supplementary: Extreme Multi-label Learning with Label Features for Warm-start Tagging, Ranking & Recommendation (2018-08-07)
[Supplement to this](/doc/?uri=https%3A%2F%2Fdl.acm.org%2Fcitation.cfm%3Fid%3D3159660)

Petrichor: why does rain smell so good? - BBC News (2018-08-03)

Representations for Language: From Word Embeddings to Sentence Meanings (2017) - Slides (2018-08-28)
[YouTube](/doc/?uri=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DnFCxTtBqF5U)

Obituary: VS Naipaul - BBC News (2018-08-13)

Automatic Tag Recommendation Algorithms for Social Recommender Systems - Microsoft Research (2009) (2018-08-07)

Saumon grillé au beurre rouge façon Joël Robuchon : la recette de Nicolas Chatenier (2018-08-06)

Small height evolved twice on 'Hobbit' island of Flores - BBC News (2018-08-03)

Google AI Blog: Transformer: A Novel Neural Network Architecture for Language Understanding (2018-08-17)

What are the pros and cons of the various unsupervised word and sentence/document embedding models? - Quora (2018-08-19)

[1803.01271] An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling (Shaojie Bai, J. Zico Kolter, Vladlen Koltun; arXiv 2018-03-04; bookmarked 2018-08-05)
Abstract: For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN .
> We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks

(A minimal causal-convolution sketch appears at the end of this list.)

Application Security With Apache Shiro (2018-08-05)

Tradeoff batch size vs. number of iterations to train a neural network - Cross Validated (2018-08-06)

El secreto de sus ojos (2018-08-22)

Énergie : les promesses de l'hydrogène | CNRS Le journal (2018-08-09)

L’huile de palme menace aussi les primates d’Afrique (2018-08-20)

Contextual String Embeddings for Sequence Labeling (2018) (2018-08-24)
> we propose to leverage the internal states of a trained character language model to produce a novel type of word embedding which we refer to as contextual string embeddings. Our proposed embeddings have the distinct properties that they (a) are trained without any explicit notion of words and thus fundamentally model words as sequences of characters, and (b) are contextualized by their surrounding text, meaning that the same word will have different embeddings depending on its contextual use.
[Github](https://github.com/zalandoresearch/flair)

zalandoresearch/flair: A very simple framework for state-of-the-art NLP (2018-08-24)
> A very simple framework for state-of-the-art NLP. Developed by Zalando Research.
Paper: ["Contextual String Embeddings for Sequence Labeling (2018)"](/doc/?uri=http%3A%2F%2Faclweb.org%2Fanthology%2FC18-1139)
(A usage example appears at the end of this list.)

2018 Conference on Empirical Methods in Natural Language Processing - EMNLP 2018 (2018-08-23)
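For the sequence-modeling paper above: the convolutional architecture it evaluates (a TCN) is built from dilated causal convolutions. A minimal sketch of that building block (the paper's residual connections, weight normalization and dropout are omitted; github.com/locuslab/TCN has the real thing):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Dilated convolution that only looks at the past (left-padding only)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                     # (batch, channels, time)
        x = F.pad(x, (self.pad, 0))           # pad on the left: no peeking ahead
        return self.conv(x)

# dilations 1, 2, 4, 8 grow the receptive field exponentially, which is
# where the "longer effective memory" mentioned in the abstract comes from
net = nn.Sequential(*[CausalConv1d(64, dilation=2 ** i) for i in range(4)])
```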
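For the flair entries above, the project README's example (as of 2018; the API may have moved since) shows how little code a pretrained tagger built on contextual string embeddings needs:

```python
# pip install flair
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")   # pretrained named-entity tagger
sentence = Sentence("George Washington went to Washington .")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```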