- 2016-01-19 · **Le vivant a sa matière noire | CNRS Le journal** ("Living matter has its dark matter")
- 2016-01-03 · **Facebook is selling old wine (Internet.org) in a new bottle (Free Basics), users be aware - Times of India**
- 2016-01-19 · **Introduction to Semi-Supervised Learning with Ladder Networks | Rinu Boney**
- 2016-01-09 · **How friendly is your AI? It depends on the rewards | Robohub**
- 2016-01-12 · **Software links « Deep Learning**
- 2016-01-14 · **Derivation: Error Backpropagation & Gradient Descent for Neural Networks | The Clever Machine**
- 2016-01-27 · **Banksy's new artwork criticises use of teargas in Calais refugee camp | Art and design | The Guardian**
- 2016-01-04 · **Tasting the Light: Device Lets the Blind "See" with Their Tongues - Scientific American**
- 2016-01-31 · **Why 'The System' Is Rigged And The U.S. Electorate Is Angry - Forbes**
  In an effort to offset declining performance and profits due to increased competition, these companies embraced the notion that the very purpose of a corporation is to maximize shareholder value as reflected in the current stock price.
  [See also](http://www.forbes.com/sites/stevedenning/2014/06/17/why-the-worlds-dumbest-idea-is-finally-dying)
- 2016-01-12 · **[1511.08154] Notes on Cardinal's Matrices** (Jeffrey C. Lagarias, David Montague; arXiv, submitted 2015-11-25)
  > These notes are motivated by the work of Jean-Paul Cardinal on symmetric matrices related to the Mertens function. He showed that certain norm bounds on his matrices implied the Riemann hypothesis. Using a different matrix norm we show an equivalence of the Riemann hypothesis to suitable norm bounds on his matrices in the new norm. Then we specify a deformed version of his Mertens function matrices that unconditionally satisfies a norm bound that is of the same strength as his Riemann hypothesis bound.
- 2016-01-25 · **Re: sparql performance parameters and limitations**
- 2016-01-07 · **TensorFlow is Terrific – A Sober Take on Deep Learning Acceleration**
- 2016-01-04 · **What does it mean to not use hypermedia? - Google Groups**
- 2016-01-11 · **Cross-validation: evaluating estimator performance — scikit-learn documentation**
- 2016-01-03 · **Attention and Memory in Deep Learning and NLP – WildML**
  Cf. visual attention. In standard [#seq2seq](/tag/sequence_to_sequence_learning) NMT, the decoder is supposed to generate a translation based solely on the last hidden state of the encoder, which must therefore capture everything about the source sentence (it must be a sentence embedding). Not good. Hence the attention mechanism:
  > we allow the decoder to "attend" to different parts of the source sentence at each step of the output generation. Importantly, we let the model learn what to attend to based on the input sentence and what it has produced so far

  > each decoder output word now depends on a weighted combination of all the input states, not just the last state

  It is possible to interpret what the model is doing by looking at the attention weight matrix. Cost: we need to calculate an attention value for each combination of input and output word (so "attention" is a bit of a misnomer: we look at everything in detail before deciding what to focus on).
  > the attention mechanism is simply giving the network access to its internal memory, which is the hidden state of the encoder

  > Unlike typical memory, the memory access mechanism here is soft, which means that the network retrieves a weighted combination of all memory locations, not a value from a single discrete location
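To make the "weighted combination of all the input states" concrete, here is a minimal numpy sketch of one soft-attention step, assuming simple dot-product scoring (the post discusses learned scoring functions; all names and shapes here are illustrative, not WildML's code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    """One soft-attention step: score every encoder state against the
    current decoder state, then retrieve their weighted combination."""
    scores = encoder_states @ decoder_state   # (T,): one score per input position
    weights = softmax(scores)                 # (T,): sums to 1; one row of the attention matrix
    context = weights @ encoder_states        # (d,): soft read over all "memory locations"
    return context, weights

# toy example: 5 source positions, hidden size 4
rng = np.random.default_rng(0)
context, weights = attend(rng.normal(size=(5, 4)), rng.normal(size=4))
```

The quadratic cost mentioned above is visible here: producing a T_out-word output means calling `attend` once per output word, i.e. computing T_out × T_in scores.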
- 2016-01-07 · **Comment Jacques Jaujard a sauvé le Louvre** ("How Jacques Jaujard saved the Louvre")
  To lead is to foresee; Jaujard had foreseen.
- 2016-01-11 · **Support Vector Machines — scikit-learn documentation**
- 2016-01-03 · **Deeplearning4j - Open-source, distributed deep learning for the JVM**
- 2016-01-18 · **fozziethebeat/S-Space - Java - GitHub**
  A collection of algorithms for building semantic spaces. Semantic space algorithms capture the statistical regularities of words in a text corpus and map each word to a high-dimensional vector that represents its semantics.
- 2016-01-09 · **[1601.01272] Recurrent Memory Networks for Language Modeling** (Ke Tran, Arianna Bisazza, Christof Monz; arXiv, submitted 2016-01-06, revised 2016-04-22)
  > Recurrent Neural Networks (RNN) have obtained excellent results in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge.
  >
  > In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data.
  >
  > We demonstrate the power of RMN on language modeling and sentence completion tasks.
  >
  > On language modeling, RMN outperforms Long Short-Term Memory (LSTM) networks on three large German, Italian, and English datasets. Additionally, we perform an in-depth analysis of various linguistic dimensions that RMN captures. On the Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.
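The memory access in the RMN abstract sounds like the same soft attention as in the WildML note above, applied to the embeddings of the most recent n words. A rough numpy sketch of that access pattern, under that assumption (the paper's exact memory block, gating, and shapes are not reproduced here):

```python
import numpy as np

def memory_block(h_t, M):
    """Soft read over M, the embeddings of the n most recent words:
    returns a weighted combination of memory slots, not one discrete slot."""
    scores = M @ h_t                      # (n,): one score per memory slot
    w = np.exp(scores - scores.max())
    w /= w.sum()                          # attention weights over the window
    return w @ M                          # (d,): retrieved memory vector

# toy usage inside one language-model step (illustrative composition only)
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 8))               # last n=4 word embeddings, d=8
h_t = rng.normal(size=8)                  # current RNN hidden state
h_next = np.tanh(h_t + memory_block(h_t, M))
```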
- 2016-01-12 · **Sample pipeline for text feature extraction and evaluation — scikit-learn documentation**
- 2016-01-31 · **Bitcoin's Blockchain Can Revolutionize Supply Chain Transparency - Spend Matters**
- 2016-01-13 · **[1301.3781] Efficient Estimation of Word Representations in Vector Space** (Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean; arXiv, submitted 2013-01-16, revised 2013-09-07)
  > We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
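A minimal usage sketch for the skip-gram model from the Mikolov et al. paper above, assuming the gensim library (gensim 4.x API; the corpus and hyperparameters are toy placeholders, and the paper's own tool is the word2vec C implementation):

```python
from gensim.models import Word2Vec

# toy corpus: the paper trains on ~1.6 billion words, not four sentences
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

# sg=1 selects skip-gram; vector_size and window are illustrative
model = Word2Vec(sentences, vector_size=50, window=2, sg=1, min_count=1)

# the analogy test from the paper, e.g.
# vector("king") - vector("man") + vector("woman") ≈ vector("queen")
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))
```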
- 2016-01-13 · **DSpace@MIT: Object detectors emerge in Deep Scene CNNs**
  Object detectors emerge from training CNNs to perform scene classification... "With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects."
- 2016-01-15 · **Mini AI app using TensorFlow and Shiny – Opiate for the masses**
- 2016-01-03 · **Neural backpropagation - Wikipedia, the free encyclopedia**
- 2016-01-18 · **Introduction to Machine learning - Google Slides**
- 2016-01-31 · **A tutorial on hidden Markov models in speech recognition applications**
- 2016-01-18 · **CS231n Convolutional Neural Networks for Visual Recognition**
- 2016-01-12 · **Game-playing software holds lessons for neuroscience : Nature News & Comment**
- 2016-01-04 · **Why is the Web Loosely Coupled? A Multi-Faceted Metric for Service Design**
  This paper presents a systematic study of the degree of coupling found in service-oriented systems.
- 2016-01-25 · **Dropping OPTIONAL blocks from SPARQL CONSTRUCT queries - bobdc.blog**
- 2016-01-04 · **Do We Have Free Will? The Brain-Computer Duel – Neuroscience News**
- 2016-01-08 · **Hashing Language | Some Ben?**
- 2016-01-14 · **A better markdown cheatsheet**
- 2016-01-27 · **Première défaite d'un professionnel du go contre une intelligence artificielle** ("First defeat of a Go professional by an artificial intelligence")
- 2016-01-15 · **openlink/rdf-editor: The OpenLink RDF Editor - GitHub**
  The OpenLink RDF Editor enables editing of RDF documents (in TURTLE notation) stored in a variety of HTTP-accessible locations. Actual document access requires that the target document be served from a system that supports at least one of the following open standards: Linked Data Platform (LDP), WebDAV, SPARQL 1.1 Update, or the SPARQL Graph Protocol.
- 2016-01-21 · **An overview of gradient descent optimization algorithms** (a toy sketch of two of the update rules appears at the end of this page)
- 2016-01-31 · **Cheap cab ride? You must have missed Uber's true cost | Evgeny Morozov | Opinion | The Guardian**
  Uber has so much money that, in at least some North American locations, it has been offering rides at rates so low that they didn't even cover the combined cost of fuel and vehicle depreciation. The reason Uber has so much cash is that, well, governments no longer do.
- 2016-01-13 · **The Unreasonable Reputation of Neural Networks | [ thinking machines ]**
- 2016-01-27 · **El Salvador Asks People Not to Have Children for Two Years Due to Zika Virus | TIME**
- 2016-01-16 · **The resolution of the Bitcoin experiment — Medium** (Mike Hearn)
- 2016-01-09 · **Colorizing Black and White Photos with deep learning**
- 2016-01-03 · **The CRISPR Revolution**
- 2016-01-12 · **10 Free Deep Learning Tools - Butler Analytics**
- 2016-01-05 · **The Website Obesity Crisis**
- 2016-01-23 · **Le danger Github – Le blog de Carl Chenet** ("The GitHub danger")
- 2016-01-16 · **The Silk Road's Dark-Web Dream Is Dead | WIRED**
- 2016-01-05 · **Probability Cheatsheet**
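As flagged next to the gradient descent overview above, a toy comparison of two of the update rules it surveys, vanilla gradient descent and momentum, on an ill-conditioned quadratic (the learning rate and momentum values are illustrative, not recommendations from the post):

```python
import numpy as np

A = np.diag([1.0, 10.0])                  # ill-conditioned quadratic f(x) = x.T @ A @ x / 2
grad = lambda x: A @ x
lr, mu = 0.02, 0.9

x_sgd = np.array([1.0, 1.0])              # vanilla gradient descent iterate
x_mom = np.array([1.0, 1.0])              # momentum iterate
v = np.zeros(2)                           # accumulated velocity

for _ in range(100):
    x_sgd = x_sgd - lr * grad(x_sgd)      # x <- x - lr * grad(x)
    v = mu * v - lr * grad(x_mom)         # v <- mu*v - lr*grad(x): velocity dampens oscillation
    x_mom = x_mom + v

print(np.linalg.norm(x_sgd), np.linalg.norm(x_mom))   # momentum ends closer to the optimum here
```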