The backpropagation algorithm (2017-08-21)
A proof of the backpropagation algorithm based on a graphical approach, in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only special network topologies, but also much easier to follow. It also shows how the algorithm can be implemented efficiently in computing systems where only local information can be transported through the network. (A minimal sketch of this local, graph-based view appears below.)

Validating RDF data with SHACL - bobdc.blog (2017-08-21)

A Survival Guide to a PhD (2017-08-27)

A Few Useful Things to Know about Machine Learning (2017-08-23)

Vector Representations of Words | TensorFlow (2017-08-28)

[1507.07998] Document Embedding with Paragraph Vectors (2017-08-20)
Andrew M. Dai, Christopher Olah, Quoc V. Le (submitted 2015-07-29)
Paragraph Vectors has recently been proposed as an unsupervised method for learning distributed representations for pieces of text. The authors showed that the method can learn an embedding of movie review texts which can be leveraged for sentiment analysis. That proof of concept, while encouraging, was rather narrow. Here we consider tasks other than sentiment analysis, provide a more thorough comparison of Paragraph Vectors to other document modelling algorithms such as Latent Dirichlet Allocation, and evaluate performance of the method as we vary the dimensionality of the learned representation. We benchmarked the models on two document similarity data sets, one from Wikipedia, one from arXiv. We observe that the Paragraph Vector method performs significantly better than other methods, and propose a simple improvement to enhance embedding quality. Somewhat surprisingly, we also show that, much like word embeddings, vector operations on Paragraph Vectors can yield useful semantic results. (See the Doc2Vec sketch below.)

How does word2vec work? Can someone walk through a specific example? - Quora (2017-08-28)

Sagascience - Jean Rouch | The ethnologist-filmmaker (2017-08-23)

What should I do to increase my skills in deep learning? - Quora (2017-08-16)

DeepL (2017-08-30)

Interview-confession of the greatest serial killer: the mosquito | Réalités Biomédicales (2017-08-24)

Learning React.js is easier than you think – EdgeCoders (2017-08-26)

tslearn: machine learning tools for the analysis of time series (2017-08-09)

[1412.1897] Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images (2017-08-24)
Anh Nguyen, Jason Yosinski, Jeff Clune (submitted 2014-12-05, revised 2015-04-02)
Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white-noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images, with evolutionary algorithms or gradient ascent, that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision. (See the gradient-ascent sketch below.)
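Regarding the backpropagation entry above: a minimal sketch, assuming nothing from the bookmarked paper beyond its framing, of backpropagation as a graph-labeling process. Each node carries a forward label (its value) and accumulates a backward label (its gradient) using only information local to its incoming edges.

```python
# Hypothetical toy illustration: reverse-mode differentiation as graph labeling.
class Node:
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value              # forward label
        self.parents = parents          # incoming edges
        self.local_grads = local_grads  # d(self)/d(parent) per edge, a local quantity
        self.grad = 0.0                 # backward label, accumulated

def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backward(node, upstream=1.0):
    # Push the upstream gradient across each edge using only the edge-local
    # derivative (the chain rule). Real implementations visit each node once,
    # in reverse topological order; this path-summing version keeps it short.
    node.grad += upstream
    for parent, local in zip(node.parents, node.local_grads):
        backward(parent, upstream * local)

x, y = Node(2.0), Node(3.0)
z = add(mul(x, y), y)        # z = x*y + y
backward(z)
print(x.grad, y.grad)        # 3.0 3.0  (dz/dx = y, dz/dy = x + 1)
```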
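For the Paragraph Vectors entry above: a minimal sketch using gensim's Doc2Vec implementation, which is an assumption (the paper does not prescribe gensim), with an illustrative toy corpus and parameters.

```python
# Toy Doc2Vec run; gensim >= 4.0 API assumed.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the movie was a delight from start to finish",
    "a tedious plot and wooden acting",
    "graph algorithms for labeling problems",
]
docs = [TaggedDocument(words=text.split(), tags=[i])
        for i, text in enumerate(corpus)]

model = Doc2Vec(docs, vector_size=50, window=3, min_count=1, epochs=100)

# Infer a vector for an unseen document and find its nearest neighbours.
vec = model.infer_vector("a wonderful and delightful movie".split())
print(model.dv.most_similar([vec], topn=2))
```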
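For the fooling-images entry above: a minimal sketch of the gradient-ascent route the abstract mentions, starting from white noise and ascending one class logit. The pretrained torchvision resnet18 stands in for the paper's ImageNet networks, and input normalization is omitted; both are assumptions made for brevity.

```python
# Toy "fooling image" by gradient ascent on the input (torchvision >= 0.13 API).
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
target = 291                                          # arbitrary ImageNet class
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # white-noise start

opt = torch.optim.SGD([img], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss = -model(img)[0, target]          # ascend the target logit
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0.0, 1.0)               # stay in a valid pixel range

with torch.no_grad():
    conf = torch.softmax(model(img), dim=1)[0, target]
print(f"confidence in class {target}: {conf:.4f}")
```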
[1703.00993] A Comparative Study of Word Embeddings for Reading Comprehension (2017-08-28)
Bhuwan Dhingra, Hanxiao Liu, Ruslan Salakhutdinov, William W. Cohen (submitted 2017-03-02)
The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices made on (1) the use of pre-trained word embeddings, and (2) the representation of out-of-vocabulary tokens at test time, can turn out to have a larger impact than architectural choices on the final performance. We systematically explore several options for these choices, and provide recommendations to researchers working in this area. (See the OOV-handling sketch below.)
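For the reading-comprehension entry above: a minimal sketch of one test-time choice of the kind the paper studies, giving each out-of-vocabulary token its own deterministic random vector rather than a single shared UNK vector. The tiny "pretrained" table and all names here are illustrative stand-ins.

```python
# Per-token OOV vectors, made deterministic via a hash-seeded RNG.
import hashlib
import numpy as np

DIM = 300
pretrained = {                      # stand-in for e.g. a GloVe lookup table
    "the": np.zeros(DIM),
    "cat": np.ones(DIM),
}

def embed(token):
    if token in pretrained:
        return pretrained[token]
    # Seed from the token so the same OOV word always gets the same vector.
    seed = int(hashlib.md5(token.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.normal(scale=0.1, size=DIM)

sentence = ["the", "cat", "grokked"]          # "grokked" is OOV
matrix = np.stack([embed(t) for t in sentence])
print(matrix.shape)                           # (3, 300)
```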
Paul Imbert: a French sailor, enslaved in Morocco, who is said to have accompanied his master to Timbuktu (16th-17th century) (2017-08-09)

Poor Law Amendment Act 1834 - Wikipedia (2017-08-10)

Le sable - Enquête sur une disparition (Sand: inquiry into a disappearance) | ARTE+7 (2017-08-01)

Calculus on Computational Graphs: Backpropagation -- colah's blog (2017-08-20)

Un dimanche à Kigali (A Sunday in Kigali): from the genocide memorial to the "hôtel des mille combines" (2017-08-06)

A People's History of the United States - Wikipedia (2017-08-30)

Heroes of Deep Learning: Andrew Ng interviews Geoffrey Hinton - YouTube (2017-08-16)
"A thought is just a big vector of neural activity" - not a symbolic expression.

Why We Terminated Daily Stormer (2017-08-17)

Rwanda: how the genocide is taught in school (2017-08-02)

The agricultural labor conundrum | Robohub (2017-08-09)

Andrej Karpathy Academic Website (2017-08-27)

The hard problem of consciousness is a distraction from the real one | Aeon Essays (2017-08-25)
In the 19th century, the German polymath Hermann von Helmholtz proposed that the brain is a prediction machine, and that what we see, hear and feel are nothing more than the brain's best guesses about the causes of its sensory inputs.

[1708.00214] Natural Language Processing with Small Feed-Forward Networks (2017-08-04)
Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov (Google; submitted 2017-08-01)
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget. (See the sketch after the entries below.)

CS231n Convolutional Neural Networks for Visual Recognition (2017-08-23)

How the backpropagation algorithm works (2017-08-21)

Hypothesis – The Internet, peer reviewed. (2017-08-23)
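For the small feed-forward networks entry above: a minimal sketch, not the paper's models, of the general idea of hashed features feeding a tiny embedding table and one hidden layer. The sizes are arbitrary and the hashed character-trigram featurization is an illustrative assumption.

```python
# A tiny bag-of-hashed-trigrams classifier (PyTorch); the whole model is a
# few hundred KB of fp32 parameters, versus tens of MB for a typical
# recurrent model.
import torch
import torch.nn as nn

BUCKETS, EMB, HIDDEN, CLASSES = 5000, 16, 32, 2

class TinyFFN(nn.Module):
    def __init__(self):
        super().__init__()
        # EmbeddingBag averages the trigram embeddings: the "small memory
        # budget" is spent almost entirely on this one table.
        self.emb = nn.EmbeddingBag(BUCKETS, EMB, mode="mean")
        self.ff = nn.Sequential(nn.Linear(EMB, HIDDEN), nn.ReLU(),
                                nn.Linear(HIDDEN, CLASSES))

    def forward(self, feature_ids):
        return self.ff(self.emb(feature_ids))

def hashed_trigrams(text):
    # Python's hash() is process-salted; fine for a demo, not for persistence.
    grams = [text[i:i + 3] for i in range(len(text) - 2)]
    return torch.tensor([[hash(g) % BUCKETS for g in grams]])

model = TinyFFN()
print(sum(p.numel() for p in model.parameters()), "parameters")
logits = model(hashed_trigrams("runs on resource-constrained phones"))
print(logits.shape)    # torch.Size([1, 2])
```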
The W3C standard constraint language for RDF: SHACL - bobdc.blog (2017-08-15)

The fire at the Abomey museum revives the debate over the conservation of Benin's treasures (2017-08-31)

Jean Rouch – Les Maîtres Fous [1955] [1/2] - YouTube (2017-08-23)
"They call on the new gods, the gods of the city, the gods of technology, the gods of power: the Haouka."

Modern Software Over-Engineering Mistakes – RDX – Medium (2017-08-18)

RDF-Ext v1 Release | bergis reptile zoo of software, hardware and ideas (2017-08-09)

On umuganda day, everyone in Rwanda works (2017-08-02)

Decapitated, this worm regrows with the head… of another species | Réalités Biomédicales (2017-08-24)

Ruslan Salakhutdinov - Carnegie Mellon School of Computer Science (2017-08-28)