- Url encode decode online tool - InfoHeap (2017-09-07)
- [1709.02840] A Brief Introduction to Machine Learning for Engineers — Osvaldo Simeone (2017-09-08). This monograph aims at providing an introduction to key concepts, algorithms, and theoretical results in machine learning. The treatment concentrates on probabilistic models for supervised and unsupervised learning problems. It introduces fundamental concepts and algorithms by building on first principles, while also exposing the reader to more advanced topics with extensive pointers to the literature, within a unified notation and mathematical framework. The material is organized according to clearly defined categories, such as discriminative and generative models, frequentist and Bayesian approaches, exact and approximate inference, as well as directed and undirected models. This monograph is meant as an entry point for researchers with a background in probability and linear algebra.
- Translating Embeddings (TransE) – Pierre-Yves Vandenbussche (2017-09-09). TransE, a method for the prediction of missing relationships in knowledge graphs ([paper](http://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-rela)).
- Paper Moon (2017-09-25). Road movie set during the Great Depression; a con man (Ryan O'Neal) selling bibles, and a little girl.
- Word2Vec Resources · Chris McCormick (2017-09-12)
- Neural networks and deep learning (2017-09-12). Free online book.
- Vectorland: Brief Notes from Using Text Embeddings for Search (2017-09-18)
  > the elegance is in the learning model, but the magic is in the structure of the information we model
  > The source-target training pairs dictate **what notion of "relatedness"** will be modeled in the embedding space
  > is Eminem more similar to Rihanna or rap?
- ConceptNet (2017-09-18). An open, multilingual knowledge graph.

Models that use pretrained word vectors must learn how to use them.
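The TransE entry above rests on one idea: a relation is modeled as a translation in embedding space, so a triple (h, r, t) is plausible when h + r ≈ t. A minimal numpy sketch of the scoring function and the margin-based ranking loss; the toy entities, relation, and dimensions are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimension (illustrative)

# Hypothetical toy knowledge graph vocabulary.
entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    """TransE plausibility: distance between h + r and t (lower is better)."""
    return float(np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]]))

def margin_loss(pos, neg, margin=1.0):
    """Ranking loss: true triples should score at least `margin` below corrupted ones."""
    return max(0.0, margin + score(*pos) - score(*neg))

# Training minimizes this over true triples and their corrupted (wrong-tail) versions.
loss = margin_loss(("Paris", "capital_of", "France"),
                   ("Paris", "capital_of", "Germany"))
```

Link prediction then amounts to ranking candidate tails t by score(h, r, t) and proposing the closest ones as the missing relationships.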
- Learned in translation: contextualized word vectors (Salesforce Research) (2017-09-18). Our work picks up where word vectors left off, looking to improve over randomly initialized methods for contextualizing word vectors by training on an intermediate task → we teach a neural network how to understand words in context by first teaching it how to translate English to German.
- Facial Recognition: Cracking the Brain’s Code | CNRS News (2017-09-07)
- Re: Google Structured Data Testing Tool fails on valid JSON-LD, from Dan Brickley on 2016-04-07 (public-schemaorg@w3.org, April 2016) (2017-09-26)
  > The Google Structured Data Testing Tool in its current incarnation is largely oriented towards vocabulary that Google understands, even though it is perfectly harmless to include other information in your structured data
- category:structured-data jsonld (in Webmaster Central Help Forum) - Google Product Forums (2017-09-26)
- [1709.08568] The Consciousness Prior — Yoshua Bengio (2017-09-25). A new prior is proposed for learning representations of high-level concepts of the kind we manipulate with language. This prior can be combined with other priors to help disentangle abstract factors from each other. It is inspired by cognitive neuroscience theories of consciousness, seen as a bottleneck through which just a few elements, after having been selected by attention from a broader pool, are broadcast and condition further processing, both in perception and decision-making. The set of recently selected elements one becomes aware of forms a low-dimensional conscious state, combining the few concepts constituting a conscious thought, i.e., what one is immediately conscious of at a particular moment. We claim that this architectural and information-processing constraint corresponds to assumptions about the joint distribution between high-level concepts. To the extent that these assumptions are generally true (and the form of natural language seems consistent with them), they can form a useful prior for representation learning. A low-dimensional thought or conscious state is analogous to a sentence: it involves only a few variables and yet can make a statement with very high probability of being true. This is consistent with a joint distribution (over high-level concepts) that has the form of a sparse factor graph, i.e., where the dependencies captured by each factor involve only very few variables while creating a strong dip in the overall energy function. The consciousness prior also makes it natural to map conscious states to natural language utterances, or to express classical AI knowledge in a form similar to facts and rules, albeit capturing uncertainty as well as efficient search mechanisms implemented by attention.
  Notes: "consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., **consciousness as awareness at a particular time instant**": the projection of a big vector (all the things, conscious and unconscious, in the brain). Attention: an additional mechanism describing what the mind chooses to focus on. [YouTube video](/doc/?uri=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DYr1mOzC93xs)
- Solar System Exploration: Galileo Legacy Site (2017-09-22)
- Riding the Demon: On the Road in West Africa - Peter Chilson - Google Livres (2017-09-29)
- Using Text Embeddings for Information Retrieval (2017-09-18)
- CS224n: Natural Language Processing with Deep Learning (2017-09-10). [Notes winter17](https://github.com/stanfordnlp/cs224n-winter17-notes)
- I’ve seen the future, it’s full of HTML. – Mikeal – Medium (2017-09-29). What if we could leverage all the code in npm and allow people to use our libraries with as little effort as a `<script>` include?
- Bacteria Use Brainlike Bursts of Electricity to Communicate | Quanta Magazine (2017-09-06)
- A ten-minute introduction to sequence-to-sequence learning in Keras (2017-09-30). Sequence-to-sequence learning (Seq2Seq) is about training models to convert sequences from one domain (e.g. sentences in English) to sequences in another domain (e.g. the same sentences translated to French).
- Word2Vec Tutorial Part 2 - Negative Sampling · Chris McCormick (2017-09-10). The tweaks that make training feasible.
- Research Blog: Transformer: A Novel Neural Network Architecture for Language Understanding (2017-09-01)
- Tweets out of Context (2017-09-24). A short sci-fi story in the form of a Twitter bug report (2013).
- This Tiny Country Feeds the World (2017-09-03). The Netherlands has become an agricultural giant by showing what the future of farming could look like.
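The negative-sampling tutorial bookmarked above covers the main tweak that makes word2vec training feasible: instead of a softmax over the whole vocabulary, each (center, context) pair is trained against a handful of sampled noise words, turning one huge classification into k+1 tiny binary ones. A rough numpy sketch of one SGD step; vocabulary size, dimensions, and learning rate are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
vocab_size, dim = 50, 16  # toy sizes, purely illustrative

W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, k=5, lr=0.05):
    """One step of skip-gram with negative sampling.

    The true (center, context) pair gets label 1; k words drawn at random
    from the vocabulary act as label-0 noise (ignoring the small chance of
    re-drawing the true context, as simple implementations often do).
    """
    targets = np.concatenate(([context], rng.integers(0, vocab_size, size=k)))
    labels = np.zeros(k + 1)
    labels[0] = 1.0
    v = W_in[center].copy()                   # copy so both updates use the old v
    u = W_out[targets]
    pred = sigmoid(u @ v)                     # k+1 binary predictions
    grad = pred - labels                      # d(loss)/d(u_j . v)
    W_in[center] -= lr * grad @ u             # update the center vector
    W_out[targets] -= lr * np.outer(grad, v)  # update the k+1 target vectors
    # Binary cross-entropy summed over the k+1 mini-classification problems.
    return float(-np.log(pred[0] + 1e-9) - np.log(1.0 - pred[1:] + 1e-9).sum())

first_loss = sgns_step(3, 7)
for _ in range(200):
    last_loss = sgns_step(3, 7)
```

The cost per step is O(k · dim) rather than O(vocab_size · dim), which is the whole point of the tweak.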
- MinHash Tutorial with Python Code · Chris McCormick (2017-09-12)
- Deep Learning for NLP Best Practices (2017-09-18)
- Brains in Silicon (2017-09-07). Silicon chips that combine analog computation with digital communication, emulating the brain's unique mix of analog and digital techniques.
- Risks that threaten human civilisation (2017-09-18). Risks with infinite impact.
- Concept Search on Wikipedia · Chris McCormick (2017-09-10). Using gensim to perform concept searches on English Wikipedia.
- How does Keras compare to other Deep Learning frameworks like TensorFlow, Theano, or Torch? - Quora (2017-09-09)
- Toyota Motor Europe use of schema.org and auto.schema.org vocabularies | Automotive Ontology Community Group (2017-09-25)
- WildML – Artificial Intelligence, Deep Learning, and NLP (2017-09-26)
- What is the best tutorial on RNN, LSTM, BRNN, and BLSTM with visualization? - Quora (2017-09-26)
- Mandeville : la Fable des Abeilles (2017-09-19). Mandeville argues that a society cannot have both morality and prosperity at the same time, and that vice, understood as the pursuit of one's own interest, is the condition of prosperity.
- Word2Vec Tutorial - The Skip-Gram Model · Chris McCormick (2017-09-10). Skip-gram.
- [1607.01759] Bag of Tricks for Efficient Text Classification — Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov (2017-09-10). This paper explores a simple and efficient baseline for text classification. **Our word features can be averaged** together to form good sentence representations. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.
- TensorFlow Neural Machine Translation (seq2seq) Tutorial (2017-09-18)
- Alors que s’éteignent les éléphants d’Afrique, les Chinois ont pris le contrôle des routes de l’ivoire (2017-09-07). As Africa's elephants die out, the Chinese have taken control of the ivory routes.
- Benin City, the mighty medieval capital now lost without trace | Cities | The Guardian (2017-09-04)
- RDFIO: extending Semantic MediaWiki for interoperable biomedical data management | Journal of Biomedical Semantics | Full Text (2017-09-04)
- The Father Of Mobile Computing Is Not Impressed (2017-09-17)
  > Suppose you do something on the iPhone and you don’t like it, how do you undo it?
  > We can eliminate the learning curve for reading by getting rid of reading and going to recordings. That’s basically what they’re doing: basically, let’s revert back to a pre-tool time.
  > “Simple things should be simple, complex things should be possible.” They’ve got simple things being simple and they have complex things being impossible, so that’s wrong.
- An open letter to the W3C Director, CEO, team and membership | Electronic Frontier Foundation (2017-09-22)
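The fastText abstract above hinges on one line: word features can be averaged into a sentence representation and fed to a linear classifier. A minimal numpy sketch of that forward pass; the tiny vocabulary and untrained weights are illustrative only, and the real model also folds in character n-gram features via the hashing trick:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"good": 0, "bad": 1, "great": 2, "awful": 3, "movie": 4}
dim, n_classes = 8, 2  # illustrative sizes

E = rng.normal(scale=0.1, size=(len(vocab), dim))  # word embeddings (learned jointly)
W = rng.normal(scale=0.1, size=(n_classes, dim))   # linear classifier weights

def predict(tokens):
    """fastText-style forward pass: average the word vectors, then linear softmax."""
    x = E[[vocab[t] for t in tokens]].mean(axis=0)  # averaged sentence representation
    logits = W @ x
    p = np.exp(logits - logits.max())               # numerically stable softmax
    return p / p.sum()

probs = predict(["great", "movie"])
```

Because both the averaging and the classifier are linear, training and inference cost scale with sentence length and embedding size, which is what makes the model orders of magnitude faster than deep classifiers.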