CNN 4 NLP
http://www.semanlink.net/tag/convolutional_neural_network_and_nn_4_nlp
Documents tagged with CNN 4 NLP
Convolutional Neural Networks for Sentence Classification (2014)
http://www.aclweb.org/anthology/D14-1181
Experiments with convolutional neural networks (CNNs) trained on top of pre-trained word vectors for sentence-level classification tasks.
[GitHub project](https://github.com/yoonkim/CNN_sentence) with code, updates to the paper, and links to valuable resources, such as [Denny Britz](/tag/denny_britz)'s [implementation in TensorFlow](https://github.com/dennybritz/cnn-text-classification-tf)
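The core idea of the paper — convolution filters of several widths slid over a sentence's word-vector matrix, then max-over-time pooling into a fixed-size feature vector — can be sketched in plain NumPy (toy dimensions, random vectors standing in for the pre-trained embeddings; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, emb_dim = 7, 4                          # toy sentence: 7 tokens, 4-dim embeddings
sentence = rng.normal(size=(seq_len, emb_dim))   # stand-in for pre-trained word vectors

def conv_max_pool(x, filters):
    """1-D convolution of each filter (width w) over the token axis,
    ReLU, then max-over-time pooling -> one feature per filter."""
    n_filters, w, d = filters.shape
    n_windows = x.shape[0] - w + 1
    feats = np.empty(n_filters)
    for f in range(n_filters):
        # dot each window of w consecutive word vectors with the filter
        acts = [np.sum(x[i:i + w] * filters[f]) for i in range(n_windows)]
        feats[f] = max(np.maximum(acts, 0.0))    # ReLU, then max over positions
    return feats

# filter widths 3 and 4 (two random filters each), echoing the multi-width setup
feat3 = conv_max_pool(sentence, rng.normal(size=(2, 3, emb_dim)))
feat4 = conv_max_pool(sentence, rng.normal(size=(2, 4, emb_dim)))
features = np.concatenate([feat3, feat4])        # fixed-size vector for a softmax classifier
print(features.shape)  # (4,)
```

Max-over-time pooling is what makes the output size independent of sentence length, so sentences of any length feed the same classifier.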
2017-11-07T09:47:58Z
Usually RNNs are used for NLP, when do CNNs in NLP make sense? - Quora
https://www.quora.com/Usually-RNNs-are-used-for-NLP-when-do-CNNs-in-NLP-make-sense
> In fact the emerging consensus is that even for NLP, CNNs beat RNNs!
2017-11-06T19:04:57Z
Implementing a CNN for Text Classification in TensorFlow – WildML
http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
2017-11-06T18:56:50Z
Recurrent Convolutional Neural Networks for Text Classification (S. Lai, 2015)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.822.3091&rep=rep1&type=pdf
Comments about this paper [here](https://medium.com/paper-club/recurrent-convolutional-neural-networks-for-text-classification-107020765e52) and [there](https://medium.com/paper-club/cnns-for-text-classification-b45bde0bb254)
2017-11-06T09:12:22Z
[1701.00185] Self-Taught Convolutional Neural Networks for Short Text Clustering
https://arxiv.org/pdf/1701.00185.pdf
> We propose a flexible short text clustering framework which explores the feasibility and effectiveness of combining CNN and traditional unsupervised dimensionality reduction methods.
>
> Non-biased deep feature representations can be learned through our self-taught CNN framework, which does not use any external tags/labels or complicated NLP pre-processing.
> The original raw text features are first embedded into compact binary codes using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes during training. Finally, we get the optimal clusters by employing K-means to cluster the learned representations.
[Conference paper by the same authors](http://www.aclweb.org/anthology/W15-1509); [GitHub repo (Matlab)](https://github.com/jacoxu/STC2)
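The final step of the pipeline — K-means over the learned deep representations — is standard Lloyd's algorithm. A minimal NumPy version, with two synthetic 2-D groups standing in for the CNN's learned feature vectors (illustrative only, not the paper's Matlab code):

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for learned deep representations of short texts: two separated groups
reps = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
                  rng.normal(3.0, 0.3, size=(20, 2))])

def kmeans(X, init, n_iter=10):
    """Lloyd's algorithm: assign each point to the nearest centroid, recompute."""
    centroids = init.copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)  # (n, k)
        labels = dists.argmin(axis=1)
        for j in range(len(centroids)):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# seed with one point from each end so the toy run converges deterministically
labels, centroids = kmeans(reps, init=reps[[0, -1]])
print(labels[0], labels[-1])  # 0 1
```

On well-separated representations like these, the two synthetic groups land in different clusters; on real learned features, multiple random restarts are the usual safeguard.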
2017-11-04T19:27:04Z
Using pre-trained word embeddings in a Keras model
https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
Text classification using pre-trained GloVe embeddings (loaded into a frozen Keras Embedding layer) and a [convolutional neural network](/tag/convolutional_neural_network)
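The "frozen Embedding layer" in that post boils down to a fixed lookup table built from the GloVe file. A NumPy sketch of the lookup (tiny hand-made vectors standing in for real GloVe embeddings; names like `glove` and `embed` are illustrative, not the blog's code):

```python
import numpy as np

# stand-ins for vectors parsed from a GloVe file ("word v1 v2 ..." per line)
glove = {"the": [0.1, 0.2], "cat": [0.5, -0.3], "sat": [-0.2, 0.4]}

word_index = {w: i + 1 for i, w in enumerate(glove)}  # row 0 reserved for padding/OOV
emb_dim = 2

# the matrix the (frozen) Keras Embedding layer would be initialized with
embedding_matrix = np.zeros((len(word_index) + 1, emb_dim))
for word, i in word_index.items():
    embedding_matrix[i] = glove[word]

def embed(tokens):
    """Frozen embedding lookup: map tokens to rows of the fixed matrix."""
    ids = [word_index.get(t, 0) for t in tokens]  # unknown words -> all-zero row 0
    return embedding_matrix[ids]

x = embed(["the", "cat", "sat", "quietly"])  # "quietly" is out-of-vocabulary
print(x.shape)  # (4, 2)
```

In Keras this matrix is passed to the `Embedding` layer with `trainable=False`, so the GloVe vectors stay fixed while the CNN on top is trained.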
2017-10-23T01:07:38Z
Understanding Convolutional Neural Networks for NLP | WildML
http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/
2015-11-08T11:53:24Z