Graph Attention Networks
http://www.semanlink.net/tag/graph_attention_networks
Documents tagged with Graph Attention Networks

[2110.10778] Contrastive Document Representation Learning with Graph Attention Networks
http://www.semanlink.net/doc/2022/03/2110_10778_contrastive_docume
> Most pretrained Transformer models can only handle relatively short text; modeling very long documents remains a challenge. In this work, we propose to use a graph attention network on top of an available pretrained Transformer model to learn document embeddings.
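The core idea of attention-pooling chunk-level Transformer embeddings into a single document vector can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual architecture): `pool_chunks` and the `query` pooling vector are made-up names, and real chunk embeddings would come from a pretrained Transformer.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def pool_chunks(chunk_embs, query):
    """Attention-pool per-chunk embeddings into one document
    embedding. `query` plays the role of a learned pooling
    vector (a toy stand-in for the paper's graph attention)."""
    scores = [sum(q * c for q, c in zip(query, e)) for e in chunk_embs]
    w = softmax(scores)
    d = len(chunk_embs[0])
    return [sum(wi * e[k] for wi, e in zip(w, chunk_embs))
            for k in range(d)]
```

With a zero query the scores tie, so the pooled embedding is the plain average of the chunk embeddings; a trained query would instead weight informative chunks more heavily.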
2022-03-10T13:54:40Z

[2003.11644] MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network
http://www.semanlink.net/doc/2020/08/2003_11644_multi_label_text_c
> **Existing methods tend to ignore the relationship among labels**.
This model employs [Graph Attention Networks](tag:graph_attention_networks) (GAT) to find the correlation between labels. The generated classifiers are applied to sentence feature vectors obtained from the text feature extraction network (BiLSTM) to enable end-to-end training.
> The GAT network takes as inputs the node features and the adjacency matrix that represents the graph data. The adjacency matrix is constructed from the samples. **In our case, we do not have a graph dataset. Instead, we learn the adjacency matrix**, hoping that the model will determine the graph, thereby learning the correlation of the labels.
> Our intuition is that by modeling the correlation among labels as a weighted graph, we force the GAT network to learn such that the adjacency matrix and the attention weights together represent the correlation.
// TODO compare with [this](doc:2019/06/_1905_10070_label_aware_docume)
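The mechanism described above (node features plus an adjacency matrix, with attention restricted to edges) can be sketched as a toy single-head GAT-style layer. This is a hedged illustration only: the dot-product scores below stand in for MAGNET's learned attention mechanism, and in the paper the adjacency matrix itself would be a trainable parameter rather than a fixed input.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gat_layer(h, adj):
    """One single-head GAT-style layer (toy sketch): each node
    aggregates neighbour features, weighted by attention scores
    that are masked by the adjacency matrix `adj`."""
    n, d = len(h), len(h[0])
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        # unnormalised score: dot product h_i . h_j (stand-in
        # for the learned attention of the actual model)
        scores = [sum(a * b for a, b in zip(h[i], h[j])) for j in nbrs]
        alphas = softmax(scores)
        out.append([sum(a * h[j][k] for a, j in zip(alphas, nbrs))
                    for k in range(d)])
    return out
```

With an identity adjacency (self-loops only), each label node attends solely to itself and the layer is a no-op; a learned, denser adjacency lets correlated labels exchange information.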
2020-08-14T16:11:43Z

Chaitanya Joshi on Twitter: "Excited to share a blog post on the connection between #Transformers for NLP and #GraphNeuralNetworks"
http://www.semanlink.net/doc/2020/03/chaitanya_joshi_sur_twitter_
[about this blog post](/doc/2020/03/transformers_are_graph_neural_n)
2020-03-01T03:17:11Z

Transformers are Graph Neural Networks | NTU Graph Deep Learning Lab
http://www.semanlink.net/doc/2020/03/transformers_are_graph_neural_n
> The key idea: Sentences are fully-connected graphs of words, and Transformers are very similar to Graph Attention Networks (GATs) which use multi-head attention to aggregate features from their neighborhood nodes (i.e., words).
[ twitter](https://twitter.com/chaitjo/status/1233220586358181888)
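The blog post's key claim can be made concrete in a few lines: attention restricted to graph edges reduces to ordinary Transformer self-attention when the graph is complete. A minimal sketch, with the learned Q/K/V projections and multi-head machinery omitted for brevity (so this is the shared aggregation skeleton, not a faithful GAT or Transformer layer):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(h, adj):
    """Dot-product attention restricted to the edges in `adj`.
    On a complete graph (all-ones adj) every node attends to
    every other node -- the Transformer self-attention pattern;
    on a sparse graph it is GAT-style neighbourhood aggregation."""
    n, d = len(h), len(h[0])
    out = []
    for i in range(n):
        idx = [j for j in range(n) if adj[i][j]]
        w = softmax([sum(a * b for a, b in zip(h[i], h[j])) for j in idx])
        out.append([sum(wk * h[j][k] for wk, j in zip(w, idx))
                    for k in range(d)])
    return out
```

Treating a sentence as a fully connected word graph, `attend(words, complete_graph)` aggregates every word's features into every other word, exactly as one Transformer self-attention step does.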
2020-03-01T02:28:59Z