Parents:

Graph Embeddings

Traditionally, networks are represented as adjacency matrices, which suffer from data sparsity and high dimensionality. Network embeddings aim to **represent network vertices in a low-dimensional vector space, preserving both network topology and node content information**. Algorithms are typically unsupervised and can be broadly classified into three groups ([source](/doc/2019/07/_1901_00596_a_comprehensive_su)):
- matrix factorization
- random walks
- deep learning approaches (graph neural networks, GNNs):
    - graph convolutional networks (e.g., GraphSage)
    - graph attention networks
    - graph auto-encoders (e.g., DNGR and SDNE)
    - graph generative networks
    - graph spatial-temporal networks
Node embeddings (intuition: similar nodes should have similar vectors):
- Laplacian EigenMap (an eigenvector-based computation; fine when the matrix is not too large)
- LINE (Large-scale Information Network Embedding), most cited paper at WWW 2015; breadth-first search
- DeepWalk (Perozzi et al., 2014): adapts word-embedding training to nodes, treating nodes as words and short random walks as sentences
- Node2Vec (2016): a mixed strategy, biasing the random walks to interpolate between breadth-first and depth-first exploration

etc.
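DeepWalk's first stage, generating short random walks, is simple enough to sketch directly (a minimal illustration; the toy graph, walk length, and walk count are made-up parameters, and the real algorithm then feeds the walks to a skip-gram model such as word2vec):

```python
import random

def random_walks(adj, num_walks=10, walk_length=5, seed=42):
    """Generate DeepWalk-style uniform random walks.

    adj: dict mapping each node to a list of its neighbors.
    Returns a list of walks; each walk is a list of nodes,
    ready to be treated as a "sentence" by a skip-gram model.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:              # one walk per node per pass
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:      # dead end: stop the walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy undirected graph: two triangles joined by the edge 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
walks = random_walks(adj)
```

Each walk plays the role of a sentence; node2vec reuses the same pipeline but biases the neighbor choice with its return and in-out parameters.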

Related Tags:

14 Documents (Long List)

- [1808.02590] A Tutorial on Network Embeddings (2018)
*(About)*

2019-08-25 - benedekrozemberczki/awesome-graph-classification: A collection of important graph embedding, classification and representation learning papers with implementations.
*(About)*

2019-08-05 - [1709.07604] A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications (2018)
*(About)*

2019-05-29 - PyTorch-BigGraph: Faster embeddings of large graphs - Facebook Code
*(About)*

> A new tool from Facebook AI Research that enables training of multi-relation graph embeddings for very large graphs. PyTorch-BigGraph (PBG) handles graphs with billions of nodes and trillions of edges. Since PBG is written in PyTorch, researchers and engineers can easily swap in their own loss functions, models, and other components. [Github](https://github.com/facebookresearch/PyTorch-BigGraph), [Blog post](https://ai.facebook.com/blog/open-sourcing-pytorch-biggraph-for-faster-embeddings-of-extremely-large-graphs)

2019-05-12 - International Workshop on Deep Learning for Graphs and Structured Data Embedding
*(About)*

2019-04-30 - Graph embedding Day - Lyon
*(About)*

2018-09-10 - [1809.00782] Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text (2018)
*(About)*

QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available together with a large text corpus.

> In practice, some questions are best answered using text, while others are best answered using KBs. A natural question, then, is how to effectively combine both types of information. Surprisingly little prior work has looked at this problem.

2018-09-06 - [1806.05662] GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations
*(About)*

Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks.

2018-06-23 - How do we capture structure in relational data?
*(About)*

2018-05-07 - Convolutional Neural Networks on Graphs
*(About)*

2018-05-05 - TUTORIAL: Representation Learning on Networks - TheWebConf 2018
*(About)*

Network representation learning (NRL): approaches that automatically learn to encode network structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. **Goal of representation learning for networks: efficient, task-independent feature learning for ML in networks.** But it's hard: DL toolboxes are designed for simple sequences or grids (for instance CNNs for images; RNNs and word2vec assume fixed-size inputs), while networks are far more complex.

From the abstract:

> In this tutorial, we will cover key advancements in NRL over the last decade, with an emphasis on fundamental advancements made in the last two years. We will discuss classic matrix factorization-based methods (e.g., Laplacian eigenmaps), random-walk based algorithms (e.g., DeepWalk and node2vec), as well as very recent advancements in graph convolutional networks (GCNs). We will cover methods to embed individual nodes (see [node embeddings](/tag/node_embeddings)) as well as approaches to embed entire (sub)graphs, and in doing so, we will present a unified framework for NRL.
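The operation shared by GCNs and GraphSage-style models, each node updating its representation by aggregating its neighbors' features, can be sketched without any framework (a toy illustration with made-up features; real models add learned weight matrices, nonlinearities, and stacked layers):

```python
def mean_aggregate(adj, features):
    """One round of neighborhood aggregation (a GraphSage-style mean aggregator).

    adj: dict node -> list of neighbors.
    features: dict node -> list of floats (the node's current vector).
    Returns new features: the mean of each node's own vector and
    its neighbors' vectors (self-loop included).
    """
    new_features = {}
    for node, neighbors in adj.items():
        group = [features[node]] + [features[n] for n in neighbors]
        dim = len(features[node])
        new_features[node] = [
            sum(vec[d] for vec in group) / len(group) for d in range(dim)
        ]
    return new_features

# Toy path graph 0-1-2 with 1-dimensional features
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [0.0], 1: [3.0], 2: [6.0]}
feats = mean_aggregate(adj, feats)  # node 1 averages both endpoints
# -> {0: [1.5], 1: [3.0], 2: [4.5]}
```

Stacking k such rounds lets information propagate k hops, which is why a two-layer GCN already sees each node's two-hop neighborhood.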

2018-05-05 - Revisiting Semi-Supervised Learning with Graph Embeddings (arxiv [1603.08861]) (2016)
*(About)*

2018-02-13 - TUTORIAL: Representation Learning on Networks - TheWebConf 2018
*(About)*

2018-01-27 - WORKSHOP: BigNet @ WWW 2018 Workshop on Learning Representations for Big Networks
*(About)*

2018-01-27

Properties

- sl:creationDate : 2018-01-27
- sl:creationTime : 2018-01-27T15:18:45Z
- rdf:type : sl:Tag
- skos:altLabel :
- Graph representation learning
- Representation Learning on Networks@en
- Network embeddings
- Network Representation Learning@en

- skos:prefLabel : Graph Embeddings