Graph Embeddings
Traditionally, networks have been represented as adjacency matrices, which suffer from data sparsity and high dimensionality. Network embeddings aim to **represent network vertices in a low-dimensional vector space, preserving both network topology and node content information**. Algorithms are typically unsupervised and can be broadly classified into three groups ([source](/doc/2019/07/_1901_00596_a_comprehensive_su)):

- matrix factorization
- random walks
- deep learning approaches (graph neural networks, GNNs), including:
    - graph convolutional networks (e.g., GraphSAGE)
    - graph attention networks
    - graph auto-encoders (e.g., DNGR and SDNE)
    - graph generative networks
    - graph spatial-temporal networks

Node embeddings (intuition: similar nodes should have similar vectors):

- Laplacian Eigenmaps (an eigenvector-based computation, fine when the matrix is not too large)
- LINE, Large-scale Information Network Embedding (most cited paper at WWW 2015); a breadth-first-search-style strategy
- DeepWalk (Perozzi et al., 2014): the technique used to learn word embeddings, adapted to nodes by treating nodes as words and generating short random walks as sentences (see the sketch after this list)
- Node2Vec (2016): a mixed strategy (biased random walks interpolating between breadth-first and depth-first exploration)
- etc.
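To make the DeepWalk intuition concrete, here is a minimal sketch (assuming gensim is available; the toy graph, walk length, and hyperparameter values are illustrative, not taken from the paper): generate uniform random walks over the graph, then feed them to a skip-gram word2vec model as if they were sentences.

```python
import random
from gensim.models import Word2Vec

# Toy undirected graph as an adjacency list (illustrative only).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c", "e"],
    "e": ["d"],
}

def random_walk(graph, start, length):
    """Generate one uniform random walk of `length` nodes starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# Treat nodes as words and walks as sentences (the DeepWalk idea):
# 20 walks of 10 nodes per starting node.
walks = [random_walk(graph, node, length=10)
         for node in graph
         for _ in range(20)]

# Train a skip-gram (sg=1) word2vec model on the walks to get node embeddings.
model = Word2Vec(walks, vector_size=16, window=3, min_count=0, sg=1, epochs=5)

print(model.wv["a"])               # 16-dimensional embedding of node "a"
print(model.wv.most_similar("a"))  # nearest nodes in embedding space
```

Node2Vec follows the same pipeline but replaces the uniform `random.choice` step with a biased second-order walk controlled by return and in-out parameters.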