Can Graph Neural Networks Solve Real-world Problems?

Prince Canuma
8 min read · Jan 23, 2020


In this article, we will learn about GNNs, their structure, and their applications.

A social network graph

Deep learning has changed the way we process data, using increasingly cheap computational power (Moore’s law) to solve real-world problems and accomplish cognitive tasks that our brains do almost effortlessly, such as image classification, natural language processing, and video processing.

But most of these tasks involve data that is typically represented in Euclidean space. However, we are witnessing a rise of problems where the data comes from non-Euclidean domains and is better represented as graphs, with complex relationships and interdependencies between objects.

The complexity of graph data has imposed significant challenges on both classical ML and modern DL algorithms. Recently, however, many studies extending DL approaches to graph data have emerged.

In a great paper released late last year, entitled “A Comprehensive Survey on Graph Neural Networks” [1], the authors proposed a new taxonomy that divides state-of-the-art graph neural networks into 4 main categories:

  • Recurrent Graph Neural Networks (RecGNNs for short)
  • Convolutional Graph Neural Networks (ConvGNNs)
  • Graph Autoencoders (GAEs)
  • Spatio-Temporal Graph Neural Networks (STGNNs)

In the same paper, the authors gave the following examples of GNN applications:

  • In e-commerce, a graph-based learning system can exploit the interactions between users and products to make highly accurate recommendations.
  • In chemistry, molecules are modelled as graphs, and their bioactivity needs to be identified for drug discovery.

Take a minute and just imagine if we gathered the structures of all “approved” drugs, or all drugs in general, and trained a GNN to learn the molecular patterns/structures of drugs, so that it could identify whether a molecule found here or in space (counting on you, Elon Musk!!!) could act as one. I believe the results could be rather surprising and could help us discover or rediscover drugs that cure diseases considered incurable today. If we think about it, most diseases, and even death itself, could be seen as a technical problem. How can some organisms live hundreds of years and others just a few decades? Can we prolong our lives indefinitely?

I know that last statement could spark a heated argument in the comment section, but I’m not married to any idea, so please share your thoughts. Furthermore, I would like to let you know that I’m a Christian and I believe in GOD.

But how cool would that be!!! We might invent our own version of the Lazarus Pit, since we are slowly but surely blurring the line between sci-fi and reality.

Going back…

GNNs have similar properties to their DL counterparts: they are capable of regression, classification, generating data for empty nodes and edges, and so much more that we have yet to find out. The field is still in its early stages; who knows what we could do in a few years and a couple of thousand papers from now.

For me, the most interesting applications are recommendation and health, with health being no. 1.

Now, graphs can be irregular: a graph may have a variable number of unordered nodes, and those nodes may have different numbers of neighbours. As a result, some important operations (e.g., convolutions) that are easy to compute in the image domain are difficult to apply in the graph domain. Furthermore, a core assumption of existing machine learning algorithms is that instances are independent of each other. This assumption no longer holds for graph data, because each instance (node) is related to others by links of various types, such as citations, friendships, and interactions. But this is not a handicap: recently we have seen increasing interest in extending, or perhaps I should say porting, deep learning approaches to the graph domain, specifically designed to work with graph data.

Convolutional Neural Networks (ConvNets) are out of the scope of this article, but if you want to know more, please click here.

Taxonomy of GNNs

Recurrent graph neural networks (RecGNNs) are mostly pioneering works of graph neural networks. RecGNNs aim to learn node representations with recurrent neural architectures. They assume a node in a graph constantly exchanges information/messages with its neighbours until a stable equilibrium is reached. RecGNNs are conceptually important and inspired later research on convolutional graph neural networks. In particular, the idea of message passing was inherited by spatial-based convolutional graph neural networks.
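To make the message-passing idea concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper; the names and weight matrices are hypothetical). Note that in the original RecGNN formulation the update must be a contraction mapping for this iteration to actually converge:

```python
import numpy as np

def recgnn_states(adj, x, w_self, w_neigh, tol=1e-6, max_iters=100):
    """Exchange messages with neighbours until node states stop changing."""
    states = np.zeros((x.shape[0], w_self.shape[1]))
    for _ in range(max_iters):
        neigh_sum = adj @ states                                 # sum neighbours' current states
        new_states = np.tanh(x @ w_self + neigh_sum @ w_neigh)   # combine with own features
        if np.max(np.abs(new_states - states)) < tol:
            break                                                # stable equilibrium reached
        states = new_states
    return states
```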

Convolutional graph neural networks (ConvGNNs) generalize the operation of convolution from grid data to graph data. The main idea is to generate a node v’s representation by aggregating its own features x_v and its neighbours’ features x_u, where u ∈ N(v).

  • Here N(v) denotes the set of neighbours of node v, and x_v denotes the feature vector of node v.

Different from RecGNNs, ConvGNNs stack multiple graph convolutional layers to extract high-level node representations. ConvGNNs play a central role in building up many other complex GNN models. Figure 2a shows a ConvGNN for node classification. Figure 2b demonstrates a ConvGNN for graph classification.
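To make this concrete, here is a minimal NumPy sketch of one popular instantiation of this aggregation, the graph convolutional (GCN) layer of Kipf and Welling; the function and variable names are my own:

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One graph convolution: aggregate self + neighbour features, then transform."""
    a_hat = adj + np.eye(adj.shape[0])         # self-loops, so node v keeps its own features
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt   # symmetrically normalized adjacency
    return np.maximum(a_norm @ h @ w, 0.0)     # aggregate over N(v), project, ReLU

# Tiny example: 4 nodes on a path graph, 3 input features, 2 output features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = np.random.randn(4, 3)
w = np.random.randn(3, 2)
print(gcn_layer(adj, h, w).shape)  # (4, 2): one 2-dimensional representation per node
```

Stacking several such layers, as ConvGNNs do, lets each node see neighbours several hops away.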

Graph autoencoders (GAEs) are unsupervised learning frameworks which encode nodes/graphs into a latent vector space and reconstruct graph data from the encoded information. GAEs are used to learn network embeddings and graph generative distributions. For network embedding, GAEs learn latent node representations through reconstructing graph structural information such as the graph adjacency matrix. For graph generation, some methods generate nodes and edges of a graph step by step while other methods output a graph all at once. Figure 2c presents a GAE for network embedding.
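As a rough sketch of the network-embedding case, assuming an inner-product decoder in the spirit of Kipf and Welling’s GAE (all names are mine):

```python
import numpy as np

def gae_forward(adj, x, w_enc):
    """Encode nodes into latent vectors, then reconstruct the adjacency matrix."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv = np.diag(1.0 / a_hat.sum(axis=1))
    z = np.tanh(d_inv @ a_hat @ x @ w_enc)       # encoder: one mean-aggregation conv
    adj_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))   # decoder: inner product -> edge probs
    return z, adj_rec
```

Training would then minimize the reconstruction error between adj_rec and the true adjacency matrix.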

Spatial-temporal graph neural networks (STGNNs) aim to learn hidden patterns from spatial-temporal graphs, which have become increasingly important in a variety of applications such as traffic speed forecasting, driver manoeuvre anticipation, and human action recognition. The key idea of STGNNs is to consider spatial dependency and temporal dependency at the same time. Many current approaches integrate graph convolutions, to capture spatial dependency, with RNNs or CNNs, to model temporal dependency. Figure 2d illustrates an STGNN for spatial-temporal graph forecasting.
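A minimal sketch of that idea, assuming one graph convolution per time step feeding a simple recurrent update (names and shapes are my own illustration):

```python
import numpy as np

def stgnn_step(adj_norm, x_t, h_prev, w_x, w_h):
    """One step: graph conv captures space, the recurrent term captures time."""
    spatial = adj_norm @ x_t @ w_x    # aggregate neighbour features at time t
    temporal = h_prev @ w_h           # carry the hidden state from time t-1
    return np.tanh(spatial + temporal)

def stgnn_forward(adj_norm, x_seq, w_x, w_h):
    """Roll over a sequence of graph signals, e.g. traffic speed per sensor per step."""
    h = np.zeros((adj_norm.shape[0], w_h.shape[0]))
    for x_t in x_seq:                 # x_seq has shape (time, nodes, features)
        h = stgnn_step(adj_norm, x_t, h, w_x, w_h)
    return h                          # final spatial-temporal node representations
```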

Fig. 2 (credits: paper [1])

Frameworks

With the graph structure and node content information as inputs, the outputs of GNNs can focus on different graph analytics tasks with one of the following mechanisms:

  • Node-level outputs relate to node regression and node classification tasks. RecGNNs and ConvGNNs can extract high-level node representations by information propagation/graph convolution. With a multilayer perceptron or a softmax layer as the output layer, GNNs are able to perform node-level tasks in an end-to-end manner.
  • Edge-level outputs relate to the edge classification and link prediction tasks. With two nodes’ hidden representations from GNNs as inputs, a similarity function or a neural network can be utilized to predict the label/connection strength of an edge (a minimal sketch follows this list).
  • Graph-level outputs relate to the graph classification task. To obtain a compact representation on the graph level, GNNs are often combined with pooling and readout operations.
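For instance, the edge-level case can be as simple as a dot product between two node embeddings, squashed into a probability (a hypothetical sketch):

```python
import numpy as np

def edge_score(z_u, z_v):
    """Edge-level output: probability of a link between nodes u and v."""
    return 1.0 / (1.0 + np.exp(-np.dot(z_u, z_v)))  # dot-product similarity -> (0, 1)
```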

Training Frameworks

Many GNNs (e.g., ConvGNNs) can be trained in a (semi-)supervised or purely unsupervised way within an end-to-end learning framework, depending on the learning task and the label information at hand.

Semi-supervised learning for node-level classification

Given a single network with partial nodes being labelled and others remaining unlabeled, ConvGNNs can learn a robust model that effectively identifies the class labels for the unlabeled nodes. To this end, an end-to-end framework can be built by stacking a couple of graph convolutional layers followed by a softmax layer for multi-class classification.
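A sketch of the forward pass and loss for such a framework (NumPy only, gradients and the optimizer omitted; adj_norm is a normalized adjacency matrix as in the earlier sketches):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_loss(adj_norm, x, w1, w2, labels, labelled_mask):
    """Two stacked graph convolutions + softmax; loss only on labelled nodes."""
    h = np.maximum(adj_norm @ x @ w1, 0.0)   # graph conv layer 1 + ReLU
    probs = softmax(adj_norm @ h @ w2)       # graph conv layer 2 + softmax
    # Cross-entropy over the labelled subset; unlabelled nodes still shape
    # the predictions through the graph structure.
    ll = np.log(probs[labelled_mask, labels[labelled_mask]] + 1e-12)
    return -ll.mean()
```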

Supervised learning for graph-level classification

Graph-level classification aims to predict the class label(s) for an entire graph. The end-to-end learning for this task can be realized with a combination of graph convolutional layers, graph pooling layers, and/or readout layers. While graph convolutional layers are responsible for extracting high-level node representations, graph pooling layers play the role of downsampling, which coarsens each graph into a sub-structure each time. A readout layer collapses the node representations of each graph into a graph representation. By applying a multilayer perceptron and a softmax layer to graph representations, we can build an end-to-end framework for graph classification. An example is given in Fig 2b.
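In its simplest form (a mean readout instead of learned pooling; hypothetical names), the forward pass might look like this:

```python
import numpy as np

def graph_logits(adj_norm, x, w_conv, w_out):
    """Graph classification: conv -> readout (mean over nodes) -> classifier."""
    h = np.maximum(adj_norm @ x @ w_conv, 0.0)  # high-level node representations
    g = h.mean(axis=0)                          # readout: collapse nodes into one vector
    return g @ w_out                            # one logit per graph class
```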

Unsupervised learning for graph embedding

When no class labels are available in graphs, we can learn the graph embedding in a purely unsupervised way in an end-to-end framework. These algorithms exploit edge-level information in two ways. One simple way is to adopt an autoencoder framework, where the encoder employs graph convolutional layers to embed the graph into the latent representation, upon which a decoder is used to reconstruct the graph structure. Another popular way is to utilize the negative sampling approach, which samples a portion of node pairs as negative pairs, while existing node pairs with links in the graphs are positive pairs. Then a logistic regression layer is applied to distinguish between positive and negative pairs.
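A sketch of the negative-sampling objective (the embeddings z are assumed to come from a GNN encoder; the pair arrays hold node-index columns, and all names are my own):

```python
import numpy as np

def negative_sampling_loss(z, pos_pairs, neg_pairs):
    """Push linked (positive) pairs together, sampled non-links apart."""
    def link_prob(pairs):
        u, v = pairs[:, 0], pairs[:, 1]
        return 1.0 / (1.0 + np.exp(-np.sum(z[u] * z[v], axis=1)))  # logistic score
    pos = -np.log(link_prob(pos_pairs) + 1e-12)        # linked pairs -> prob near 1
    neg = -np.log(1.0 - link_prob(neg_pairs) + 1e-12)  # sampled non-links -> prob near 0
    return pos.mean() + neg.mean()
```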

There is so much more to this than I could possibly fit in this article. I’m preparing a notebook to give you a brief intro to GNN code and to some of the top frameworks out there for building GNNs and visualizing graph data.

Email me at prince.gdt@gmail.com to have access to the notebook once I’m done.

Things to remember…

  • Graphs are great at representing complex relationships and interdependency between objects.
  • Graph neural networks are divided into 4 main categories: RecGNNs, ConvGNNs, GAEs, and STGNNs.
  • Graph neural networks aim to port the successful approaches of DL to graph data, which will, in turn, drive new discoveries in fields such as health, social interactions, activity recognition/anticipation, and so much more.

Thank you for reading; this has been a fun discovery for me too. These kinds of algorithms fascinate me for their raw potential to consume gigabytes or even petabytes of data and uncover hidden patterns far better than we can. Please leave your comments.

REFERENCES

[1] https://arxiv.org/abs/1901.00596

[2] https://www.youtube.com/watch?v=bA261BF0bdk

[3] https://www.youtube.com/watch?v=7JELX6DiUxQ&t=79s

Get in touch

Twitter: @CanumaGdt

LinkedIn: https://www.linkedin.com/in/prince-canuma-05814b121/

Email: prince.gdt@gmail.com
