Understanding Representation Power of Graph Neural Nets

Date: 7th Mar 2022

Time: 12:00 PM

Venue: Google Meet

PAST EVENT

Details

Learning over graph-structured data, such as social networks, chemical molecules, and biological interactions, requires effective representations and extended architectures to employ deep learning methods. There has been recent interest in graph neural networks (GNNs) as deep models with a strong inductive bias toward graph-structured data. While the development of most GNN architectures centres on novel ways to "compose, aggregate, and update" graph features, these architectural variations are generally based on empirical hints and fragile heuristics. Our work attempts to address a few of these fundamental problems in graph learning. In this talk, we first present our characterization of higher-order labeling functions, which helps us formalize the representation power of multi-stacked GNNs. We then look at equivalent formulations of popular GNN architectures and explore scenarios in which they are bound to fail to learn. We also extend the popular notion of simplicity bias to graphs and show how it translates to structured domains.
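The abstract refers to the "compose, aggregate, and update" view of GNN layers. As a rough illustration only (not taken from the talk), a single message-passing layer can be sketched as below; the function and parameter names (gnn_layer, W_self, W_neigh) are hypothetical, and the aggregation shown is a plain neighbour sum.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One round of 'aggregate and update' (a minimal sketch):
    each node sums its neighbours' features (aggregate), then combines
    that aggregate with its own features via learned linear maps (update)."""
    aggregated = A @ X                            # aggregate: sum over neighbours
    updated = X @ W_self + aggregated @ W_neigh   # update: combine self and neighbourhood
    return np.maximum(updated, 0.0)               # ReLU non-linearity

# Tiny usage example: a 3-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)            # adjacency matrix
X = np.random.randn(3, 2)                         # initial node features
W_self = np.random.randn(2, 4)
W_neigh = np.random.randn(2, 4)
H = gnn_layer(A, X, W_self, W_neigh)              # new node representations, shape (3, 4)
```

Stacking several such layers is what the talk refers to as "multi-stacked" GNNs, whose representation power the speakers characterize via higher-order labeling functions.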

Speakers

Mr. Prakhar Krishna Kumar, Roll No: CS19D002

CSE