Since their introduction around a decade ago, graph neural networks (GNNs) have quickly become the state-of-the-art method for many graph learning tasks. However, existing graph neural networks are known to suffer from various problems. For example, no known graph neural network is perfectly expressive, so designing one is often a matter of managing trade-offs. In this talk, I will discuss several problems affecting graph neural networks. First, I will discuss recent work on a problem called oversquashing, where a GNN struggles to send information between nodes due to bottlenecks in the graph's topology. Our work presents a way to quantify which graphs are most susceptible to oversquashing and proposes techniques to remedy it by altering the graph's topology. After that, I will discuss challenges in using transformers as graph neural networks. Graph transformers rely on positional encodings to represent the graph, but the number of competing positional encodings has outpaced our ability to understand their differences. I will discuss ways of comparing these positional encodings and our recent work aimed at unifying some recent trends in their design.
Mitchell Black is an NSF EnCORE Postdoctoral Scholar at the University of California San Diego. His research aims to understand problems in machine learning using the tools of theoretical computer science. He is particularly interested in graph neural networks, spectral graph theory, topology, and their intersections. He received his PhD in Computer Science from Oregon State University in 2024.