Graph Learning in the Era of Foundation Models
Graphical Abstract
Abstract
Graph-structured data arise widely in social networks, transportation systems, and biological domains. Graph neural networks (GNNs) leverage the message-passing mechanism to aggregate neighborhood information and achieve strong performance on node classification, link prediction, and graph classification tasks. However, with growing data scales and increasingly complex application scenarios, GNNs face inherent limitations in expressiveness and generalization. Recent progress in foundation models, particularly large language models (LLMs), has revealed remarkable generalization and reasoning capabilities, inspiring new paradigms for graph machine learning. Building on this inspiration, graph foundation models (GFMs) have been proposed as general-purpose models pretrained on large-scale graph corpora and adaptable to diverse downstream tasks. This article systematically reviews recent advances in GFMs, categorizes existing approaches by their reliance on GNNs and LLMs, and summarizes our practical experience in related developments. Finally, we outline key challenges and promising research directions to guide future work.
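
To make the message-passing mechanism mentioned above concrete, the following is a minimal, self-contained sketch of a single GNN layer in PyTorch. The layer name, dimensions, and mean-aggregation choice are illustrative assumptions for this sketch, not a specific model from this article.

import torch
import torch.nn as nn


class MeanAggregationLayer(nn.Module):
    """One message-passing step: aggregate neighbor features, then update."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node feature matrix
        # adj: (num_nodes, num_nodes) binary adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees (avoid divide-by-zero)
        neighbor_mean = (adj @ x) / deg                   # aggregate neighborhood information
        h = torch.cat([x, neighbor_mean], dim=-1)         # combine self and neighbor messages
        return torch.relu(self.linear(h))                 # update node representations


# Toy usage: a 4-node path graph with 8-dimensional node features.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
x = torch.randn(4, 8)
layer = MeanAggregationLayer(in_dim=8, out_dim=16)
print(layer(x, adj).shape)  # torch.Size([4, 16])

Stacking several such layers lets each node's representation incorporate information from progressively larger neighborhoods, which is the basis of the node classification, link prediction, and graph classification performance summarized above.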