LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework

Y Qiao, X Ao, Y Liu, J Xu, X Sun, Q He - arXiv preprint arXiv:2405.13902, 2024 - arxiv.org
Recent prevailing works on graph machine learning typically follow a similar methodology that involves designing advanced variants of graph neural networks (GNNs) to maintain the superior performance of GNNs on different graphs. In this paper, we aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks. We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner. A framework named LOGIN (LLM Consulted GNN training) is instantiated, empowering the interactive utilization of LLMs within the GNN training process. First, we carefully craft concise prompts for spotted nodes that carry comprehensive semantic and topological information and serve as input to LLMs. Second, we refine GNNs by devising a complementary coping mechanism that uses the LLMs' responses according to their correctness. We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs. The results illustrate that even basic GNN architectures, when employed within the proposed LLMs-as-Consultants paradigm, can achieve performance comparable to advanced GNNs with intricate designs. Our codes are available at https://github.com/QiaoYRan/LOGIN.
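To make the two-step loop in the abstract concrete, below is a minimal sketch of a "consult, then cope" cycle. All identifiers (build_prompt, consult_llm, coping_mechanism, login_round) are illustrative assumptions rather than the paper's actual API, and the specific coping rules shown (enriching node features on a correct answer, pruning edges on a wrong one) are one plausible reading of a "complementary coping mechanism"; the authoritative implementation is in the linked repository.

```python
# Hypothetical sketch of one LLMs-as-Consultants round; names and coping rules
# are assumptions, not the LOGIN paper's actual implementation.
from dataclasses import dataclass


@dataclass
class Node:
    idx: int
    text: str             # raw node attribute text (semantic information)
    neighbors: list       # indices of adjacent nodes (topological information)
    label: int            # ground-truth class, used to judge LLM correctness


def build_prompt(node, nodes, candidate_labels):
    """Pack the node's own text and a neighborhood summary into one prompt."""
    neighbor_texts = "; ".join(nodes[j].text for j in node.neighbors[:5])
    return (
        f"Node description: {node.text}\n"
        f"Neighbor descriptions: {neighbor_texts}\n"
        f"Choose one label from {candidate_labels} and explain briefly."
    )


def consult_llm(prompt):
    """Placeholder for a real LLM call; returns (predicted_label, explanation)."""
    raise NotImplementedError("wire up an LLM client here")


def coping_mechanism(node, pred, explanation):
    """Complementary handling of the LLM's answer, keyed on its correctness
    (illustrative only; the paper's actual rules may differ)."""
    if pred == node.label:
        # Correct answer: trust the rationale, e.g., enrich the node's features.
        node.text = node.text + " " + explanation
    else:
        # Wrong answer: treat it as a sign of noisy structure, e.g., prune edges.
        node.neighbors = node.neighbors[: max(1, len(node.neighbors) // 2)]


def login_round(nodes, spotted_ids, candidate_labels):
    """One consultation round over the spotted (e.g., uncertain) nodes."""
    for i in spotted_ids:
        prompt = build_prompt(nodes[i], nodes, candidate_labels)
        pred, explanation = consult_llm(prompt)
        coping_mechanism(nodes[i], pred, explanation)
    # ...the GNN would then be retrained on the updated graph...
```

In this reading, the LLM's verdict is used as a training-time signal rather than a test-time predictor, which is what distinguishes "LLMs-as-Consultants" from paradigms that use LLMs directly as classifiers.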