InferTurbo: A scalable system for boosting full-graph inference of graph neural network over huge graphs

D Zhang, X Song, Z Hu, Y Li, M Tao, B Hu, L Wang, Z Zhang, J Zhou
2023 IEEE 39th International Conference on Data Engineering (ICDE), 2023 - ieeexplore.ieee.org
With the rapid development of Graph Neural Networks (GNNs), more and more studies focus on system design to improve training efficiency while ignoring the efficiency of GNN inference. In practice, GNN inference is a non-trivial task, especially in industrial scenarios with giant graphs, given three main challenges: scalability tailored for full-graph inference on huge graphs, inconsistency caused by stochastic acceleration strategies (e.g., sampling), and severe redundant computation. To address these challenges, we propose a scalable system named InferTurbo to boost GNN inference tasks in industrial scenarios. Inspired by the "think-like-a-vertex" philosophy, a GAS-like (Gather-Apply-Scatter) schema is proposed to describe the computation paradigm and data flow of GNN inference. The computation of GNNs is expressed iteratively: each vertex gathers messages via its in-edges, updates its state by forwarding those messages through the associated GNN layer, and then sends the updated information to other vertices via its out-edges. Following this schema, the proposed InferTurbo can be built on alternative backends (e.g., a batch processing system or a graph computing system). Moreover, InferTurbo introduces several strategies, such as shadow-nodes and partial-gather, to handle nodes with large degrees for better load balancing. With InferTurbo, GNN inference can be conducted hierarchically over the full graph without sampling or redundant computation. Experimental results demonstrate that our system is robust and efficient for inference tasks over graphs containing hub nodes with many adjacent edges. Meanwhile, the system achieves remarkable performance gains over the traditional inference pipeline, and it can finish a GNN inference task over a graph with tens of billions of nodes and hundreds of billions of edges within 2 hours.
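The Gather-Apply-Scatter iteration described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the scalar vertex states, and the toy mean-aggregation "layer" are all assumptions chosen to keep the example self-contained.

```python
def gas_inference(in_edges, states, layers):
    """Run full-graph GNN inference layer by layer in a GAS-like manner.

    in_edges: {v: [u, ...]} maps each vertex to the sources of its in-edges.
    states:   {v: float}    initial vertex state (a scalar, for simplicity).
    layers:   list of apply-functions f(own_state, gathered_msgs) -> new_state.
    """
    for apply_fn in layers:
        # Gather: each vertex collects messages over its in-edges.
        gathered = {v: [states[u] for u in in_edges.get(v, [])] for v in states}
        # Apply: forward one GNN layer with the gathered messages.
        # Scatter is implicit: the updated state is what out-neighbors
        # will read from `states` in the next iteration.
        states = {v: apply_fn(states[v], gathered[v]) for v in states}
    return states

def mean_layer(own, msgs):
    # Toy stand-in for a GNN layer: average own state with incoming messages.
    vals = [own] + msgs
    return sum(vals) / len(vals)

# Tiny 3-vertex ring: edges 0 -> 1, 1 -> 2, 2 -> 0.
in_edges = {1: [0], 2: [1], 0: [2]}
states = {0: 0.0, 1: 3.0, 2: 6.0}
out = gas_inference(in_edges, states, [mean_layer, mean_layer])
# After two layers: out == {0: 3.75, 1: 2.25, 2: 3.0}
```

Because every vertex updates synchronously per layer, the same per-layer loop maps naturally onto either a batch processing backend (one join-and-aggregate pass per layer) or a graph computing backend (one superstep per layer), which is the flexibility the schema is meant to provide.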