Abstract: Training Graph Neural Networks (GNNs) on large graphs presents unique challenges due to their substantial memory and compute requirements. Distributed GNN training, where the graph is partitioned ...
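For concreteness, the partitioning step such abstracts refer to can be sketched as follows, assuming a PyG-style `[2, num_edges]` edge index. `partition_nodes` and `local_subgraph` are illustrative names, and the random assignment shown here stands in for the min-edge-cut partitioners (e.g., METIS) that real systems typically use.

```python
import torch

def partition_nodes(num_nodes: int, num_parts: int, seed: int = 0) -> torch.Tensor:
    # Random node-to-worker assignment; production systems usually use a
    # min-edge-cut partitioner such as METIS to reduce cross-worker edges.
    gen = torch.Generator().manual_seed(seed)
    return torch.randint(0, num_parts, (num_nodes,), generator=gen)

def local_subgraph(edge_index: torch.Tensor, part: torch.Tensor, rank: int) -> torch.Tensor:
    # Keep edges whose destination node is owned by this worker; source
    # endpoints owned by other workers become "halo" nodes whose features
    # must be fetched over the network each iteration.
    mask = part[edge_index[1]] == rank
    return edge_index[:, mask]
```

Cross-partition edges are where the communication cost comes from: every destination node a worker owns may depend on halo sources owned elsewhere.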
Accelerating Large-Scale Out-of-GPU-Core GNN Training with Two-Level Historical Caching (accepted by APPT 2025)
HCGNN is a high-performance system for training Graph Neural Networks (GNNs) on ...
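The snippet names the technique but not its mechanics, so here is a minimal sketch of historical embedding caching under stated assumptions: `HistoricalEmbeddingCache`, `push`, and `pull` are illustrative names rather than HCGNN's API, and a single host-side table stands in for its two cache levels.

```python
import torch

class HistoricalEmbeddingCache:
    """Sketch of a historical-embedding cache (illustrative, not HCGNN's API).

    Keeps the last computed hidden embedding of every node so that
    out-of-batch neighbors can be served from the cache instead of being
    recomputed from raw features on the GPU.
    """

    def __init__(self, num_nodes: int, hidden_dim: int, device: str = "cpu"):
        # A two-level design would split this table between GPU memory
        # (hot nodes) and host memory (the rest); one table keeps it simple.
        self.table = torch.zeros(num_nodes, hidden_dim, device=device)

    def push(self, node_ids: torch.Tensor, embeddings: torch.Tensor) -> None:
        # Record freshly computed embeddings for in-batch nodes.
        node_ids = node_ids.to(self.table.device)
        self.table[node_ids] = embeddings.detach().to(self.table.device)

    def pull(self, node_ids: torch.Tensor, device: torch.device) -> torch.Tensor:
        # Serve (possibly stale) historical embeddings for halo neighbors.
        node_ids = node_ids.to(self.table.device)
        return self.table[node_ids].to(device)
```

Each iteration pushes exact embeddings for in-batch nodes and pulls slightly stale ones for their out-of-batch neighbors, trading staleness for reduced GPU memory traffic.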
In fact, graph sampling can also be understood as data augmentation or training regularization (e.g., edge sampling can be seen as a minibatch version of DropEdge). Efficiency: While "neighbor ...
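A minimal sketch of that regularization view, resampling a fresh edge mask each minibatch; `drop_edges` is an illustrative name (PyTorch Geometric ships a similar utility, `torch_geometric.utils.dropout_edge`).

```python
import torch

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Randomly drop a fraction p of edges, the DropEdge-style view of
    edge sampling. edge_index is a [2, num_edges] COO connectivity tensor."""
    # Keep each edge independently with probability 1 - p.
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]
```

Because a fresh mask is drawn on every call, applying this per minibatch is exactly the "minibatch version of DropEdge" reading above.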
Abstract: When distribution shifts occur between training and testing graph data, out-of-distribution (OOD) samples undermine the performance of graph neural networks (GNNs). To improve adaptive OOD ...