This paper proposes DAC-GNN, a novel accelerator architecture for Graph Neural Networks (GNNs) that addresses the memory-hierarchy challenges posed by the inherent irregularity of graph structures. Its key contributions are a dataflow analysis framework, a dynamic caching policy, a decoupled spatial architecture, and a prefetching mechanism; together these yield significant improvements in speedup and energy efficiency.
Key findings
DAC-GNN achieves 3.2x speedup and 4.5x energy efficiency improvement on average over state-of-the-art GNN accelerators.
The proposed caching strategy reduces off-chip memory accesses by 68% on average across standard benchmark datasets.
DAC-GNN leverages runtime dataflow information to maximize on-chip data reuse, improving cache hit rates.
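The summary does not detail how the dynamic caching policy uses runtime dataflow information. One plausible realization, sketched below purely for illustration, keeps high-degree vertex features resident on chip, since a vertex's degree predicts how often its features are reused during neighbor aggregation. The class name `DegreeAwareCache` and the degree-as-reuse heuristic are assumptions, not DAC-GNN's documented mechanism.

```python
from collections import OrderedDict


class DegreeAwareCache:
    """Illustrative sketch of a reuse-aware on-chip feature cache.

    NOTE: hypothetical policy, not DAC-GNN's actual design. On eviction
    it drops the cached vertex with the lowest degree (a proxy for
    expected reuse), breaking ties in favor of the least recently used.
    """

    def __init__(self, capacity, degrees):
        self.capacity = capacity
        self.degrees = degrees       # vertex id -> degree (reuse proxy)
        self.store = OrderedDict()   # vertex id -> feature payload
        self.hits = 0
        self.misses = 0

    def access(self, vertex, fetch):
        if vertex in self.store:
            self.hits += 1
            self.store.move_to_end(vertex)  # refresh recency on a hit
            return self.store[vertex]
        self.misses += 1
        if len(self.store) >= self.capacity:
            # Evict the lowest-degree entry; min() returns the first
            # (oldest) entry among ties, giving LRU tie-breaking.
            victim = min(self.store, key=lambda u: self.degrees[u])
            del self.store[victim]
        self.store[vertex] = fetch(vertex)  # simulated off-chip fetch
        return self.store[vertex]
```

For example, with capacity 2 and degrees `{0: 10, 1: 1, 2: 5}`, accessing vertices 0, 1, 2, 0 in order evicts the low-degree vertex 1 when 2 arrives, so the final access to 0 hits on chip. The 68% reduction in off-chip accesses reported above would correspond to a high hit rate under such a policy.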
Limitations & open questions
The paper does not discuss how the proposed architecture scales to graphs larger than the evaluated benchmarks.
The evaluation plan is outlined but not fully executed, leaving real-world performance as an open question for future work.