This paper proposes HiGAlign, a framework for graph alignment that addresses two limitations of existing approaches: single-scale operations and over-smoothing in deep architectures. HiGAlign introduces a multi-scale hierarchical attention mechanism that operates across GNN layers: it maintains representations at different granularities, selectively aggregates information across scales and depths, and refines correspondences from coarse to fine (a rough sketch of the idea follows).
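The paper's exact architecture is not reproduced here; the sketch below is a minimal illustration, assuming the cross-scale attention simply learns softmax weights over each node's per-layer GNN embeddings. All names (`CrossScaleAttention`, `MultiScaleEncoder`, `hid_dim`, `adj`) are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossScaleAttention(nn.Module):
    """Attention pooling over per-layer (per-scale) node embeddings.

    Each node keeps one embedding per GNN layer; a learned score per
    (node, layer) pair decides how much each scale contributes to the
    fused representation, so deep layers cannot wash out shallow ones.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar score per scale embedding

    def forward(self, layer_embs: list[torch.Tensor]) -> torch.Tensor:
        # layer_embs: list of L tensors, each [num_nodes, dim]
        h = torch.stack(layer_embs, dim=1)        # [num_nodes, L, dim]
        alpha = F.softmax(self.score(h), dim=1)   # [num_nodes, L, 1]
        return (alpha * h).sum(dim=1)             # [num_nodes, dim]


class MultiScaleEncoder(nn.Module):
    """Stack of simple mean-aggregation GNN layers whose intermediate
    outputs are all kept and fused by cross-scale attention."""

    def __init__(self, in_dim: int, hid_dim: int, num_layers: int = 3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )
        self.proj = nn.ModuleList(  # project every scale to a shared width
            nn.Linear(dims[i + 1], hid_dim) for i in range(num_layers)
        )
        self.fuse = CrossScaleAttention(hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: [num_nodes, num_nodes] row-normalised adjacency with self-loops
        scales = []
        h = x
        for layer, proj in zip(self.layers, self.proj):
            h = F.relu(layer(adj @ h))   # one hop of mean-style aggregation
            scales.append(proj(h))       # keep this granularity
        return self.fuse(scales)         # attend across scales and depths
```

Given two graphs encoded this way, a node-to-node similarity matrix between their fused embeddings would supply coarse candidate correspondences, which the paper then refines toward finer scales.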
Key findings
HiGAlign prevents over-smoothing by maintaining, at every layer, a separation coefficient that stays bounded away from zero (one plausible formalization is sketched after these findings).
Extensive experiments demonstrate state-of-the-art performance with significant improvements on benchmark datasets.
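The paper's precise definition of the separation coefficient is not given here; one plausible formalization, assuming it measures how distinguishable node embeddings remain at each layer, is:

```latex
% Assumed form of the separation coefficient (not necessarily the paper's
% exact definition): the layer-l embeddings stay bounded away from a fully
% collapsed, over-smoothed state.
\[
  \rho^{(l)} \;=\;
  \frac{\min_{u \neq v} \bigl\| h_u^{(l)} - h_v^{(l)} \bigr\|_2}
       {\max_{u \neq v} \bigl\| h_u^{(l)} - h_v^{(l)} \bigr\|_2},
  \qquad
  \rho^{(l)} \;\geq\; c > 0 \quad \text{for all layers } l,
\]
where $h_u^{(l)}$ is the representation of node $u$ at layer $l$; over-smoothing
corresponds to $\rho^{(l)} \to 0$ as depth $l$ grows.
```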
Limitations & open questions
The paper does not discuss the limitations or challenges of applying HiGAlign to very large graphs, such as the memory and runtime cost of maintaining per-layer, multi-scale representations for every node.