This paper introduces a method for crafting universal backdoor triggers that poison the internal prediction logic of GNN models and transfer across graph types, node-feature distributions, and architectures, addressing the limited generality of existing backdoor-attack research.
Key findings
Proposes a novel trigger-generation mechanism that targets the internal prediction logic of GNN models.
Develops a framework that lets the attack generalize across heterogeneous graph structures.
Establishes comprehensive protocols for evaluating trigger transferability across datasets.
Analyzes the practical risks posed by the attacks and evaluates them against existing defenses.
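To make the attack setting concrete: a graph backdoor typically works by attaching a small, fixed trigger subgraph to training graphs whose labels are flipped to the attacker's target class, so the model learns to associate the trigger with that class. The sketch below is an illustrative NumPy toy, not the paper's actual mechanism; the function name `inject_trigger`, the single-edge attachment strategy, and all shapes are assumptions for demonstration only.

```python
import numpy as np

def inject_trigger(adj, feats, trigger_adj, trigger_feats, attach_node=0):
    """Attach a fixed trigger subgraph to a clean graph (illustrative toy).

    adj: (n, n) adjacency matrix; feats: (n, d) node features.
    trigger_adj: (k, k) trigger adjacency; trigger_feats: (k, d) trigger features.
    Returns the poisoned (n+k, n+k) adjacency and (n+k, d) feature matrix.
    """
    n, k = adj.shape[0], trigger_adj.shape[0]
    new_adj = np.zeros((n + k, n + k), dtype=adj.dtype)
    new_adj[:n, :n] = adj                 # keep the clean graph intact
    new_adj[n:, n:] = trigger_adj         # append the trigger subgraph
    # Connect the trigger to one node of the clean graph (symmetric edge).
    new_adj[attach_node, n] = new_adj[n, attach_node] = 1
    new_feats = np.vstack([feats, trigger_feats])
    return new_adj, new_feats

# Tiny demo: a 4-node path graph poisoned with a 3-node triangle trigger.
adj = np.zeros((4, 4))
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1
feats = np.random.rand(4, 2)
trigger_adj = np.ones((3, 3)) - np.eye(3)   # fully connected triangle
trigger_feats = np.random.rand(3, 2)
p_adj, p_feats = inject_trigger(adj, feats, trigger_adj, trigger_feats)
# In a real poisoning pipeline, the poisoned graph's training label would
# also be set to the attacker's target class.
```

A "universal" trigger in the paper's sense would be one such subgraph (structure plus features) optimized so that this kind of injection succeeds across different datasets and GNN architectures, rather than being tuned per model.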
Limitations & open questions
The work is a research proposal, so the long-term effects of the proposed attacks remain unexamined.
The effectiveness of the proposed triggers in real-world, large-scale deployments has yet to be demonstrated.