This research proposal introduces ADAPT-AV (Adversarial Defense through Adaptive Perturbation Training for AVs), a framework designed to protect autonomous vehicles from unbounded sensor attacks that bypass traditional perturbation constraints. The approach integrates three key components: cross-modal inconsistency detection using attention mechanisms, curriculum-based adversarial training for progressive robustness, and dynamic sensor-trust weighting for adaptive fusion. Unlike conventional defenses limited to bounded Lp-norm perturbations, ADAPT-AV addresses realistic physical threats including LiDAR spoofing, camera dazzling, and multi-sensor compromise. The proposed experimental plan evaluates the framework on nuScenes, KITTI, and CARLA benchmarks against state-of-the-art baselines.
Key findings
Multi-modal inconsistency detection can identify unbounded sensor attacks through cross-modal attention mechanisms that reveal anomalous sensor inputs.
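As a minimal illustration of this idea, cross-modal inconsistency can be reduced to a similarity test between camera and LiDAR embeddings: an attack on one sensor drives the two representations apart. This sketch replaces the proposal's learned attention mechanism with a plain cosine-similarity check; the function names and threshold are illustrative assumptions, not part of ADAPT-AV.

```python
import math

def cosine_similarity(u, v):
    # Agreement between two modality embeddings, in [-1, 1].
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def flag_cross_modal_attack(camera_emb, lidar_emb, threshold=0.5):
    # A spoofed or dazzled sensor pulls its embedding away from the
    # other modality's, so low similarity is treated as attack evidence.
    return cosine_similarity(camera_emb, lidar_emb) < threshold
```

In the full framework this role is played by cross-modal attention over feature maps rather than a single global similarity score.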
Curriculum-based adversarial training enables progressive robustness enhancement against unbounded perturbations without catastrophic forgetting.
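A minimal sketch of such a curriculum, assuming a linear perturbation-budget schedule with occasional rehearsal of earlier stages to counter catastrophic forgetting. The linear shape, `eps_max`, and the rehearsal probability are illustrative choices, not values specified by the proposal.

```python
import random

def stage_epsilon(stage, num_stages, eps_max=0.3):
    # Linear curriculum: the perturbation budget grows stage by stage.
    return eps_max * (stage + 1) / num_stages

def sample_training_epsilon(stage, num_stages, eps_max=0.3, rehearsal_p=0.3):
    # With probability rehearsal_p, revisit an earlier (smaller) budget so
    # robustness to weaker attacks is not forgotten as the budget grows.
    if stage > 0 and random.random() < rehearsal_p:
        past_stage = random.randrange(stage)
        return stage_epsilon(past_stage, num_stages, eps_max)
    return stage_epsilon(stage, num_stages, eps_max)
```

Each adversarial example would then be generated under the sampled budget before the usual training step.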
Dynamic sensor-trust weighting provides graceful degradation when individual sensors are compromised by adjusting fusion weights based on real-time attack detection confidence.
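A toy version of trust-weighted fusion, assuming each sensor reports a scalar estimate and an attack-detection confidence in [0, 1]. The softmax form and temperature are illustrative assumptions; ADAPT-AV's actual weighting would operate on learned features rather than scalar estimates.

```python
import math

def trust_weights(attack_confidences, temperature=0.1):
    # Map per-sensor attack confidences (0 = clean, 1 = attacked) to fusion
    # weights: a softmax over trust = 1 - confidence sharply down-weights
    # any sensor the detector believes is compromised.
    trust = [1.0 - c for c in attack_confidences]
    exps = [math.exp(t / temperature) for t in trust]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(estimates, weights):
    # Weighted average of per-sensor estimates; degrades gracefully as a
    # compromised sensor's weight shrinks toward zero.
    return sum(w * x for w, x in zip(weights, estimates))
```

With two sensors at confidences 0.1 and 0.9, almost all fusion weight shifts to the first sensor, so a spoofed reading from the second barely moves the fused estimate.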
The framework addresses the critical gap between theoretical adversarial robustness and practical defense against physical attacks on AV perception systems.
Limitations & open questions
Experimental validation remains pending on proposed benchmarks (nuScenes, KITTI, CARLA) as this is a research proposal rather than completed work.
The accompanying risk analysis identifies potential failure modes and limitations that must be mitigated before safe real-world deployment.
Defense effectiveness may depend on specific attack patterns not encountered during the curriculum training phase.