ABSTRACT
This paper introduces Jacobian-Informed CNOP Learning (JICL), a framework that learns to predict conditional nonlinear optimal perturbations (CNOPs) directly from AI model Jacobians, yielding substantial speedups over traditional optimization-based CNOP computation.
Key findings
JICL eliminates the need for iterative optimization at inference time.
Achieves a 10x–100x speedup over traditional optimization-based CNOP computation.
Enables CNOP computation for black-box AI surrogate models.
Provides uncertainty estimates through the network’s prediction confidence.
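The contrast behind the first two findings can be sketched as follows. Everything here is an illustrative assumption, not the paper's implementation: the toy model `M`, the divergence cost `J`, the constraint radius `beta`, and the stand-in weight matrix `W` are all hypothetical. The traditional path solves a constrained maximization iteratively; the JICL-style path replaces that loop with a single forward pass from Jacobian features to a perturbation.

```python
import numpy as np

# Illustrative sketch only: M, J, beta, W, and all function names below are
# assumptions for exposition, not the paper's actual method.

def M(x):
    """Toy nonlinear forecast model (stand-in for an AI surrogate)."""
    return np.tanh(2.0 * x) + 0.1 * x

def J(x0, delta):
    """Nonlinear forecast divergence caused by perturbing x0 by delta."""
    return float(np.linalg.norm(M(x0 + delta) - M(x0)) ** 2)

def cnop_iterative(x0, beta=0.1, steps=200, lr=0.05, eps=1e-5):
    """Traditional CNOP: projected gradient ascent maximizing J subject to
    ||delta|| <= beta, with finite-difference gradients for simplicity."""
    rng = np.random.default_rng(0)
    delta = rng.normal(size=x0.shape)
    delta *= beta / np.linalg.norm(delta)
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for i in range(delta.size):
            e = np.zeros_like(delta)
            e[i] = eps
            grad[i] = (J(x0, delta + e) - J(x0, delta - e)) / (2 * eps)
        delta = delta + lr * grad
        n = np.linalg.norm(delta)
        if n > beta:                      # project back onto the norm ball
            delta *= beta / n
    return delta

def jacobian_fd(x0, eps=1e-5):
    """Finite-difference Jacobian of M at x0."""
    n = x0.size
    Jm = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        Jm[:, i] = (M(x0 + e) - M(x0 - e)) / (2 * eps)
    return Jm

def jicl_predict(x0, W, beta=0.1):
    """Amortized, JICL-style inference sketch: a single forward pass maps
    Jacobian features to a perturbation, then projects onto the constraint
    ball -- no inner optimization loop. W stands in for a trained network."""
    feat = jacobian_fd(x0).ravel()
    delta = W @ feat
    return delta * (beta / np.linalg.norm(delta))

x0 = np.array([0.3, -0.2, 0.5])
delta_star = cnop_iterative(x0)              # slow path: iterative ascent
W = np.random.default_rng(1).normal(size=(3, 9))
delta_fast = jicl_predict(x0, W)             # fast path: one forward pass
```

In practice `W` would be a trained network whose outputs match the iterative CNOPs on a training set; the reported speedup comes from replacing the inner optimization loop with that single forward pass at inference time.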
Limitations & open questions
Jacobian (first-order) information may be insufficient for accurate CNOP prediction in strongly nonlinear regimes, since the nonlinear perturbation growth that defines a CNOP is not fully captured at first order.