This paper introduces a cross-modal knowledge distillation framework for robust anomaly detection in microservice environments where observability data is often incomplete. The approach trains a teacher model on complete multimodal data and distills its knowledge into student models for various missing-modality scenarios, achieving robust performance without runtime imputation.
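The summary does not spell out the paper's exact training objective, so the following is a minimal PyTorch sketch of one plausible realization: a frozen teacher scores the complete multimodal batch while the student, given a randomly masked copy of the same batch, minimizes a weighted sum of a soft-target KL distillation term (in the style of Hinton et al., 2015) and the supervised anomaly loss. The `teacher`/`student` models, the metrics/logs/traces modality split, and all hyperparameters (`alpha`, `T`, `drop_p`) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, batch, optimizer, alpha=0.5, T=2.0, drop_p=0.9):
    """One training step (hypothetical setup, not the paper's exact loss):
    the frozen teacher scores the complete batch; the student sees a
    randomly masked copy and matches the teacher's softened outputs."""
    metrics, logs, traces, labels = batch  # three observability modalities

    # Teacher forward pass on the complete multimodal input (no gradients).
    with torch.no_grad():
        t_logits = teacher(metrics, logs, traces)

    # Simulate a missing-modality scenario: zero out each modality
    # independently with probability drop_p (0.9 here, mimicking the
    # 90% missing-modality evaluation mentioned in the summary).
    masked = [m if torch.rand(1).item() > drop_p else torch.zeros_like(m)
              for m in (metrics, logs, traces)]
    s_logits = student(*masked)

    # Distillation loss: KL divergence between temperature-softened
    # distributions, scaled by T^2 as in Hinton et al. (2015).
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)

    # Supervised anomaly-detection loss on ground-truth labels.
    ce = F.cross_entropy(s_logits, labels)

    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the student is trained directly on masked inputs, inference under missing modalities reduces to zero-filling the absent streams and running a single student forward pass, which is consistent with the summary's claim that the approach avoids expensive imputation at inference time.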
Key findings
Proposes a cross-modal knowledge distillation framework for robust AIOps anomaly detection.
Achieves a 3.7% F1-score improvement over the best baseline and maintains performance even at a 90% missing-modality rate.
Enables reliable anomaly detection without expensive imputation at inference time.
Limitations & open questions
The framework has not yet been validated in real-world production environments with diverse data patterns.
The current implementation may not cover every type of data anomaly or failure mode encountered in practice.