This study introduces DICD, a framework that uses contrastive learning and adversarial decomposition to separate image content from degradation characteristics, with the goal of improving robustness of computer vision tasks under diverse image degradations.
Key findings
DICD learns embeddings invariant to image degradations while preserving semantic discriminability.
Contrastive learning aligns semantically similar samples across different degradations.
Adversarial decomposition suppresses degradation information leaking into the content embeddings.
Improves top-1 accuracy by 8.3% under severe corruption compared to existing methods.
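The contrastive alignment idea above can be sketched as an InfoNCE-style loss: two degraded views of the same image form a positive pair, and other images in the batch serve as negatives. This is a minimal NumPy illustration under assumed details (function name, temperature value, and batch layout are not from the paper); the adversarial term would additionally train a degradation classifier on the content embeddings through a reversed gradient, which is omitted here.

```python
import numpy as np

def info_nce(content_a, content_b, temperature=0.1):
    """InfoNCE loss aligning content embeddings of the same image under
    two different degradations. Rows of content_a and content_b are
    paired samples (row i of each is the same image, degraded differently)."""
    # L2-normalise so similarities are cosine similarities
    a = content_a / np.linalg.norm(content_a, axis=1, keepdims=True)
    b = content_b / np.linalg.norm(content_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # positives sit on the diagonal; softmax cross-entropy over each row
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimising this loss pulls embeddings of the same content together across degradations while pushing apart embeddings of different images, which is what "aligns semantically similar samples across different degradations" amounts to in practice.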
Limitations & open questions
Performance is not broken down by individual degradation type, so per-corruption behaviour is unknown.
The generalization capability to completely unseen corruptions is not fully evaluated.