This paper introduces CrossModalFingerprinter, a framework for persistent model identification across various transformations including fine-tuning, quantization, pruning, and modality adaptation. It uses SVD-based decomposition to extract 128-dimensional signatures from model weights, enabling verification even with substantial architectural changes.
Key findings
CrossModalFingerprinter persists across fine-tuning, quantization, pruning, and cross-modal adaptations.
The method extracts 128-dimensional signatures using SVD-based decomposition.
Achieves a zero false-positive rate and 8.3% verification accuracy across diverse transformations.
Perfect (100%) lineage-tracking success on transformation chains.
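To make the core idea concrete, here is a minimal sketch of an SVD-based signature extractor in the spirit described above. This is not the paper's actual algorithm: the function name, the per-layer allocation of singular values, and the cosine-similarity comparison are all illustrative assumptions; the paper only states that a 128-dimensional signature is derived from model weights via SVD.

```python
import numpy as np

def extract_fingerprint(weight_matrices, dim=128):
    """Hypothetical sketch: build a fixed-length signature from the
    top singular values of each weight matrix (allocation scheme assumed)."""
    values = []
    per_layer = max(1, dim // len(weight_matrices))
    for w in weight_matrices:
        # Singular values are invariant to orthogonal transforms and
        # relatively stable under small perturbations such as fine-tuning,
        # which is one plausible reason an SVD signature could persist.
        s = np.linalg.svd(w, compute_uv=False)
        values.extend(s[:per_layer])
    sig = np.array(values[:dim], dtype=np.float64)
    # Pad if the model has too few layers to fill all dimensions.
    if sig.size < dim:
        sig = np.pad(sig, (0, dim - sig.size))
    # L2-normalize so signatures can be compared by cosine similarity.
    return sig / (np.linalg.norm(sig) + 1e-12)

# Toy usage: fingerprint a randomly initialized 4-layer "model".
rng = np.random.default_rng(0)
model = [rng.standard_normal((64, 64)) for _ in range(4)]
fp = extract_fingerprint(model)
print(fp.shape)  # (128,)
```

Verification would then compare a suspect model's signature to the registered one (e.g. by cosine similarity against a threshold), which is consistent with, but not confirmed by, the summary above.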
Limitations & open questions
The approach's applicability to models with highly variable architectures is not fully explored.
The reliance on SVD-based decomposition may limit its applicability to certain types of neural networks.