This research proposes FedU-ML, a novel framework for computing U-statistics of degree k > 2 in federated metric learning settings. It leverages secure multi-party computation to enable privacy-preserving estimation of higher-order statistics without centralizing raw data, offering theoretical guarantees on convergence rates, sample complexity, and differential privacy.
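To make the central object concrete: a U-statistic of degree k is the average of a symmetric kernel over all size-k subsets of a sample. The sketch below is illustrative only (the function names and the degree-3 kernel are not from the paper) and shows the plain, non-federated estimator that FedU-ML generalizes.

```python
from itertools import combinations
from math import comb

def u_statistic(data, h, k):
    """Unbiased U-statistic of degree k: average of the symmetric
    kernel h over all size-k subsets of the sample."""
    n = len(data)
    total = sum(h(*subset) for subset in combinations(data, k))
    return total / comb(n, k)

# Illustrative degree-3 kernel: the spread of a triple of scalar
# observations, a simple higher-order quantity (k = 3 > 2).
def triple_spread(a, b, c):
    return max(a, b, c) - min(a, b, c)

sample = [1.0, 3.0, 2.0, 5.0]
print(u_statistic(sample, triple_spread, 3))  # → 3.25
```

The combinatorial sum has C(n, k) terms, which hints at why computing such statistics across distributed clients, without pooling raw data, is nontrivial.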
Key findings
FedU-ML addresses the privacy and computational challenges in federated metric learning.
The framework provides theoretical guarantees including convergence rates and differential privacy bounds.
The framework achieves higher accuracy than existing local differential privacy baselines while remaining communication-efficient.
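The secure multi-party computation ingredient can be illustrated with additive secret sharing, a standard building block for secure aggregation; this is a minimal sketch under that assumption, not the paper's actual protocol, and all names here are hypothetical.

```python
import random

MODULUS = 2**31

def additive_shares(value, n_parties, modulus=MODULUS):
    """Split an integer into n random additive shares that sum
    to the value modulo `modulus`; any n-1 shares look uniform."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

# Each client secret-shares its (integer-encoded) local statistic.
local_stats = [12, 7, 30]
n = len(local_stats)
all_shares = [additive_shares(s, n) for s in local_stats]

# Party j collects the j-th share from every client and sums them;
# no single party ever sees another client's raw statistic.
partial_sums = [sum(client[j] for client in all_shares) % MODULUS
                for j in range(n)]

# Combining the partial sums reveals only the aggregate.
aggregate = sum(partial_sums) % MODULUS
print(aggregate)  # → 49
```

In a federated U-statistic setting, the shared quantities would be clients' local kernel sums rather than plain scalars, but the privacy mechanism is the same: only the aggregate is reconstructed.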
Limitations & open questions
The practical scalability of FedU-ML in very large federated networks remains untested.
The integration of FedU-ML with other federated learning algorithms requires further exploration.