Varun Chandrasekaran*, Suman Banerjee*, Diego Perino#, Nicolas Kourtellis# — University of Wisconsin-Madison*, Telefonica Research#
2024 IEEE International Conference on Big Data (BigData)
This research paper examines a core privacy weakness of traditional federated learning (FL) systems, in which clients share gradient updates with a central server. Adversaries who observe these gradient updates can exploit them to infer private information about the underlying training data.
To address this, the paper proposes Hierarchical Federated Learning (HFL) with privacy. HFL introduces an intermediary aggregation level between the clients and the central server. By adding carefully calibrated noise at this intermediate level, the system achieves a better privacy-accuracy trade-off than existing methods that add noise only at the client or server.
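To make the idea concrete, here is a minimal simulation sketch of noise injection at an intermediate aggregator. The function names, the Gaussian noise mechanism, and parameters such as `sigma` and `clip` are illustrative assumptions for this sketch, not the paper's actual algorithm or calibration.

```python
import numpy as np

def intermediate_aggregate(client_updates, sigma, clip=1.0, rng=None):
    """Edge-level step: clip each client update, average, then add
    calibrated Gaussian noise before forwarding to the central server.
    sigma and clip are hypothetical knobs, not the paper's values."""
    rng = rng or np.random.default_rng(0)
    # Bound each client's contribution (standard practice for noise calibration).
    clipped = [u / max(1.0, np.linalg.norm(u) / clip) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of clients behind this aggregator.
    noise = rng.normal(0.0, sigma * clip / len(client_updates), size=avg.shape)
    return avg + noise

def global_aggregate(edge_aggregates):
    """Central server averages the already-noised edge aggregates;
    it never sees any raw client update."""
    return np.mean(edge_aggregates, axis=0)

# Two edge aggregators, each serving two simulated clients.
edge_a = intermediate_aggregate([np.full(3, 0.2), np.full(3, 0.4)], sigma=0.1)
edge_b = intermediate_aggregate([np.full(3, 0.1), np.full(3, 0.3)], sigma=0.1)
global_model_update = global_aggregate([edge_a, edge_b])
```

The key structural point is that raw per-client gradients stop at the intermediate level; the central server only ever receives noised averages.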
Key takeaways in the context of edge computing applications:
- Enhanced Privacy: HFL offers improved privacy guarantees for edge devices by minimizing the exposure of sensitive data gradients. This is crucial in edge computing environments where data often resides on resource-constrained devices.
- Improved Accuracy: The hierarchical approach allows for more flexible noise injection strategies, leading to better model accuracy while maintaining strong privacy guarantees. This is essential for deploying accurate and reliable machine learning models on edge devices.
- Reduced Communication Overhead: By aggregating updates at the intermediate level, HFL can reduce the traffic between edge devices and the central server, a critical factor in resource-constrained edge environments.
In essence, this research provides a valuable framework for building more privacy-preserving and efficient FL systems for edge computing. By enabling accurate, private machine learning models to run on a wide range of edge devices, it helps unlock the full potential of edge AI.
Read the paper here.
Contact us with questions.