Differential Privacy Approach to Solve Gradient Leakage Attack in a Federated Machine Learning Environment

Krishna Yadav, B. B. Gupta, Kwok Tai Chui, Konstantinos Psannis. International Conference on Computational Data and Social Networks.

Abstract

The growth of federated machine learning in recent times has significantly advanced traditional machine learning techniques for intrusion detection. By keeping the training dataset at decentralized nodes, federated machine learning keeps people's data private; however, the federated machine learning mechanism still suffers from gradient leakage attacks. Adversaries are now taking advantage of the shared gradients and can reconstruct people's private data with high accuracy. Adversaries later use this private network data to launch more devastating attacks against users. It has therefore become essential to develop a solution that prevents these attacks. This paper introduces differential privacy, which uses Gaussian and Laplace mechanisms to secure the updated gradients during communication. Our results show that clients can achieve a significant level of accuracy with differentially private gradients.
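To make the idea concrete, the sketch below shows one common way a client could privatize its gradients before sending them to the server: clip the gradient to bound its sensitivity, then add noise drawn from a Gaussian or Laplace distribution. This is an illustrative example only, not the paper's exact implementation; the function name and the parameters clip_norm, epsilon, and delta are hypothetical choices.

```python
import numpy as np

def privatize_gradients(grads, clip_norm=1.0, epsilon=1.0,
                        delta=1e-5, mechanism="gaussian"):
    """Clip a flat gradient vector and add calibrated noise before sharing.

    Hypothetical sketch of differentially private gradient release;
    parameter values are examples, not the paper's settings.
    """
    # Clip so the L2 norm is at most clip_norm, bounding the sensitivity.
    norm = np.linalg.norm(grads)
    grads = grads * min(1.0, clip_norm / (norm + 1e-12))

    if mechanism == "gaussian":
        # Gaussian mechanism: a standard calibration for (epsilon, delta)-DP
        # uses sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
        noise = np.random.normal(0.0, sigma, size=grads.shape)
    else:
        # Laplace mechanism: scale b = sensitivity / epsilon gives epsilon-DP
        # (strictly this calibration assumes an L1 sensitivity bound; the L2
        # clip above is used here as a simplifying stand-in).
        noise = np.random.laplace(0.0, clip_norm / epsilon, size=grads.shape)

    return grads + noise

# Example: a client would privatize its local update before transmission.
local_update = np.random.randn(1000)          # placeholder gradient vector
shared_update = privatize_gradients(local_update, mechanism="laplace")
```

In a federated setting, each client would apply such a step to its local update every round, so the server and any eavesdropper only ever observe the noised gradients.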