Federated Learning: Decentralized Machine Learning for Privacy Preservation
DOI: https://doi.org/10.64056/n2nbva22

Keywords: Federated Learning, Decentralized Learning, Privacy Preservation, Data Privacy, Secure Aggregation, Machine Learning

Abstract
Federated Learning (FL) is an emerging machine-learning paradigm that enables multiple devices or organizations to collaboratively train a global model without sharing raw data. This decentralized approach addresses data silos and privacy challenges by keeping sensitive information local to each participant. In this article, we survey recent advances in FL and propose a hypothetical experiment comparing federated and centralized learning on a standard dataset. Our methodology applies the Federated Averaging algorithm across simulated clients and measures model accuracy, convergence, and privacy trade-offs. The results indicate that FL can achieve performance nearly comparable to that of a centralized model while preserving privacy: for example, roughly 98% accuracy versus 98.5% in the centralized case, with only marginal degradation when differential privacy is added. These findings reinforce the practicality of FL in privacy-sensitive domains, supporting trends observed in applications such as mobile keyboards and healthcare. We conclude that FL is a viable strategy for privacy-preserving ML and discuss open challenges in scalability and security.
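The Federated Averaging step mentioned in the abstract can be sketched in a few lines: each client performs local gradient steps on its private data, and a server averages the resulting models weighted by local dataset size. The linear-regression task, learning rate, client sizes, and round count below are illustrative assumptions for a toy simulation, not the experimental setup the article reports.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of
    linear regression (squared loss) on its private data.
    (lr and epochs are illustrative choices.)"""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg_round(global_w, client_data):
    """One Federated Averaging round: every client trains locally
    starting from the global model; the server then averages the
    local models weighted by each client's dataset size."""
    n_total = sum(len(y) for _, y in client_data)
    new_w = np.zeros_like(global_w)
    for X, y in client_data:
        w_k = local_update(global_w, X, y)
        new_w += (len(y) / n_total) * w_k
    return new_w

# Toy simulation: three clients whose private data come from the
# same underlying linear model; raw (X, y) never leaves a client,
# only model weights are exchanged.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):  # hypothetical per-client dataset sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fed_avg_round(w, clients)
print("recovered weights:", np.round(w, 2), "true weights:", true_w)
```

Because the clients share a common data distribution here, the federated model converges close to the centrally trained solution, mirroring the near-parity the abstract describes; with heterogeneous (non-IID) clients, more rounds or corrections would typically be needed.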