An Introduction to Federated Learning: Decentralized Data, Centralized Intelligence
In many real-world applications, training machine learning models on client data is challenging because the raw data often cannot be moved off users' devices, both for practical reasons and because of user privacy concerns. To address these problems, McMahan et al. introduced federated learning in 2016.
Definition of Federated Learning
Federated learning is a machine learning approach where a model is trained across multiple clients under the orchestration of a central server. Instead of sharing raw data, clients share only model weight updates with the server. The server then aggregates these updates to improve the global model.
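The aggregation step at the heart of this definition is typically a weighted average of the clients' weights, as in the FedAvg algorithm of McMahan et al. The sketch below illustrates the idea; the function name and the toy weight vectors are hypothetical, not part of any particular framework.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Combine client model weights into new global weights.

    Each client's contribution is weighted by its local dataset size,
    as in FedAvg (hypothetical helper, for illustration only).
    """
    total = sum(client_sizes)
    # Weighted average of the clients' weight vectors
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical clients holding 10 and 30 examples respectively
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 4.0])
global_w = fedavg_aggregate([w_a, w_b], client_sizes=[10, 30])
# global_w = 0.25 * w_a + 0.75 * w_b = [2.5, 3.5]
```

Weighting by dataset size means a client with more data pulls the global model further toward its local solution, which matches how a centrally trained model would weight those examples.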
The diagram above is taken from [3].
How Federated Learning Works
The server orchestrates the training process by repeatedly following these steps:
Select Clients: The server selects a sample of clients that meet the eligibility criteria.
Distribute Model Weights: Clients download the current model weights from the server.
Local Training: Clients train the model locally on their own data.
Collect Updates: The server collects model updates from the clients.
Aggregate Updates: The server aggregates the updates to refine the global model.
Update Global Model: The server updates the shared global model based on the aggregated updates.
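The full round-based loop above can be sketched end to end. This is a minimal simulation under assumed conditions, not a production implementation: clients hold synthetic data for a linear model, local training is plain gradient descent, and aggregation is a FedAvg-style weighted average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 clients, each holding (X, y) for a linear model y ≈ X @ w
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

def local_train(w, X, y, lr=0.05, epochs=10):
    """Client-side step: refine the downloaded weights on local data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # mean-squared-error gradient
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(10):  # one iteration = one federated round
    # 1. Select clients: a random sample of 3
    idx = rng.choice(len(clients), size=3, replace=False)
    # 2–4. Clients download the global weights, train locally,
    #      and return their updated weights to the server
    updates = [local_train(global_w, *clients[i]) for i in idx]
    sizes = [len(clients[i][0]) for i in idx]
    # 5–6. Server aggregates the updates and refreshes the global model
    total = sum(sizes)
    global_w = sum(u * (n / total) for u, n in zip(updates, sizes))
```

Note that only model weights cross the network in this loop; each client's `(X, y)` never leaves the client, which is the core privacy property federated learning is designed to provide.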
Real-Life Usage
Google uses federated learning extensively in its Gboard mobile keyboard, Pixel phone features, and Android Messages. Apple has also adopted this technology in iOS 13 for the QuickType keyboard and the "Hey Siri" vocal classifier.
For more information about federated learning, see the references below: