- 9.1.1 How does Federated Learning work?

    Federated learning is a machine learning approach that enables training models across multiple decentralized devices or servers while keeping data localized. This approach is particularly useful when data privacy concerns prevent centralized data aggregation or when data cannot be easily transferred due to its size or sensitivity. Here's a step-by-step explanation of how federated learning works:

    1. Initialization:

      • A central server or coordinator initializes the federated learning process by deploying a machine learning model. This model is typically a neural network but can be any other type of machine learning model.
    2. Selection of Participants:

      • The central server selects the participants for training. These participants can be individual devices (e.g., smartphones, IoT devices) or local servers in a network.
    3. Distribution of Model:

      • The initial model parameters are distributed to the selected participants. Each participant receives a copy of the model.
    4. Local Training:

      • Each participant independently trains the model on its local dataset. This training is done using local computation resources without sending raw data to the central server. The participants use techniques such as stochastic gradient descent (SGD) to update the model parameters based on their local data.
    5. Model Update Aggregation:

      • After training locally, each participant sends only the updated model parameters (not the raw data) back to the central server. These updates can take the form of gradients or weight deltas, which encode the direction and magnitude of the parameter changes made during local training.
    6. Aggregation and Model Update:

      • The central server aggregates the received model updates from all participants. This aggregation can be done through techniques like averaging or weighted averaging (e.g., Federated Averaging, which weights each participant's update by the size of its local dataset), combining the updates into a new global model.
    7. Iteration:

      • Steps 3-6 are repeated for multiple rounds. Each round involves distributing the current global model to participants, local training on their data, sending updates back to the central server, and aggregating these updates to improve the global model.
    8. Model Evaluation:

      • Periodically or after a certain number of iterations, the central server evaluates the performance of the global model on a validation dataset. This evaluation helps determine whether the federated learning process is improving the model's performance.
    9. Termination:

      • The federated learning process continues for a predefined number of iterations or until certain convergence criteria are met. Once the process is complete, the final global model is deployed for inference.
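    The loop described in steps 3-7 can be sketched as a minimal simulation. This is an illustrative sketch only, assuming a simple linear model, synthetic client data, and plain SGD; helper names such as `local_sgd` and `fed_avg` are made up for this example and do not come from any particular library.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "true" model and per-client local datasets (never shared).
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (50, 80, 120):  # participants with unequal local dataset sizes
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))

    def local_sgd(w, X, y, lr=0.05, epochs=5):
        """Step 4: a participant trains on its own data with gradient steps."""
        w = w.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w

    def fed_avg(updates, sizes):
        """Step 6: weighted average of client models by local dataset size."""
        weights = np.array(sizes) / sum(sizes)
        return sum(wk * uk for wk, uk in zip(weights, updates))

    # Step 1: the server initializes the global model.
    global_w = np.zeros(2)

    # Steps 3-7: distribute, train locally, send updates back, aggregate.
    for _ in range(20):  # communication rounds
        updates = [local_sgd(global_w, X, y) for X, y in clients]
        global_w = fed_avg(updates, [len(y) for _, y in clients])

    print(np.round(global_w, 2))  # converges toward true_w; raw X, y never leave the clients
    ```

    Note that only model parameters cross the network in this sketch: each client's `X` and `y` stay inside `local_sgd`, which is the core privacy property the steps above describe.
    
    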

    Taiwan AI Labs (AILabs.tw) Copyright © 2023. Powered by Bludit.