Federated unlearning (FUL) is a promising solution for removing negative influences from the global model. However, ensuring the reliability of local models in federated learning (FL) systems remains challenging. Existing FUL studies mainly focus on eliminating the influence of bad data, neglecting scenarios where other factors, such as adversarial attacks and communication constraints, also introduce negative influences that require mitigation. In this paper, we introduce Local Model Refining (LMR), a FUL method designed to address the negative impacts of bad data as well as other factors on the global model. LMR consists of three components: (i) identifying unreliable local models and categorizing them into two classes according to the source of their negative influence, bad data or other factors; (ii) Bad Data Influence Unlearning (BDIU), a client-side algorithm that identifies the affected layers of an unreliable model and applies gradient ascent to mitigate the influence of bad data, with boosting training applied when specific conditions are met; and (iii) Other Influence Unlearning (OIU), a server-side algorithm that identifies the unaffected parameters of an unreliable local model and combines them with the corresponding parameters of the previous global model to construct the updated local model. Finally, LMR aggregates the updated local models with the remaining local models to produce the unlearned global model. Extensive evaluation on the MNIST, FMNIST, CIFAR-10, and CelebA datasets shows that LMR improves accuracy and accelerates average unlearning speed by 5x relative to the compared methods.
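The abstract describes the two unlearning routines only at a high level; the following is a minimal sketch of what they might look like in PyTorch, under stated assumptions. The helper names, the restriction of gradient ascent to a supplied list of affected layers, and the parameter-merging rule are illustrative placeholders, not the paper's actual procedure.

```python
# Illustrative sketch of client-side BDIU and server-side OIU as summarized
# in the abstract. All names and selection criteria are assumptions.
import copy
import torch


def bdiu_unlearn(local_model, bad_loader, affected_layers, lr=0.01, steps=1):
    """Client-side BDIU sketch: gradient ascent on the bad data,
    restricted to layers identified as affected (assumed given)."""
    params = [p for n, p in local_model.named_parameters()
              if any(n.startswith(layer) for layer in affected_layers)]
    opt = torch.optim.SGD(params, lr=lr)
    local_model.train()
    for _ in range(steps):
        for x, y in bad_loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(local_model(x), y)
            (-loss).backward()  # ascend the loss to forget the bad data
            opt.step()
    return local_model


def oiu_update(unreliable_model, prev_global_model, unaffected_params):
    """Server-side OIU sketch: keep the unreliable model's unaffected
    parameters and take the remainder from the previous global model."""
    updated = copy.deepcopy(prev_global_model)
    u_state = unreliable_model.state_dict()
    g_state = updated.state_dict()
    for name in g_state:
        if name in unaffected_params:  # parameter deemed unaffected
            g_state[name] = u_state[name]
    updated.load_state_dict(g_state)
    return updated
```

In this sketch, the models returned by either routine would then be aggregated with the remaining local models (e.g., by federated averaging) to form the unlearned global model; the paper's boosting-training step and its criteria for identifying affected layers and unaffected parameters are not reproduced here.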
Keywords: Bad data removal; Distributed learning; Federated learning; Federated unlearning; Parameter updating.