FairRFL: Fair and Robust Federated Learning in the Presence of Selfish Clients
- Authors: Augello, A.; Gupta, A.; Lo Re, G.; Das Sajal, K.
- Publication year: 2026
- Type: Journal article
- OA Link: http://hdl.handle.net/10447/702105
Abstract
Federated Learning (FL) is a paradigm that enables collaborative machine learning without disclosing the participants' local data. However, in real-world FL deployments, some unscrupulous clients may alter the training process to skew the global model towards their local optimum, unfairly prioritizing their own data distribution. Their influence can degrade overall model performance for normal clients and reduce fairness in the system. We call this novel category of misbehaving clients "selfish". This work proposes a Fair and Robust aggregation strategy for the FL server to mitigate the effect of selfish clients (FairRFL). FairRFL incorporates a novel technique to recover (or estimate) the true updates from selfish clients using robust statistics, specifically the median of update norms. By including the recovered updates in the aggregation process, the presented strategy is robust against selfish behavior. Through extensive empirical evaluations on the WISDM-W and CIFAR-10 datasets, we observe that a selfish client can increase the model accuracy on its own data by up to 39% and more than quadruple the accuracy variance among clients, effects that FairRFL fully mitigates, restoring performance fairness across normal clients.
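The abstract does not specify the exact recovery procedure, but the "median of norms" idea it names can be sketched as follows: compute the L2 norm of each client's update, take the median norm across clients, and scale any over-sized update down to that median before aggregation. The function name and the clipping rule below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def median_norm_rescale(updates):
    """Illustrative sketch of median-of-norms clipping (NOT the exact
    FairRFL procedure): any client update whose L2 norm exceeds the
    median norm across clients is scaled down to the median norm,
    limiting the influence a single (selfish) client can exert."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    rescaled = []
    for u, n in zip(updates, norms):
        if n > med and n > 0:
            # Shrink the oversized update to the median norm.
            rescaled.append(u * (med / n))
        else:
            # Leave typical updates untouched.
            rescaled.append(u)
    return rescaled

# Example: three ordinary updates and one inflated ("selfish") update.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
           np.array([0.7, 0.7]), np.array([10.0, 10.0])]
clipped = median_norm_rescale(updates)
```

After clipping, a simple average of `clipped` is far less dominated by the fourth client than an average of the raw `updates` would be; the median is used because, unlike the mean, it is not pulled upward by a minority of inflated norms.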
