SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation

Junyu Yan1, Feng Chen1, Yuyang Xue1, Yuning Du1, Konstantinos Vilouras1, Sotirios A. Tsaftaris1, Steven McDonagh1
1: School of Engineering, University of Edinburgh, EH9 3FB Edinburgh, UK
Publication date: 2025/08/21
https://doi.org/10.59275/j.melba.2025-de23

Abstract

Recent studies have shown that Machine Learning (ML) models can exhibit bias in real-world scenarios, posing significant challenges in ethically sensitive domains such as healthcare. Such bias can degrade model fairness and generalization ability, and further risks amplifying social discrimination, creating a need to remove biases from trained models. Existing debiasing approaches often require access to the original training data and extensive model retraining, and they typically exhibit trade-offs between model fairness and discriminative performance. To address these challenges, we propose Soft-Mask Weight Fine-Tuning (SWiFT), a debiasing framework that efficiently improves fairness while preserving discriminative performance at much lower debiasing cost. Notably, SWiFT requires only a small external dataset and a few epochs of model fine-tuning. The idea behind SWiFT is to first estimate the relative, yet distinct, contributions of model parameters to both bias and predictive performance. A two-step fine-tuning process then updates each parameter with a different gradient flow defined by its contribution. Extensive experiments with three bias-sensitive attributes (gender, skin tone, and age) across four dermatological and two chest X-ray datasets demonstrate that SWiFT consistently reduces model bias while achieving competitive or even superior diagnostic accuracy under common fairness and accuracy metrics, compared to the state of the art. In particular, we demonstrate improved model generalization ability, evidenced by superior performance on several out-of-distribution (OOD) datasets. Our code will be made public upon acceptance.
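To make the contribution-weighted fine-tuning idea concrete, the following is a minimal, hypothetical PyTorch-style sketch of soft-masked gradient updates. It is not the authors' implementation: the importance scores, the mask formula in `soft_masks`, and the loss functions are illustrative assumptions, standing in for whatever bias and performance contribution measures the paper actually defines.

```python
# Hypothetical sketch of soft-mask weight fine-tuning (not the SWiFT implementation).
import torch

def gradient_importance(model, loss_fn, batch):
    """Per-parameter importance, proxied here by the gradient magnitude of a given loss."""
    model.zero_grad()
    x, y = batch
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach().abs()
            for n, p in model.named_parameters() if p.grad is not None}

def soft_masks(task_scores, bias_scores, eps=1e-12):
    """Soft mask in [0, 1]: larger where a parameter contributes more to bias than to the task.
    The ratio used here is an illustrative choice, not the paper's formula."""
    return {n: bias_scores[n] / (task_scores[n] + bias_scores[n] + eps)
            for n in task_scores}

def masked_finetune_step(model, optimizer, loss_fn, batch, masks):
    """One fine-tuning step where each parameter's gradient is rescaled by its soft mask."""
    model.zero_grad()
    x, y = batch
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for n, p in model.named_parameters():
            if p.grad is not None and n in masks:
                p.grad.mul_(masks[n])
    optimizer.step()
```

In this sketch, parameters whose mask is near 1 (dominated by the bias score) receive nearly full updates during debiasing, while parameters important for predictive performance are largely preserved; how SWiFT actually scores contributions and routes gradients in its two fine-tuning steps is specified in the paper itself.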

Keywords

Algorithmic Fairness · Bias Identification · Bias Removal · Mask Fine-tuning

Bibtex
@article{melba:2025:015:yan,
    title = "SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation",
    author = "Yan, Junyu and Chen, Feng and Xue, Yuyang and Du, Yuning and Vilouras, Konstantinos and Tsaftaris, Sotirios A. and McDonagh, Steven",
    journal = "Machine Learning for Biomedical Imaging",
    volume = "3",
    issue = "Special issue on FAIMI",
    year = "2025",
    pages = "347--366",
    issn = "2766-905X",
    doi = "https://doi.org/10.59275/j.melba.2025-de23",
    url = "https://melba-journal.org/2025:015"
}

RIS
TY  - JOUR
AU  - Yan, Junyu
AU  - Chen, Feng
AU  - Xue, Yuyang
AU  - Du, Yuning
AU  - Vilouras, Konstantinos
AU  - Tsaftaris, Sotirios A.
AU  - McDonagh, Steven
PY  - 2025
TI  - SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation
T2  - Machine Learning for Biomedical Imaging
VL  - 3
IS  - Special issue on FAIMI
SP  - 347
EP  - 366
SN  - 2766-905X
DO  - https://doi.org/10.59275/j.melba.2025-de23
UR  - https://melba-journal.org/2025:015
ER  -
