Bias and fairness in transfer learning
Abstract
Transfer learning uses knowledge from one task to improve performance and reduce training time on a related task. However, recent studies highlight a critical issue: the fairness of models trained with transfer learning. One study showed that transfer learning can carry intentionally planted biases from the source task into the target task. This thesis explores a different but equally critical problem: whether transfer learning can introduce new biases, or amplify existing ones, in the target task. Our investigation reveals that transfer learning can introduce varying degrees of bias in the target task that were not present in the source task. We examined two applications that commonly use transfer learning. In both cases, transfer learning increased bias with respect to sex, age, and race compared to models trained from scratch, which achieved nearly the same accuracy. These results emphasize the need to understand the limitations and risks of transfer learning, especially in high-risk domains such as healthcare and security, and call for further research into the conditions under which transfer learning introduces and amplifies bias.
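The comparison described above can be sketched with a simple fairness metric. The snippet below is an illustrative example, not the thesis's actual evaluation: it computes the largest gap in accuracy between demographic groups, then compares that gap for a (hypothetical) transfer-learned model's predictions against a (hypothetical) from-scratch model's predictions. All data and predictions are toy values invented for the example.

```python
# Hypothetical sketch: comparing group-wise accuracy gaps for a
# transfer-learned model vs. a model trained from scratch.
# All labels, groups, and predictions below are illustrative toy data,
# not results from the thesis.

def group_accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two demographic groups."""
    accs = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        accs[g] = correct / len(idx)
    return max(accs.values()) - min(accs.values())

# Toy ground-truth labels for eight samples from two groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A model inheriting source-task bias: accurate on group A, poor on B.
pred_transfer = [1, 0, 1, 1, 1, 0, 1, 0]
# A from-scratch model: slightly less accurate, but evenly across groups.
pred_scratch = [1, 0, 1, 0, 0, 1, 0, 1]

print(group_accuracy_gap(y_true, pred_transfer, groups))  # large gap
print(group_accuracy_gap(y_true, pred_scratch, groups))   # no gap
```

In this toy setup, the transfer-learned predictions show a large accuracy gap between groups, while the from-scratch predictions show none despite comparable overall accuracy, mirroring the kind of disparity the thesis investigates.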