Bias Mitigation in Deep Learning Models for Facial Recognition

Authors

  • Dr. Tomás Alvarez, Department of Machine Learning, Universidad de Innovación, Chile

DOI:

https://doi.org/10.63345/

Keywords:

Facial recognition, deep learning, algorithmic bias, dataset diversification, adversarial debiasing

Abstract

Facial recognition systems powered by deep learning have become pervasive across domains such as security, commerce, healthcare, and digital identity verification. Despite their high accuracy under controlled conditions, numerous studies have revealed persistent demographic biases, disproportionately affecting underrepresented populations across race, gender, and age. Such disparities raise critical ethical, social, and legal concerns, undermining the legitimacy and trustworthiness of artificial intelligence applications. This paper critically investigates the root causes of bias in deep learning-based facial recognition models and systematically evaluates mitigation strategies at three levels: pre-processing through balanced datasets and synthetic augmentation, in-processing via fairness-constrained optimization and adversarial debiasing, and post-processing through calibrated score adjustments. Using benchmark datasets including LFW, CelebA, and FairFace, alongside deep architectures such as ResNet, Vision Transformers, and adversarially trained CNNs, this study demonstrates significant reductions in subgroup disparities with minimal compromise to overall accuracy. 
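As a concrete illustration of the post-processing strategy described above (calibrated score adjustments), one common approach is to pick a separate decision threshold per demographic subgroup so that all groups reach the same true-positive rate (an "equal opportunity" style calibration). The sketch below is illustrative only: the function name, data layout, and epsilon guard are assumptions, not the paper's actual procedure.

```python
def group_thresholds(scores, labels, groups, target_tpr=0.9):
    """Per-group threshold calibration (post-processing debiasing sketch).

    For each subgroup, choose the highest match-score threshold whose
    true-positive rate on genuine pairs still meets target_tpr, so that
    all groups are verified at (at least) the same rate.

    scores: match scores per pair; labels: 1 = genuine pair, 0 = impostor;
    groups: subgroup identifier per pair.
    """
    thresholds = {}
    for g in set(groups):
        # Genuine-pair scores for this group, sorted ascending.
        pos = sorted(s for s, y, gg in zip(scores, labels, groups)
                     if gg == g and y == 1)
        if not pos:
            continue
        # We may let at most floor((1 - target_tpr) * n) genuine pairs fall
        # below the threshold; the epsilon guards against float round-off.
        k = int(len(pos) * (1 - target_tpr) + 1e-9)
        thresholds[g] = pos[k]
    return thresholds
```

With `target_tpr=0.8` and five genuine pairs per group, each group's threshold lands at its own second-lowest genuine score, so a group whose genuine pairs score systematically lower simply receives a lower threshold rather than a lower acceptance rate. The usual caveat applies: per-group thresholds require the subgroup label at decision time, which is not always available or permissible.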

References

• Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), Proceedings of Machine Learning Research, 81, 1–15.

• Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology.

• Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES).

• Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*).

• Karkkainen, K., & Joo, J. (2021). FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

• Huang, G. B., Ramesh, M., Berg, T., & Learned-Miller, E. (2007). Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments (Technical Report 07-49). University of Massachusetts, Amherst.

• Liu, Z., Luo, P., Wang, X., & Tang, X. (2015). Deep learning face attributes in the wild. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 3730–3738.

• Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 815–823.

• Deng, J., Guo, J., Xue, N., & Zafeiriou, S. (2019). ArcFace: Additive angular margin loss for deep face recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4690–4699.

• Wang, M., & Deng, W. (2021). Deep face recognition: A survey. Neurocomputing, 429, 215–244.

• Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS ’12).

• Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems (NeurIPS).

• Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163.

• Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS ’17), LIPIcs, 67, 43:1–43:23.

• Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*).

• Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92.

• Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES).

• Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. Proceedings of the 35th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, 80.

• Calmon, F. P., Wei, D., Vinzamuri, B., Ramamurthy, K. N., & Varshney, K. R. (2017). Optimized pre-processing for discrimination prevention. Advances in Neural Information Processing Systems (NeurIPS).

• Morales, A., Fierrez, J., Vera-Rodriguez, R., & Tolosana, R. (2021). SensitiveNets: Learning agnostic representations with application to face images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6), 2158–2164.

Published

01-09-2025

Issue

Section

Review Article

How to Cite

Bias Mitigation in Deep Learning Models for Facial Recognition. (2025). Scientific Journal of Artificial Intelligence and Blockchain Technologies, 2(3), 36–44. https://doi.org/10.63345/
