Attempting a Machine Learning (ML) Fairness Study on Detecting Bias in Facial Recognition Algorithms
Dhairya Kulnath Kakkar
Lancer's Convent School, Paschim Vihar, Rohini
Vol. 9, Issue 1, Jan-Dec 2023 | Page: 555–560
Received: 14-02-2023, Accepted: 05-04-2023, Published Online: 28-05-2023
Abstract
Modern facial recognition (FR) systems are increasingly deployed in critical applications such as law enforcement, access control, and marketing. However, biases embedded in these systems have raised alarm because of disparate performance across demographic groups. This study investigates the presence and extent of bias in commercial and open-source FR models, evaluates bias-detection methods, and analyzes mitigation strategies. Using benchmark datasets (e.g., MORPH, FairFace, and CelebA) annotated with race, gender, and age, we assess model accuracy, false match rate (FMR), and false non-match rate (FNMR) across subgroups. We apply fairness metrics including demographic parity, equalized odds, and disparate impact. We summarize results in comprehensive tables, conduct comparative analysis, and review the efficacy of debiasing interventions such as data balancing, adversarial training, and fairness-constrained optimization. Our findings reveal consistent performance gaps (e.g., up to 15% lower accuracy for dark-skinned females than for light-skinned males) and demonstrate that combined strategies yield the most consistent improvements. Guidelines for fairness-aware FR deployment are proposed to aid researchers and practitioners.
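To make the measurement side concrete, the sketch below shows one way the per-subgroup error rates and fairness gauges named above could be computed. The function names (`fmr_fnmr`, `fairness_report`), the synthetic scores, and the use of overall acceptance rate as the "outcome" for the parity and disparate-impact gauges are illustrative assumptions, not the study's actual evaluation pipeline.

```python
import numpy as np

def fmr_fnmr(scores, labels, threshold):
    """FMR and FNMR at a fixed decision threshold.
    scores: pairwise similarity scores; labels: 1 = genuine pair, 0 = impostor."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    fmr = np.mean(scores[labels == 0] >= threshold)   # impostor pairs wrongly accepted
    fnmr = np.mean(scores[labels == 1] < threshold)   # genuine pairs wrongly rejected
    return fmr, fnmr

def fairness_report(scores, labels, groups, threshold):
    """Per-subgroup error rates plus two of the fairness gauges used in the study.
    groups: one demographic tag per pair, e.g. "dark-skinned female"."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    per_group = {}
    for g in np.unique(groups):
        m = groups == g
        per_group[g] = fmr_fnmr(scores[m], labels[m], threshold)
    # Demographic-parity gap: spread in overall acceptance rate across groups.
    accept = {g: np.mean(scores[groups == g] >= threshold) for g in per_group}
    parity_gap = max(accept.values()) - min(accept.values())
    # Disparate-impact ratio: worst-off group's acceptance rate over the best's
    # (the EEOC "four-fifths rule" flags ratios below 0.8).
    di_ratio = min(accept.values()) / max(accept.values())
    return per_group, parity_gap, di_ratio

# Toy usage with synthetic scores (illustration only, not study data).
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 1000)
labels = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)
print(fairness_report(scores, labels, groups, threshold=0.5))
```

In practice the same report would be generated once per model and per dataset, with the threshold fixed in advance (e.g., at a target FMR) so that subgroup comparisons are made at a common operating point.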
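The mitigation side can be sketched just as briefly. Below is a minimal, assumed form of the data-balancing intervention the abstract mentions: inverse-frequency sample weights that equalize each demographic subgroup's contribution to the training loss. It illustrates the general technique, not the study's implementation.

```python
import numpy as np

def balancing_weights(group_labels):
    """Inverse-frequency sample weights: each subgroup contributes
    equally to the weighted training loss, regardless of its size."""
    groups, counts = np.unique(group_labels, return_counts=True)
    freq = dict(zip(groups, counts / len(group_labels)))
    return np.array([1.0 / (len(groups) * freq[g]) for g in group_labels])

# Example: a skewed dataset where one subgroup dominates 4:1.
labels = ["light_male"] * 800 + ["dark_female"] * 200
w = balancing_weights(labels)
print(w[0], w[-1])   # 0.625 vs 2.5: minority samples get larger weight
```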