This paper investigates fairness in machine learning models used in decision support systems for critical areas such as job screening and loan approval. Although ML algorithms are widely deployed, biased outcomes often arise from sensitive attributes such as gender and ethnicity, and from data scarcity caused by privacy or legal constraints. The study evaluates the effectiveness of SMOTE-driven oversampling methods in improving classification performance while mitigating bias. The findings demonstrate significant improvements in fairness and accuracy, offering actionable insights for researchers and practitioners seeking to build equitable, reliable, and trustworthy AI models.
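To make the core technique concrete, the following is a minimal sketch of SMOTE-style oversampling: each synthetic minority sample is created by interpolating between an existing minority sample and one of its k nearest neighbours. This is an illustrative NumPy implementation of the general SMOTE idea (Chawla et al.), not the paper's specific pipeline; the function name and parameters are chosen here for illustration.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=None):
    """Sketch of SMOTE: generate n_new synthetic minority samples by
    interpolating between each chosen sample and a random one of its
    k nearest neighbours within the minority class."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    k = min(k, n - 1)
    # Pairwise Euclidean distances within the minority class.
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    neighbours = np.argsort(dists, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                    # pick a minority sample
        j = neighbours[i, rng.integers(k)]     # pick one of its neighbours
        lam = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

# Usage: balance a toy minority class of 10 points up by 20 synthetic samples.
X_min = np.random.default_rng(0).normal(size=(10, 2))
X_new = smote(X_min, n_new=20, seed=0)
```

Because every synthetic point is a convex combination of two real minority samples, the augmented data stays within the minority class's original feature ranges, which is what lets the classifier see a balanced class distribution without drifting outside the observed region.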
Presented at ICMLT 2024: Proceedings of the 2024 9th International Conference on Machine Learning Technologies, held in Kota Kinabalu, Sabah, from 26–28 August 2024, and published on 11 September 2024.
DOI: 10.1145/3674029.3674034