Abstract
Artificial Intelligence (AI) is reshaping Human Resource Management (HRM) by enabling data-driven decision-making across recruitment, performance evaluation, employee engagement, and workforce analytics. While these advances enhance operational efficiency and strategic outcomes, they also introduce significant ethical challenges that warrant systematic examination. This study presents a systematic review of the ethical implications of AI adoption in HRM, focusing on algorithmic bias, transparency, accountability, and data privacy. A structured search of major academic databases, including Scopus, Web of Science, and Google Scholar, was conducted using terms such as 'AI in HRM,' 'algorithmic bias,' 'AI ethics,' and 'HR governance,' yielding a final corpus of 45 peer-reviewed studies published between 2015 and 2024. The review shows that AI-driven recruitment and decision-making systems can inherit and amplify biases embedded in historical datasets, producing discriminatory outcomes for certain demographic groups. Moreover, the opacity of advanced AI models limits explainability and undermines stakeholder trust. The proliferation of AI-enabled employee monitoring technologies raises further concerns about data privacy and individual autonomy. In response, the study examines ethical governance mechanisms, including fairness auditing, Explainable AI (XAI) techniques, and regulatory compliance frameworks, and underscores the critical role of HR professionals in ensuring responsible AI implementation. The review contributes to the literature by synthesizing current knowledge and proposing practical strategies for ethical and sustainable AI integration within HRM.