Gender bias in AI-based decision-making systems: a systematic literature review

Authors

  • Ayesha Nadeem, University of Technology Sydney
  • Olivera Marjanovic, University of Technology Sydney
  • Babak Abedin, Macquarie University, Sydney

DOI:

https://doi.org/10.3127/ajis.v26i0.3835

Keywords:

Artificial intelligence, Fairness, Gender bias

Abstract

The related literature and industry press suggest that artificial intelligence (AI)-based decision-making systems may exhibit gender bias, which in turn affects individuals and societies. The information systems (IS) field has recognised the far-reaching effects of AI-based outcomes; however, there is a lack of IS research on the management of gender bias in AI-based decision-making systems and its adverse effects, even as concern about such bias continues to grow. In particular, a better understanding is needed of the factors that contribute to gender bias in AI-based decision-making systems and of effective approaches to mitigating it. This study therefore contributes to the existing literature by conducting a Systematic Literature Review (SLR) of the extant research and presenting a theoretical framework for the management of gender bias in AI-based decision-making systems. The SLR results indicate that research on gender bias in AI-based decision-making systems is not yet well established, highlighting the great potential for future IS research in this area, as articulated in the paper. Based on this review, we conceptualise gender bias in AI-based decision-making systems as a socio-technical problem and propose a theoretical framework that combines technological, organisational, and societal approaches, together with four propositions for mitigating its biased effects. Lastly, this paper considers future research on the management of gender bias in AI-based decision-making systems in the organisational context.

Author Biographies

Olivera Marjanovic, University of Technology Sydney

Head of School, School of Professional Practice and Leadership, University of Technology Sydney, Australia

Babak Abedin, Macquarie University, Sydney

References

Agarwal, P. (2020). Gender bias in STEM: Women in tech still facing discrimination. Forbes. Retrieved from https://www.forbes.com/sites/pragyaagarwaleurope/2020/03/04/gender-bias-in-stem-women-in-tech-report-facing-discrimination/?sh=72c9e78670fb

Ahn, Y., Lin, Y. R. (2020). FairSight: Visual analytics for fairness in decision making. IEEE Transactions on Visualization and Computer Graphics, 26(1). doi: https://doi.org/10.1109/TVCG.2019.2934262

Altman, M., Wood, A., Vayena, E. (2018). A harm-reduction framework for algorithmic fairness. IEEE Security & Privacy, 16(3), 34-45.

Arrieta, A. B., Diaz-Rodriguez, N., Ser, J. D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities, and challenges towards responsible AI. Information Fusion, 58, 82-115.

Bandara, W., Miskon, S., Fielt, E. (2011). A systematic, tool-supported method for conducting literature reviews in information systems. In Proceedings of the 19th European Conference on Information Systems, Helsinki, Finland. https://aisel.aisnet.org/ecis2011/221

Baskerville, R. L., Myers, M. D., & Yoo, Y. (2020). Digital first: The ontological reversal and new challenges for IS. MIS Quarterly, 44(2), 509-523. doi: 10.25300/MISQ/2020/14418

Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., Zhang, Y. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. IBM Journal of Research and Development. Retrieved from https://arxiv.org/pdf/1810.01943.pdf

Berente, N., Seidel, S., Safadi, H. (2019). Data-Driven Computationally-Intensive Theory Development. Information Systems Research, 30(1), 50-64.

Berente, N., Gu, B., Recker, J., Santhanam, R. (2019). Managing AI: Special issue. MIS Quarterly, 45(3), 1433-1450.

Benjamin, R. (2019). Assessing risk, automating racism: A health care algorithm reflects underlying racial bias in society. Science, 366(6464), 421-422. doi: 10.1126/science.aaz3873

Berk, R., Heidari, H., Jabbari, S., Kearns, M., Roth, A. (2018). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1). https://doi.org/10.1177/0049124118782533

Bolukbasi, T., Chang, K.W., Zou, J., Saligrama, V., Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems (pp. 4356–4364). https://dl.acm.org/doi/10.5555/3157382.3157584

Borges, A. F. S., Laurindo, F.J.B., Spinola, M. M., Goncalves, R. F., Mattos, C.A. (2021). The strategic use of artificial intelligence in the digital era: systematic literature review and future research directions. International Journal of Information Management, 57. doi: https://doi.org/10.1016/j.ijinfomgt.2020.102225

Canetti, R., Cohen, A., Dikkala, N., Ramnarayan, G., Scheffler, S., Smith, A. (2019). From soft classifiers to hard decisions: How fair can we be? In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 309-318). doi: https://doi.org/10.1145/3287560.3287561

Cao, J., Basoglu, K., Sheng, H., Lowry, P. (2015). A systematic review of social network research in information systems: Building a foundation for exciting future research. Communications of the Association for Information Systems, 36(37), 727-758.

Caplan, R., Donovan, J., Hanson, L., Matthews, J. (2018). Algorithmic accountability: A primer. Prepared for the Congressional Progressive Caucus. Data & Society, Washington, DC, USA. Retrieved from https://datasociety.net/library/algorithmic-accountability-a-primer/

Chen, I. Y., Szolovits, P., Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), 167-179.

Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino, S., Gigante, A., Valencia, A., Rementeria, M. J., Chadha, A. S., Mavridis, N. (2020). Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. npj Digital Medicine, 3, 81. Retrieved from https://www.nature.com/articles/s41746-020-0288-5

Clifton, J., Glasmeier, A., Gray, M. (2020). When machines think for us: The consequences for work and place. Cambridge Journal of Regions, Economy, and Society, 13(1), 3-23.

Costa, P., Ribas, L. (2019). AI becomes her: Discussing gender and artificial intelligence. Technoetic Arts: A Journal of Speculative Research, 17(2), 171-193.

Conboy, K., Crowston, K., Lundstrom, J. E., Jarvenpaa, S., Ram, S., Mikalef, P. (2022). Artificial intelligence in information systems: State of the art and research roadmap. Communications of the Association for Information Systems, 50, 420-438.

Collins, C., Dennehy, D., Conboy, K., Mikalef, P. (2021). Artificial intelligence in information system research: A systematic literature review and research agenda. International Journal of Information Management, 60. doi: https://doi.org/10.1016/j.ijinfomgt.2021.102383

Crawford, K. (2016). A.I.’s white guy problem. The New York Times. Retrieved from https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

Daugherty, P., Wilson, H., Chowdhury, R. (2018). Using artificial intelligence to promote diversity. MIT Sloan Management Review. Retrieved from https://sloanreview.mit.edu/article/using-artificial-intelligence-to-promote-diversity/

Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J., Hajkowicz, S. (2019). Artificial Intelligence: Australia’s Ethics Framework. Data61 CSIRO, Australia. Retrieved from https://www.csiro.au/en/research/technology-space/ai/ai-ethics-framework

Dwivedi, Y. K., Hughes, L., …, Williams, M. D. (2019). Artificial intelligence: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57(7). doi: https://doi.org/10.1016/j.ijinfomgt.2019.08.002

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. Retrieved from https://dl.acm.org/doi/10.5555/3208509

European Union (2021). Europe fit for the digital age: Commission proposes new rules and actions for excellence and trust in artificial intelligence. Available at https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682

Feuerriegel, S., Dolata, M., Schwabe, G. (2020). Fair AI: Challenges and opportunities. Business and Information Systems Engineering, 62(4), 379-384.

Feast, J. (2019). 4 ways to address gender bias in AI. Harvard Business Review. Retrieved from https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai

Galleno, A., Krentz, M., Tsusaka, M., Yousif, N. (2019). How AI could help or hinder women in the workforce. Boston Consulting Group. Retrieved from https://www.bcg.com/publications/2019/artificial-intelligence-ai-help-hinder-women-workforce

Grari, V., Ruf, B., Lamprier, S., Detyniecki, M. (2020). Achieving fairness with decision trees: An adversarial approach. Data Science and Engineering, 5(2), 99-110.

Hardt, M., Price, E., Srebro, N. (2016). Equality of opportunity in supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems. Retrieved from https://proceedings.neurips.cc/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper

Hayes, P., Poel, I. V. D., Steen, M. (2020). Algorithms and values in justice and security. AI & Society, 35(3), 533-555.

Hoffmann, A.L. (2019). Where fairness fails: data, algorithms, and the limits of anti-discrimination discourse. Information, Communication & Society, 22(7), 900-915.

ICO (2018). What is automated individual decision-making and profiling? UK Information Commissioner’s Office, 1-23. Retrieved from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-dataprotection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decisionmaking-and-profiling/#id2

Ibrahim, S. A., Charlson, M. E., Neill, D. B. (2020). Big data analytics and the struggle for equity in healthcare: The promise and perils. Health Equity, 4(1), 99-101.

Johnson, K. N. (2019). Automating the risk of bias. George Washington Law Review, 87(6). Retrieved from https://www.gwlr.org/wp-content/uploads/2020/01/87-Geo.-Wash.-L.-Rev.-1214.pdf

Jobin, A., Ienca, M., Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389-399.

Kaplan, A., Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.

Kitchenham, B., Budgen, D., Brereton, O. P. (2011). Using mapping studies as the basis for further research – A participant-observer case study. Information and Software Technology, 53(6), 638-651.

Kordzadeh, N., Ghasemaghaei, M. (2021). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388-409.

Kyriazanos, D. M., Thanos, K. G., Thomopoulos, S. C. A. (2019). Automated decision making in airport checkpoints: Bias detection toward smarter security and fairness. IEEE Security & Privacy, 17(2), 8-16.

Lambrecht, A., Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966-2981.

Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, Gothenburg, Sweden. Retrieved from https://ieeexplore.ieee.org/document/8452744

Lee, N. T. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication, and Ethics in Society, 16 (3), 252-260.

Marabelli, M., Newell, S., Handunge, V. (2021). The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. Journal of Strategic Information Systems, 30(3). doi: https://doi.org/10.1016/j.jsis.2021.101683

Markus, M. L. (2017). Datification, organizational strategy, and IS research: What’s the score? The Journal of Strategic Information Systems, 26(3), 233-241. doi: https://doi.org/10.1016/j.jsis.2017.08.003

Martinez, C. F., Fernandez, A. (2020). AI and recruiting software: Ethical and legal implications. Paladyn, Journal of Behavioral Robotics, 11(1). doi: https://doi.org/10.1515/pjbr-2020-0030

Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160, 835-850.

Marjanovic, O., Cecez-Kecmanovic, D., Vidgen, R. (2021). Algorithmic pollution: Making the invisible visible. Journal of Information Technology, 36(3), 391-408.

Masiero, S., Aaltonen, A. (2020). Gender bias in IS research: A literature review. In Proceedings of the 41st International Conference on Information Systems, Hyderabad, India. doi: http://dx.doi.org/10.2139/ssrn.3751440

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A. (2019). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6). https://doi.org/10.1145/3457607

Mikolov, T., Chen, K., Corrado, G. S., Dean, J. (2013). Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations. Retrieved from https://storage.googleapis.com/pub-tools-public-publication-data/pdf/41224.pdf

Miron, M., Tolan, S., Gomez, E., Castillo, C. (2020). Evaluating causes of algorithmic bias in juvenile criminal recidivism. Artificial Intelligence and Law, 29(2), 111-147.

Nadeem, A., Abedin, B., Marjanovic, O. (2020). Gender bias in AI: A review of contributing factors and mitigating strategies. In Proceedings of the Australasian Conference on Information Systems, Wellington, New Zealand. https://aisel.aisnet.org/acis2020/27

Nadeem, A., Marjanovic, O., Abedin, B. (2021). Gender bias in AI: Implications for managerial practices. In Proceedings of I3E 2021: Responsible AI and Analytics for an Ethical and Inclusive Digitized Society. Retrieved from https://link.springer.com/chapter/10.1007/978-3-030-85447-8_23

Niehaus, F., Wiesche, M. (2021). A socio-technical perspective on organizational interaction with AI: A literature review. In Proceedings of the 29th European Conference on Information Systems. Retrieved from https://aisel.aisnet.org/ecis2021_rp/156

Ntoutsi, E., Fafalios, P., …, Staab, S. (2019). Bias in data-driven artificial intelligence systems – An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). doi: https://doi.org/10.1002/widm.1356

Noriega, M. (2020). The application of artificial intelligence in police interrogations: An analysis addressing the proposed effect AI has on racial and gender bias, cooperation, and false confessions. Futures, 117. doi: 10.1016/j.futures.2019.102510

Paré, G., Trudel, M.-C., Jaana, M., Kitsiou, S. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information & Management, 52(2), 183-199.

Parikh, R. B., Teeple, S., Navathe, A. M. (2019). Addressing bias in artificial intelligence in health care. JAMA. Retrieved from https://pubmed.ncbi.nlm.nih.gov/31755905/

Parsheera, S. (2018). A gendered perspective on artificial intelligence. In Proceedings of ITU Kaleidoscope: Machine Learning for a 5G Future. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3374955

Petersen, K., Vakkalanka, S., Kuzniarz, L. (2015). Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology, 64, 1-18.

Paulus, J. K., Kent, D. M. (2020). Predictably unequal: Understanding and addressing concerns that algorithmic clinical prediction may increase health disparities. npj Digital Medicine, 3, 99. https://doi.org/10.1038/s41746-020-0304-9

Piano, S. L. (2020). Ethical principles in machine learning and artificial intelligence: Cases from the field and possible ways forward. Humanities and Social Sciences Communications. Retrieved from https://www.nature.com/articles/s41599-020-0501-9

Prates, M., Avelar, P., Lamb, L. C. (2018). Assessing gender bias in machine translation – A case study with Google Translate. Neural Computing and Applications, 32, 6363-6381.

Qureshi, B., Kamiran, F., Karim, A., Ruggieri, S., Pedreschi, D. (2020). Causal inference for social discrimination reasoning. Journal of Intelligent Information Systems, 54, 425-437.

Robert, L. P., Pierce, C., Marquis, L., Kim, S., Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Human-Computer Interaction, 35(5-6). doi: https://doi.org/10.1080/07370024.2020.1735391

Robnett, R. D. (2015). Gender bias in STEM fields: Variation in prevalence and links to STEM self-concept. Psychology of Women Quarterly, 40(1). https://doi.org/10.1177/0361684315596162

Sarker, S., Chatterjee, S., Xiao, X., Elbanna, A. (2019). The sociotechnical axis of cohesion for the IS discipline: Its historical legacy and its continued relevance. MIS Quarterly, 43(3), 695-719.

Sun, T., Gaut, A., Tang, S., Huang, Y., Elsherief, M., Zhao, J., Mirza, D., Belding, E., Chang, K. W., Wang, W. Y. (2019). Mitigating gender bias in natural language processing: A literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. doi: 10.18653/v1/P19-1159

Soleimani, M., Intezari, A., Pauleen, D. J. (2021). Mitigating cognitive biases in developing AI-assisted recruitment systems: A knowledge-sharing approach. International Journal of Knowledge Management, 18(1). doi: 10.4018/IJKM.290022

Schonberger, D. (2019). Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. International Journal of Law and Information Technology, 27 (2), 171-203.

Teodorescu, M., Morse, L., Awwad, Y., Kane. G. (2021). Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation. MIS Quarterly, 45(3), 1483-1500.

Thelwall, M. (2017). Gender bias in machine learning for sentiment analysis. Online Information Review, 42(3), 343-354.

Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N., Manser, E. (2019). Considerations for AI fairness for people with disabilities. AI Matters, 5(3). doi: 10.1145/3362077.3362086

United Nations Educational, Scientific and Cultural Organization (UNESCO) (2020). Artificial intelligence and gender equality: Key findings of UNESCO’s global dialogue. Available at https://unesdoc.unesco.org/ark:/48223/pf0000374174

Veale, M., Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 1-17. doi: https://doi.org/10.1177/2053951717743530

Wang, L. (2020). The three harms of gendered technology. Australasian Journal of Information Systems, 24. doi: https://doi.org/10.3127/ajis.v24i0.2799

West, S. M., Whittaker, M., Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html

Webster, J., Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, 26(2), xiii-xxiii.

Wolfswinkel, J., Furtmueller, E., Wilderom, C. (2013). Using grounded theory as a method for rigorously reviewing literature. European Journal of Information Systems, 22(1), 45-55.

Wu, W., Huang, T., Gong, K. (2019). Ethical principles and governance technology development of AI in China. Engineering, 6(3), 302-309.

Zhong, Z. (2018). A tutorial on fairness in machine learning, Towards Data Science. https://towardsdatascience.com/a-tutorial-on-fairness-in-machine-learning-3ff8ba1040

Published

2022-12-21

How to Cite

Nadeem, A., Marjanovic, O., & Abedin, B. (2022). Gender bias in AI-based decision-making systems: a systematic literature review. Australasian Journal of Information Systems, 26. https://doi.org/10.3127/ajis.v26i0.3835

Issue

Vol. 26 (2022)

Section

Selected Papers from the Australasian Conference on Information Systems (ACIS)