ETHICS AND GENDER IN AI: IDENTIFYING GENDER BIAS IN AI THROUGH COMPLEX THINKING
Abstract
This work explores the identification and mitigation of biases in artificial intelligence (AI) models, especially gender biases, by applying categories from Complex Thinking. It also examines best practices to ensure fairness and inclusion in the development of AI algorithms.
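To make the kind of bias identification described above concrete, the following is a minimal, hypothetical sketch, not code from the article. It computes two common fairness diagnostics for a binary classifier: the demographic parity difference (the gap in positive-prediction rates between two gender groups) and the equal opportunity gap (the gap in true-positive rates). The toy data and the simulated "biased hiring model" are illustrative assumptions.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between two groups."""
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    def equal_opportunity_gap(y_true, y_pred, group):
        """Absolute gap in true-positive rates between two groups."""
        tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
        return abs(tpr_a - tpr_b)

    # Toy data: hypothetical decisions from a hiring model.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)   # protected attribute (0/1)
    y_true = rng.integers(0, 2, size=1000)  # ground truth: qualified or not
    # Simulate a model biased in favor of group 0: it accepts every
    # qualified group-0 candidate but only ~60% of qualified group-1 ones.
    y_pred = ((y_true == 1) & ((group == 0) | (rng.random(1000) < 0.6))).astype(int)

    print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))

On this simulated data both gaps are far from zero, which is the signal such audits look for; a value near zero on real data would suggest the model treats the two groups similarly on these specific criteria.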
Copyright (c) 2024 Iberoamerican Journal of Complexity and Economics Sciences

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors transfer the exclusive right to publish their article to the Iberoamerican Journal of Complexity and Economics Sciences, which may formally edit or modify the approved text before publication to comply with its own editorial regulations and with universal grammatical standards. Likewise, the journal may translate approved manuscripts into as many languages as it deems necessary and disseminate them in various countries, always giving public recognition to the author or authors of the research.