ETHICS AND GENDER IN AI: IDENTIFYING GENDER BIAS IN AI THROUGH COMPLEX THOUGHT
Abstract
This paper explores the identification and mitigation of bias in artificial intelligence (AI) models, with particular emphasis on gender bias, by applying categories drawn from Complex Thought. It also reviews best practices for ensuring fairness and inclusion in the development of AI algorithms.
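As a minimal illustration of what "identifying bias" can mean in quantitative terms, the sketch below computes two widely used group-fairness gaps for a binary classifier: the demographic parity gap (difference in positive-prediction rates between groups) and the equal opportunity gap (difference in true-positive rates). The data, group labels, and function names here are entirely synthetic and illustrative; they are not taken from the article or from any specific library.

```python
# Hypothetical sketch: measuring two common group-fairness gaps for a
# binary classifier. All data and names below are synthetic/illustrative.

def rate(preds, cond):
    """Fraction of predictions equal to 1 among positions where cond is True."""
    selected = [p for p, c in zip(preds, cond) if c]
    return sum(selected) / len(selected) if selected else 0.0

def fairness_gaps(y_true, y_pred, group):
    """Return (demographic parity gap, equal opportunity gap) between groups A and B."""
    in_a = [g == "A" for g in group]
    in_b = [g == "B" for g in group]
    # Demographic parity: P(pred=1 | A) vs P(pred=1 | B)
    dp_gap = rate(y_pred, in_a) - rate(y_pred, in_b)
    # Equal opportunity: true-positive rate per group
    # (restrict to instances whose true label is 1)
    tpr_a = rate(y_pred, [a and t == 1 for a, t in zip(in_a, y_true)])
    tpr_b = rate(y_pred, [b and t == 1 for b, t in zip(in_b, y_true)])
    return dp_gap, tpr_a - tpr_b

# Synthetic example: a classifier whose predictions favor group A
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp, eo = fairness_gaps(y_true, y_pred, group)
print(f"demographic parity gap: {dp:+.2f}")  # +0.50: group A is favored
print(f"equal opportunity gap:  {eo:+.2f}")  # +0.50: A's positives recovered more often
```

A nonzero gap on either metric is evidence, not proof, of a problematic bias: which metric is the right one to minimize depends on the deployment context, and the two can conflict with each other.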
Copyright 2024 Revista Iberoamericana de Complejidad y Ciencias Económicas
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors grant the Revista Iberoamericana de Complejidad y Ciencias Económicas the exclusive right to publish their article. Before publication, the journal may formally edit or modify the approved text to comply with its own editorial norms and with universal grammatical standards; it may also translate approved manuscripts into as many languages as it deems necessary and disseminate them in various countries, always giving public recognition to the author or authors of the research.