THE INTERSECTION BETWEEN ARTIFICIAL INTELLIGENCE (AI), COMPLEX THINKING AND BIAS AUDIT METHODOLOGY
Abstract
This paper explores the intersection between artificial intelligence (AI), complex thinking, and bias auditing methodology, focusing on how these approaches can together support the ethical and equitable development of AI algorithms. The study examines the sources of bias in AI models, the methods used to detect it, and the mitigation strategies aimed at improving the fairness and justice of these systems. It also discusses the importance of complex thinking for understanding the multiple dimensions and interconnections of algorithmic biases, and the crucial role that bias auditing plays in identifying and correcting these injustices.
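As a concrete illustration of the kind of bias detection the abstract refers to, the following minimal Python sketch checks demographic parity over a model's binary decisions. It is not the paper's own methodology; the function name, the four-fifths threshold, and the example data are illustrative assumptions.

import numpy as np

def demographic_parity_audit(y_pred, group, threshold=0.8):
    """Compare favorable-decision rates across protected groups.

    y_pred    : iterable of binary model decisions (1 = favorable outcome)
    group     : iterable of protected-attribute labels (e.g. "A", "B")
    threshold : disparate-impact ratio below which the model is flagged
                (0.8 echoes the common "four-fifths" rule of thumb)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)

    # Favorable-decision rate for each group
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

    # Disparate-impact ratio: worst-treated group rate / best-treated group rate
    ratio = min(rates.values()) / max(rates.values())

    return {"rates": rates, "disparate_impact_ratio": ratio, "flagged": ratio < threshold}

# Hypothetical decisions for two demographic groups
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_audit(decisions, groups))

In practice, a check like this would be only one element of a broader audit, combined with other fairness metrics (such as equalized odds) and with qualitative review of data provenance and model documentation.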
Copyright (c) 2024 Iberoamerican Journal of Complexity and Economics Sciences
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors transfer exclusively to the Iberoamerican Journal of Complexity and Economics Sciences the right to publish their article; before publication, the journal may formally edit or modify the approved text to comply with its own editorial regulations and with standard grammatical conventions. Likewise, the journal may translate approved manuscripts into as many languages as it deems necessary and disseminate them in various countries, always giving public recognition to the author or authors of the research.