References
• Binns, R. (2018). “Fairness in machine learning: Lessons from political philosophy”. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 149-159.
• Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings”. Advances in Neural Information Processing Systems, 29.
• Buolamwini, J., & Gebru, T. (2018). “Gender shades: Intersectional accuracy disparities in commercial gender classification”. Proceedings of the 1st Conference on Fairness, Accountability, and Transparency, 77-91.
• Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
• De-Arteaga, M., Romanov, A., Wallach, H., Chayes, J., Borgs, C., Chouldechova, A., ... & Geyik, S. (2019). “Bias in bios: A case study of semantic representation bias in a high-stakes setting”. Proceedings of the Conference on Fairness, Accountability, and Transparency, 120-128.
• Doshi-Velez, F., & Kim, B. (2017). “Towards a rigorous science of interpretable machine learning”. arXiv:1702.08608v2.
• Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). “Fairness through awareness”. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214-226.
• Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
• Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
• Hardt, M., Price, E., & Srebro, N. (2016). “Equality of opportunity in supervised learning”. Advances in Neural Information Processing Systems, 29, 3315-3323.
• Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). “Improving fairness in machine learning systems: What do industry practitioners need?”. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-16.
• Hupfer, S., O’Rourke, E., Park, J. S., Young, M., & Choi, J. (2020). “The Gendered Design of AI Assistants: Speaking, Serving, and Gender Stereotypes”. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-13.
• Jobin, A., Ienca, M., & Vayena, E. (2019). “The global landscape of AI ethics guidelines”. Nature Machine Intelligence, 1(9), 389-399.
• Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). “A survey on bias and fairness in machine learning”. ACM Computing Surveys (CSUR), 54(6), 1-35.
• Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019). “Model cards for model reporting”. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220-229.
• Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
• Raji, I. D., & Buolamwini, J. (2019). “Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products”. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429-435.