Final Reflection: Bias in Algorithms

Lily Gallagher
1 min read · May 31, 2021

I gained a bit of a pessimistic attitude this past week because removing bias from algorithms feels like an impossible task. Although many solutions were offered in the TED Talks we watched, none strikes me as a fool-proof way to eliminate the implicit bias so deeply ingrained within all of us. As a Psych major and Gender Studies minor, I am critical of the degree to which humans are able to curb their unconscious attitudes and stereotypes, and in turn, create AI that is free from bias. Even if we highly diversified the teams coding this technology, we could not eliminate the internalized racism, sexism, homophobia, fatphobia, etc. that permeates every aspect of how we navigate the world. However, I do have faith that the amount of bias in AI can be reduced. I believe there is great improvement to be made, but it would be naive to expect perfection.

I was surprised to learn from this past week’s content that our legal system uses AI (algorithms, specifically) to predict recidivism. I was shocked that I hadn’t learned about this in my Forensic Psychology class, which I took last term. The more I come to know about the inner workings of the American judicial system, the more I am appalled by how poorly it serves BIPOC communities. For example, according to the NAACP, “5% of illicit drug users are African American, yet African Americans represent 29% of those arrested and 33% of those incarcerated for drug offenses” (2021). I wonder to what degree biased algorithms played into this statistic.
