
Programmed Prejudice: Fighting Racism in AI

Illustration by Kevin Silva.

As technological advances become more deeply integrated into daily life, the rise of artificial intelligence has become something to both welcome and fear.

For some, this new tech is nothing to worry about. It is a tool, after all. But for others, there are growing concerns about the data these AI systems are trained on and how it could be used to perpetuate biases.

Contrary to the belief that AI is a recent idea, it has been around since the first half of the 20th century, according to an article on Harvard University’s website. The idea blossomed from science fiction, and scientists were already familiar with the concept by the 1950s. The rest is history.

While the technology is impressive, there have been growing concerns about how it is being used, who is feeding information into these AI programs and what information is being fed, especially in technologies like facial recognition and ChatGPT.

In a segment on NPR’s All Things Considered, University of California, Los Angeles professor Safiya Noble discussed how AI is already being integrated into the criminal justice system.

“We saw from the very important research done by Julia Angwin and her team around the COMPAS recidivism prediction software a couple of years ago how Black people who were charged with crimes were more than four times likely to be sentenced to very severe punishment, as opposed to white offenders who were committing violent crimes and were more likely to be released on bail,” Noble said.

COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions, is software used by U.S. courts to predict whether a defendant is likely to commit another crime.

According to Noble, the program uses arrest histories in specific ZIP codes to calculate its outcomes. Because of this, Angwin and her team found that the software produced disproportionate risk scores, reflecting the history of over-policing and over-arresting in Black and Latino communities.
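To make that mechanism concrete, here is a minimal, purely hypothetical sketch — not COMPAS’s actual model — of how a risk score that weights neighborhood arrest history can penalize defendants from over-policed ZIP codes even when their personal records are identical. The ZIP codes, weights and arrest counts below are invented for illustration.

```python
# Hypothetical toy risk score (not COMPAS): personal history plus a
# neighborhood term driven by how many arrests police have recorded
# in the defendant's ZIP code. Numbers are made up for illustration.
ARRESTS_BY_ZIP = {
    "60601": 120,  # heavily policed neighborhood
    "60093": 15,   # lightly policed neighborhood
}

def toy_risk_score(prior_convictions: int, zip_code: str) -> float:
    """Toy linear score: personal record plus a neighborhood arrest term."""
    neighborhood_term = ARRESTS_BY_ZIP.get(zip_code, 0) / 100
    return 0.5 * prior_convictions + neighborhood_term

# Two defendants with identical personal records get different "risk"
# purely because of where they live.
print(toy_risk_score(prior_convictions=1, zip_code="60601"))  # 1.7
print(toy_risk_score(prior_convictions=1, zip_code="60093"))  # 0.65
```

The neighborhood term acts as a proxy: it does not measure the individual’s behavior, only how intensively their community has been policed.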

Noble said addressing the disparities in this technology should start with building a more diverse workforce.

“We have a total exclusion, for the most part, of Black and Latino/Latina workers, Indigenous workers and women of color,” Noble said. “There’s no question that if you had a more diverse workforce, you might detect some of these things. You might ask different questions. You might ask some of the harder questions. But fundamentally, we have to have a robust human and civil rights framework for evaluating these technologies.”

The use of AI in facial recognition technology has also drawn criticism for racial bias.

The National Institute of Standards and Technology conducted a study that found “demographic differentials” in many facial recognition algorithms, confirming racial bias within these programs, according to an article in The Washington Post.

The article also states that Asian and Black people were up to 100 times more likely to be misidentified than white men, and that Native Americans had the highest false-positive rate of any ethnicity.
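For readers curious what a “demographic differential” looks like in practice, the sketch below shows the general shape of such an audit: comparing false-positive (false match) rates across groups. The group labels and results are invented; NIST’s actual evaluation covers millions of images and many vendor algorithms.

```python
# Illustrative audit sketch: compare false-positive rates across groups.
# A false positive is a predicted match for a pair of faces that do not
# actually match. All records below are invented for illustration.
from collections import defaultdict

# (group, predicted_match, actual_match) for a batch of face comparisons
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in results:
    if not actual:              # only non-matching pairs can yield false positives
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.2f}")
```

When one group’s false-positive rate is consistently higher, that gap is the “demographic differential” the study describes.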

The 2020 documentary “Coded Bias,” later released on Netflix, follows computer scientist Joy Buolamwini, who discovered disparities in facial recognition technology while working on a creative project at the Massachusetts Institute of Technology.

Buolamwini set out to create an “Aspire Mirror” for her science fabrication class: a mirror that would use facial recognition to reflect an aspirational image back at whoever used it, to motivate and empower them.

What she found was that the tech had difficulty detecting her face, but when she put on a white mask, it had no problem. The realization that AI was not neutral sent her down a rabbit hole of investigating where the training data was coming from and what she could do to change it.

In 2016, following this discovery, Buolamwini formed the Algorithmic Justice League, a nonprofit organization that combines art and research to increase awareness surrounding the issue of AI further perpetuating racism.

This discrimination is not only present in facial recognition and the criminal justice system — it is also becoming a growing concern in the workforce.

Reuters reported in 2018 that Amazon had been using AI in its recruitment process and that the tool showed a bias against women. Amazon’s system taught itself to prefer male candidates and to penalize resumes that included the word “women’s.”
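Reuters did not publish the model itself, but the general failure mode is easy to sketch. The toy scorer below, with invented weights, shows how a system trained on historically male-dominated hiring data can end up assigning a negative weight to a word like “women’s.” It illustrates the mechanism; it is not Amazon’s code.

```python
# Hypothetical resume scorer. If past hires skewed male, words that appear
# mostly on women's resumes can receive negative learned weights.
# Weights and example resumes are invented for illustration only.
LEARNED_WEIGHTS = {
    "engineer": 0.8,
    "captain": 0.3,
    "women's": -0.9,  # penalized because it rarely appeared in past hires
}

def score_resume(text: str) -> float:
    """Sum the learned weight of each known word in the resume."""
    return sum(LEARNED_WEIGHTS.get(word, 0.0) for word in text.lower().split())

# Identical qualifications, different score once "women's" appears.
print(score_resume("software engineer chess club captain"))          # 1.1
print(score_resume("software engineer women's chess club captain"))  # 0.2
```

The model never sees a candidate’s gender directly; it simply learns from a historical pattern and reproduces it.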

So, what is the solution to this seemingly relentless, discriminatory entity?

Critics of AI say more oversight and regulation are needed when these programs are developed and evaluated. As Noble noted, a diverse workforce is an excellent place to start. And whether you love it or hate it, AI looks like it’s here to stay.

And while the Algorithmic Justice League may not have Superman, Batman or Wonder Woman, there’s no doubt they will be there to combat any AI biases standing in their way.
