ChatGPT Model Embodies ‘Covert Racism,’ Scientists Say

August 30, 2024

The model powering ChatGPT, along with other large language models, exhibits a form of “covert racism” against speakers of African American English (AAE), according to a new study published in the journal Nature.

This hidden bias, which researchers refer to as “dialect prejudice,” persists even when race is not explicitly mentioned, raising concerns about the potential for AI to perpetuate and amplify real-world discrimination.

“In our experiments, we avoided overt mentions of race but drew from the racialized meanings of a stigmatized dialect, and could still find historically racist associations with African Americans,” explained the scientists.

The study, led by researchers from Stanford University, the University of Chicago, and the Allen Institute for AI, Seattle, found that when presented with text in AAE, AI models were more likely to assign less prestigious jobs to speakers, convict them of crimes, and even sentence them to death.

The latter was discovered through a hypothetical experiment in which the language models were asked to pass judgment on defendants convicted of first-degree murder.

Without being explicitly told that the defendants were African American, the models opted for the death penalty significantly more often when the defendants provided a statement in AAE rather than Standardized American English.
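The comparison described above can be sketched as a minimal matched-guise probe: the same sentencing prompt is paired once with an AAE version of the defendant's statement and once with an SAE version, and the model's relative preference between the sentencing options is compared across the two conditions. The logits below are synthetic placeholders standing in for a real model's next-token scores, and the function and option names (`dialect_gap`, "death", "life") are illustrative assumptions, not taken from the paper's code.

```python
import math

def choice_probability(logits):
    """Softmax over per-option logits; returns the probability of each option."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

def dialect_gap(logits_aae, logits_sae, option="death"):
    """Difference in probability assigned to `option` between the AAE- and
    SAE-conditioned prompts. A positive gap means the model favors the
    harsher option more often when the statement is written in AAE."""
    p_aae = choice_probability(logits_aae)[option]
    p_sae = choice_probability(logits_sae)[option]
    return p_aae - p_sae

# Synthetic example scores; a real harness would read these from a
# language model's logits for the candidate answer tokens.
logits_aae = {"death": 1.2, "life": 0.8}
logits_sae = {"death": 0.4, "life": 1.6}
gap = dialect_gap(logits_aae, logits_sae)
print(round(gap, 3))
```

Averaging this gap over many matched statement pairs gives an aggregate measure of dialect prejudice that never mentions race explicitly, which mirrors the study's design.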

A smartphone screen showing the ChatGPT app surrounded by other AI apps. GPT-4, the model behind ChatGPT, exhibits ‘covert racism’ in response to prompts in African American English, say scientists.
OLIVIER MORIN/AFP/Getty Images

The paper reports that these biases are more negative than the most unfavorable human stereotypes about African Americans ever recorded in academic studies.

The research team tested several popular AI models, including OpenAI’s GPT-3.5 (an earlier model behind ChatGPT) and its successor, GPT-4 (still in use alongside the newer GPT-4o).

Ironically, they discovered that larger and more advanced models, including those trained with human feedback, showed stronger covert biases (unconscious or implicit) while simultaneously displaying weaker overt biases (hate speech, racial profiling, negative stereotypes) when directly asked about African Americans.

This finding challenges the prevailing assumption that scaling up AI models and incorporating human feedback naturally leads to fairer outcomes. Instead, it suggests that current practices might be merely masking prejudices rather than eliminating them.

These language models, however, are dependent on their training data, which can come from multiple sources.

The study authors note that many language models are pretrained on data sets scraped from the internet, which “encode raciolinguistic stereotypes about AAE.” One notable data set is WebText, created by OpenAI, which includes a significant amount of Reddit content.

This is not the first time AI systems have been found to harbor biases. In 2018, Amazon scrapped an AI recruiting tool that showed bias against women.

More recently, facial recognition systems have been criticized for their higher error rates when identifying people of color. However, the covert nature of the biases uncovered in this latest study makes them particularly challenging to detect.

The study’s implications extend beyond the realm of academic research. As AI systems are increasingly deployed in high-stakes decision-making processes—from job application screening to criminal risk assessment—unchecked biases could have real-world consequences, potentially exacerbating existing racial disparities in employment, housing, and the criminal justice system, say the researchers.

Based on their findings, they raise concerns that as language models continue to grow in size and capability, and as human feedback training becomes more widespread, the levels of covert prejudice may increase further while becoming harder to detect due to decreasing overt prejudice.

Newsweek has reached out to OpenAI for comment via email.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about languages? Let us know via science@newsweek.com.

References

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. (2024) AI generates covertly racist decisions about people based on their dialect. Nature. https://doi.org/10.1038/s41586-024-07856-5

Copyright © 2025 Misleading.
Misleading is not responsible for the content of external sites.
