
AI-Generated Junk Science Is Flooding Google Scholar, Study Claims

September 9, 2024

A new study claims to have uncovered a disturbing trend in the world of academic research: AI tools like ChatGPT being used to produce fake scientific papers that are infiltrating Google Scholar, one of the most widely used academic search engines.

These AI-generated studies, often indistinguishable from legitimate research, are spreading across academic databases and repositories, raising concerns about the integrity of online scientific literature.

The research, published in the Harvard Kennedy School Misinformation Review, identified 139 papers suspected of being generated by AI tools. Notably, more than half of these questionable studies focused on policy-relevant topics, including health, environmental issues, and computing technology.

“The public release of ChatGPT in 2022, together with the way Google Scholar works, has increased the likelihood of lay people (e.g., media, politicians, patients, students) coming across questionable (or even entirely GPT-fabricated) papers and other problematic research findings,” wrote the paper’s authors.

Newsweek has contacted Google via email to comment on the study’s claims.

The authors warn that this influx of AI-generated content could facilitate what they term “evidence hacking,” the strategic manipulation of society’s knowledge base. This development poses a potential threat to public trust in science and to the reliability of evidence-based decision-making.

Evidence hacking is nothing new. “Just think of the tobacco or oil industries and how they delayed regulation in large part by questioning the existing scientific evidence by creating counter-evidence and placing it in strategic places,” lead author Haider, of the Swedish School of Library and Information Science at the University of Borås, told Newsweek.

However, GenAI “makes it easier, faster and probably more convincing. Together with a tool like Google Scholar, the reach of these publications increases enormously and at a much lower cost to bad actors, making evidence hacking a problem that is seeping further into society and probably into more and more domains,” added Haider.

Added to this, Google Scholar’s design works in bad actors’ favor. The search engine “lacks the transparency and adherence to standards that usually characterize citation databases,” the paper states, noting that it also currently lacks search filters to distinguish between material type, publication status, or form of quality control (such as limiting a search to peer-reviewed material), leading to a significant presence of “gray material.”

[Image: Stock photo of a young woman using a laptop in a university library. A new study in the Harvard Kennedy School Misinformation Review claimed that many ChatGPT-authored papers had made it into the Google Scholar platform. Credit: Synthetic Exposition/Getty Images]

Speaking to Newsweek, Haider said that one solution is that “Google Scholar needs to work more like other academic search systems and less like Google Search. At least in terms of what is included and what is not, there needs to be much more transparency and clearer criteria and greater control for users over what they want to exclude.

“At the very least, use the systems and databases already in place so that searchers can focus only on quality-checked research.”

The professor went on to tell Newsweek that one of the most surprising findings was not only that a significant number of these papers were easy to find because they contained telltale phrases common to ChatGPT output, but that “a few of those came from quite well-established journals that [serve as a guide for researchers to] check if a journal has proper safeguards and peer review.”

“Another surprise, or probably wake-up call, was how fast and widely they had spread through the research communication infrastructure. Copies were on social media, in research repositories, archives and so on. Even if they are removed from the source, they will remain online and Google Scholar will find them,” added Haider.

The lead author believes this is just the tip of the iceberg in terms of what might be coming down the line as more people use GenAI tools. However, she also thinks that some of the papers detected in the study were so badly written that they should have been caught with a simple proofread.

“The scale of the problem is very difficult to estimate, and we haven’t done any calculations. According to other research, up to 1 percent of publications in 2023 may have used ChatGPT,” added the professor.

So how can the proliferation of this kind of junk science be stopped? “It is a question that requires the cooperation of different kinds of experts. Retracting a paper from the original source is probably not that complicated, but it will have spread to other places and Google Scholar will keep digging it up,” said Haider.

“The only responsible answer I can give is that this is a multidimensional problem that needs a multidimensional solution. But it is a real problem and we need to take it seriously.”

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about GenAI? Let us know via science@newsweek.com.

References

Haider, J., Söderström, K. R., Ekström, B., & Rödl, M. (2024, September 3). GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School (HKS) Misinformation Review, 5(5).

Copyright © 2025 Misleading.
Misleading is not responsible for the content of external sites.
