There are signs that AI systems resemble humans too much. Will this be a problem for you?

May 21, 2025

What if a machine could read your feelings and intentions, write thoughtful, empathetic, perfectly timed replies, and seem to know exactly what you want to hear? You might never realise it was artificial. What if it is already here?

In a comprehensive meta-analysis published in Proceedings of the National Academy of Sciences, we show that the latest generation of chatbots powered by large language models can communicate as well as, or better than, most humans. A growing body of research shows these systems now pass the Turing test.

We never expected super communicators to arrive. Science fiction told us that AI would be all-knowing and highly rational, but lacking in humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans at writing both persuasively and empathetically. Another study found that large language models excel at assessing nuanced emotions in human-written messages.

LLMs also excel at roleplay. They can adopt a wide range of personas and mimic nuanced linguistic styles. This is amplified by their ability to infer human intentions and beliefs from text. LLMs are not genuinely socially aware or empathetic, but they are very effective at mimicking those qualities.

We call these systems "anthropomorphic agents". Traditionally, anthropomorphism meant ascribing human traits to non-human entities. LLMs, however, genuinely display many human-like qualities, so calls to refrain from anthropomorphising them are futile.

It is a historic moment: we can no longer tell the difference between talking to a human and talking to an AI chatbot.

On the internet, nobody knows you're an AI

What does this mean? On one hand, LLMs promise to make complex information more accessible through chat interfaces that tailor messages to individual comprehension levels. This has applications in many domains, such as public health and legal services. Their roleplaying abilities could be used in education to create Socratic tutors that ask students personalised questions and guide their learning.

At the same time, these systems are seductive. Millions of people already use AI companion apps every day. Much has been said about the negative effects of companion apps, but anthropomorphic seduction has far broader implications.

Users are willing to share highly personal data with AI chatbots. Combine this with the bots' considerable persuasive abilities, and real concerns emerge.

A recent study by AI company Anthropic found that its Claude 3 chatbot was at its most persuasive when it was allowed to fabricate information and deceive. Because AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to mass manipulation, the spread of disinformation, and highly effective sales tactics. What could be more persuasive than a trusted companion casually recommending a product in conversation? ChatGPT has already begun providing product recommendations in response to user questions. It is only a small step to subtly weaving product recommendations into conversations, without you ever asking.

What can you do?

It is easy to say that regulation is needed, but much harder to work out what it should look like.

First, we need to raise awareness of these capabilities. The EU AI Act mandates disclosure, but given the seductive nature of these systems, disclosure alone will not suffice.

Second, we must better understand anthropomorphic qualities. So far, LLM tests have measured "intelligence" and knowledge recall, but none has measured the degree of "human-likeness". With a test of this kind, AI companies could be required to disclose their systems' anthropomorphic capabilities on a rating scale, and legislators could set acceptable risk levels for different contexts and age groups.

It is important to act now. The cautionary tale of social media, an industry left largely unregulated until it had caused significant harm, shows the need for timely action. AI will likely amplify existing problems such as misinformation, disinformation, and loneliness. Indeed, Meta CEO Mark Zuckerberg has already signalled that he would like to fill the void of real human contact with "AI friends".

It would be a mistake to rely on AI companies to refrain from further humanising their systems. The trend points the other way. OpenAI is working to make its systems more engaging and personable, letting users customise their version of ChatGPT with a particular "personality". ChatGPT has generally become chattier, often asking follow-up questions to keep the conversation going, and its voice mode adds yet another layer of seductive appeal.

Human-like agents can also do much good. Their persuasive abilities can be put to prosocial ends, from countering conspiracy theories to encouraging users to donate and engage in other beneficial behaviours.

We need a comprehensive agenda spanning the design, development, deployment, use, policy, and regulation of conversational agents. We should shape these systems now, before AI becomes too adept at pushing our buttons.

Jevin West is funded by the National Science Foundation and the Knight Foundation. The full list of funders and affiliated organizations can be found here: https://jevinwest.org/cv.html



Kai Riemer and Sandra Peter have disclosed no relevant affiliations beyond their academic appointments. They do not work for, consult with, own shares in, or receive funding from any company or organisation that would benefit from this article.

Copyright © 2025 Misleading.
Misleading is not responsible for the content of external sites.
