What if a machine could read your feelings and intentions and write thoughtful, empathetic and perfectly timed replies — and seem to know exactly what you want to hear? You wouldn’t know it was artificial. What if it’s already here?
In a comprehensive meta-analysis published in Proceedings of the National Academy of Sciences, we show that the latest generation of chatbots powered by large language models can communicate as well as, or better than, most humans. A growing body of research shows these systems are now passing the Turing test.
We didn’t expect the arrival of these super-communicators. Science fiction taught us to expect AI that would be all-knowing and highly rational, yet lacking in humanity.
Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans at writing persuasively, and also empathetically. Another study found that large language models excel at assessing nuanced sentiment in human-written messages.
LLMs also excel at roleplay. They can adopt a wide range of personas and mimic nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. LLMs do not possess genuine empathy or social understanding, but they are highly effective at mimicking both.
We call these systems “anthropomorphic agents”. Traditionally, anthropomorphism has referred to ascribing human characteristics to non-human entities. LLMs, however, genuinely display many human-like qualities, so calls to refrain from anthropomorphising LLMs are futile.
It is a historic moment: the point at which you can no longer tell the difference between talking to a human and talking to an AI chatbot.
On the internet, nobody knows you’re an AI
What does this mean? On the one hand, LLMs promise to make complex information more accessible through chat interfaces, tailoring messages to individual comprehension levels. This has applications in many domains, such as legal services or public health. In education, their roleplay abilities could be used to create Socratic tutors that ask personalised questions and help students learn.
These systems are also deeply seductive. Millions of people already interact with AI companion apps every day. Much has been said about the negative effects of companion apps, but anthropomorphic seduction has far wider implications.
Users are willing to share deeply personal information with AI chatbots. Combine that with the bots’ highly persuasive qualities, and genuine concerns emerge.
A recent study by AI company Anthropic found that its Claude 3 chatbot was at its most persuasive when it was allowed to fabricate information and engage in deception. Because AI chatbots have no moral inhibitions, they are poised to be far better at deception than humans.
This opens the door to manipulation at scale, whether to spread disinformation or to drive highly effective sales tactics. What could be more powerful than a trusted friend casually recommending a product in conversation? ChatGPT has already begun providing product recommendations in response to user questions. It is only a short step to subtly weaving product recommendations into your conversations, without you ever asking.
What can be done?
It is easy to say that regulation is needed, but far harder to work out how to implement it.
First, we need to raise awareness of these capabilities. The EU AI Act already requires disclosure that users are interacting with an AI, but given how seductive these systems are, disclosure alone will not suffice.
Second, we need a better understanding of these anthropomorphic qualities. LLM tests have so far measured “intelligence” and knowledge recall, but none has yet measured the degree of “human-likeness”. A test of this kind could require AI companies to disclose their systems’ anthropomorphic capabilities on a rating scale, and legislators could then determine acceptable risk levels for different contexts and age groups.
It is important to act now. The example of social media, where the industry was left largely unregulated until significant harm had been done, shows why. AI will likely amplify existing problems such as misinformation, disinformation and loneliness. Meta CEO Mark Zuckerberg has already signalled that he would like to fill the gap left by a lack of real human contact with “AI friends”.
It would be a mistake to rely on AI companies to refrain from further humanising their systems. If anything, the opposite is happening. OpenAI is working to make its systems more personable and engaging, letting users customise their version of ChatGPT with a particular “personality”. ChatGPT has generally become chattier, asking follow-up questions to keep the conversation going, and its voice mode adds yet more seductive appeal.
Much good can also be done with human-like agents. Their persuasive abilities can serve bad causes as well as good ones, from battling conspiracy theories to encouraging users to donate and engage in other prosocial behaviours.
We need a comprehensive agenda covering the design, development, deployment, use, policy and regulation of conversational agents. We shouldn’t leave our systems unchanged now that AI is able to push our buttons.
Jevin West is funded by the National Science Foundation and the Knight Foundation. The full list of funders and affiliated organizations can be found here: https://jevinwest.org/cv.html
Kai Riemer & Sandra Peter have not disclosed any relevant affiliations other than their academic appointment. They do not work, consult, own or receive funding from companies or organisations that would benefit from the article.