In May 2025, a post on Reddit drew more than 6,200 upvotes and 900 comments. It asked: "AITA [am I the asshole] because I told my husband's affair partner's fiancée about their relationship?" Its popularity earned it a place among Reddit's front-page trending posts. The problem? It was probably written by artificial intelligence (AI).
The post bore telltale signs of AI, including stock phrases ("[my wife's] family are furious"), an overuse of quotation marks, and the sketch of an unrealistic scenario designed to generate outrage rather than reflect a genuine dilemma.
Reddit users repeatedly expressed frustration over the proliferation of such content.
High-engagement, AI-generated posts like this one are a prime example of "AI slop": cheap, low-quality AI-generated content, created and shared by anyone from low-level influencers to coordinated influence operations.
Low-quality, AI-generated news websites are cropping up everywhere, and AI images are flooding social media platforms such as Facebook. You may well have seen "shrimp Jesus" images in your own feed.
AI-generated content can be remarkably cheap. A 2023 report by the NATO StratCom Centre of Excellence found that for a mere €10 (about £8), you can buy tens of thousands of fake views, likes, and comments on virtually every major social media platform.
A 2024 study found that a quarter of all internet traffic is made up of "bad bots", those that spread misinformation, steal data or scalp tickets, and these bots are becoming increasingly human-like.
The "enshittification" of the web, the steady deterioration of online services as tech companies prioritise profit over user experience, is a global problem, and AI-generated content is just one part of it.
This content, from rage-inducing Reddit posts to tear-jerking cat videos, is highly attention-grabbing, and therefore lucrative for both platforms and slop creators.
This tactic is known as engagement bait: it gets people to like, comment on, and share a post, regardless of its quality. And you don't even have to seek the content out to see it.
One study examined how engagement bait, such as images of cute babies wrapped in cabbage, is recommended to users on social media even when they don't follow any AI-slop accounts or pages. These pages are often low quality and promote products, real or fake, and some appear designed to build a following that can later be sold for profit.
In April, Meta, the parent company of Facebook, said it was cracking down on "spammy content" that tries to "game the Facebook algorithm to increase views", though it made no mention of AI-generated material. Meta has itself deployed AI-generated Facebook profiles, although it has since removed some of them.
What are the risks?
All of this could have grave consequences for political communication and democracy. AI can create false information about elections quickly and cheaply, and that material is difficult to distinguish from human-generated content. Researchers have already identified a coordinated campaign that sought to influence the 2024 US presidential election by promoting Republican issues and attacking political opponents.
But before you assume that only Republicans engage in this, remember: bots can be just as biased as humans, across all viewpoints. A Rutgers University report found that Americans of all political persuasions rely on bots to promote their preferred candidates.
Researchers are not innocent either. Scientists at the University of Zurich were caught using AI-powered bots to post comments on Reddit as part of a study on whether AI-generated comments could change people's opinions. They did not disclose to Reddit's moderators that the comments were fake.
Reddit is now considering legal action against the university. Its chief legal officer said: "What this University of Zurich team did is deeply wrong on both a moral and legal level."
Political operatives in authoritarian countries such as Russia and China use AI-driven influence operations to interfere in elections in the democratic world.
How effective these operations are is still up for debate. One study concluded that Russia's attempts to interfere in the 2016 US election via social media were a failure; another found that such activity accurately predicted Trump's polling numbers. Either way, these campaigns are becoming more organised and sophisticated.
Even content that appears apolitical can have serious consequences: its sheer volume makes it harder to find real news and human-generated content.
What can be done?
Both humans and computers struggle to detect malicious AI content. Computer scientists recently identified a network of 1,100 fake accounts on X (formerly Twitter) posting machine-generated content, mainly about cryptocurrency, which liked and retweeted one another's posts. Even Botometer, a bot-detection tool developed by the researchers, failed to identify these accounts as fake.
AI use is relatively easy to spot if you know where to look, especially when content is formulaic or blatantly false. It is much harder to detect in short-form content, such as Instagram comments, or in high-quality images. And the technology used to create AI slop is improving rapidly.
As keen observers of AI trends and the spread of false information, we would like to end on a positive note by offering practical solutions for detecting AI slop or blunting its effects. In reality, though, many people are simply jumping ship.
Dissatisfied with the AI slop on traditional platforms, social media users are moving to invite-only online communities. But the communities people seek out often consist of like-minded individuals, which could further fragment the public sphere.
As this sorting continues, mainstream social media may become a place of mindless entertainment, where bots interact with bots while humans watch. Platforms don't want to lose their users, but they may well push as much AI content as the public will tolerate.
Possible technical solutions include labelling AI-generated content, improving bot detection, and introducing disclosure regulations. However, it is unclear how effective such warnings would be in practice.
Research here is still in its infancy, but some detection methods are showing promise at identifying deepfakes.
We are only just beginning to understand the scope of the problem. And AI trained on an "enshittified" internet is likely to produce garbage of its own.
Jon Roozenbeek has received funding from the UK Cabinet Office and the US State Department. He has also received funding from the ESRC (Economic and Social Research Council), Google, the American Psychological Association, the US Centers for Disease Control and Prevention, and EU Horizon 2020.
Yara Kyrychenko is funded by the Bill & Melinda Gates Foundation and supported by the Alan Turing Institute's Enrichment Programme.