AI is making elections weird: Lessons from a simulated war-game exercise

April 8, 2025
A simulated exercise reveals much about the proliferation and circulation of AI-generated content. (Shutterstock)

On March 8, the Conservative campaign team released a video of Pierre Poilievre on social media that drew unusual questions from some viewers. To many, Poilievre’s French sounded a little too smooth, and his complexion looked a little too perfect. The video had what’s known as an “uncanny valley” effect, causing some to wonder if the Poilievre they were seeing was even real.

Before long, the comments section filled with speculation: was this video AI-generated? Even a Liberal Party video mocking Poilievre’s comments led followers to ask why the Conservatives’ video sounded “so dubbed” and whether it was made with AI.

The ability to discern real from fake is seriously in jeopardy.

Poilievre’s smooth video offers an early answer to an open question: How might generative AI affect our election cycle? Our research team at Concordia University created a simulation to experiment with this question.

From a deepfake Mark Carney to AI-assisted fact-checkers, our preliminary results suggest that generative AI is not quite going to break elections, but it is likely to make them weirder.

A war game, but for elections?

Our simulation continued our past work in developing games to explore the Canadian media system.

Red teaming is a type of exercise that allows organizations to simulate attacks on their critical digital infrastructures and processes. It involves two teams — the attacking red team and the defending blue team. These exercises can help uncover vulnerability points within systems or defences and practice ways of correcting them.

Red teaming has become a major part of cybersecurity and AI development, where developers and organizations stress-test their software and digital systems to understand how hackers or other “bad actors” might try to manipulate or crash them.

Fraudulent Futures

Our simulation, called Fraudulent Futures, attempted to evaluate AI’s impact on Canada’s political information cycle.

Four days into the ongoing federal election campaign, we ran the first test. A group of ex-journalists, cybersecurity experts and graduate students were pitted against each other to see who could best leverage free AI tools to push their agenda in a simulated social media environment based on our past research.

Hosted on a private Mastodon server securely shielded from public eyes, our two-hour simulation quickly descended into noisy chaos as players acted out their different roles: some played far-right influencers, others monarchists out to make noise, and others journalists covering events online. Players and organizers alike learned about generative AI’s capacity to create disinformation, and about the difficulties faced by stakeholders trying to combat it.

Players connected to the server through their laptops and familiarized themselves with the dozens of free AI tools at their disposal. Shortly after, we shared an incriminating voice clone of Carney, created with an easily accessible online AI tool.
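
For readers curious about the plumbing, a private Mastodon instance like ours can be scripted with the open-source Mastodon.py client. The sketch below is illustrative only: the instance URL, access token and post text are hypothetical placeholders, not our actual setup.

```python
# Illustrative sketch: seeding a post on a closed Mastodon research instance.
# The URL, token and message are hypothetical placeholders.
from mastodon import Mastodon  # pip install Mastodon.py

client = Mastodon(
    access_token="PLAYER_BOT_TOKEN",               # hypothetical token
    api_base_url="https://simulation.example.org"  # hypothetical private instance
)

# "unlisted" keeps the post off the instance's public timelines.
client.status_post(
    "BREAKING: leaked audio surfaces of the candidate. Listen before it gets taken down.",
    visibility="unlisted",
)
```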

The Red Team was instructed to amplify the disinformation, while the Blue Team was directed to verify its authenticity and, if they determined it to be fake, mitigate the harm.

The Blue Team began testing the audio through AI detection tools and tried to publicize that it was fake. But for the Red Team, this hardly mattered. Fact-checking posts were quickly drowned out by a constant stream of new memes and fake images of angry Canadian voters denouncing Carney.

Whether the Carney clip was a deepfake or not didn’t really matter. The fact that we couldn’t tell for sure was enough to fuel endless online attacks.

Easily available and free AI tools can be used to generate and promote misinformation at an overwhelming rate. (Shutterstock)

Learning from an exercise

Our simulation purposefully exaggerated the information cycle. Yet the experience of trying to disrupt regular electoral processes was highly informative as a research method. Our research team found three major takeaways from the exercise:

1. Generative AI is easy to use for disruption

Many online AI tools claim to have safeguards against generating content about elections and public figures. Despite those safeguards, players found the tools would still generate political content.

The content produced was generally easy to identify as AI-generated. Yet one of our players noted how simple it was “to generate and spam as much content as possible in order to muddy the waters on the digital landscape.”

2. AI detection tools won’t save us

AI detection tools can only go so far. They are rarely conclusive, and they may even take precedence over common sense. Players noted that even when they knew content was fake, they still felt they “needed to find the tool that would give the answer [they] want” to lend credibility to their interventions.

Most telling was how journalists on the Blue Team turned toward faulty detection tools over their own investigative work, a sign that users may be letting AI detection usurp journalistic skill.

With higher-quality content available in real-world situations, there might be a role for specialized AI detection tools in journalistic and election security processes — despite complex challenges — but these tools should not replace other investigative methods.

For now, however, the lack of standards and of confidence in their assessments means detection tools will likely only contribute to spreading uncertainty.
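
To make that uncertainty concrete, consider a toy sketch in which three hypothetical detectors (the names and scores below are invented, not measurements from any real product) score the same clip against a naive threshold:

```python
# Toy illustration of why uncalibrated detectors breed uncertainty.
# Detector names and "probability synthetic" scores are invented.
scores = {"detector_a": 0.91, "detector_b": 0.38, "detector_c": 0.64}

THRESHOLD = 0.5  # a common but arbitrary cut-off

verdicts = {
    name: "synthetic" if p > THRESHOLD else "authentic"
    for name, p in scores.items()
}
print(verdicts)
# {'detector_a': 'synthetic', 'detector_b': 'authentic', 'detector_c': 'synthetic'}

# A 2-1 split with no shared calibration settles nothing: each side of an
# argument can cite whichever detector "gives the answer they want".
```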

3. Quality deepfakes are difficult to make

High-quality AI-generated content is achievable and has already caused many online and real-world harms and panics. However, our simulation helped confirm that quality deepfakes are difficult and time-consuming to make.

It is unlikely that the mass availability of generative AI will cause an overwhelming influx of high-quality deceptive content. These types of deepfakes will likely come from more organized, funded and specialized groups engaged in election interference.

Democracy in the age of AI

A major takeaway from our simulation was that the proliferation of AI slop and the stoking of uncertainty and distrust are easy to accomplish at a spam-like scale with freely accessible online tools and little to no prior knowledge or preparation.

Our red-teaming experiment was a first attempt to see how participants might use generative AI in elections. We’ll be working to improve and re-run the simulation to include the broader information cycle, with a particular eye towards better simulating Blue Team co-operation in the hopes of reflecting real-world efforts by journalists, election officials, political parties and others to uphold election integrity.

We anticipate that the Poilievre debate is just the beginning of a long string of incidents to come, where AI distorts our ability to discern the real from the fake. While everyone can play a role in combatting disinformation, hands-on experience and game-based media literacy have proven to be valuable tools. Our simulation proposes a new and engaging way to explore the impacts of AI on our media ecosystem.

The Conversation

Robert Marinov received funding from the Centre for the Study of Democratic Citizenship and Concordia University’s Applied AI Institute for this research.

Colleen McCool received funding from the Centre for the Study of Democratic Citizenship and Concordia University’s Applied AI Institute for this research.

Fenwick McKelvey receives funding from the Centre for the Study of Democratic Citizenship. Research has been supported by Concordia University’s Applied AI Institute and the Technoculture, Art and Games (TAG) centre at the Milieux Institute.

Roxanne Bisson receives funding from the Centre for the Study of Democratic Citizenship and Concordia University’s Applied AI Institute for this research.
