The truth is that people are not always persuaded by cold, hard data. What often matters most is the power and familiarity of a story. Stories, whether heartfelt personal testimony or memes echoing familiar cultural narratives, tend to stay with us. They move us, and they help shape our beliefs.
Storytelling is dangerous precisely because of this characteristic. Since the 1960s, foreign adversaries have used narrative tactics to influence public opinion in the United States. Social media platforms have added a new layer of complexity and reach to these campaigns. The phenomenon drew considerable public scrutiny when evidence emerged that Russian entities had spread election-related material on Facebook in the run-up to the 2016 election.
Artificial intelligence, even as it exacerbates the problem, is also becoming one of the most powerful defenses against such manipulation. Researchers are using machine-learning techniques to analyze disinformation content.
We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas (https://doi.org/10.1609/icwsm.v18i1.31343) and timelines (https://doi.org/10.48550/arXiv.2406.05265), and decode cultural references.
Misinformation vs. disinformation
In July 2024, the Department of Justice shut down a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These were not isolated incidents; they were part of a larger campaign powered by AI.
Disinformation is fundamentally different from misinformation. Misinformation is false or inaccurate information shared without the intent to deceive. Disinformation, by contrast, is deliberately fabricated and distributed to mislead and manipulate. In October 2024, a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept across platforms like X and Facebook.
The FBI traced the video within days, but not before the clip had been viewed millions of times. The episode shows how foreign influence campaigns fabricate and amplify false stories to manipulate U.S. political affairs and stir up division among Americans.
The human brain is wired to understand the world through stories. From a young age, we learn to tell stories and to use them as a way of grasping complex information. Narratives help people feel, not just remember. They help us form emotional connections, and they shape how we interpret social and political issues.
Stories can have a profound effect on human behavior and beliefs.
That makes them highly effective tools of persuasion, and of disinformation. A compelling story can overcome skepticism and sway opinion more effectively than a flood of statistics. The story of a sea turtle with a straw stuck in its nose, for example, can often do more to raise awareness about plastic pollution than any volume of data.
Usernames, narrative time and cultural context
AI tools can help spot when a narrative doesn't add up. They can draw on the timeline along which a narrator tells a story and the cultural details of where it takes place.
Narratives are not just the content that users share; they also include the personas users create to tell their stories. Even a social media handle can send persuasive signals. We developed a system that analyzes usernames to infer demographic and identity traits such as gender, location, sentiment and even personality when these cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a short string of characters can signal how users wish to be perceived by an audience.
For example, a user who wants to be perceived as a journalist might choose @JamesBurnsNYT over something less formal like @JimB_NYC. Both handles may suggest a New York-based male user, but only one carries the credibility of an institution. Disinformation campaigns exploit this perception by crafting handles that mimic authentic voices or affiliations.
A handle alone cannot determine whether an account is authentic, but it contributes to an overall assessment. By interpreting usernames within the context of a broader narrative, AI systems can better judge whether an identity was crafted to gain trust, blend into a community or amplify persuasive content. This kind of semantic interpretation supports a holistic approach to disinformation detection, one that considers not only what is being said but who appears to be saying it.
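To make the idea concrete, below is a minimal sketch of how handle analysis can start. The lexicons and the profile_handle function are invented for illustration and are not our published system; a real model would learn such associations from large datasets rather than hand-coded lists.

```python
import re

# Toy lexicons for illustration only; a real system would rely on large
# gazetteers and learned models, not hand-picked entries.
FIRST_NAMES = {"james": "male", "jim": "male", "maria": "female"}
PLACES = {"nyc": "New York", "la": "Los Angeles"}
ORG_SUFFIXES = {"nyt": "The New York Times", "bbc": "BBC"}

def split_handle(handle: str) -> list[str]:
    """Break a handle like '@JamesBurnsNYT' into lowercase tokens."""
    handle = handle.lstrip("@")
    tokens = []
    # Split on underscores and digits, then on camelCase boundaries.
    for part in re.split(r"[_\d]+", handle):
        tokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+", part)
    return [t.lower() for t in tokens if t]

def profile_handle(handle: str) -> dict:
    """Infer coarse identity cues (gender, location, affiliation) from a handle."""
    cues = {"gender": None, "location": None, "affiliation": None}
    for token in split_handle(handle):
        cues["gender"] = cues["gender"] or FIRST_NAMES.get(token)
        cues["location"] = cues["location"] or PLACES.get(token)
        cues["affiliation"] = cues["affiliation"] or ORG_SUFFIXES.get(token)
    return cues

print(profile_handle("@JamesBurnsNYT"))
# {'gender': 'male', 'location': None, 'affiliation': 'The New York Times'}
print(profile_handle("@JimB_NYC"))
# {'gender': 'male', 'location': 'New York', 'affiliation': None}
```

Even this crude decomposition recovers the cues in the example above: a likely male first name and a newsroom affiliation for one handle, a first name and a location for the other, with no claim about authenticity on its own.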
Stories also do not always unfold in chronological order. A social media post may open with a shocking event, flash back to earlier moments and skim over important details.
Humans handle this easily; we are used to fragmented storytelling. For AI, reconstructing the sequence of events from such a narrative is a genuine challenge.
Our lab is also developing methods to extract timelines, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told out of order.
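As a simplified sketch of the idea, not our published method, the snippet below pulls explicitly dated events out of a post and reorders them chronologically. Real timeline extraction must also resolve relative cues such as "the day before" and order events that carry no dates at all.

```python
import re
from datetime import datetime

# Matches simple explicit dates such as "October 24, 2024". A real extractor
# would also handle relative expressions ("last week") and undated events.
MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
DATE_RE = re.compile(rf"(?:{MONTHS}) \d{{1,2}}, \d{{4}}")

def extract_timeline(text: str) -> list[tuple[datetime, str]]:
    """Collect (date, sentence) pairs and return them in event order."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        match = DATE_RE.search(sentence)
        if match:
            when = datetime.strptime(match.group(), "%B %d, %Y")
            events.append((when, sentence.strip()))
    return sorted(events)  # narration order in, chronological order out

# A made-up post that narrates events out of order.
post = ("The clip went viral on October 24, 2024. "
        "But the scene it showed was filmed on October 20, 2024.")
for when, event in extract_timeline(post):
    print(when.date(), "-", event)
```

Sorting by the extracted dates puts the filming before the viral spread, the reverse of how the post tells it.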
Without cultural awareness, AI systems risk misinterpreting narratives. Symbols and objects can carry very different meanings across cultures, and foreign adversaries can exploit these nuances to craft messages that resonate more strongly with specific audiences, increasing the persuasive power of disinformation.
Consider the sentence "The woman wearing the white dress felt joyous." In a Western context, the image is a positive one. In many Asian cultures, however, white is associated with mourning and death, so the same sentence could feel unsettling or even offensive.
This level of cultural literacy is critical if AI is to detect disinformation that weaponizes symbols and sentiments within communities. Our research shows that training AI on culturally diverse narratives increases its sensitivity to such distinctions.
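As a toy illustration of the point, rather than a real model, the snippet below shows how the connotation of the same symbol flips with the cultural frame an analysis assumes. The lookup table stands in for associations that a culturally trained model would learn from data.

```python
# Toy connotation table; in practice these associations are learned from
# culturally diverse training narratives, not hard-coded.
CONNOTATIONS = {
    ("white dress", "Western"): "weddings, celebration (positive)",
    ("white dress", "Asian"): "mourning, funerals (somber)",
}

def interpret(symbol: str, culture: str) -> str:
    """Return the likely connotation of a symbol in a given cultural frame."""
    return CONNOTATIONS.get((symbol, culture), "unknown; flag for human review")

for culture in ("Western", "Asian"):
    print(f"{culture}: {interpret('white dress', culture)}")
```

A system that assumes a single frame for every reader will misjudge the sentiment of such a message for half its audience.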
Who benefits from narrative-aware AI?
Narrative-aware AI tools can help intelligence analysts spot influence campaigns and emotionally charged storylines that spread unusually fast. Such tools could sift through large volumes of social media posts to find similar storylines, map persuasive narrative arcs and flag coordinated activity, letting intelligence services mount countermeasures quickly.
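Here is one way the storyline-matching step could begin, sketched with off-the-shelf TF-IDF similarity from scikit-learn and invented example posts. Production systems would layer multilingual embeddings, timing patterns and account-level signals on top of plain text similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example posts; three push the same storyline, one is unrelated.
posts = [
    "Election workers caught destroying ballots in PA!",
    "BREAKING: ballots destroyed by election workers in Pennsylvania",
    "Cute otters hold hands while they sleep",
    "Pennsylvania workers shredding ballots, share before it's deleted!",
]

# Represent each post as a TF-IDF vector and compute pairwise similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
sim = cosine_similarity(vectors)

# Group posts whose similarity to a seed post clears a threshold.
THRESHOLD = 0.15  # illustrative; in practice tuned on labeled data
storyline = [j for j in range(len(posts)) if sim[0, j] >= THRESHOLD]
print("Posts pushing the same storyline as post 0:", storyline)
```

If many accounts push near-identical variants of one storyline within a narrow time window, that burst becomes a candidate signal of coordination for human analysts to review.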
Crisis-response agencies, meanwhile, could quickly identify damaging narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to route high-risk material for human review without resorting to censorship. Researchers and educators also stand to benefit from tracking how a story evolves across communities, making narrative analysis more rigorous and easier to share.
Ordinary users can benefit from these technologies, too. AI tools could flag social media posts that look like disinformation in real time, giving readers the chance to respond with skepticism and counter falsehoods before they take hold.
AI is playing an ever-larger role in monitoring and interpreting online content, and its ability to understand storytelling, beyond traditional semantic analysis, has become crucial. We are developing systems that uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
Azwad Anjum Islam has received funding from the Defense Advanced Research Projects Agency.