The British Columbia Wildfire Service, in one of the first warnings of its kind, has alerted residents to viral AI-generated fake wildfire images circulating online. Comments on social media indicate that some viewers did not realize the images were fake.
Such incidents will become more common as increasingly advanced generative AI (genAI) tools become widely available. Digital disinformation spreads confusion and panic, and it is especially harmful during emergencies, when people are under stress and need reliable information.
This vulnerability stems from people’s tendency to rely on mental shortcuts in stressful situations, which facilitates both the spread and the acceptance of disinformation. Emotional and sensational content is shared more widely on social media.
According to our research on emergency management and response, AI-generated false information during an emergency can disrupt disaster response efforts and cause real harm.
Spreading misinformation
The reasons people create, share and accept disinformation in emergencies are diverse and complex. Self-determination theory classifies these motivations as intrinsic (an inherent interest in creating or sharing content) or extrinsic (external rewards such as financial gain or public recognition).
Disinformation can be created for a variety of reasons, including personal, political or commercial gain, prestige, belief and enjoyment.
People spread disinformation because they are unable to make informed decisions, distrust information from official sources, or want to entertain, help others or promote themselves.
Whether people accept disinformation is influenced by a reduced capacity to analyse information, political affiliation and fixed beliefs.
Misinformation harms
The harms caused by misinformation and disinformation vary in severity. They can be classified as direct or indirect, and as short-term or long-term, and they take many different forms.
In an emergency, access to reliable information about hazards and threats is crucial. Disinformation, combined with poor information collection, processing and comprehension, can result in more casualties and property damage. Vulnerable groups are disproportionately affected by misinformation.
(CBC News video: AI-generated images of fires circulating throughout British Columbia.)
When people receive information about a risk or threat, they check it against their vertical networks (governments, emergency management agencies and trusted media) as well as their horizontal ones (friends and family). The more complicated the information, the longer and harder it is to confirm and validate.
As genAI becomes more advanced, it will be harder to distinguish between AI-generated and real information.
Debunking disinformation
Disinformation can disrupt emergency communications. Clear communication is essential to public safety and security during emergencies. How people respond depends on how informed they are, their knowledge, their emotional response to risk and their ability to gather information.
Disinformation heightens the need for clear messages, diverse communication channels and credible sources.
Verification relies on official sources, but the sheer volume of information makes it increasingly difficult to confirm accuracy. Public health agencies, for instance, raised concerns about misinformation and deception during the COVID-19 pandemic.
Digital misinformation spread during disasters can lead to the misallocation of resources, conflicting public behaviour and actions, and delayed emergency response. It can also result in unnecessary or late evacuations.
Disaster management teams are faced with a double challenge in such situations: the primary crisis and the secondary problems created by misinformation.
Counteracting disinformation
Research reveals significant gaps in the skills and strategies emergency management agencies use to counter misinformation. These agencies must focus on detecting and verifying misinformation, and on mitigating its creation, dissemination and acceptance.
This complex issue requires coordinated efforts across policy, technology and public engagement.
- Fostering a critical awareness culture: It is important to educate the public, especially younger generations, about the dangers of misinformation and AI-generated content. Media literacy campaigns, school programs and community workshops equip people with the skills to verify information, question sources and recognize manipulation.
- Clarity in AI-generated news policies: Establishing and enforcing clear policies on how news agencies use AI-generated images can prevent visual misinformation from eroding public trust. These policies could include mandatory disclaimers, editorial oversight and provenance tracking.
- Strengthening platform tools: In times of crisis, platforms should be strengthened for fact-checking, metadata analysis and rapid verification (a basic metadata check is sketched after this list). Requiring platforms to flag, rank or remove demonstrably false content can limit the spread of misinformation, and strategies are needed to encourage people to question the content they encounter on social media.
- Legal deterrents: Section 181 of Canada’s Criminal Code provides legal consequences for the deliberate creation and dissemination of false information. Such provisions can act as a deterrent, especially against deliberate disinformation campaigns.
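To make the metadata-analysis point above concrete, here is a minimal sketch in Python using the Pillow library. It reads the EXIF tags embedded in an image file, one small signal a verification workflow might examine. The file name is a hypothetical example, and this illustrates the idea rather than a detection method: EXIF data can be stripped or forged, and robust provenance systems such as C2PA content credentials go well beyond this check.

```python
# A minimal sketch of one signal a metadata-analysis step might examine:
# the EXIF tags embedded in an image file. Illustrative only; EXIF can be
# stripped or forged, so its absence or presence proves nothing by itself.
from PIL import Image          # Pillow: pip install Pillow
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return the image's EXIF tags with human-readable names."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("wildfire_photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found; treat provenance as unverified.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```

In practice, platforms combine many such signals (metadata, reverse image search, cryptographic provenance) rather than relying on any single check.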
In addition, emergency management and public awareness programs should cover identifying, combating and reporting misinformation.
AI is rapidly changing the way information is generated and shared in times of crisis. It can heighten fear in emergencies, misdirect resources and undermine trust just when clarity is needed most. To ensure AI is a tool of resilience rather than chaos, it is important to build safeguards through education, policy, fact-checking and accountability.
Maleknaz Nayebi is funded by NSERC.
Ali Asgary has not disclosed any relevant affiliations other than their academic appointment.