What happens when politicians spread false or toxic messages on the internet? We found evidence that U.S. state legislators can gain public visibility by sharing unverified claims and lose it by using uncivil language in times of high political tension. This raises questions about how social media platforms shape public opinion and which behaviors they reward.
My team and I are computational scientists who build tools to analyze political communication on social media. Our latest study looked at the messages that made U.S. state legislators stand out online between 2020 and 2021, a period marked by the COVID-19 pandemic and the 2020 presidential election. We examined two types of harmful material: low-credibility information and uncivil language, such as insults and extreme statements. We measured their impact by how many people liked, shared or commented on a post on Facebook and on X, known at the time as Twitter.
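To make the two measurements concrete, here is a minimal sketch of how a post's visibility can be scored from engagement counts and how a low-credibility link can be flagged against a list of unreliable domains. The domain list, the post record and the equal weighting of likes, shares and comments are illustrative assumptions, not the study's actual methods or values:

```python
# A minimal sketch with made-up data. The domain list and the simple
# sum of engagement counts are assumptions for illustration only.
from urllib.parse import urlparse

# Hypothetical stand-in for a curated list of unreliable sources.
LOW_CREDIBILITY_DOMAINS = {"example-fake-news.com", "unreliable-source.net"}

def visibility_score(likes: int, shares: int, comments: int) -> int:
    """Treat total engagement as a simple proxy for a post's visibility."""
    return likes + shares + comments

def is_low_credibility(url: str) -> bool:
    """Flag a link whose domain appears on the unreliable-domain list."""
    return urlparse(url).netloc.lower() in LOW_CREDIBILITY_DOMAINS

post = {
    "url": "https://example-fake-news.com/story",
    "likes": 210, "shares": 80, "comments": 45,
}
print(visibility_score(post["likes"], post["shares"], post["comments"]))  # 335
print(is_low_credibility(post["url"]))  # True
```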
In our study, we found that harmful content was linked to changes in a legislator's visibility, but the effects varied. Republicans who posted low-credibility content were more likely to gain online attention than Democrats who did so, while posting uncivil material generally reduced visibility, especially for legislators at the ideological extremes.
Why it matters
Facebook, X and other social media platforms have become a major stage for political debates and persuasion. Politicians use social media to reach voters, promote agendas, rally support and attack rivals. Some of them get more attention than others.
Research has shown that false content spreads faster and reaches wider audiences than factual information. Content that makes users angry or emotional is often pushed higher in feeds by platform algorithms. Meanwhile, uncivil language can deepen divisions and make people doubt democratic processes.
Our findings raise the concern that platform algorithms may unintentionally reward divisive or misleading behavior.
Political misinformation has increased in recent years. When harmful content becomes a strategy for standing out, it gets harder for voters to find reliable information.
What we did
In 2020 and 2021, we collected nearly 4 million tweets and half a million Facebook posts from more than 6,500 U.S. state legislators. We then used machine learning techniques to estimate the causal relationship between a post's content and its visibility.
This approach let us compare posts with nearly identical content, except that one contained harmful material and the other did not. By measuring the difference in how many people saw or shared those posts, we could estimate how much visibility a post gained or lost solely because of the harmful content.
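To illustrate the idea, here is a minimal sketch of that matched-pair comparison. The post records are made up, and the toy similarity function is a stand-in for the machine learning matching we actually used, so treat this as a sketch of the logic rather than our implementation:

```python
# A minimal sketch of a matched-pair visibility comparison.
# Data and similarity scoring are hypothetical stand-ins: in practice,
# posts would be matched with machine learning models, not difflib.
from difflib import SequenceMatcher
from statistics import mean

posts = [
    # each record: (legislator, text, contains_harmful, shares)
    ("A", "Vote tomorrow at your local polling place", False, 120),
    ("A", "Vote tomorrow, the polls are rigged anyway", True, 310),
    ("B", "Wear a mask to protect your neighbors", False, 95),
    ("B", "Masks are a hoax pushed on your neighbors", True, 240),
]

def similarity(a: str, b: str) -> float:
    """Toy text-similarity score in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

# Pair each harmful post with the most similar non-harmful post from
# the same legislator, then compare visibility (shares) across the pair.
harmful = [p for p in posts if p[2]]
clean = [p for p in posts if not p[2]]
diffs = []
for leg, text, _, shares in harmful:
    candidates = [(similarity(text, t), s) for l, t, _, s in clean if l == leg]
    if candidates:
        _, matched_shares = max(candidates)  # best match by similarity
        diffs.append(shares - matched_shares)

print(f"Average visibility gain from harmful content: {mean(diffs):.1f} shares")
```

Averaging these per-pair differences is what lets the comparison isolate the harmful content itself, since the matched posts are otherwise nearly identical.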
What other research is going on?
Most research on harmful content focuses on national figures and social media influencers. Our study focused instead on state legislators, who have a significant impact on state laws, such as those governing education, health and public safety, yet receive less media coverage and less fact-checking.
Because state legislators often escape such scrutiny, misinformation and toxic material can spread unchecked, which makes understanding their online activity all the more important.
What’s next
We will continue analyzing the data we collect to see whether the patterns we observed during the high-stress years of 2020-2021 persist. Do platforms and audiences still reward low-credibility content, or was the effect temporary?
We will also examine how changes to moderation policies, such as X’s shift toward less oversight and Facebook’s end to human fact-checking, affect what is seen and shared. And we want to understand how people respond to harmful posts: Do they like them, share them out of anger or try to correct them?
This line of research can help inform smarter platform design, better digital literacy initiatives and stronger protections for healthy political conversation.
This research brief provides a quick overview of interesting academic work.
Yu-Ru Lin receives funding from external funding agencies, including the National Science Foundation.