Opinion

Do Twitter bots spread vaccine misinformation? It's not that simple

7 October 2020
Most health misinformation online comes from real people
The key to dealing with misinformation is to refocus research and attention on communities, not bots, writes Associate Professor Adam Dunn.

Discussion of online misinformation in politics and public health often focuses on the role of bots, organised disinformation campaigns and “fake news”. A closer look at the vaccine content typical users see and engage with reveals that, for most Twitter users, bots and anti-vaccine material make up a tiny proportion of their information diet.

Having studied how vaccine information spreads on social media for several years, I think we should refocus our efforts on helping the consumers of misinformation rather than blaming the producers. The key to dealing with misinformation is to understand what makes it important in the communities where it is concentrated.

Vaccine-critical Twitter

In our latest study, published in the American Journal of Public Health, we looked at how people see and engage with vaccine information on Twitter. We showed that while people often see vaccine content, not much of it is critical and almost none comes from bots.

While some other research has counted how much anti-vaccine content is posted on social media, we went a step further and estimated the composition of what people saw and measured what they engaged with. To do this we monitored a set of 53,000 typical Twitter users from the United States. Connecting lists of whom they follow with more than 20 million vaccine-related tweets posted from 2017 to 2019, we were able to track what they were likely to see and what they passed on.
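The exposure estimate described above amounts to a join between each user's follow list and the authors of vaccine-related tweets. A minimal sketch of that idea, with entirely hypothetical accounts and tweets (this is an illustration of the approach, not the study's actual pipeline):

```python
# Toy sketch: estimate what each user could have seen by joining their
# follow list with tweet authors. All names and data are invented.
from collections import defaultdict

# Hypothetical follow lists: user -> set of accounts they follow
follows = {
    "user_a": {"acct_1", "acct_2"},
    "user_b": {"acct_2", "acct_3"},
}

# Hypothetical vaccine-related tweets: (author, is_critical, is_bot)
tweets = [
    ("acct_1", False, False),
    ("acct_2", True, False),
    ("acct_3", False, True),
]

# A tweet counts as potential exposure for a user if its author
# appears in that user's follow list.
exposure = defaultdict(lambda: {"total": 0, "critical": 0, "from_bot": 0})
for author, is_critical, is_bot in tweets:
    for user, followed in follows.items():
        if author in followed:
            exposure[user]["total"] += 1
            exposure[user]["critical"] += int(is_critical)
            exposure[user]["from_bot"] += int(is_bot)

print(dict(exposure))
```

Scaled up to tens of thousands of users and millions of tweets, tallies like these are what let researchers estimate the composition of each user's likely information diet rather than just counting posts.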

In those three years, a typical Twitter user in the US may have seen 727 vaccine-related tweets. Just 26 of those tweets would have been critical of vaccines, and none would have come from a bot.

While engagement was relatively infrequent, nearly 37% of users posted or retweeted vaccine content at least once in the three years. Only 4.5% of users ever retweeted vaccine-critical content, and 2.1% of users retweeted vaccine content posted by a bot.

For 5.8% of users in the study, vaccine-critical tweets made up most of the vaccine-related content they might have seen on Twitter in those three years. This group was more likely to engage with vaccine content in general and more likely to retweet vaccine-critical content.

Studying people, not posts

Many social media analyses about misinformation are based on counting the number of posts that match a set of keywords or hashtags, or how many users have joined public groups. Analyses like these are relatively easy to do.

However, these numbers alone don’t tell you anything about the impact of the posts or groups. A tweet from an account with no followers or a blog post on a website that no one visits is not the same as a major news article, a conversation with a trusted community member, or advice from a doctor.

Information consumption is hard to observe at scale. My team and I have been doing this for many years, and we have developed some useful tools in the process.

In 2015 we found that a Twitter user’s first tweet about HPV vaccines is more likely to be critical if they follow people who post critical content. In 2017, we found lower rates of HPV vaccine uptake across the US were associated with more exposure to certain negative topics on Twitter.

A study published in Science in 2019 used a similar approach and found fake news about the 2016 US election made up 6% of relevant news consumption. That study, like ours, found engagement with fake news was concentrated in a tiny proportion of the population.

I also think analyses focused on posts are popular because it is convenient to be able to blame “others”, including organised disinformation campaigns from foreign governments or reality TV hosts, even when the results don’t support the conclusion. But people prone to passing along misinformation don’t live under bridges eating goats and hobbits. They are just people.

Resisting health misinformation online

When researchers move beyond counting posts to learn why people participate in communities, we can find new ways to empower people with tools to help them resist misinformation. Social media platforms can also find new ways to add friction to sharing any posts that have been flagged as potentially harmful.

While there are unresolved challenges, the individual and social psychology of debunking misinformation is a mature field. Evidence-based guides on debunking conspiracy theories in online communities are available. Focusing on the places where people encounter misinformation will help to better connect data science and behavioural research.

Connecting these fields will help us understand what makes misinformation salient instead of just common in certain communities, and to decide when debunking it is worthwhile. This is important because we need to prioritise cases where there is potential for harm. It is also important because calling out misinformation can unintentionally help it gain traction when it might otherwise fade away.

Vaccination rates remain a problem in places with higher rates of vaccine hesitancy and refusal, and those places are at higher risk of outbreaks. So let’s focus on ways to give people in vulnerable populations the tools they need to protect themselves against harmful information.


This article was first published on The Conversation. It was written by Associate Professor Adam Dunn, head of Biomedical Informatics and Digital Health in the School of Medical Sciences, Faculty of Medicine and Health at the University of Sydney.