Twitter bots, Russian trolls pollute vaccine ‘debate’ online
Bots, Russian trolls and so-called “content polluters” are significantly more likely to tweet about vaccination than the average Twitter user, exaggerating the magnitude of the vaccination “debate” in the United States, according to newly published findings.
The effect, researchers said, is that public consensus on vaccines — generally supportive — is eroded.
“Survey data generated by the Pew Research Center [suggest] that the American public is, by and large, very convinced of the benefits of vaccines, and believes that they’re safe and effective. The opinions of those who oppose vaccination are actually quite fringe,” David A. Broniatowski, PhD, assistant professor in the department of engineering management and systems engineering at George Washington University in Washington, D.C., told Infectious Disease News.
“However, if you look on Twitter, you’ll see a much larger antivaccine contingent,” he said. “We wanted to understand why that’s the case. Is it really true that people are concerned about vaccines, or is it possible that some portion of them might be malicious actors who are engaged in sending antivaccine information for a range of other reasons?”
For the study, Broniatowski and colleagues evaluated nearly 1.8 million vaccine-related tweets posted between July 2014 and September 2017. They estimated the probability that each Twitter user was a bot and compared the percentage of polarizing and antivaccine tweets across user types. They also analyzed content posted under #VaccinateUS, a hashtag that has been linked to Russian troll activity.
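The paper's analysis pipeline is not reproduced here, but a minimal sketch of the kind of group comparison described, using a standard two-proportion z-test with invented counts standing in for the real corpus, might look like this:

```python
# Hypothetical sketch: compare the rate of vaccine-related tweeting between
# suspected bot/troll accounts and a baseline sample of average users.
# All counts below are invented for illustration; the study's real corpus
# was ~1.8 million vaccine-related tweets from July 2014-September 2017.
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail
    return z, p_value

# Invented counts: vaccine-related tweets out of each group's total tweets.
z, p = two_proportion_ztest(x1=900, n1=10_000,   # suspected trolls/bots
                            x2=450, n2=10_000)   # average-user baseline
print(f"z = {z:.2f}, p = {p:.2g}")
```

A very small p-value in such a test would be consistent with the study's finding that these account types tweet about vaccines at a higher rate than average users.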
They found that Russian trolls and content polluters were significantly more likely to tweet about vaccines than the average Twitter user. Some of the Russian troll accounts have been implicated by Congress or NBC News in interfering with the 2016 election. Of the tweets posted by these accounts, 43% were provaccine, 38% were antivaccine and 19% were neutral.
“Obviously, there are people on Twitter who legitimately believe that vaccines are harmful,” Broniatowski said. “That being said, we also found evidence of bots and trolls. These are malicious accounts that are using antivaccine and provaccine content, and just content about vaccines in general, in order to advance any one of a number of different agendas.”
Tweets generated by Russian trolls tended to be very polarized and made strong emotional appeals regarding freedom and democracy, according to the researchers. Additionally, they introduced viewpoints that had not been seen in the wider Twitter debate, such as appeals to God and animal welfare perspectives. Many focused on undermining the U.S. government. For example, this tweet: “At first our government creates diseases then it creates #vaccines. what’s next?! #VaccinateUS.”
The motives of the persons responsible are not entirely clear and likely vary. For example, Broniatowski said that in the case of some content polluters, the goal seemed to be spreading spam or links to websites hosting computer viruses, phishing schemes or other malware.
“Antivaccine content may be one of the ways that they get followers so they can spread more spam, or it may be a way to motivate people to click,” he said.
Broniatowski said some of the bot accounts may express sincere antivaccine or provaccine views but exist within the framework of a “botnet.”
“A botnet is simply a very large number of bot accounts that are linked together into some kind of network,” he said. “The user will then put together a library of messages or content, and then all of the different accounts on the botnet will blast out that content and make it look as though there are many people who agree on that issue.”
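As a toy illustration of that amplification effect (not drawn from the study), a short simulation shows how a handful of coordinated accounts rebroadcasting a shared message library can swamp organic posting volume; the account counts and message names below are invented:

```python
# Toy simulation (not from the study): a small "message library" rebroadcast
# by many coordinated accounts dwarfs organic posting volume.
import random
from collections import Counter

random.seed(0)
library = ["msg_a", "msg_b", "msg_c"]   # hypothetical shared talking points

# 50 genuine users post once each; 20 bots each rebroadcast 40 times.
organic = [random.choice(library) for _user in range(50)]
botnet = [random.choice(library) for _bot in range(20) for _post in range(40)]

print("organic volume:", len(organic))          # 50 posts
print("botnet volume: ", len(botnet))           # 800 posts
print("combined tally:", Counter(organic + botnet))
```

In the combined tally, each message appears hundreds of times, creating the appearance of widespread agreement that the botnet's operator manufactured.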
Cyborgs — “accounts nominally controlled by human users that are, on occasion, taken over by bots or otherwise exhibit bot-like or malicious behavior,” according to the study — comprise a “middle ground” of bot activity, Broniatowski said. These accounts may sporadically express enough authenticity for them to be taken seriously, he said.
“This type of account is dangerous, because we can’t really tell if it is a bot or a human,” he said. “If the account sometimes looks like it is run by a sincere person, it can make it seem as though tens of thousands of people are sharing the same antivaccine message. That creates the impression of a grassroots opposition to vaccination that doesn’t really exist.”
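Assuming a 0-to-1 bot-probability score of the kind the researchers estimated, a hypothetical triage might bucket accounts into human, ambiguous “cyborg” and bot ranges; the thresholds here are invented for illustration:

```python
# Hypothetical triage by estimated bot probability. The cutoffs are invented
# for illustration; the study's actual scoring and thresholds may differ.
def classify_account(bot_score: float) -> str:
    """Bucket an account by a 0-1 bot-probability score."""
    if bot_score < 0.2:
        return "likely human"
    if bot_score < 0.8:
        return "cyborg range (ambiguous)"   # the "middle ground" described above
    return "likely bot"

for score in (0.05, 0.5, 0.95):
    print(f"score {score:.2f} -> {classify_account(score)}")
```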
How best to address a bot or troll account depends largely on its type and agenda. Broniatowski emphasized that trying to refute an argument promulgated by a bot account often serves only to legitimize that account and give it additional publicity.
“You’re basically legitimizing an account that isn’t legitimate, and they’re just going to troll back. It gives them more of a platform,” he said.
In these cases, Broniatowski said, a better course of action might be to send a tweet revealing the account as a bot. In their study, Broniatowski and colleagues conclude: “Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.”
“It’s really important to make sure that any communicating we do is done in the appropriate context,” Broniatowski said. “We need to know the context, we need to know who is sending the message, and we need to know our audience. Only then does it make sense to craft a response.” – by Jennifer Byrne
Disclosures: Broniatowski reports no relevant disclosures.