April 1, 2021

COVID-19 vaccine disinformation online: An introduction

by EU DisinfoLab Researcher Maria Giovanna Sessa

Anti-vaxx claims are not new: campaigns against vaccine legislation, such as opposition to a vaccine bill in the US, long predate the pandemic. However, as vaccination became the main hope to put an end to the SARS-CoV-2 pandemic, its development also offered a historic opportunity to anti-vaxxers. Concerns over the health risks of rapidly developed COVID-19 vaccines, the politicisation of the pandemic, and distrust in official sources caused a surge in disinformation on the topic. At the same time, the pandemic consolidated the use of social media as a means to stay in touch during a period of social isolation and to gather information, which helped vaccine disinformation spread virally across the globe. In fact, as the gap between demand for and supply of information widened (termed a ‘data deficit’ by First Draft), people became more prone to fill the gaps in their knowledge with misinformation.

To illustrate the accelerating growth of vaccine-related disinformation on a global scale, we examined the presence of the term “vaccine” in the IFCN CoronaVirusFacts Alliance Database. During December 2020 alone, when the European vaccination campaign first launched, 128 fact-checking articles debunking vaccine disinformation were published, spanning 16 countries (and 4 continents).

Of these, 74 disinformation claims (58%) first spread through Facebook, 12 (9%) on Twitter, 6 (5%) on Instagram, and 2 (2%) on YouTube. Another 12 debunks (9%) simply indicated social media as the disinformation source. The remaining claims were transmitted either via online news outlets, websites, and blogs (13 items, 10%) or via instant messaging apps such as WhatsApp or Telegram (9 items, 7% together).
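As a minimal illustration of the arithmetic behind these shares, the sketch below tallies platform labels and converts counts into percentages. The counts are taken from the figures above; the label names are our own shorthand, not the actual schema of the IFCN database.

```python
from collections import Counter

# Platform labels for the 128 December 2020 debunks.
# Counts come from the figures reported above; labels are shorthand.
debunks = (
    ["Facebook"] * 74
    + ["Twitter"] * 12
    + ["Instagram"] * 6
    + ["YouTube"] * 2
    + ["social media (unspecified)"] * 12
    + ["news outlets / websites / blogs"] * 13
    + ["messaging apps (WhatsApp, Telegram)"] * 9
)

total = len(debunks)  # 128
for platform, count in Counter(debunks).most_common():
    print(f"{platform}: {count} ({count / total:.0%})")
```

Run as-is, this reproduces the percentages cited above (e.g. 74/128 rounds to 58% for Facebook).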

Thus, even though vaccine-related disinformation is present on multiple platforms, Facebook is by far the preferred platform for hosting, spreading, and amplifying disinformation on vaccines. The scale of the problem is further evidenced by a research paper published in Nature in May 2020, which explored distrust in scientific expertise regarding vaccination on Facebook. The researchers retrieved a sample of one hundred million users who had expressed opinions on vaccination – in favour, against, or undecided. Interestingly, even though the Facebook users identified as holding positive opinions on vaccines outnumbered those holding negative ones, the researchers found that anti-vaccine content had higher overall virality, including among “undecided communities”. Owing to this greater community engagement and the propensity of Facebook’s content recommendation algorithm to promote polarising content, vaccine disinformation enjoyed increased visibility overall.

Further evidence is provided by a First Draft report on COVID-19 vaccine narratives published in November 2020, which identified Facebook Pages as the biggest drivers of interactions for this type of content. Together, Facebook Pages and Facebook Groups accounted for 52.5% of the report’s social media sample. We therefore chose to focus our research on Zuckerberg’s social network.

Facebook’s moderation challenge

As we explained in a previous blog post, Facebook relies on third-party fact-checkers to verify suspected COVID-19 misinformation and to label posts confirmed to contain it. Content making false claims about COVID-19 treatments is removed, and users who interacted with these posts are directed towards authoritative information from the WHO and other health authorities.

On 13 October 2020, Facebook announced “new steps as part of our continued work to help support vaccine efforts”. The policy encompassed three kinds of action: the creation of a flu vaccine information campaign; the prohibition of ads that discourage people from getting vaccines; and collaboration with global health partners on educational campaigns to raise immunisation rates. Within days of the new measure, The Verge found that ads from anti-vaccination accounts were still running on the platform and that anti-vaxx Facebook Groups continued to be permitted. Moreover, the platform’s policy still allowed “organic or unpaid content discouraging vaccination”. Ads advocating for or against regulation or government policies on vaccines were also still authorised, on the condition that they disclosed who was financing them.

On 3 December 2020, Facebook published a policy update specifically addressing COVID-19 vaccines, declaring its intention to remove vaccine disinformation that has been debunked by health experts, as part of the platform’s efforts to moderate COVID-19 misinformation on Facebook and Instagram. According to Facebook:

“This could include false claims about the safety, efficacy, ingredients or side effects of the vaccines. For example, we will remove false claims that COVID-19 vaccines contain microchips, or anything else that isn’t on the official vaccine ingredient list. We will also remove conspiracy theories about COVID-19 vaccines that we know today are false: like specific populations are being used without their consent to test the vaccine’s safety.”

Facebook also reaffirmed its commitment to regular policy updates aligned with the guidance of public health authorities, and to promoting its COVID-19 Information Center. On 18 December 2020, Facebook pledged to ban all content that exploits the pandemic for monetary gain.

Although these policies can be seen as a positive step, they nevertheless fail to effectively limit the spread of pervasive vaccine disinformation on the platform.

Moreover, some of Facebook’s content moderation actions have resulted in the wrongful removal of authentic content. For instance, Kaleigh Rogers recalls the removal of “Vaccines Exposed”, a vaccine education Page that engaged vaccine-hesitant users in debate.

Due to the limited success of these earlier actions, on 8 February 2021 Facebook announced the unprecedented decision to ban all vaccine-related misinformation from the platform. The policy update included an extremely detailed list of prohibited content concerning COVID-19 and vaccines.

Disinformation exploits Facebook’s participatory structure, which makes it the ideal milieu for anti-vaxx proponents to spread vaccine falsehoods. Advertisements allow for precise audience micro-targeting, meaning anti-vaxxers can target potentially vaccine-hesitant users and serve them custom-made false information that reinforces their fears and doubts. In contrast, the authoritative information provided by the platform is standardised, not personalised. Meanwhile, engagement is fundamental to monetisation, and it is more easily captured by emotionally divisive, polarising topics, such as the sensationalist stories that convey vaccine misinformation. Consequently, the anti-vaxx community is at a clear advantage in terms of reach, virality, and engagement, compared to the fairly limited impact of Facebook’s current moderation efforts.

Here, it is important to note that the array of privacy settings and standards on Facebook presents practical and ethical challenges to researchers and fact-checkers. Facebook has public Pages, public Groups, private Groups, and user accounts that can be private or public, and anti-vaxx misinformation spreads through all of these spaces. Currently, fact-checkers and disinformation researchers avoid private spaces out of respect for the platform’s terms of service and for user privacy (many rely instead on tip lines).

At the same time, these private spaces are conducive to knowledge exchange and discussion for online communities. The privacy setting suggests that these are more intimate, trustworthy, or “meaningful” spaces, away from strangers or prying eyes. Yet Facebook currently places no limit on the number of people in a private Group: many Groups have thousands of members, which may mislead those who believe they are expressing themselves in a private space as privacy is generally understood. Private Groups are also curated by Facebook itself: users can promote a private Group, which the platform will then recommend to others based on the criteria they have selected.

This double standard hinders Facebook’s ability to address anti-vaccine claims. Privacy and health safety need not be opposed: Facebook must find better ways to safeguard both users’ privacy and their safety from vaccine misinformation. It was perhaps towards this goal that Facebook announced in a recent update that “Pages, Groups, profiles, and Instagram accounts that repeatedly post misinformation related to COVID-19, vaccines, and health may face restrictions, including (but not limited to) reduced distribution, removal from recommendations, or removal from our site”.

Dangerous, ill-intended, and useless: disinformative narratives on COVID-19 vaccines 

In this section, we analyse the status of the 74 IFCN fact-checked disinformation claims that spread widely through numerous Facebook posts in December 2020. The debunked content circulated in ten languages: English, French, Georgian, Mandarin, Portuguese, Russian, Spanish, Tagalog, Turkish, and Ukrainian.

While some fact-checking articles contained a link to the original disinformation post, in other cases we had to retrieve the posts ourselves, based on the screenshots or quotes provided in the debunking article. All posts were collected on 24 March 2021, over a month after Facebook’s latest policy update. We were able to find live Facebook posts for 74% of the fact-checked claims (55 out of the 74 debunks); no live posts could be identified for the remaining 26% (19 debunks). We assume this is the result either of the enforcement of Facebook’s content moderation policies or of its actions to promote reliable information sources for particular searches. In any case, it could be argued that Facebook’s new policy on vaccine misinformation succeeded in preventing us from accessing the disinformation in only 26% of cases.

Of the 55 claims still live, 34 were flagged as misinformation by Facebook, which directs users to a fact-checking article. It is unclear, though, to what extent such labelling is a sufficient deterrent for vaccine-hesitant users, who are still exposed to, and can continue to access, the harmful anti-vaccination content.

Even more concerning are the 21 claims for which live posts remain on Facebook without any indication that they contain false information. Moreover, it is possible that non-fact-checked versions of flagged or removed claims have evaded the content moderation process and are still available on the platform.
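To make this breakdown explicit, here is a minimal sketch of the status tally just described. The counts come from the figures cited above; the status labels are our own shorthand, not the actual dataset schema.

```python
# Moderation status of the 74 fact-checked claims as of 24 March 2021.
# Counts come from the figures cited above; labels are shorthand.
statuses = {
    "live post, flagged with a fact-checking label": 34,
    "live post, no misinformation label": 21,
    "no live post found (presumably removed)": 19,
}

total = sum(statuses.values())  # 74
removed = statuses["no live post found (presumably removed)"]
live = total - removed  # 55
print(f"still accessible: {live}/{total} ({live / total:.0%})")  # 74%
for status, count in statuses.items():
    print(f"{status}: {count} ({count / total:.0%})")
```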

At the aggregate level, the topics of these claims can be summarised as follows:

“The vaccine is dangerous” (69%) 

Claims that the vaccine is dangerous are the most widespread, and revolve around these sub-claims:

  • The COVID-19 vaccine was developed too fast, compared to diseases such as HIV and cancer, for which no vaccine or cure has yet been found.
  • The COVID-19 vaccine alters human DNA.
  • The COVID-19 vaccine has major side effects (e.g. encephalitis, heart attacks, or infertility), or even infects people with HIV. One narrative also revived the long-standing myth that vaccines contain foetal cells. Other fact-checked disinformation goes as far as maintaining that patients died after receiving a vaccine dose.

Based on these findings, we chose to focus on this sub-category to present three stories that are spreading internationally.

First, we looked at the fabricated narrative around Tiffany Dover, the American nurse who fainted after publicly receiving the vaccine. Although she passed out due to a vasovagal reaction, countless sources of disinformation falsely claimed that the vaccine caused her collapse – some even alleging that she died.

Second, the decontextualised and partly false news that six volunteers died after receiving the Pfizer vaccine: during the trial, two fatalities occurred in the group given the vaccine and four in the group given the placebo. Of the two who received the vaccine, one had a heart attack over two months after the second dose; the other, who died three days after the first dose, suffered from obesity and a pre-existing form of atherosclerosis.

Third, the unproven claim that recipients of the Pfizer COVID-19 vaccine developed Bell’s palsy: this partial facial paralysis is a temporary condition and has not been established as a side effect of the vaccine.

“The vaccine is part of an evil plan” (28%)

This type of narrative connects anti-establishment views to anti-vaxx stances. Vaccines are seen as the product of ill-intentioned and profit-oriented elites, and rejecting the vaccine thus becomes a way to oppose their agenda. In particular, this sort of disinformation takes the form of:

  • Disinformation about the distribution of the vaccine and the type of vaccine distributed. For instance, the false claim spread that the COVID-19 vaccine would be mandatory. Users in some posts also raised unsubstantiated concerns over their country’s alleged choice to use, or not to use, the vaccine developed in China.
  • A number of hoaxes hypothesising that the vaccine is part of a deep state conspiracy, featuring Big Pharma companies, Bill Gates, and George Soros, to profit from the pandemic’s antidote, or even to reduce the world population. According to purveyors of this disinformation, the proof is that politicians have pretended to get vaccinated in front of the cameras. The dystopian claim that microchips are injected along with the vaccine for the purpose of social control is also present.

“The vaccine is useless to cure COVID-19” (3%)

Finally, a residual narrative claims that the pandemic is over – veering into denialism – and thus there is no need for a vaccine.

Conclusion

Our empirical analysis of COVID-19 vaccine-related disinformation narratives examined claims spread on Facebook in December 2020 and debunked in the IFCN CoronaVirusFacts Alliance Database. A month and a half after Facebook’s decision to ban all vaccine-related misinformation from the platform, we could still find posts promoting 74% of these false claims (some flagged with fact-checks, some unlabelled), even though such posts are in clear breach of the platform’s policy. This demonstrates a lack of effective, consistent enforcement of self-defined content moderation measures, as well as the limitations of the platform’s general approach to this challenge.

Our research suggests that content moderation alone is not sufficient to eradicate vaccine misinformation. Content moderation intervenes after publication to remove posts that violate policies, but it does not address the deeper sources of this misinformation exposed by the narrative strategies we identified. These narratives are diffused through multiple posts and through networks of Groups, Pages, and accounts. Research has shown that there are limits to the effectiveness of fact-checking when it comes to anti-vaxx claims, as people who are hesitant about or opposed to vaccines, or who have a predisposition to believe in conspiracy theories, are unlikely to set aside their doubts or change their minds.

We also discussed the challenges that the platform’s design poses for researchers and fact-checkers. The tools that Facebook makes available to anti-vaxxers wishing to promote their content are far more powerful than those available to fact-checkers and disinformation researchers correcting these claims ex-post. The barriers to data access for independent researchers like us make it nearly impossible to measure the issue fully at scale and to quantify the reach of disinformation. We therefore join our colleagues’ call for quantifiable and verifiable action: Facebook and the other major social media platforms need to be more transparent about the quantity of harmful content hosted on their platforms, and to measure the effectiveness of their efforts to remove such false information.

Platforms have a responsibility to protect users: the claim that they are “too big to moderate”, or that their conduct defends freedom of expression, is unacceptable when it comes to health-threatening claims that contribute to the infodemic. Facebook must find ways to protect users’ privacy while also protecting them from unsafe health misinformation, including in ‘private’ Pages and Groups. Our freedom of expression is fully compatible with our privacy and with our right to access accurate health information.