November 21, 2022

By Kalina Bontcheva, University of Sheffield

The COVID-19 pandemic and the related infodemic uncovered a wide range of weaknesses in online platforms’ policies and actions towards tackling viral and harmful COVID-19 misinformation, which ranged from false cures to anti-vax narratives. 

Specifically, our research over the past two years (available from our COVID-19 Resource Hub) has uncovered a number of issues:

  • Failure of platforms’ content moderation actions to uphold their own policies; here we reviewed Facebook, YouTube, Twitter, Facebook Messenger, and WhatsApp. 
  • Inadequate enforcement of platform policies in smaller countries and non-English languages, where we studied France, Bulgaria, and the Philippines, as well as making cross-country/language comparisons.
  • Use of crowdfunding platforms in funding conspiracy theories and other disinformation.

We also analysed and compared platform policies along the same dimensions, e.g., comparing content moderation and labelling policies across platforms, policies for promoting trustworthy content, etc. 

The start of the war in Ukraine in February 2022 shifted the disinformation tide away from COVID-19 towards Ukraine-related false narratives and conspiracy theories. Notably, our research and that of EDMO Belux showed that many known COVID-19 conspiracy and anti-vax groups and accounts quickly refocused on Ukraine-related misinformation. 

In response, online platforms updated their policies to limit the spread of Ukraine-related misinformation. Nevertheless, evidence quickly emerged that the already-known issues outlined above had yet to be rectified. 

Now, let us discuss these in more detail before concluding with some recommendations. 

Failure of platforms’ content moderation algorithms and processes

Even though platforms quickly updated their policies to help counter both COVID-19 and Ukraine-related misinformation, their content moderation actions did not enforce these policies sufficiently strictly. 

All leading platforms have exhibited such enforcement failures.

While one could argue that, in the case of COVID-19, platforms were insufficiently prepared to act quickly and at scale, the same cannot be said for Ukraine-related misinformation. 

It could also be argued that COVID-19 misinformation was, at least initially, harder to moderate, especially through automated means, due to the rapidly evolving scientific understanding of the disease and its potential cures. In contrast, false narratives about the war in Ukraine cover much more familiar and easier-to-verify political topics. 

In effect, internet communication companies act as definers, judges, and enforcers of freedom of expression on their services. Moreover, the sheer volume of the infodemic led the major platforms to deploy automated content moderation. Given the opaque nature and in-built biases of these automated systems, they urgently require transparent and accountable monitoring.

Our research, both for COVID-19 and Ukraine-related misinformation, also showed that disinformation is by its nature a cross-platform issue, with content hosted across multiple places such as websites, YouTube videos, tweets linking to these, etc. Hence, a content moderation failure on one platform frequently enables false claims and narratives to spread unabated across other platforms. Therefore, we cannot emphasise enough the urgent need to move away from a platform-per-platform moderation approach and focus instead on addressing disinformation as a systemic issue.

Inadequate policy enforcement in smaller countries and languages 

Even more concerning is that platforms do not invest equally in policy enforcement actions across different countries and languages. 

We gathered evidence of unequal enforcement at the European level by Facebook, Google, TikTok, and Twitter, based on their monthly COVID-19 disinformation monitoring reports. In particular, the platforms largely ignore smaller and Eastern European countries, while a top tier of five countries benefits from a balanced mix of policy actions. These “priority” countries tend to have large populations, significant numbers of English speakers, or existing regulatory power.

One weakness is the overreliance by platforms such as Facebook on local, IFCN-certified fact-checkers as “independent oracles”. The problem is acute in smaller countries such as Bulgaria, where we demonstrated the unbridled proliferation of anti-vaccine content on Facebook, partly because, until very recently, Facebook had no independent fact-checking partner in the country. 

However, even where such partners exist, they tend to be underfunded and understaffed relative to the disproportionately high volumes of disinformation circulating in these countries. For instance, in 2022 Meta/Facebook partnered with AFP Proveri in Bulgaria, which currently employs only one full-time fact-checker. Beyond this, the company has failed to implement more systematic and effective measures in the country, leading to widespread Ukraine-related disinformation on Bulgarian Facebook.

Moreover, the resourcing issues mean that the volume and breadth of fact-checking in smaller countries are severely limited, which, in the absence of independent, country-specific research, gives the false impression that platforms are limiting exposure to disinformation in such countries far more effectively than they are.

On the technological side of content moderation, the platforms currently lack the translation capabilities needed to address disinformation in smaller languages and countries. Moreover, relying more on non-English and non-Western-European CSOs and experts would help them implement more linguistically and culturally sensitive content moderation practices. 

Last but not least, countering disinformation in smaller countries and languages is often particularly difficult due to local politicians and/or parties spreading foreign or local propaganda, poor media independence, and low levels of media literacy. 

The problems are global, spreading far beyond the EU. Some particularly poignant examples come from the Philippines, where we worked with the IFCN-accredited fact-checker Rappler to study disinformation on YouTube, Facebook Messenger, and WhatsApp. For instance, although Facebook Messenger has slowly been rolling out features to curb the spread of misinformation, these features were not available in the Philippines. The effectiveness of the platforms’ actions will likely be undermined further by the efforts of the Philippine authorities to shut down Rappler (as well as some other independent news sites).

Use of crowdfunding platforms in funding conspiracy theories and other disinformation

The profitability of disinformation is well-known, thanks to advertising revenue earned from the large traffic volumes that deceptive pages tend to attract.

All major platforms and signatories of the Code of Practice on Disinformation are working towards enforcing their policies on prohibiting ad-funded disinformation. Nevertheless, current efforts remain incomplete, as argued recently in an EDMO task force report to which we contributed.

Unfortunately, disinformation actors also find new, less-studied and less-regulated funding sources. In particular, we investigated how crowdfunding platforms (namely Kickstarter, Patreon, GoFundMe, Indiegogo, and Tipeee) are harnessed to monetise disinformation and conspiratorial narratives, both directly and indirectly. Even though some crowdfunding platforms have taken action against COVID-19 disinformation and scams, we found these measures insufficiently broad and inconsistently enforced. Moreover, we demonstrated how conspiracists and fringe movements use crowdfunding platforms to raise money even when their content has been demonetised or removed elsewhere.

Conclusions and Recommendations

Since 2016, numerous planned political events, such as elections, as well as crises such as COVID-19 and the war in Ukraine, have been negatively impacted by the largely unbridled spread of disinformation. As one crisis or event gives way to the next, the shortcomings in platforms’ counter-disinformation responses have remained largely constant, with few lessons learnt. 

For starters, there is an urgent need to improve platform transparency and accountability. 

A key way of achieving this is for the platforms to provide vetted independent researchers and journalists with better access to data, so they can monitor the cross-platform and cross-lingual spread of disinformation and evaluate the effectiveness of platform policies and actions. 

In addition, we propose a stronger focus on the following:

  • Enabling independent studies of how disinformation is “distributed” across different types of platforms, from traditional news sites to the various kinds of social media; 
  • Widening the remit to include crowdfunding platforms and closed messaging spaces like Telegram; 
  • Detailed daily reporting from each platform, with data broken down on a per-EU-country and per-language basis and provided in a standardized, structured format (one possible record layout is sketched after this list);
  • A more structured approach to monitoring the enforcement of policies and regulatory obligations, via a permanent body similar to the Election Integrity Partnership (EIP) set up in the US to monitor the presidential elections.
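
To make the reporting recommendation concrete, here is a minimal sketch of what a standardized per-country, per-language daily reporting record could look like. It is illustrative only: the field names, codes, and figures are assumptions made for the sake of example, not a schema agreed with any platform or regulator.

```python
# Illustrative sketch only: a hypothetical per-country, per-language daily
# moderation report. All field names and values are assumptions, not an
# agreed or existing reporting schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class DailyModerationReport:
    platform: str              # e.g. "Facebook"
    date: str                  # ISO date, e.g. "2022-11-21"
    country: str               # ISO 3166-1 alpha-2 code, e.g. "BG"
    language: str              # ISO 639-1 code, e.g. "bg"
    posts_labelled: int        # posts given a warning or context label
    posts_removed: int         # posts removed for policy violations
    accounts_actioned: int     # accounts suspended or restricted
    fact_checks_applied: int   # items demoted after an independent fact-check


# Example record for a smaller EU country; the numbers are made up.
report = DailyModerationReport(
    platform="Facebook",
    date="2022-11-21",
    country="BG",
    language="bg",
    posts_labelled=120,
    posts_removed=35,
    accounts_actioned=4,
    fact_checks_applied=18,
)

# Publishing such records in a machine-readable form (here JSON) would let
# researchers aggregate and compare enforcement across countries and languages.
print(json.dumps(asdict(report), indent=2))
```

Structured, machine-readable records of this kind would allow independent researchers to aggregate and compare enforcement across countries and languages, rather than relying on the platforms’ own self-selected summaries.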

We provide further details on these recommendations and possible implementation measures in the position we submitted to the European Commission regarding the new EU-wide Code of Practice on Disinformation. This was followed, in the context of our involvement in the EDMO task force on disinformation, by the publication of 10 detailed recommendations in June 2022.