January 18, 2021

by Maria Giovanna Sessa, Researcher at EU DisinfoLab

This blogpost is part of the EU DisinfoLab COVID-19 hub, an online resource designed to provide research and analysis from the community working on disinformation. These resources highlight how the COVID-19 crisis affects the spread of disinformation and the policy responses to it.

  • Following our earlier research on loopholes in the enforcement of Facebook’s policies, this blogpost focuses on content moderation shortcomings on Twitter and YouTube. In particular, we show that false claims linking COVID-19 to 5G technology often remain undetected. Focusing on these two major platforms allows us to examine in depth the spread and management of 5G-related harmful content in the context of the pandemic.
  • We also highlight that while content moderation efforts are continually improving and scaling up, they still remain insufficient and sometimes excessively slow. This leads to false information remaining on these platforms for months before it gets labelled or removed.
  • Content moderation is a delicate balancing act that frequently results in unintended errors. For instance, insufficiently mature automated moderation algorithms can be counterproductive, as in the case of Twitter adding a fact-checking label to all tweets mentioning 5G and the coronavirus, even when they did not violate the company’s policies.
  • We highlight the international and cross-platform reach of disinformation. Although our primary focus is on English-speaking content, we also observed 5G-coronavirus hoaxes from 20 countries on 4 continents, transmitted and amplified through a number of social networks. 
  • Narrative-wise, misinformation about the health dangers of new technology is old news. This is why it is critical to deal effectively with all false information, since it has the capacity to reappear and cause harm months or years later.

As the COVID-19 pandemic unfolded, Twitter and YouTube vowed to remove content deemed to be in breach of their policy guidelines. Both platforms pledged to keep their communities supplied with reliable, real-time information during these critical times. In particular, the effort to reduce the spread of harmful disinformation included the removal of tweets and videos that link 5G technology to the coronavirus.

Unfortunately, even though these policy updates began to be enforced in April 2020, researchers at EU DisinfoLab have continued to identify numerous tweets and videos from various Twitter accounts and YouTube channels that accuse 5G networks of either causing or amplifying the symptoms of COVID-19. While some of this content appears to have been fact-checked, labelled or even removed over time, a significant number of posts did not display any outwardly visible sign that the respective platform had taken action to enforce its own policy. As a result, numerous tweets and videos continue to reinforce and propagate the 5G-coronavirus conspiracy, in open violation of the platforms’ policies.


Moderation challenge

In mid-2019, the global deployment of 5G (the fifth-generation technology standard for cellular networks) began amid speculation about possible adverse effects on human health, despite the lack of corroborating scientific evidence. The contemporaneous spread of COVID-19 added further fuel to online conspiracy theories claiming that the new respiratory disease was caused by the new technology, and that the virus was either a hoax or a mere consequence of 5G.

In order to track these conspiracies on a global scale, we searched for the term “5G” in the IFCN CoronaVirusFacts Alliance Database. Between the end of January and December 2020, 144 fact-checking articles spanning 20 countries (and 4 continents) were collected, all of which debunked false information about COVID-19 and the new technology. In this sample, 134 items of disinformation (93%) explicitly suggested a causal link between 5G and the pandemic. In particular, Twitter and YouTube were the originating platforms of 5G-related disinformation in 15 (10%) and 25 (17%) cases respectively.
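
For readers who want to reproduce this kind of tally, the sketch below shows how the shares quoted above can be computed from a manually coded export of the fact-checking articles. It is a minimal sketch in Python: the field names ("country", "causal_link", "origin_platform") are hypothetical placeholders, since the CoronaVirusFacts Alliance Database does not prescribe a particular export format.

```python
# Minimal sketch: tallying a manually coded export of 5G-related fact-checks.
# Field names are hypothetical; the database does not prescribe an export format.
from collections import Counter

fact_checks = [
    # One dict per fact-checking article, coded by hand, e.g.:
    # {"country": "Italy", "causal_link": True, "origin_platform": "YouTube"},
]

if not fact_checks:
    raise SystemExit("Populate fact_checks with the coded articles first.")

total = len(fact_checks)
countries = {fc["country"] for fc in fact_checks}
causal = sum(1 for fc in fact_checks if fc["causal_link"])
platforms = Counter(fc["origin_platform"] for fc in fact_checks)

print(f"{total} fact-checks across {len(countries)} countries")
print(f"Explicit 5G-COVID-19 causal link: {causal} ({causal / total:.0%})")
for platform, count in platforms.most_common():
    print(f"Originating on {platform}: {count} ({count / total:.0%})")
```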


5G misinformation on Twitter and YouTube

On April 6th 2020, an interview with David Icke was live-streamed on YouTube, in which the English conspiracy theorist falsely claimed a connection between the 5G mobile network and the health crisis. The platform was criticised for removing the content only after the session had ended, as the nearly 65,000 viewers brought in revenue via the Super Chat function, the monetisation tool that allows users to pay to pin comments on live streams.

YouTube later reinforced its guidelines, as the footage was found to be in breach of the platform’s COVID-19 medical misinformation policy, which prohibits videos making medically unsubstantiated allegations, such as “claims that COVID-19 is caused by radiation from 5G networks”.

On April 22nd, Twitter also updated its guidelines on “unverified claims that have the potential to incite people to action, could lead to the destruction or damage of critical infrastructure, or cause widespread panic/social unrest”. Policy violations that fall into this category include, amongst other content, 5G conspiracy theories.

However, a grey area remains on both social networks that allows dubious content to proliferate. In particular, Twitter decided not to act “on every tweet that contains incomplete or disputed information about COVID-19”, but to prioritise the removal of potentially harmful content. YouTube had already adopted a similar stance, which consists of reducing recommendations “for borderline content that could misinform users in harmful ways” even though such content is allowed to remain available on the site.

These loopholes allowed the conspiracy to continue spreading through the two platforms, with unfortunate real-life consequences, such as the targeting of telecommunications employees and infrastructure. This example clearly demonstrates the complexity of assessing the impact of disinformation on the safety of a person, a group of citizens, and the general public.

A year after the emergence of the 5G and COVID-19 conspiracy theories, we set out to investigate their incidence and impact on YouTube and Twitter. We found that, given the sheer number of users, the measures set out by the platforms do not currently allow for effective moderation of all published content on 5G and COVID-19. In particular, we analysed a randomly collected sample of 5G-related posts published after the two companies updated their guidelines in April 2020, the majority of which suggested some sort of connection with COVID-19. We chose to focus on YouTube videos that had obtained at least 1,000 views since their publication, while no special criterion was applied to Twitter, where we analysed a number of explicitly anti-5G accounts.
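
As a note on method, the sketch below illustrates one way the 1,000-view threshold could be checked programmatically via the public YouTube Data API v3 (the videos endpoint with the statistics part). The API key and video IDs are placeholders, and this is an assumption about how such a filter might be replicated rather than a description of our exact tooling.

```python
# Minimal sketch: keep only YouTube videos with at least 1,000 views.
# Assumes a YouTube Data API v3 key; the key and video IDs are placeholders.
import requests

API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder
VIDEO_IDS = ["VIDEO_ID_1", "VIDEO_ID_2"]   # hypothetical placeholders
MIN_VIEWS = 1_000

response = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "statistics", "id": ",".join(VIDEO_IDS), "key": API_KEY},
    timeout=30,
)
response.raise_for_status()

for item in response.json().get("items", []):
    views = int(item["statistics"].get("viewCount", 0))
    if views >= MIN_VIEWS:
        print(f"{item['id']}: {views} views -> kept in sample")
```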


Twitter: neglecting highly visible disinformation 

At the start of 2020, Twitter began labelling tweets (including retroactively): at first those containing synthetic and manipulated media, and later tweets containing unverified claims, disputed claims or misleading information related to COVID-19.

The labels redirect users to either a Twitter-curated page or an external authoritative source that provides additional information on the claims made in the tweet, as in the screenshot below. Moreover, the warning “some or all of the content shared in this Tweet conflicts with guidance from public health experts regarding COVID-19” is applied a priori, that is to say before the user proceeds to view the tweet itself. The link provides additional information on the topic of the tweet, but does not explicitly point out whether the latter is true or false, making it more of an advisory tag than actual fact-checking.

The company states that it is working with a number of “trusted partners” “to identify content that is likely to result in offline harm”, but there do not appear to be any independent fact-checkers among these partners. In other words, not only has the verification process remained obscure, but the inaccuracies and limitations of the platform’s labelling policy have also become patently clear to its users, as the ironic tweet below demonstrates. There are also other cases where Twitter has apologised for erroneously labelling as disinformation a series of tweets that mentioned 5G and the coronavirus together, even though they were not conspiratorial in nature.

Conversely, Twitter has continued to fail to label high-visibility tweets that have propagated actual disinformation on COVID-19 and 5G technology over the past months. Recurrent false claims include:

  • The belief that COVID-19 symptoms are actually electromagnetic radiation poisoning caused by 5G frequencies, which builds upon a theory postulated in the early 20th century by Rudolf Steiner. According to Steiner’s theory, historical epidemics such as the Spanish flu were caused by technological progress that weakened the immune system. This conspiracy theory (circulating in the form of an online article and a YouTube video that has since been removed) has been debunked by Italian fact-checkers. In the past, the same narrative was also used to make unsubstantiated claims about the dangers of microwave ovens and Wi-Fi connections.
  • The direct or indirect experience of individuals who claim that 5G radiation is responsible for COVID-19 symptoms, as well as warnings from alleged insiders who claim to confirm a correlation between the new technology and the pandemic.

Moreover, it is often suggested that the pandemic containment measures are part of an evil plan by the deep state to install 5G antennas away from the public eye.

Even though the content moderation process is being continuously improved, it is not comprehensive and often remains slow. To illustrate this, David Icke’s Twitter account was only suspended over COVID-19 disinformation in November 2020, despite, among other things, the widely publicised deletion of his conspiratorial YouTube video about 5G and COVID-19. Meanwhile, various anti-5G accounts showcasing coronavirus-related conspiracy content in their biographies escape the platform’s detection mechanisms and continue to operate.


YouTube: fact-check information panels are not enough

In late April 2020, YouTube began adding information panels from authoritative sources (e.g. government agencies, health ministries and other medical institutions) that are either verified signatories of the International Fact-Checking Network’s Code of Principles or designated as “an authoritative publisher” on the topic. In a number of countries, these information panels appear at the top of search results or under a video, providing background context “related to topics that are prone to misinformation”, from the moon landing to COVID-19, and they appear regardless of the stance taken by the video.

YouTube also relies on fact-check information panels, currently available in Brazil, Germany, India, the United Kingdom and the United States. These panels appear exclusively at the top of search results, which means that users who reach a video through an external link or by clicking on related links will not be shown such panels. The publisher needs to respect the platform’s Community Guidelines, to ensure traceability and transparency, and to attach ClaimReview markup to its fact-check articles, which allows search engines to easily recognise content as verified.

For example, the screenshot below displays an information panel in the Italian version of YouTube that redirects users to the Ministry of Health’s official website.

When users search for a specific claim, fact-checking panels might label it as true, partly true or false. In this case, the fact-check contains the name of the publishing source, the date, the fact-checked claim, a link to the article and an excerpt indicating the main finding. Fact-checking panels tend to appear “mainly if the search terms are clearly seeking information about the accuracy of a claim”, YouTube explains on its website. However, they might not surface due to considerations about the “relevance and recency” of the search, or because a suitable fact-checking article is missing.
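
To make the ClaimReview markup mentioned above more concrete, the sketch below builds a minimal ClaimReview object of the kind a fact-checker might embed as JSON-LD in an article page, carrying exactly the elements surfaced in the panel (publisher, date, claim, link and verdict). The property names follow the public schema.org ClaimReview vocabulary, but the URLs, publisher name and claim text are hypothetical examples.

```python
# Minimal sketch of schema.org ClaimReview markup, serialised as JSON-LD.
# URLs, publisher name and claim text are hypothetical examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/5g-covid-claim",  # hypothetical fact-check URL
    "datePublished": "2020-05-01",
    "claimReviewed": "5G networks cause or spread COVID-19.",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork", "url": "https://example.com/original-post"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the verdict surfaced in the panel
    },
}

# A fact-checker would embed this in the article's HTML inside a
# <script type="application/ld+json"> tag so crawlers can pick it up.
print(json.dumps(claim_review, indent=2))
```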

Concerning the 5G conspiracy, YouTube declared its commitment to quickly remove videos in breach of its guidelines by “promoting medically unsubstantiated methods to prevent the coronavirus in place of seeking medical treatment”. Nonetheless, YouTube openly admitted that not all reported videos are removed: so-called “borderline content” is allowed to remain on the platform with some limitations. For instance, these videos are removed from search results, receive fewer recommendations, and their creators are deprived of advertising revenue deriving from said content.

However, creators whose content has been forcibly demonetised can still find ways around the restrictions. One workaround is to include a donation link in the video description, such as the one displayed below, which redirects users to a Patreon page. We also identified a promotional video selling radiation protection devices, in which customers could even get a 5% discount by using the coupon code “5gawareness”.

Even though content that disputes the existence and transmission of COVID-19 (including 5G-related conspiracies) violates YouTube’s policies, we found multiple videos (each with over 1,000 views) that make an explicit connection between COVID-19 and the new mobile technology and thus should have been removed.

Since September 2020, when we collected a sample of 24 English-language videos (posted after the platform’s policy update in April 2020), 4 of them (17%) have been removed for violating YouTube’s Terms of Service or made unavailable as a consequence of the account’s termination. One of these videos went as far as denying that COVID-19 and the outbreak of a Kawasaki-like disease are caused by a virus, claiming instead that they are caused by electromagnetic radiation.

Therefore, even though the content was ultimately deleted, it had been available for over four months. Moreover, similar content is still available on the platform, such as this video that protests the installation of 5G antennas. The clickbait title actually refers to a U.S. Food and Drug Administration document declassified in 1972, which lists the symptoms of exposure to microwave radiation. The speaker proceeds to ask “and interesting enough – let’s look at some of these things – do they sound a little bit like maybe covid symptoms? So there might be a correlation” (8.58–9.06). Although a COVID-19 information panel has been added to the video, this hardly seems sufficient given the content.

Lastly, we also studied and tried to quantify the cross-platform transmission of disinformation videos originating from YouTube. For this purpose, we took the above-mentioned 5G-related English-language videos and observed that even this small sample had accumulated over half a million views on YouTube by early January 2021. As for the amplification loop, according to data retrieved from CrowdTangle, the 20 videos that still existed at the time of writing (January 11th, 2021) attracted over 86,000 Facebook interactions, around 15,000 reactions, 10,000 comments and 8,000 shares.
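
For those wishing to reproduce this kind of cross-platform measurement, the sketch below shows one way to query CrowdTangle’s Links endpoint for the Facebook posts sharing a given YouTube URL and to sum up their interactions. The API token and video URL are placeholders, and the exact response fields used here are an assumption based on CrowdTangle’s public API documentation, not a verified extract of our own pipeline.

```python
# Minimal sketch: sum Facebook interactions for posts sharing a given YouTube video.
# Token and URL are placeholders; the response fields are assumptions based on
# CrowdTangle's public API documentation.
import requests

CT_TOKEN = "YOUR_CROWDTANGLE_TOKEN"                     # hypothetical placeholder
VIDEO_URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # hypothetical placeholder

response = requests.get(
    "https://api.crowdtangle.com/links",
    params={"token": CT_TOKEN, "link": VIDEO_URL, "platforms": "facebook", "count": 100},
    timeout=30,
)
response.raise_for_status()
posts = response.json().get("result", {}).get("posts", [])

totals = {"reactions": 0, "comments": 0, "shares": 0}
for post in posts:
    stats = post.get("statistics", {}).get("actual", {})
    totals["comments"] += stats.get("commentCount", 0)
    totals["shares"] += stats.get("shareCount", 0)
    # Reactions are split across several counters (likes, love, wow, ...).
    totals["reactions"] += sum(
        stats.get(key, 0)
        for key in ("likeCount", "loveCount", "wowCount", "hahaCount", "sadCount", "angryCount")
    )

print(f"{len(posts)} Facebook posts shared this video")
print(totals, "-> total interactions:", sum(totals.values()))
```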


Due diligence for highly visible disinformation

The EU’s Digital Services Act (DSA) proposes to regulate illegal content, but it does not define what is illegal online. From a platform perspective, the DSA would harmonise baseline “due diligence” obligations for stakeholders like YouTube and Twitter, requiring them to improve transparency, conduct regular reporting, assess risks, and so on. The DSA would also clarify the circumstances under which they would be exempt from legal liability. Ideally, these new conditions, along with more specific Codes of Conduct, would give platforms clearer guidance on when and how to deal with content like disinformation.

Is platform self-reporting a sufficient strategy to reduce the quantity and reach of disinformation? Should platforms be able to assess risk for themselves when it comes to dangerous conspiracy theories and health-related disinformation? Will the DSA be able to ensure the independence of the new system of “trusted flaggers”? Are voluntary Codes of Conduct the best way forward? How might co-regulatory Codes of Conduct need to be revised to become more effective and comprehensive? 

From a civil society and user perspective, the fact that disinformation could be shared so widely before being addressed, or effectively addressed, leads us to ask: what obligations should be in place based not just on the size of the platform, but on the reach of the content? The idea of imposing thresholds for highly visible content is not new, but it does not seem to have made its way explicitly into the regulation. This would also require platforms to provide more specific, verifiable data, such as audience reach and clickthrough criteria.

Finally, from the perspective of a research-based NGO, it is not yet clear how the EU’s new strategy will make our monitoring of the disinformation threat significantly easier. Disinformation data is generally reported and collected platform by platform, but disinformation researchers need a more holistic view of the platform landscape to better observe cross-platform transmission and phenomena. The findings of our research clearly demonstrate the cross-media reach of disinformation and the urgent need for social media companies to work together and better coordinate their counter-disinformation efforts.

The regulatory debate on the DSA has only just begun, and there will be time to properly scrutinise the feasibility of the new rules. In the meantime, platforms that truly wish to reduce disinformation on their services must be more vigilant about the ways their policies can be subverted. They need to continually assess the effectiveness of their policies, and the timeliness and comprehensiveness of their enforcement.