by Claire Pershan, EU DisinfoLab Policy Coordinator
This article appeared originally in March, in the Crisis Response Journal, Volume 16.1. It was published as part of a wider feature on disinformation and emergency management, which included comment pieces from Lord Toby Harris, Chair of the National Preparedness Commission, and Mami Mizutori, UN Special Representative of the Secretary-General for Disaster Risk Reduction in the UNDRR. You can subscribe to the Crisis Response Journal and receive quarterly editions of the publication here.
Social media platforms provide internet users with ways to share content, much of it created by users themselves (user-generated content), with audiences across the world. From a technical perspective, all internet users are considered equal and have access to the same distribution toolbox, whether you are a public health authority, a fashion influencer or a fact-checker. Recent years have revealed that much of the content that circulates on social media is inaccurate, decontextualised, or misleading, if not overtly divisive, hateful, and disinformative. The salience of harmful content over fact-based, civic, or conscientious content is often the result of platform design, which is ripe for manipulation by malicious actors. Different platforms operate differently, but here we are referring to the handful of platforms that we, and EU and North American policymakers, generally talk about when we talk about social media (Facebook, YouTube, Twitter, TikTok). New platforms will raise new challenges.
To sort and rank this growing amount of data, most social media platforms curate content using algorithms that are optimised for engagement, and engaging content, it turns out, is often of the harmful variety. Most platforms also generate revenue from advertising, which benefits from users’ attention remaining on the screen and encourages increasingly personalised content. This curation strategy is not optimised for disseminating facts, building civic discourse, or, it seems as we look at platforms’ response to Covid-19, responding to societies’ most pressing challenges.
Currently, the lack of regulation over social media (the largely ‘self-regulatory’ approach) has created a situation in which platforms are both crucial to the information distribution ecosystem and a threat to that very same ecosystem. This was most recently demonstrated by Facebook’s Australian news ban – the company’s response to Australia’s new media directive. On February 17, Facebook took the decision to restrict access to news from Australian publishers (though it seems this ‘nuclear option’ was a temporary bargaining strategy, and the company has since come to a deal with the Australian government). Facebook pages of Australian emergency services were taken down as collateral damage in this sudden ban, leaving those services unable to publish public information for Australian citizens through the platform.
Many saw this coming. The establishment of numerous disinformation-focused civil society initiatives around 2016 suggests broad awareness at least by that time. While working at Google, Guillaume Chaslot realised that YouTube’s algorithm was actively promoting disinformation. He looked at topics where he could clearly distinguish fact from fiction, for instance flat earth conspiracies, and found that the algorithm was surfacing more flat earth videos than round earth videos. Frustrated at Google’s inaction, Guillaume left to found AlgoTransparency and raise awareness of this foundational design flaw. YouTube eventually adjusted its algorithm to address some of the tendencies Guillaume had exposed, notably the amplification of terrorist content, but the company has not made more fundamental changes and has still not addressed disinformation related to other topics, including flat earth content.
In April 2020, while the first wave of the coronavirus was breaking over Europe and much of the world, a pseudo-documentary video went viral over social media platforms. Viewed by tens of millions on Facebook alone, “Plandemic” alleged that US government scientists were responsible for the virus and encouraged mistrust in government and public health officials. Plandemic is one of many such videos that have circulated in the last year, amassing millions of views.
As part of its research activities, EU DisinfoLab conducts regular monitoring of salient disinformation narratives, and has found that Covid-19 denialist content, anti-vaccine content, rumours about false cures, anti-government conspiracy theories, and other disinformation narratives seem to follow the curve of the virus and government response measures. The World Health Organisation has described this phenomenon as an infodemic: “an overabundance of information, both online and offline. It includes deliberate attempts to disseminate wrong information to undermine the public health response and advance alternative agendas of groups or individuals.”
The spread of false information during the pandemic has been exacerbated by the tendencies of social media to promote engaging user-generated content over fact-based or scientifically informed content, and to do so at unprecedented velocity and scale. This is due both to intentional design features, like frictionless sharing, context collapse, and the tendency towards virality, as well as to loopholes in platform design, failures in policy enforcement, and cross-platform effects that actors can exploit. For example, research has shown how successfully the anti-vaccine movement has been able to leverage the design of social media to promote anti-vaccine narratives both extensively and strategically. Drawing on EU DisinfoLab’s research into political foreign interference campaigns, Executive Director Alexandre Alaphilippe has explained how actors can effectively combine “active measures” (content production and dissemination, paid and coordinated amplification) with “the passive ecosystem” of the social web (ad tech, algorithmic recommendation systems, monetisation platforms). Platforms also facilitate financial support, for example through crowdfunding services.
Outsourcers of truth
In the pre-Covid days, social media platforms generally stressed their position as conduits of information, with a limited role and responsibility regarding the content that appeared on their services. Being neither publishers, journalists, nor government authorities, platforms have not wished to be seen as the ‘arbiters of truth’. More accurately, though, platforms are the outsourcers of truth. Platforms like Facebook and Google outsource content moderation to thousands of content moderators around the world, who sift through posts with the help of advanced algorithms. Content moderation, like Trust and Safety more broadly, is a growing industry. When many sectors of the workforce were sent home last spring, so too were content moderators, which forced platforms to rely more heavily on automated moderation. This had unfortunate results, like false positives (legitimate content wrongly removed) and false negatives (harmful content left undetected). Algorithmic capabilities are improving, but as long as billions of posts, comments and replies are uploaded every day, across the world’s languages and cultural contexts, this will remain a game of cat and mouse. We are also likely to see continued conflicts between platforms and governments, platforms and platforms, platforms and historians, counter-terrorism efforts and human rights defenders, and just about any interest groups you can name.
But the pandemic and infodemic have heightened the stakes and changed the rules of the game. Unsurprisingly, social media usage has increased with more people at home and online. Major platforms have found themselves in the role of first responders, responsible for removing what is in some cases life-threatening health misinformation, as well as for disseminating essential updates about the health situation, government measures and advice from health professionals. Once again, vaccines are a clear example of this new double responsibility: platforms are tasked with simultaneously removing false information that creates vaccine hesitancy while disseminating data about vaccine functioning, delivery, and other preventative health information.
This role is not one social media platforms were designed to perform, at least not on their own. But the past year has seen platforms react in unprecedented ways, honing a playbook of responses to false and harmful information – in particular connected to the coronavirus and public health, but also in relation to other societal issues, like election and civic integrity, hate speech and harassment.
Fact-checking surged in relevance during the pandemic. Like other elements of content moderation, fact-checking is generally outsourced to third parties or performed through partnerships with newsrooms and independent organisations. Unfortunately, fact-checking is far from a silver bullet for social media’s information challenges, particularly in relation to a novel virus where uncertainty is inherent. How can you fact-check a subject that scientists are still researching and debating? As EU DisinfoLab’s research has shown, this difficulty is compounded in relation to scientifically debatable claims, like the benefits of hydroxychloroquine. Our research into Twitter and YouTube has exposed how easily platforms’ approach to disinformation can be circumvented, in part because fact-checking policies operate on a claim-by-claim basis: this means new posts with the same essential content can evade detection and continue to spread. Meanwhile on Facebook, despite the company’s extensive third-party fact-checking programme, disinformation continues to circulate and even go viral, in particular through conversations on Facebook Messenger and WhatsApp. Messaging apps remain relatively neglected by fact-checkers, even though their use has soared since the outbreak of the pandemic – Facebook reported that total messaging in “countries hit hardest by the virus” increased by more than 50%.
The spread of disinformation in private messaging spaces is a serious concern for fact-checkers, who cannot easily survey these environments and rely mainly on tiplines. These are referred to as private messaging services, but this description can be misleading: on Facebook Messenger, you can message up to 150 people at once.
A primary strategy employed by platforms has been to redirect users to “authoritative sources” like the World Health Organisation and local health authorities. Platforms promote this content on homepages or banners, in dedicated panels or hubs, or through advertising space (donating advertising credit to health authorities, for instance). Google, for example, highlights authoritative content in its search engine when people look for information about Covid-19. Regarding its actions in Europe, Facebook emphasises its efforts to disseminate authoritative content: “The coronavirus pandemic has reinforced the importance of collaboration across borders and within our communities. We’re proud to partner with all 27 European governments, the World Health Organization and the European Center for Disease Control to support their relief efforts and ensure everyone has access to accurate information”. However promising this approach may be, authoritative content remains an incomplete response to disinformation: it is neither uniformly available nor proven effective across languages, member states, or platform services. The design of messaging services is particularly challenging for authoritative content, because it creates a struggle for legitimacy between governments and health professionals on one side, and family and friends on the other.
Critically, few of the measures taken by platforms fundamentally challenge their design or business model. As mentioned, their first actions were to provide free advertising to a series of stakeholders, which essentially consists of accelerating information distribution from these sources. A different path for platforms would be to impose speed limits on the distribution model. There have been experiments with various kinds of “friction” – prompting people to reflect before sharing content, pausing and imposing fact-checking on highly viral content, down-ranking or temporarily hiding content, limiting the number of times content can be shared at once, etc. For services that pride themselves on their frictionless user experience and instant access to infinite content, friction seems a more radical move than merely offering up ad space or adding fact-checked labels to content. Though not yet a dominant platform response, friction does suggest a more viable approach to the platforms’ systemic disinformation challenges.
In their efforts to address platform disinformation during the Covid-19 infodemic, European institutions have called on platforms to be more accountable for disinformation on their services. In particular, they have encouraged platforms to promote information from European health authorities and to increase fact-checking efforts and the prominence of fact-checked content. The effectiveness of the EU’s self-regulatory strategy has been seriously questioned, but the upcoming EU Digital Services Act promises to impose a regulatory framework on at least some aspects of this problem. The Digital Services Act proposal even includes a provision on “Crisis Protocols” for the largest social media platforms to address “extraordinary circumstances affecting public security or public health”.
It would have been much more difficult to imagine such an obligation on social media platforms before Covid-19, and yet at the start of 2021, with governments around the world clutching their regulatory pens, this seems a logical evolution. Once so hesitant to be arbiters of truth, social media platforms have come to terms with the responsibility that stems from their position in our information environments.
You can download the PDF of this article as it appeared in the Crisis Response Journal here.