January 14, 2022

Disinformation experts need effective complaint and redress mechanisms. How else will we be heard when social media platforms make mistakes?

Inaction or partial action by social media platforms on content that violates their own terms of service enables the spread of disinformation.

At the moment, we do not have consistent and effective ways to seek redress in content moderation. Different platforms have different processes – some have none at all. But overall, when we identify disinformation that violates a platform’s rules, the burden rests on us as users to find a way to contact platform representatives, to stir up media attention and ‘name and shame’ the platform, or, in some cases, to hire a lawyer. This situation is untenable if we wish to truly tackle disinformation online. We need a functioning, formalised redress mechanism that lets us notify social media platforms of infringements of their own terms and conditions and appeal wrongful platform decisions.

The Digital Services Act (DSA), the EU’s draft law on internet safety and accountability, presents a window of opportunity to improve complaint and redress mechanisms, giving disinformation experts – and all users – a mechanism to ensure platform accountability and the right to be heard when platforms make mistakes. The text is in its final phases of discussion in the European Parliament. However, in the Parliament’s draft text, access to the DSA’s complaint and redress mechanisms (Art. 17–18) only extends to those who have had content taken down unfairly, not to those whose reports are unfairly ignored by platforms. Among other users, this oversight would fail disinformation researchers and fact-checkers, who frequently notify platforms of content that has not been removed – especially content in non-English languages. The DSA seeks to create a gold standard in platform regulation and user empowerment. This limitation could lower the bar and in many cases leave us worse off than we are now.

As researchers specialised in identifying disinformation campaigns, we feel it is relevant to share our own experiences alerting social media platforms to influence operations on their services. Over the past two years, we have worked on over 10 OSINT (open source intelligence) investigations to expose tactics and strategies used by a wide range of actors to disseminate disinformation online. Our findings were published openly on our website, covered by major media outlets, and shared directly with contacts within the social media platforms concerned. In September 2021, we released a summary of the responses we had received from social media platforms after our notifications. The inconsistent responses we have received – and sometimes the lack of any response at all – highlight the need for a functional redress system.

NB: Following the release of this publication in September, some platforms did take additional action and remove the content and accounts we had identified. However, we feel this belated action, taken only in response to public pressure, rather proves our point and emphasises the need for a functional redress system.

Influence operations can be subtle and difficult to pinpoint, which is why our expertise is valuable to platforms, which may otherwise overlook them. To give a flavour of our work in the disinformation space: most of our investigations into influence operations identify CIB, or Coordinated Inauthentic Behaviour. CIB includes, inter alia, accounts hiding their location and real identity, the artificial amplification of external URLs through spam behaviours, efforts to artificially influence organic conversation through the use of multiple accounts, the use of pictures and names that do not exist or that belong to other people, and the use of intentionally misleading profile information. CIB may not be illegal per se (it is difficult to fit this behaviour into offline legal frameworks), but major social media platforms now have robust policies against CIB and view it as a clear violation of their terms and conditions and a manipulation of their services. Meanwhile, rights defenders and defenders of election integrity understand CIB as a threat to fundamental rights like the freedom of expression and civic participation.
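To make one of these signals concrete, consider the artificial amplification of external URLs. The minimal sketch below illustrates how such coordination could be surfaced from public post data; it is not the method used by the platforms or in our investigations, and the post fields, time window and thresholds are assumptions chosen purely for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from typing import NamedTuple


class Post(NamedTuple):
    account: str         # handle of the posting account (hypothetical field names)
    url: str             # external URL shared in the post
    timestamp: datetime  # time the post was published


def coordinated_url_shares(posts, window=timedelta(minutes=10), min_accounts=5):
    """Flag URLs shared by many distinct accounts within a short time window.

    A crude illustration of one CIB signal (artificial amplification of
    external links); real investigations combine many signals with manual
    review and context.
    """
    by_url = defaultdict(list)
    for post in posts:
        by_url[post.url].append(post)

    flagged = {}
    for url, shares in by_url.items():
        shares.sort(key=lambda p: p.timestamp)
        # Slide a window over the time-ordered shares and count distinct accounts.
        for i, first in enumerate(shares):
            accounts = {
                p.account
                for p in shares[i:]
                if p.timestamp - first.timestamp <= window
            }
            if len(accounts) >= min_accounts:
                flagged[url] = accounts
                break
    return flagged


if __name__ == "__main__":
    base = datetime(2021, 9, 1, 12, 0)
    # Six accounts pushing the same link within minutes, plus one unrelated share.
    demo = [
        Post(f"user_{i}", "https://example.com/article", base + timedelta(minutes=i))
        for i in range(6)
    ]
    demo.append(Post("other_user", "https://example.org/unrelated", base))

    for url, accounts in coordinated_url_shares(demo).items():
        print(f"{url} shared by {len(accounts)} accounts within 10 minutes")
```

A heuristic like this inevitably produces false positives – genuinely viral news is also shared by many accounts in a short time – which is one reason expert review of coordination signals, rather than automated detection alone, remains essential.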

Platforms’ efforts to address influence operations and CIB are a work in progress, and our investigations provide a valuable remedy when platforms make mistakes or fail to take appropriate action. For example, in December 2019, we published an investigation, also covered by Le Monde, about fake French-speaking media outlets managed from Ukraine. Following our investigation, the Facebook pages impersonating French politicians were quickly removed. However, no measures were taken against the Facebook accounts heavily spamming Facebook groups with polarising political content and disinformation. Today, unfortunately, we can still find accounts tied to the Ukrainian network regularly amplifying, in Facebook groups, COVID-19 disinformation that French fact-checkers have debunked. You can read our full investigation into this network here, and our full summary publication from September here.

From our experiences alerting platforms to infringements of their services in the disinformation space, we have the following key takeaways.

Key takeaways

  • Black-box decision-making prevents researchers and other users from understanding platform logic. When our notifications are ignored and our appeals unanswered, we do not gain any meaningful information on how platforms act to counter the activities of disinformation actors.
  • Inconsistencies prevail in how the largest platforms responded to our findings on disinformation networks, ranging from no response to partial action taken behind closed doors. In some cases, the platforms (Facebook, YouTube or Twitter) removed only part of the assets linked to a single network, leaving the groups and actors spreading disinformation enough room to continue their manipulative activities.
  • Unequal enforcement continues across the EU based on language and region. For the same information operation linked to Russian actors, we noticed a large discrepancy by language: all the English-language assets were taken down, while the French-language assets remain online.
  • Policy infringements go unanswered. In many cases, our findings show infringements of platform policies on Coordinated Inauthentic Behaviour (CIB). However, the actions taken by social media platforms in response to our reports have been, at best, extremely limited and, at worst, non-existent.

How can the Digital Services Act solve this problem?

As disinformation experts, we should not have to contact Politico or the Associated Press whenever there is a need to take down a malicious network. There should be a stable legal environment where platforms are obliged to respond to users when they wrongfully decide to act – or wrongfully choose not to act – against these networks, which violate their stated terms and conditions.

The EU’s draft Digital Services Act (DSA) provides a window of opportunity to improve accountability for platform decision-making, allowing disinformation experts a fair chance at tackling disinformation. Of course, disinformation researchers are not the only ones who stand to lose if the Digital Services Act fails to implement a balanced notice and action mechanism. Victims of hate speech and online violence who appeal for such content (content which violates platforms’ terms of service) to be removed would be similarly shut out of the DSA’s redress mechanism. It is essential that the DSA set a gold standard in redress, not just for disinformation researchers or domain experts, but for all users. For more information about the importance of user redress mechanisms, see an Open Letter signed by over 60 civil society organisations, think tanks, researchers, experts and service providers.

In order to establish consistent, functional complaint and redress mechanisms, we wish for the current version of the Parliament’s text to be amended in the following ways.

  • Ensure a balanced complaint-handling system (Art. 17). This system must enable users to notify platforms when they have not acted on content that infringes the law or their terms and conditions. The Parliament’s draft text must be amended, since the current iteration of Art. 17 opens the internal complaint-handling system only to users whose content has been removed, disabled or otherwise restricted, and not when platforms fail to react to notifications or decide not to remove content that is illegal or violates their terms and conditions. Article 17 (1) simply needs to be amended to include the words “whether or not”, to allow users to seek redress against wrongful actions and inactions by the platforms.
  • Ensure access to out-of-court dispute settlement bodies to all users (Art. 18). Users must be able to take further action against platforms when platforms choose not to enforce the rules that users abide by as defined in the terms of service. The above modification to Article 17 should reduce any ambiguity in the text and ensure this equal access.