April 1, 2021

This position paper was submitted to the European Commission on March 31st as EU DisinfoLab’s contribution to the Commission’s second call for feedback on its proposed Digital Services Act.

Executive Summary 

The Digital Services Act (DSA) proposal is a step forward in the EU’s approach to creating a safer online space. The harmonisation of regulatory oversight and introduction of due diligence obligations for online platforms will create a stronger incentive structure for companies to tackle illegal and harmful content. However, the DSA does not do enough to tackle disinformation.

Disinformation is not merely a content moderation issue for social media platforms, but a challenge endemic to many digital services. While the current regulation appears to understand disinformation-related risks as ‘intentional manipulation’ or ‘exploitation’ of the service, research continues to show that disinformation is linked to the design and innate characteristics of these services (virality, velocity, content optimisation, network effects), including their interplay with one another and with other internet services. By focusing so heavily on the largest actors, the DSA risks overlooking harmful content on smaller, fringe, alternative, and emerging services, as well as the ways in which these services are systematically combined and abused in disinformation campaigns.

Our key points: 

  • Bolster Risk Assessments. The Commission should consider amending Article 26 (“risk assessments for systemic risks”) to further specify the risks related to disinformation, to provide a framework for assessing systemic risks of disinformation, and to extend the relevant obligations to all services facing such risks.
  • Expand “Know Your Business Customer” Obligations. The Commission should consider expanding the “Know Your Business Customer” (KYBC) rules to cover a wider range of digital services.
  • More nuanced understanding of “vetted researchers”. Over the past five years, the field of disinformation research has grown beyond the university lab; by reserving “vetted researcher” status for academics, the DSA risks excluding a large number of competent researchers.

Our observations centre around five areas:

  1. The nature of disinformation 
  2. Clarifying significant systemic risks
  3. A framework for assessing systemic risk for disinformation
  4. Broader KYBC requirements 
  5. A role for vetted disinformation experts 

The Nature of Disinformation

We fear the current draft of the DSA does not fully grasp the nature of online disinformation, particularly disinformation campaigns. Disinformation is a cross-platform phenomenon that frequently includes illegal content and cybercrime. We appreciate that the DSA must necessarily limit its scope to illegal content. That said, as EU DisinfoLab observed in our initial response to the Commission’s consultation, disinformation poses a particular challenge to the regulation’s effort to distinguish between illegal and harmful content. Depending on the context, disinformation can be harmful but legal (misleading information, conspiracy theories, rumours) or illegal (voter suppression, foreign interference, defamation). We are also increasingly seeing disinformation tactics combined with cyberattacks as part of a single delivery package, which necessitates a separate conversation, beyond the DSA, about how to counter disinformation as a cyberthreat.

Disinformation is an inherently distributed, multi-surface phenomenon. Disinformation campaigns leverage networks of assets across social media platforms while also making use of networking infrastructure, routing services, and multiple levels of the internet stack. EU DisinfoLab’s investigations have sought to show how social media platforms often serve as gateways to and amplifiers of disinformation websites, and how small and large services can be used together. Meanwhile, it is well documented that malign actors can circumvent content moderation systems and leverage inevitable flaws or loopholes in policies and enforcement, which is why disinformation can never be fully addressed through content moderation alone. Indeed, research shows that content moderation can have the perverse effect of pushing disinformation onto less moderated, less resourced services. For this reason, we as disinformation experts place particular confidence in the ability of ex-ante and consultative obligations to tackle disinformation.

Clarifying significant systemic risks

We view the establishment of new categories of digital services as a positive step, i.e. the Very Large Online Platforms (VLOPs, services reaching 45 million monthly active users in the EU, or 10% of the EU population) subject to a set of enhanced rules. We also generally endorse the DSA’s tiered approach, which reflects the amplification and reach that are critical to disinformation campaigns, and we agree that not all services should be subject to the same obligations, since a plurality of spaces is necessary for the integrity of our information environment. However, we fear that the DSA’s failure to acknowledge the multi-faceted and multi-platform dimension of disinformation will hinder its objectives. Take, for example, the social network Clubhouse, which saw a surge in users and a parallel surge in disinformation, but would not be considered a VLOP under the DSA. Similarly, the use of crowdfunding services to fund dangerous conspiracy theories suggests that some smaller services must be held to higher standards. In its current form, the DSA risks overlooking disinformation on smaller, fringe, alternative, and emerging services, as well as the ways in which digital services are systematically combined in disinformation campaigns.

We fully support the greater obligations placed on VLOPs to assess and address “any significant systemic risks stemming from the functioning and use made of their services.” We understand that Article 26 proposes only a minimal (not comprehensive) list of these risks, and similarly that the reference to “content moderation systems, recommender systems and systems for selecting and displaying advertisement” in the second paragraph presents only a partial list. Given the nature of disinformation, Article 26 requires further specification of the risks related to disinformation, to encourage VLOPs – and other relevant services – to “identify, analyse and assess” risks and to “put in place reasonable, proportionate and effective mitigation measures”. These actions should prioritise ex-ante approaches (such as KYBC obligations), demonstrate adaptability to the precise risks and stakeholders (Codes of Conduct), and rely on relevant expertise (consultative obligations such as trusted flaggers and data-sharing requirements).

A framework for assessing systemic risk for disinformation

We are sensitive to the need to differentiate platforms according to their size with regard to their obligations, but such differentiation must not lead to an evasion of responsibility, or to a situation where malign actors abuse smaller or less resourced services that face lower security standards as a consequence of the DSA’s requirements. We propose that, in addition to size, further criteria be used to determine which services are at systemic risk of hosting or amplifying disinformation, and should therefore be subject to the relevant due diligence obligations. These criteria should be decided in consultation with civil society disinformation experts, and could take into account, at a minimum, the following (sketched schematically after the list below):

  • Design: does the design lend itself to misuse by malign actors, for example by allowing unmoderated file sharing, direct access to the dark web, or heightened levels of anonymity?
  • Amplification: does the platform amplify content or rely on automated recommendation systems in ways that malign actors can leverage?
  • Terms of Service: does the platform fail to account for disinformation in its terms of service?
  • Capacity: does the platform have sufficient staff (and communication among staff) to take action on disinformation in all relevant languages and contexts?
  • Financial Transactions: does the platform facilitate financial transactions that could be used to finance disinformation campaigns?
  • Audience: does the platform leverage trusted relationships or institutions and thereby put a certain audience at heightened risk of disinformation (for instance, a neighbourhood platform that could lend itself to astroturfing, or an education-technology platform)?
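
To make the proposed framework more concrete, the sketch below models these criteria as a simple checklist. It is purely illustrative: the field names and the rule of thumb that two or more criteria met indicate systemic risk are assumptions made for the example, not part of the DSA text or of any settled methodology.

```python
# Illustrative sketch only: field names and the ">= 2 criteria met" threshold
# are assumptions for this example, not DSA terminology or a settled methodology.
from dataclasses import dataclass, fields


@dataclass
class DisinformationRiskCriteria:
    """One flag per criterion listed above (hypothetical names)."""
    risky_design: bool            # e.g. unmoderated file sharing, heightened anonymity
    amplification: bool           # automated recommendation or amplification systems
    no_disinfo_terms: bool        # terms of service silent on disinformation
    insufficient_capacity: bool   # lacks staff for relevant languages and contexts
    financial_transactions: bool  # facilitates payments, ads, crowdfunding, etc.
    vulnerable_audience: bool     # leverages trusted relationships or institutions


def criteria_met(c: DisinformationRiskCriteria) -> int:
    """Count how many of the risk criteria a service meets."""
    return sum(bool(getattr(c, f.name)) for f in fields(c))


def at_systemic_risk(c: DisinformationRiskCriteria, threshold: int = 2) -> bool:
    """Assumed rule of thumb: two or more criteria met indicates systemic risk."""
    return criteria_met(c) >= threshold


# Example: a small crowdfunding service with no disinformation policy.
service = DisinformationRiskCriteria(
    risky_design=False,
    amplification=False,
    no_disinfo_terms=True,
    insufficient_capacity=True,
    financial_transactions=True,
    vulnerable_audience=False,
)
print(at_systemic_risk(service))  # True under the assumed threshold
```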

As these risks are evolving rapidly, it will be important for the Digital Services Coordinators (DSCs), tasked with reviewing and updating the framework, to maintain close dialogue and regular contact with the community of disinformation experts.

Additional changes to the DSA’s approach to risk assessments would also help create a safer online environment, including deleting the word “illegal” from Recital 58, which discusses how VLOPs should deploy the necessary means to mitigate the systemic risks identified in their risk assessments. As the Commission already acknowledges in the DSA, algorithmic recommender systems can have a foreseeable negative effect on the protection of public health and civic discourse by amplifying legal but harmful content, not just illegal content.

A service that is found, or considers itself, to be at systemic risk of disinformation on the basis of such criteria should then be subject to new obligations, including obligations that the DSA currently reserves for VLOPs alone. In our view, the following obligations are the most relevant to addressing, mitigating, and discouraging disinformation:

  • KYBC
  • Trusted flaggers
  • Data sharing with authorities and researchers

We see these as ex-ante obligations, which address disinformation that relies on the design and functioning of services, rather than obligations that mainly deal with illegal disinformation content ex post.

Broader KYBC Requirements

Currently, there are insufficient transparency requirements to hinder malign actors from placing ads, buying domain names, or setting up fake accounts. We are pleased to see the DSA adopt “Know Your Business Customer” (KYBC) obligations for certain online services to make online business transactions more transparent and trustworthy (Article 22). However, we believe that applying the obligation merely to ‘traders’ is overly restrictive. The DSA must go further and apply “Know Your Customer” requirements across a broader range of transactions facilitated by online platforms and service providers. Such transactions should include, at a minimum, the purchase of advertisements, the purchase of domain names, and the establishment of crowdfunding pages.

On the basis of our research, we find that the presence of financial transactions alone puts services at systemic risk of abuse for the purpose of disinformation. We therefore suggest that KYBC become a cumulative obligation applying to all intermediary services, hosting services, online platforms, and very large platforms that facilitate financial transactions and meet at least one other criterion putting them at systemic risk for disinformation, as sketched below.
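
As a rough illustration of this cumulative rule, the snippet below (with hypothetical flag names, in the spirit of the earlier checklist sketch) flags a service for KYBC obligations only when it both facilitates financial transactions and meets at least one other risk criterion.

```python
# Illustrative sketch only: flag names are hypothetical, not DSA terminology.
from typing import Dict


def requires_kybc(criteria: Dict[str, bool]) -> bool:
    """Cumulative rule: KYBC applies if the service facilitates financial
    transactions AND meets at least one other systemic-risk criterion."""
    others = [met for name, met in criteria.items() if name != "financial_transactions"]
    return criteria.get("financial_transactions", False) and any(others)


# Example: a crowdfunding platform whose terms of service ignore disinformation.
print(requires_kybc({
    "financial_transactions": True,
    "no_disinfo_terms": True,
    "amplification": False,
}))  # True
```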

Knowledge of a transacting user’s or customer’s identity is not only useful for understanding the nature of that customer’s activities and assessing the associated risks; it also permits disinformation researchers and relevant authorities to ‘follow the money’ as a way to investigate and attribute disinformation campaigns. Meanwhile, advancements in KYBC technology (automated KYC solutions) continue to make this vetting process less onerous and more streamlined, allowing for its broader implementation and its alignment with privacy protections.

Trusted Flaggers

Effective detection of sophisticated information manipulation can hardly rely on yearly self-assessments alone. Currently, if a well-established European organisation uncovers a high-impact disinformation operation with convincing evidence, there is no guarantee that action will be taken. The Commission is right to reinforce the role of “trusted flaggers” (Article 19), but this system would benefit from ensuring that the criteria for awarding “trusted flagger” status to an organisation take into account the particularities of disinformation research, for example knowledge of web forensics and Open Source Intelligence (OSINT) techniques, and the capacity to uncover information operations run by state-backed actors in multiple languages. These “trusted flaggers” must also be capable of reaching agreements with digital services not to remove active disinformation campaigns without coordinating with the experts researching those campaigns, as well as agreements on archiving to facilitate research. Safeguards must be in place to protect the independence of these trusted flaggers from capture and to prevent abuse of this status, whether by platforms or by governments. A final approval of trusted flagger accreditation at the EU level, for instance through the foreseen European Board for Digital Services (Article 47), could serve as an appropriate safeguard in this respect.

Data Access

We are pleased to see that the Commission has responded to civil society’s calls for legally binding data access frameworks and public access to advertisement repositories. However, the new rules impose overly restrictive criteria for “vetted researchers”, narrowing the scope to university academics (Article 31). While we fully support improved access for academics, the legislation should acknowledge that the disinformation community has grown beyond the realm of the university research lab and now includes a variety of actors: journalists, educators, web developers, fact-checkers, digital forensics experts, and open-source investigators. While a rigorous vetting process should certainly be in place, the solution is not to limit access to university academics alone.

The regulation should allow vetted disinformation experts reinforced access to platform data. The precise vetting process must be determined by lawmakers, but pivotal criteria for granting this status could include: subject-matter and linguistic expertise, competence for the purpose, representation of the public interest, and independence.